
android

innovative user interfaces for the Android using Processing

Sunday, June 3rd, 2012

Android programming can sometimes feel a bit clunky because of the gap between changing code and seeing the effect.  It takes some time to compile and upload the sketch to the phone, and even longer to get the virtual machine up and running.  That gap slows the tinkering and exploration process.  Processing lets you prototype interactive visualizations faster, and I’ve found it to be a great place to prototype ideas for Android UIs, especially new ways to interact with screens, using processing-android to export Android applications.

In this post I’ll describe how you can rapidly prototype gestural UIs in Processing and then hook them up to a larger Android application in Eclipse.  There are limitations, however.  As you’ll see, some methods specific to Android programming work only in processing-android, and others are easier to integrate in Eclipse.

The accompanying code to this post is here.

What’s kind of cool about writing this app in Processing is that it can run on the desktop, in the browser, and (by changing two lines) on my Android phone.

Making the basic UI in Processing

The application I want to make is an interface for outputting low-frequency waveforms.  I want to generate repeating strings of numbers, which I can then hook up to something else, perhaps an LED or a pitch parameter for a synthesizer.  I build the interface in Processing, using the mousePressed variable to tell whether the mouse button is down; the location of the mouse is given by mouseX and mouseY.  The position along the Y axis maps to a value.  While the mouse is pressed, the sketch records a string of numbers as I move along the Y axis; when I let go, it repeats that string over and over.
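
A stripped-down version of that record-and-loop logic might look like this. This is a minimal sketch of the idea, not the actual app code:

// Record values from the Y axis while the mouse is down,
// then loop them back when the mouse is released.
ArrayList<Float> samples = new ArrayList<Float>();
int playhead = 0;

void setup() {
  size(800, 400);
}

void draw() {
  background(0);
  float value;
  if (mousePressed) {
    value = map(mouseY, 0, height, 1, 0);  // Y position maps to a value
    samples.add(value);                    // record while pressed
    playhead = 0;
  } else if (samples.size() > 0) {
    value = samples.get(playhead);         // repeat the recorded string
    playhead = (playhead + 1) % samples.size();
  } else {
    value = 0;
  }
  ellipse(width/2, (1 - value) * height, 20, 20);  // crude visualization
}

void mousePressed() {
  samples.clear();  // each new press starts a fresh recording
}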

During the development process I just push Processing’s ‘play’ button to see the results of my code. I can change the color, background, font, whatever, and then push play again to see the effects. Tightening the development loop makes it much easier to settle on a UI I like.


Above you can see the app, rendered in the browser with processing.js. Basically I produced a single sketch from the multiple Processing (.pde) files with cat *.pde > processing_js_script.pde, and then linked to that file from a canvas element.

Moving onto the Android

Once I have that UI, I switch into Android mode and resave the sketch with “_android” appended to the name.

I do this so I can make changes specific to the Android version of the UI. First, some layout changes: I want the app to always be in landscape mode (horizontal), and I want it to take up the whole screen on my phone. I replace

size(800, 400);

in the Processing sketch with

orientation(LANDSCAPE);
size(screenWidth, screenHeight);

in the Android sketch. At this point you can plug in your phone and run the sketch on it with Sketch >> Run on Device. When you do, you’ll find that mousePressed is triggered when you touch the screen, and that mouseX and mouseY are given by the x,y coordinates of your finger.

There are other methods implemented in processing-android that you can use to further develop your user interfaces. If you wanted to include multitouch (I used this when making Duel Cities; a rough sketch follows below), you would put that code in the processing-android sketch rather than the Processing sketch, since Processing would not recognize the multitouch methods. Similarly, you can include sound events for games in the processing-android sketch. I find the forums are a great resource for seeing what has been implemented in processing-android.
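
Here’s roughly what the multitouch hook looked like in the processing-android builds of this era (this is from memory, so verify the method name against the forums for your version):

import android.view.MotionEvent;

// Called for every touch event; this hook only exists in processing-android.
public boolean surfaceTouchEvent(MotionEvent event) {
  int fingers = event.getPointerCount();   // how many fingers are down
  for (int i = 0; i < fingers; i++) {
    float x = event.getX(i);               // coordinates of finger i
    float y = event.getY(i);
    // ...use x and y to drive the UI...
  }
  return super.surfaceTouchEvent(event);   // keep mouseX/mouseY working
}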

There are also some features you may want in your application that are not yet implemented in processing-android. I found this recently when I was trying to include Bluetooth in my app: I tried a few approaches, and my Processing apps crashed.

Moving into Eclipse for more Android functionality
With the basic UI completed, I can export the app as an Android project. This is the right-arrow button on the processing-android controls.

This creates a new folder in your sketch called ‘android’. You can open these files as an Android project in Eclipse. You’ll notice all of the Processing code is included as a class extending PApplet. I found it pretty useful to see this, as it gave me an idea of how processing-android wrapped my code to get it working on the droid. Now you can integrate your UI from Processing with other libraries or into larger Android applications.
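
For reference, the wrapped code looks roughly like this (the names here are illustrative; the actual file is generated from your sketch and its exact scaffolding depends on your Processing version):

import processing.core.PApplet;

// Approximate shape of the class processing-android generates:
// the whole sketch becomes a PApplet subclass.
public class gesture_looper extends PApplet {
  public void setup() {
    orientation(LANDSCAPE);
    size(screenWidth, screenHeight);
  }
  public void draw() {
    // ...the sketch's draw loop, carried over verbatim...
  }
}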

You can check out the code for the gesture looper on github.

connecting an android phone to an arduino via bluetooth

Friday, May 11th, 2012

The Goal

I’ve been working on having my Android phone talk to my Arduino wirelessly over Bluetooth.  I want to do it because smartphone gesture interaction is powerful; swipes and taps on a touchscreen are compelling, and I am surprised there are not more products and things around us that can talk to smartphones through local wireless communication, even basic consumer electronics. Once something is connected to a smartphone, its possible functions expand.  Most simply, consider any electronic object being able to interface not just with touch gestures, but also with the web through tethering.

Materials

I am using an HTC Incredible and a BlueSmirf Silver with an Arduino Uno.
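
For context, the Android side of a link like this is usually an RFCOMM socket opened with the well-known Serial Port Profile UUID, since the BlueSmirf speaks SPP. Here is a minimal sketch of that standard approach (the device address is a placeholder, and this is not necessarily the exact code from this project):

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;
import java.io.OutputStream;
import java.util.UUID;

// Well-known UUID for the Bluetooth Serial Port Profile (SPP).
static final UUID SPP_UUID =
    UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

void sendToArduino(String mac, byte[] payload) throws Exception {
  BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
  BluetoothDevice blueSmirf = adapter.getRemoteDevice(mac);  // placeholder address of the paired BlueSmirf
  BluetoothSocket socket = blueSmirf.createRfcommSocketToServiceRecord(SPP_UUID);
  socket.connect();                           // blocks; run off the UI thread
  OutputStream out = socket.getOutputStream();
  out.write(payload);                         // bytes show up on the Arduino's serial port
  out.close();
  socket.close();
}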


computer vision on the android: getting started with opencv-android

Sunday, November 6th, 2011

I had the urge to get some computer vision analysis going on my droid.  Something simple, something done before, like a live image coming through my camera with some face detection or edge detection. During the installation process I hit a few difficulties, and here I’ll outline what worked for me, to hopefully save you some time.  I’m installing on Mac OS 10.6.8.

A quick aside

But before we start, you might ask:

When would using computer vision be helpful?  Just trying to get a context for its utility.

Well, here is why I’m excited about it:

-games! I want to try to make some games using the real-time feed from the camera.  There have been a few, but not enough fun ones to convince me the market is saturated.

-augmented reality! I’m not really into many of the augmented reality apps I’ve seen… maybe knowing some more CV stuff could lead to some insights in that direction.

-science! People are doing analysis of facial expressions for emotions, or even for health diagnostics.  Tools that facilitate quantitative analysis of behavior can be made more mobile and more cheaply.

-cameras of the future! I crashed Media Lab sponsor week a couple of weeks back. When I was wandering by the Camera Culture group, one of the researchers mentioned to me that for the most part the camera is pretty similar to what it was 40 years ago, despite significant advances in technology.  That group explores the potential of newer technologies for cameras.  3D reconstruction on mobile devices has emerged in the last year, and I think there is much more we can do.

Ok, back to getting it installed

First, there is this awesome post that will get you started even if you have never done any Android development.  As recommended, I used the Tegra downloads to get up and running fast; they’ll get you the SDK, Eclipse, and whatever else you need.

Then you need opencv-android, which basically hooks up all the power of OpenCV with Android, by the folks at Willow Garage.  The tutorial recommends downloading it from here, but when I did, I got the following error once I had everything in Eclipse:

ERROR: resource directory ‘<path-to-opencv-android>\OpenCV-2.3.1\modules\java\android\res’ does not exist

[Same error posted on the boards]

After trying several things in Eclipse and wondering what was going on, I found this post on the boards recommending building from the trunk.  On this page you can find installation instructions for the trunk.

For me it didn’t work the first time.  You may get an error about ‘install_name_tool’; fortunately that’s covered in the Troubleshooting section right after the Linux and Mac OS build instructions. A couple of other issues may come up too.

After searching for ‘install_name_tool’ (as the troubleshooting describes) you may find it elsewhere.  I found it in:

/Developer/usr/bin/install_name_tool

The CMake file ‘CMakeFindBinUtils.cmake’ I had to edit was not in /opt/local/share/…, but instead in

/usr/local/Cellar/cmake/2.8.4/share/cmake/Modules/CMakeFindBinUtils.cmake

since I installed CMake with Homebrew.

After I made those minor changes, the build worked.  Point to the OpenCV library in this build folder when you are in Eclipse, hit F5 to refresh the library, and things should load up correctly (this will make sense when you read through the tutorial).

I loaded the projects straight onto the phone by this process. It’s kind of exciting when it first works. My favorite is the puzzle example.  Looking forward to diving into the code and making some kooky real-time CV toys.  Maybe even a synth! (hah, I just came back from Music Hack Day).
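
As a taste of where the code goes from here, frame processing with the OpenCV Java bindings looks roughly like this. I’ve sketched it against the 2.4-era API, so treat the names as approximate:

import org.opencv.android.Utils;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;
import android.graphics.Bitmap;

// Rough sketch: run Canny edge detection on one camera frame.
// Assumes `rgba` already holds the current frame as an RGBA Mat.
Bitmap detectEdges(Mat rgba) {
  Mat gray = new Mat();
  Mat edges = new Mat();
  Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);  // camera frames arrive as RGBA
  Imgproc.Canny(gray, edges, 80, 100);                    // thresholds picked by eye
  Bitmap bmp = Bitmap.createBitmap(edges.cols(), edges.rows(),
                                   Bitmap.Config.ARGB_8888);
  Utils.matToBitmap(edges, bmp);                          // hand the result back to the UI
  return bmp;
}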

 

 

Open Mesh Networks: Innovative p2p communication projects around the world

Saturday, August 6th, 2011

A few months ago I met Josh King, who was in town for the Media Reform Conference.  At the time, I had recently gotten an Android phone and was thinking a lot about why this incredible computer that fits in my pocket could not talk directly to other phones; instead it had to go through a tower.  It turns out that phones talking directly to each other is a special instance of a mesh network, which is exactly what Josh was working on.  It was exciting to hear this, because the potential of mesh networks is immense, but I knew no one who was working on them.  He told me about open mesh networks being developed in cities around Europe, and I wondered: why don’t they exist here?

A few months earlier, the internet was shut down in Egypt.  Who would have guessed it? A week before, it would have been a work of fiction.  But it happened, and months later the New America Foundation received some press about their internet in a suitcase.

‘Internet in a suitcase’ is the work of the Open Technology Initiative at the New America Foundation, led by Sascha Meinrath.  He came to the Berkman Center a week ago to speak about the work at OTI, focusing on community broadband, M-Lab, and open mesh networks.  He spoke of these projects anecdotally, and I recommend watching the webcast.  It’s interesting to hear not only about the growth of OTI, but also about the pressure against their work and the propaganda spread by telecoms; to an extent it boils down to who has more PR money.

But one inspiring thing about the New America Foundation is that any critiques they make are backed up with solutions that are ready to be implemented.  One of those solutions, along the lines that everyone should have access to the web, is the development of open mesh networks.

I spoke with Sascha after his talk, specifically about spreading information on how to build these mesh networks.  There is so much potential in them, including community-level wireless, a platform for local applications, and the ability to connect if the communication infrastructure goes down (natural disasters, oppressive regimes).   He mentioned a few projects happening, and I did some research afterwards.  NAF is working on some projects in-house, but is also trying to make connections between others who are working on mesh networking in some form. And here is what we have:

OpenWrt: Wireless Freedom : “OpenWrt is a highly extensible GNU/Linux distribution for embedded devices. Unlike many other distributions for these routers, OpenWrt is built from the ground up to be a full-featured, easily modifiable operating system for your router.”

The Serval Project: “Communicate anywhere, any time … without infrastructure, without mobile towers, without satellites, without wifi hotspots, and without carriers. Use existing off-the-shelf mobile cell phone handsets. Use your existing mobile phone number wherever you go, and never pay roaming charges again.”

Commotion Wireless: “…organizers here propose to build a new type of tool for democratic organizing: an open source “device-as-infrastructure” distributed communications platform that integrates users’ existing cell phones, WiFi-enabled computers, and other WiFi-capable personal devices to create a metro-scale peer-to-peer (mesh) communications network.” There are tons of resources here; this is one of the OTI projects as well.

Byzantium: “The goal of Project Byzantium is to develop a communication system by which users can connect to each other and share information in the absence of convenient access to the Internet.”  This is a project by HacDC.

FabFi: “FabFi is an open-source, FabLab-grown system using common building materials and off-the-shelf electronics to transmit wireless ethernet signals across distances of up to several miles. With Fabfi, communities can build their own wireless networks to gain high-speed internet connectivity—thus enabling them to access online educational, medical, and other resources.” And there are instructions on how to make it, all emerging from the FabLab project at the Center for Bits and Atoms.

DIY Mesh Guide: “Reliable, affordable and easy access to telecommunication services for all has been identified as key to social and economic development in Africa. Self-provisioning and community ownership of low cost, distributed infrastructure is becoming a viable alternative to increase the penetration of telecommunication services in rural Africa. The recent emergence of wireless mesh network technology (based on IEEE 802.11 a/b/g standards) can help to improve the delivery of telecommunication services in these regions. ”

 

So those are a few projects being carried out right now. I’d say if you’re interested in any of these, get involved. They are all open source projects, and they benefit from a community of developers and users providing feedback.  In the end, these will be immensely useful tools for all.  The people who get involved first will have a sense of the power of the projects, and like any successful platform, these will be tools that birth other tools; for instance, public apps.  This has the potential to provide tons of jobs, not only building the infrastructure bottom-up, but also building on top of it and thinking of new ways to interface with it.

Investments in public infrastructure have an immense positive impact on the well-being of citizens.  Korea and Australia are currently putting a lot of funding into public broadband infrastructure.  The effects of this will become obvious in the years to come, and it will be clear that such measures would have been beneficial in the US.  The above information resources and projects put such matters back in the hands of people.  You can develop a mesh network with your community and allow people to connect cheaply, or even freely. Imagine the impact on education, information flow, and community engagement.

 

 

Mobile data platform and personal data storage

Saturday, July 16th, 2011

Wow. It’s no secret that I dig the work of Alex Pentland at the Media Lab, but this next project is awesome: funf, an Android app that will help you collect and store data on your cell phone usage. Previous research from his Media Lab group, Human Dynamics, has shown that such high-resolution data can help explain our behaviors. Based on your cell phone usage, it’s possible to infer things about you, from preferences in food or music to diagnoses of illness.

But here’s the kicker: this app will empower users with their data. Instead of collecting all of the data and selling it, or using it for ads, it’s going to be an implementation (probably the first) of the personal data storage idea. Check out the video below for quick details. Basically, the user can choose which companies and apps to share data with.

This enables innovation for companies, and also privacy for individuals. I imagine that in the marketplace this sets up, the companies that are most just with the data will win out. Awesome. It’s one of those things that seems obvious, but at this point, when so many companies have tons of our data, it’s hard to imagine who will adopt these practices.

People are becoming increasingly concerned about their data and what it is being used for. So there is an incentive to build apps for this framework, and funf will allow developers to focus on the interfaces, analytics, and machine learning logic to build inferences, instead of on the data collection. And funf will handle all the personal data storage stuff; what a great way to spread and implement the concept.

The PDS is just one example of a funf back-end, however it is one that we believe is a very important for an ecosystem where massive amounts of data are collected on end-users. If we want such an ecosystem to flourish, we have to protect users’ privacy, and give them tools to understand and control who has access to their data and what is done with it. [from the funf site]

Also, it’s open source. It will be cool to see what sorts of remixes people build using the framework, and what additional sensory peripherals are included.

Now, if you had this kind of mobile data, what would you make with it? I’d try to figure out things like ‘data signatures’: would there be a way to identify myself more accurately solely by my behaviors? Credit card companies already use this idea. How could it become useful once harnessed by a mobile phone? Anyway, I’m excited to hear thoughts on other implications.