Natural User Interfaces: Adding The Human Component To Computing
Guest post by Lewis Shepherd, the Director of the Microsoft Institute for Advanced Research in Government. A different version of this post appeared earlier as “Using the body in new virtual ways” on his blog Shepherd’s Pi.
A lot of people don’t realize that Microsoft spends billions of dollars a year on research and development, with laboratories near Seattle, near Boston, and around the world. Most of the time the researchers are huddled together, working hard in private, but every once in a while they come out to show off what they’re working on.
I recently attended a big event in Atlanta nicknamed CHI 2010, short for the much longer Conference on Human Factors in Computing Systems, hosted by the Association for Computing Machinery (got that?). Some of Microsoft’s research involving what you might call “putting the human back in computing” was well-represented there, and I wanted to highlight three interesting, cutting-edge technologies that may very well change the way you work, perhaps 5 or 10 years from now.
Skinput: Your Arm Is the Keyboard Now
Skinput is essentially a new technology that turns your arm into a touch screen. Technologists call this an example of a Human-Body User Interface, and this particular example is joint work by Chris Harrison of Carnegie Mellon University and Desney Tan and Dan Morris of Microsoft Research.
In a great showcase of both biological and computer science research, Skinput uses bio-acoustic technology: the vibrations produced by finger taps on the hands and arms are captured and analyzed to determine where each tap landed, turning them into input signals. As consumers demand more computational capability while mobile, the appeal is obvious; as the authors state in their research paper, “The body is portable and always available, and fingers are a natural input device.”
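To give a rough sense of what that means in practice, here is a toy sketch in Python of the general shape of such a pipeline: sense a tap’s vibration, reduce it to a few features, and classify where on the arm it landed. This is not the researchers’ implementation (their paper describes a wearable sensor armband feeding a trained classifier), and the sampling rate, frequency bands, tap locations, and synthetic signals below are made-up placeholders.

```python
# Toy illustration of a Skinput-style pipeline (NOT the researchers' code):
# sense a tap's vibration, reduce it to features, classify the tap location.
# All constants and the synthetic "sensor" signals are made-up placeholders.

import numpy as np

SAMPLE_RATE = 5500   # Hz, hypothetical sensor sampling rate
WINDOW = 256         # samples captured per tap event

def extract_features(signal):
    """Reduce one tap's vibration window to a small feature vector:
    energy in a few coarse frequency bands plus overall amplitude."""
    spectrum = np.abs(np.fft.rfft(signal, n=WINDOW))
    bands = np.array_split(spectrum, 8)
    band_energy = np.array([band.mean() for band in bands])
    return np.append(band_energy, np.abs(signal).mean())

class NearestCentroidClassifier:
    """Minimal stand-in for a real classifier: label a new tap by the
    closest per-class average feature vector."""
    def fit(self, features, labels):
        self.labels = sorted(set(labels))
        self.centroids = {
            lab: np.mean([f for f, l in zip(features, labels) if l == lab], axis=0)
            for lab in self.labels
        }
        return self

    def predict(self, feature):
        return min(self.labels,
                   key=lambda lab: np.linalg.norm(feature - self.centroids[lab]))

def fake_tap(location, rng):
    """Synthesize a decaying oscillation whose dominant frequency depends
    on tap location -- a stand-in for real bio-acoustic sensor data."""
    freq = {"wrist": 200, "forearm": 600, "palm": 1200}[location]
    t = np.arange(WINDOW) / SAMPLE_RATE
    return np.sin(2 * np.pi * freq * t) * np.exp(-30 * t) + 0.05 * rng.standard_normal(WINDOW)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_features, train_labels = [], []
    for location in ("wrist", "forearm", "palm"):
        for _ in range(20):
            train_features.append(extract_features(fake_tap(location, rng)))
            train_labels.append(location)

    classifier = NearestCentroidClassifier().fit(train_features, train_labels)
    print(classifier.predict(extract_features(fake_tap("forearm", rng))))  # expected: "forearm"
```

The point is just the shape of the idea, not the details: the body carries the vibration, the sensor hears it, and software decides which “key” you pressed.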
Manual DESKterity: Simultaneous Pen and Touch
While the iPad launch got Apple a lot of recent coverage, and it’s certainly a significant device, one big thing it’s missing is any way to provide input besides your finger. Finger-touch screens are useful, but the addition of stylus input (as seen in various tablet PCs) offers more precise and versatile control. A recent article from MIT’s Technology Review sums this up nicely: “Touch screen interfaces may be trendy in gadget design, but that doesn’t mean they do everything elegantly. The finger is simply too blunt for many tasks.”
Microsoft Research has been working toward better tying together hardware, software, stylus, and human touch, and they published some of this research in a CHI 2010 paper called “Manual Deskterity: An Exploration of Simultaneous Pen + Touch Direct Input,” with a number of co-authors led by Ken Hinckley of Microsoft Research. Their digital-drafting-table prototype makes innovative use of Microsoft’s existing Surface technology, with multitouch and gesture recognition, as Ken describes:
We explore a division of labor between pen and touch that flows from natural human skill and differentiation of roles of the hands. We also explore the simultaneous use of pen and touch to support novel compound gestures.
Here’s a great video that shows Manual DESKterity in action!
https://www.youtube.com/watch?v=9sTgLYH8qWs
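For a rough sense of how an application could act on that division of labor, here is a toy Python dispatcher: the pen alone inks, touch alone holds and drags, and a pen stroke on an object a finger is holding becomes a compound “cut” gesture. Every type, field, and gesture name in it is a hypothetical illustration, not the Manual Deskterity code or any Microsoft Surface API.

```python
# Toy pen + touch dispatcher (NOT Manual Deskterity code or a Surface API):
# pen alone inks, touch alone holds and drags, and a pen stroke on an object
# held by a finger becomes a compound "cut" gesture.  All types are hypothetical.

from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class InputEvent:
    device: str                   # "pen" or "touch"
    action: str                   # "down", "move", or "up"
    x: float
    y: float
    target: Optional[str] = None  # id of the on-screen object under the contact, if any

@dataclass
class Canvas:
    held_objects: Set[str] = field(default_factory=set)  # objects currently pinned by a finger
    log: List[str] = field(default_factory=list)

    def handle(self, ev: InputEvent) -> None:
        if ev.device == "touch":
            # Touch manipulates: hold, drag, release.
            if ev.action == "down" and ev.target:
                self.held_objects.add(ev.target)
            elif ev.action == "up" and ev.target:
                self.held_objects.discard(ev.target)
            elif ev.action == "move" and ev.target:
                self.log.append(f"drag {ev.target} to ({ev.x}, {ev.y})")
        elif ev.device == "pen":
            if ev.target in self.held_objects:
                # Compound gesture: pen stroke on an object a finger is holding.
                self.log.append(f"cut {ev.target} along the pen stroke")
            else:
                # Pen alone just writes / draws ink.
                self.log.append(f"ink at ({ev.x}, {ev.y})")

if __name__ == "__main__":
    canvas = Canvas()
    canvas.handle(InputEvent("pen", "move", 10, 10))                     # pen alone -> ink
    canvas.handle(InputEvent("touch", "down", 50, 50, target="photo1"))  # finger holds a photo
    canvas.handle(InputEvent("pen", "move", 55, 52, target="photo1"))    # hold + stroke -> cut
    canvas.handle(InputEvent("touch", "up", 50, 50, target="photo1"))
    print("\n".join(canvas.log))
```

The design point the sketch tries to mirror is the quote above: each input channel keeps its natural default role, and new behavior appears only when pen and touch are used together.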
Telepresence: Embodied Social Proxies (ESP)
Teleworking and other forms of working on virtual teams are becoming more and more common with the rise of technologies like Web 2.0 social tools, video conferencing, and mobile smartphones. Everyone knows, however, that nothing can replace face-to-face meetings; yet those can be expensive or impossible much of the time. Is there a compromise?
Well, my colleagues at Microsoft Research have been working on this, too. In a paper called “Embodied Social Proxy: Mediating Interpersonal Connection in Hub-and-Satellite Teams,” a group of researchers describe something that looks a lot like a robot and can act as a stand-in for a remote worker in team meetings. It’s a little hard to describe in words; luckily, we have an awesome video of ESP in action. Particularly in large, distributed companies like Microsoft, or similarly large, distributed entities like government agencies from the U.S. Army to the Forest Service, these new forms of virtual collaboration will see many applications.
The ESP mixture of high-tech and low-tech means that, essentially, any organization could deploy this device in an office building and let remote colleagues participate in real-life meetings by being “virtually embedded” in the room. Pretty cool.
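As a purely conceptual sketch of that idea (one always-on device dedicated to one remote teammate, relaying sight and sound in both directions between that person and the room), here is a toy Python model. The class, queues, and message strings are all hypothetical stand-ins; the real ESP described in the paper is a physical device with a screen, camera, and speakerphone, not this code.

```python
# Conceptual toy model of an Embodied Social Proxy (NOT Microsoft's code):
# one always-on device dedicated to one remote teammate, relaying "frames"
# in both directions between that person and the meeting room.  Everything
# here is a hypothetical stand-in for real audio/video hardware and networking.

import queue
import threading
import time

class EmbodiedSocialProxy:
    """A stand-in device that represents a single remote teammate in a room."""

    def __init__(self, remote_person: str, room: str):
        self.remote_person = remote_person
        self.room = room
        self.to_room = queue.Queue()    # frames arriving from the remote worker
        self.to_remote = queue.Queue()  # frames captured in the meeting room

    def run(self, seconds: float = 1.0) -> None:
        """Relay traffic in both directions for a short demo window."""
        stop = time.time() + seconds

        def pump(source: queue.Queue, destination: str) -> None:
            while time.time() < stop:
                try:
                    frame = source.get(timeout=0.1)
                except queue.Empty:
                    continue
                print(f"[{self.room}] deliver to {destination}: {frame}")

        threads = [
            threading.Thread(target=pump, args=(self.to_room, "room display and speaker")),
            threading.Thread(target=pump, args=(self.to_remote, self.remote_person)),
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

if __name__ == "__main__":
    proxy = EmbodiedSocialProxy(remote_person="satellite.worker@example.com",
                                room="hub conference room")
    proxy.to_room.put("video frame: remote worker's face")
    proxy.to_remote.put("audio frame: hub-team discussion")
    proxy.run(seconds=0.5)
```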