The history of computer revolutions will show a logical progression from the Mac to the iPad to something like this SpaceTop 3-D desktop, if computer genius Jinha Lee has anything to say about it.
The Massachusetts Institute of Technology grad student earned some notice last year for the ZeroN, a levitating 3-D ball that can record and replay how it is moved around by a user. Now, following an internship at Microsoft Applied Science and some time off from MIT, Lee is unveiling his latest digital 3-D environment: a three-dimensional computer interface that allows a user to “reach inside” a computer screen and grab web pages, documents, and videos like real-world objects. More advanced tasks can be triggered with hand gestures. The system is powered by a transparent LED display and a pair of cameras, one tracking the user’s gestures and the other watching the user’s eyes to assess gaze and adjust the perspective of the projection.
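The gaze camera enables what is often called head-coupled perspective: objects “behind” the screen are re-drawn as the viewer’s head moves so they appear fixed in space. A minimal sketch of the underlying geometry, assuming the screen is the plane z = 0 and all positions are in screen coordinates (SpaceTop’s actual rendering pipeline is not public):

```python
def project_to_screen(eye, point):
    """Head-coupled perspective: find where a virtual point behind the
    screen (z < 0) should be drawn on the screen plane (z = 0) so it
    appears fixed in space from the tracked eye position (z > 0)."""
    ex, ey, ez = eye
    px, py, pz = point
    t = ez / (ez - pz)  # where the eye-to-point ray crosses z = 0
    return (ex + t * (px - ex), ey + t * (py - ey))

# With the eye centered, a point straight behind the screen draws at center;
# as the eye moves right, the drawn point slides to keep the illusion.
print(project_to_screen((0.0, 0.0, 50.0), (0.0, 0.0, -50.0)))   # (0.0, 0.0)
print(project_to_screen((10.0, 0.0, 50.0), (0.0, 0.0, -50.0)))  # (5.0, 0.0)
```

Re-running this projection every frame against the camera’s eye estimate is what makes the on-screen content track the viewer.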
Image via TED

Lee’s new 3-D desktop, which he just showed off at the annual TED conference in Long Beach, California, is still in the early stages. But it lights the way toward the sort of quantum leap that’s all too rare in computer interfaces. It took decades to get from the command-line interface to the graphical user interface and Apple’s Macintosh. It took decades more to get from the Mac to the touch interface of iPhones and iPads. Lee and people like him might just get us to the next revolution sooner.
Others are working along similar lines. Gesture-based control has been incorporated into Microsoft’s Kinect, Samsung’s Smart TV platform, and products from startups like Leap Motion and SoftKinetic (not to mention in cinema fantasyland). Three-dimensional display interfaces, meanwhile, have been brewing at the University of Iowa (home to “Leonar3Do”) and in the Kickstarter gaming sensation Oculus Rift.
Lee’s SpaceTop weaves these two threads together, joining a 3-D interface with 3-D gesture controls, a smart convergence that will likely become more common. In his talk, Lee said SpaceTop and ZeroN, which he also demonstrated, are part of a broader shift toward interfaces we can grab with our hands. Humans seem to prefer collaborating via physical interfaces; think of a scale model, map, or whiteboard. People also like interacting in multiple modalities; think of reading a book, underlining words and scribbling in the margins in pencil, and taking separate notes on a pad.
Today’s computers allow none of this, flattening all interaction onto a single screen.
“If you somehow allow computers to accept different types of modalities in the same workflow, that will be much more effective,” Lee said in an interview. “Physical activities like how you dance and how you play sports – there will be some sort of digital aid in there.”
At TED, Lee showed SpaceTop and ZeroN alongside a collapsible pen that can be pushed “inside” a computer display; as the pen folds into itself, the monitor shows the end of a pen moving deeper and deeper into the display. He also showed a video of a smartphone app that, when paired with augmented reality goggles, would allow the user to “try on” a virtual watch from an online store before ordering the real thing. The common thread between these systems, Lee says, is that they bring the physical world and digital world much closer together, allowing automated physical interaction he refers to as “programming the world.”
“Programming the world will alter even our daily physical activities,” he told the crowd. “With our two hands we’re reaching into the digital world.”
It’s not clear whether this type of user experience will remain stuck in a niche – embraced chiefly by, say, architects, geneticists, and other 3-D designers and researchers – or whether it has the potential to go mainstream. People are used to gently flicking computer mice and grazing keyboards and tablet screens; do they really have the stamina to reach into their computers and flail their arms around?
Lee thinks so. He says he’s not looking to replace lazy interfaces for activities like writing email or consuming video. But 3-D interaction makes sense for certain use cases – collaboration, design, and potential new activities Lee envisions, like trying on virtual clothes.
Much of the success of systems like SpaceTop and ZeroN will ride on the details, like how much space the user must traverse and when a 3-D interface is suggested to the user. That’s why this technology deserves attention from companies that can be smart about refining it, like Microsoft, or even Apple, which popularized the computer tablet after such devices had been languishing in obscurity. For now, 3-D computing seems to be off to a good start in Lee’s careful hands.
“It shouldn’t be in the hands of scientists, it should be in the hands of normal people,” Lee says. “It’s really important to have that eye when we think about what we want to do with this to design a beautiful world. It could be anything when the power of digital escapes the screen but it’s really our responsibility to design this together.”
This article has been modified from the original, which incorrectly described the state of development of the system for “trying on” a virtual watch. Feb. 26 10:45 pm ET
Thalmic Labs co-founder Stephen Lake building the Myo. Photo: Thalmic Labs
Forget about robots rising up against humans for world domination. In the future we’re all going to be robot-human hybrids with the help of wearable computers. We’ve already seen Google Glass, the search giant’s augmented-reality glasses, and now the latest Y Combinator startup to come out of stealth, Thalmic Labs, is giving us a wrist cuff that will one day control computers, smartphones, gaming consoles, and remote-control devices with simple hand gestures.
Unlike voice-detecting Google Glass, and the camera-powered Kinect and Leap Motion controller, Thalmic Labs is going to the source of your hand and finger gestures – your forearm muscles. “In looking at wearable computers, we realized there are problems with input for augmented-reality devices,” says Thalmic Labs co-founder Stephen Lake. “You can use voice, but no one wants to be sitting on the subway talking to themselves, and cameras can’t follow wherever you go.”
I’d argue that thanks to Bluetooth headsets and Siri, we’ve already been talking to ourselves for the last decade, so talking to my glasses isn’t a huge stretch. But I won’t deny that it looks cool to casually flick my hand to change the song on my MacBook, which is what Thalmic Labs is promising with the Myo, its $149 forearm gadget (the name nods to the Greek prefix for muscle but rhymes with Leo), whose adjustable band can accommodate almost anyone.
Using a technique called electromyography, which measures the electrical impulses produced by your muscles when you move them, the Myo’s sensors can detect when you make a gesture and translate that to a digital command for your computer, mobile device, or remote-controlled vehicle. “When you go to move your hand, you’re using muscles in your forearm which, when they contract and activate, produce just a few microvolts of electrical activity,” says Lake. “Our sensors on the surface of the skin amplify that activity by thousands of times and plug it into a processor in the band, which is running machine learning algorithms.” Similar technology is found in high-tech arm and hand prosthetics, as well as the Necomimi Brainwave Controlled Cat Ears.
Since most humans activate the same muscles when they point their finger or wave their hand, Thalmic Labs was able to compile a set of specific electrical patterns based on our movements and translate them into thousands of digital commands. As you wear the Myo over time, Lake says, it begins to learn your unique electrical impulses and accuracy improves. The device also has haptic feedback – a small vibration – to tell you when you’ve completed a recognized gesture, such as a hand swipe or finger pinch. That helps shorten the learning curve, says Lake.
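The pipeline Lake describes – a few microvolts of muscle activity, amplified, then fed to machine-learning algorithms – can be sketched in miniature. The code below is purely illustrative, not Thalmic’s algorithm: it reduces each multi-channel EMG window to per-channel RMS energy and labels new windows by the nearest centroid of previously recorded gestures.

```python
import math

def rms_features(window):
    """Reduce one multi-channel EMG window (list of per-channel sample
    lists) to a feature vector of per-channel RMS energy."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def train_centroids(labeled_windows):
    """Average the feature vectors recorded for each gesture label."""
    sums, counts = {}, {}
    for label, window in labeled_windows:
        feats = rms_features(window)
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, f in enumerate(feats):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(window, centroids):
    """Label a new window by its nearest gesture centroid (squared
    Euclidean distance in feature space)."""
    feats = rms_features(window)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(feats, centroids[lbl])))

# Toy data: a strong "fist" contraction vs. a quiet "rest" baseline.
fist = [[1.0, -1.0, 1.0, -1.0], [1.0, -1.0, 1.0, -1.0]]
rest = [[0.1, -0.1, 0.1, -0.1], [0.1, -0.1, 0.1, -0.1]]
centroids = train_centroids([("fist", fist), ("rest", rest)])
print(classify([[0.9, -0.9, 0.9, -0.9]] * 2, centroids))  # fist
```

The adaptation Lake mentions – accuracy improving as the band learns its wearer – could, in this toy version, be as simple as folding each confirmed window back into its gesture’s centroid.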
In a video showing off the Myo, the device controls video and audio playback, switches between screens on a computer, and directs remote-controlled devices, but Lake says there are many more ways to use it. “If you think about your daily life, you use your hands to interact with and manipulate just about everything you do, from pressing numbers on your phone to picking up your coffee,” says Lake. “Now think if we can take all those motions and actions and plug them into just about any computer or digital system, the possibilities are endless.” When the Myo ships in late 2013, Thalmic Labs will offer an open API so that developers can connect it to other systems or build their own programs.
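Thalmic’s API hadn’t shipped when this was written, so the interface below is entirely hypothetical, but SDKs for gesture devices are typically event-driven: a program registers a callback per named gesture, and the band’s recognizer fires it when that gesture is detected.

```python
class GestureBand:
    """Hypothetical sketch of an event-driven gesture API: register a
    callback per gesture name, then dispatch recognized gestures."""

    def __init__(self):
        self._handlers = {}

    def on(self, gesture, callback):
        """Register a callback to run when `gesture` is recognized."""
        self._handlers.setdefault(gesture, []).append(callback)

    def emit(self, gesture):
        """Fire all callbacks for a recognized gesture (a real device
        would call this from its on-board classifier)."""
        for cb in self._handlers.get(gesture, []):
            cb(gesture)

band = GestureBand()
band.on("hand_swipe", lambda g: print("next track"))
band.emit("hand_swipe")  # prints "next track"
```

Hooking such callbacks to media playback, screen switching, or an RC vehicle’s controls would cover the demos in Thalmic’s video.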
The finished Myo wristband. Photo: Thalmic Labs

Though the idea of a motion control wristband might only appeal to the hardest-core of wearable computer enthusiasts right now, Lake has high hopes that the trend will eventually reach the masses. “Right now we’re just on the cusp of a major shift in computing, and whether it’s a Google product or something else, at some point in the next couple years wearable computing devices are going to change how everyone will communicate and interact with technology,” he says. “Ultimately the line between us and our devices will start becoming a lot more blurred.”
Thalmic Labs’ timing is spot-on. Google finally pulling the curtain back on its Project Glass augmented reality glasses has spurred (mostly positive) chatter about wearable computers and how they’ll change our relationship with technology. Though Glass and Myo have a few years to go before more than just a slice of the population will want them, it’s easy to picture a future in which everyone is wearing a computer. And it’s not a stretch to imagine that the same people who would don a pair of glass-less glasses that can record video and photos, send emails and texts, and look up anything on Google would also slide an arm into a muscle-sensing band that can control computers with a hand gesture. If you’re one of those people, pre-orders for the Myo start today.