Post from Harry Hutchinson:
I always tell people how cool my job is. I get to talk to all kinds of interesting people—researchers, inventors, regulators, rule-makers, and rebels. I see some very clever stuff and sometimes play with it.
Ahmed Noor is a frequent contributor to Mechanical Engineering magazine. His most recent article, “Intelligent and Connected” in the November issue, is a forward-looking discussion of smart transportation systems.
He also heads a lab, the Center for Advanced Engineering Environments, at Old Dominion University in Hampton, Va. When I was invited to an open house there, I was eager to go.
The lab works with commercial partners to further the development of computers as engineering tools, with a particular emphasis on communication and interfaces.
When I showed up, everyone was standing around talking to a telepresence robot. Michael Clark, from the Institute for Software Research at Carnegie Mellon, had brought it along. He has four of the things, which go by the brand name Anybots, and he studies ways to use them in education.
The robot fits into a box about the size of a nightstand. It rolls around on two wheels like a Segway. It has an adjustable pole for a neck and a head with two eyes that are cameras. One of the eyes contains a laser pointer. The robot’s forehead has a small screen where you can see the operator.
In this case it was Scott Friedman, an M.D. controlling the robot from his home near Pittsburgh, 350 miles away. Through the robot, Friedman could follow us from room to room, see what was going on, and make his presence known.
Another demonstration at the open house was a walk-through of a highly detailed plant simulation developed by Eon Reality. It simulated a petroleum site in Angola.
Mats Johansson, Eon’s president, said it was developed to train technicians. You can open a schematic of the plant, click on the site you want, and the simulation will show you how to get there and what you will see. Then you can walk someone through the steps of what to do. The job checklist shows up in a window on the screen.
Eon also has a prototype of augmented-reality spectacles that work with a smartphone. You can call up an engineering drawing or a CAD model, for example, and see it projected on the lenses while you’re working on the equipment.
There was also a presentation on another emerging technology. Noor’s lab is working with a company called Emotiv on a computer interface that reacts to brain waves.
A student from the lab, Hari Phaneendra Kalyan, showed how the system works. He was able to open Internet Explorer and make Google.com appear in the address bar, just by thinking it.
Then I got to try my head at it.
The controller is an EEG headset with 14 contact points. A schematic on the computer screen shows them black (no contact), yellow (getting warm), or green (go).
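The color-coded contact display can be sketched in a few lines. This is purely illustrative — the real Emotiv software has its own API and quality scale, and the function names and 0–4 score here are assumptions:

```python
# Hypothetical sketch of the contact-quality indicator described above.
# The actual Emotiv SDK differs; the 0-4 quality scale is assumed.

def contact_color(quality: int) -> str:
    """Map a raw contact-quality score (assumed 0-4) to a display color."""
    if quality == 0:
        return "black"   # no contact
    elif quality < 3:
        return "yellow"  # getting warm
    return "green"       # go

# 14 electrodes; made-up readings giving 11 greens, as in the fitting below.
readings = [4, 4, 3, 3, 4, 0, 2, 4, 3, 4, 4, 3, 4, 0]
colors = [contact_color(q) for q in readings]
print(colors.count("green"))  # prints 11
```

The point of the display is just fast feedback while someone adjusts the electrodes on your scalp.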
Kalyan and a fellow student, Ben Cawrse, helped me get ready. We discovered it’s a challenge to make green contact with all the electrodes if you wear a ponytail.
After a bit of trial and error, they worked some electrodes under my hair and put others on my forehead and behind my ears. That gave us 11 green lights out of a possible 14.
With that many greens, I thought hard about moving the cursor, but nothing happened.
The graph of brain activity came on screen. The blue line, which reads frustration level, was very high, so I knew the system was working.
I said I was stymied in trying to move the cursor, so Cawrse moved to another window labeled “mouse” and clicked an icon. Suddenly, wherever I looked on the screen the cursor went with me. The unexpected ease made the experience downright eerie.
Cawrse switched to a page with a list of commands. He selected “push.”
An image on the page showed a box floating in the air. Kalyan told me to think hard about pushing it. So I did, gritting my teeth, even leaning into it.
Nothing happened. I wondered where my blue line was—probably pretty high.
Then the screen changed a little, but not because of anything I did or thought. As usual, I was about a page behind. Nothing was supposed to happen on the screen during that exercise. We were teaching the system to know my “push” brainwaves.
It learned well. When I thought “push” at the right time, that virtual box started to slide into cyberspace.
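The training step works roughly the way the exercise suggests: the system records what your brainwaves look like at rest and while you think "push," then matches new readings against each. Here is a minimal, made-up sketch of that idea — a nearest-centroid classifier over fabricated feature vectors. The real Emotiv detection is far more sophisticated; everything named here is an assumption for illustration:

```python
# Illustrative sketch of the training exercise described above: learn an
# average "neutral" and "push" signature, then classify new windows by
# which average they sit closer to. All data here is invented.
import math

def centroid(samples):
    """Average a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Pretend feature vectors (e.g. band power per channel) captured while
# the user sat still, then while thinking "push".
neutral_samples = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2]]
push_samples = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.8]]

neutral_c = centroid(neutral_samples)
push_c = centroid(push_samples)

def classify(window):
    """Label a new reading by its nearest learned centroid."""
    return "push" if distance(window, push_c) < distance(window, neutral_c) else "neutral"

print(classify([0.85, 0.80, 0.75]))  # prints "push"
```

Once the system has a signature for your "push," thinking it at the right moment is enough to move the virtual box.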
There is a team at the lab working with Emotiv. They include Kalyan and Cawrse, who are computer science students, and Ajay Gupta of the computer science faculty.
Ahmed Noor estimates that this technology right now is about where voice recognition was 20 years ago. He told me the lab is moving toward more advanced applications.
You’re not going to create much by pulling or pushing images on a screen. But if you can do that much today just by thinking it, where is this technology going to be in 20 years? Will people now locked in silence by severe physical disabilities be able to share some of their genius with us?