In 1962, Seattle hosted a futuristic world’s fair called Century 21. Remnants from the fair still exist, such as a monorail system and the Space Needle. A humanoid robot being developed near the landmark structure would probably feel right at home there.

“Robots that act as human agents are a staple of science fiction literature and futuristic television shows like ‘The Jetsons,’” says Rajesh Rao, associate professor of computer science and engineering at the University of Washington. Rao and his colleagues claim they have taken “a primitive first step” down this road by controlling the movement of a humanoid robot with signals from a human brain.

“An individual can order a robot to move to specific locations and pick up specific objects merely by generating the correct brain waves that reflect the individual’s instructions,” explains Rao. “It suggests that one day we might be able to use semiautonomous robots for such jobs as helping disabled people or performing routine tasks in a person’s home.”

The individual who controls the robot wears a cap embedded with 32 electrodes. The electrodes pick up brain signals from the scalp using a technique called electroencephalography (EEG). The person watches the robot’s movements on a computer screen via two cameras: one mounted on the robot and another above it.
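
The article does not describe the team’s recording software, only the 32-electrode cap. As a rough illustration of what collecting a short window of multi-channel scalp signals involves, here is a minimal Python sketch; the channel count matches the cap, but the sampling rate and the read_sample function are invented stand-ins for a real EEG amplifier interface.

```python
import numpy as np

# Illustrative only: read_sample and SAMPLE_RATE_HZ are hypothetical
# stand-ins for a real EEG amplifier; the article specifies only the
# 32-electrode cap.
N_CHANNELS = 32        # electrodes embedded in the cap
SAMPLE_RATE_HZ = 256   # assumed sampling rate, not stated in the article

def read_sample() -> np.ndarray:
    """Stand-in for one multi-channel reading from the amplifier."""
    return np.random.randn(N_CHANNELS)  # simulated scalp voltages

def read_window(seconds: float) -> np.ndarray:
    """Collect a short window of EEG, shaped (samples, channels)."""
    n = int(seconds * SAMPLE_RATE_HZ)
    return np.stack([read_sample() for _ in range(n)])

window = read_window(1.0)   # one second of 32-channel EEG
print(window.shape)         # (256, 32)
```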

The “thought commands” are currently limited to a few basic instructions. For instance, a person can instruct the robot to move forward, choose one of two available objects, pick it up, and bring it to one of two locations. Preliminary results show 94 percent accuracy in choosing the correct object.

Objects available to be picked up are captured by the robot’s camera and displayed on the user’s computer screen. Each object lights up at random. When the person looks at the object that he or she wants to pick up and sees it suddenly brighten, the brain registers surprise.

The computer detects this characteristic pattern of brain activity and conveys the choice back to the robot, which then proceeds to pick up the selected object. A similar procedure is used to determine the user’s choice of a destination once the object has been picked up.
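
This flash-and-detect procedure resembles an evoked-response selection scheme: the object whose flashes reliably evoke the “surprise” pattern is taken as the user’s choice. The sketch below is a simplified illustration, not the team’s classifier; it assumes EEG epochs time-locked to each object’s flashes have already been extracted, and it uses a crude average-amplitude score on simulated data.

```python
import numpy as np

def score_object(epochs: np.ndarray) -> float:
    """epochs: (n_flashes, n_samples, n_channels), EEG locked to one object's flashes.
    Averaging over flashes suppresses unrelated activity; the remaining
    amplitude is a crude placeholder for a real surprise-response classifier."""
    avg = epochs.mean(axis=0)
    return float(np.abs(avg).mean())

def choose_object(epochs_per_object: dict) -> str:
    """Return the object whose flashes evoked the strongest average response."""
    return max(epochs_per_object, key=lambda name: score_object(epochs_per_object[name]))

# Simulated example: two candidate objects, 10 flashes each,
# 64 post-flash samples, 32 channels.
rng = np.random.default_rng(0)
epochs = {
    "object_A": rng.normal(0.0, 1.0, (10, 64, 32)),
    "object_B": rng.normal(0.0, 1.0, (10, 64, 32)) + 0.5,  # stronger simulated response
}
print(choose_object(epochs))  # expected: object_B
```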

“One of the important things about this demonstration is that we’re using a ‘noisy’ brain signal to control the robot,” explains Rao. “The technique for picking up brain signals is noninvasive, but that means we can only obtain brain signals indirectly from sensors on the surface of the head, and not where they are generated deep in the brain.”

As a result, Rao says the user can only generate high-level commands, such as indicating which object to pick up or which location to go to. “The robot needs to be autonomous enough to be able to execute such commands,” he points out.
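
The sketch below illustrates that division of labor: the brain-computer interface emits only a coarse command, and the robot fills in low-level details such as navigation and grasping. The command fields, Robot methods, and messages are all invented for illustration and are not the team’s software.

```python
from dataclasses import dataclass

@dataclass
class Command:
    action: str   # "pick_up" or "deliver" -- the high-level intent decoded from EEG
    target: str   # object name or destination name

class Robot:
    """Semiautonomous robot: it handles path planning and grasping on its own."""

    def pick_up(self, obj: str) -> None:
        print(f"[robot] navigating to {obj}, planning a grasp, picking it up")

    def deliver(self, place: str) -> None:
        print(f"[robot] carrying the object to {place}, avoiding obstacles")

def execute(robot: Robot, cmd: Command) -> None:
    """Translate one decoded 'thought command' into autonomous behavior."""
    if cmd.action == "pick_up":
        robot.pick_up(cmd.target)
    elif cmd.action == "deliver":
        robot.deliver(cmd.target)
    else:
        raise ValueError(f"unknown command: {cmd.action}")

robot = Robot()
execute(robot, Command("pick_up", "object_B"))
execute(robot, Command("deliver", "location_1"))
```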

Rao and his colleagues plan to extend the research to use more complex objects and equip the robot with skills such as avoiding obstacles in a room. This will require more complicated commands from the human brain and more autonomy on the part of the robot.

“We want to get to the point of using actual objects that people might want the robot to gather, as well as having the robot move through multiple rooms,” notes Rao. “One goal of future research is to make the robot’s behavior more adaptive to the environment.”