DALLAS—Engineers at the University of Texas at Dallas have harnessed artificial intelligence to teach robots how to recognize objects. The new system has a robot push items multiple times to collect a sequence of images, which the system then segments until the robot recognizes all the objects in the scene. Previous approaches have relied on a single push or grasp by the robot to “learn” an object.
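To make the push-and-segment loop concrete, here is a minimal Python sketch of that kind of interaction cycle. It is not the UT Dallas implementation; the helper names (push_object, capture_rgbd, segment_sequence) and the stopping rule based on mask overlap are hypothetical placeholders for illustration only.

```python
# Hypothetical sketch of an interactive data-collection loop: push, capture an
# RGB-D frame, re-segment the accumulated sequence, and stop once the masks
# stabilize or a push budget is reached. All helpers are stand-ins.
import numpy as np

def push_object(push_id: int) -> None:
    """Placeholder for a robot push action (hypothetical)."""
    print(f"push {push_id}")

def capture_rgbd() -> np.ndarray:
    """Placeholder RGB-D capture: 4 channels = RGB + depth (hypothetical)."""
    return np.random.rand(480, 640, 4)

def segment_sequence(frames: list) -> np.ndarray:
    """Placeholder segmentation over the whole frame sequence (hypothetical)."""
    return (np.random.rand(480, 640) > 0.5).astype(np.uint8)

def collect_training_views(max_pushes: int = 20, stable_iou: float = 0.95):
    frames, prev_mask, mask = [], None, None
    for i in range(max_pushes):
        push_object(i)
        frames.append(capture_rgbd())
        mask = segment_sequence(frames)
        if prev_mask is not None:
            overlap = np.logical_and(mask, prev_mask).sum()
            union = max(np.logical_or(mask, prev_mask).sum(), 1)
            if overlap / union >= stable_iou:  # masks stopped changing
                break
        prev_mask = mask
    return frames, mask

frames, final_mask = collect_training_views()
```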

The technology is designed to help robots detect a wide variety of everyday objects and to identify similar versions of items, such as ketchup and water bottles, that come in multiple brands, shapes and sizes.

“After pushing the object, the robot learns to recognize it,” says Yu Xiang, Ph.D., assistant professor of computer science and director of the Intelligent Robotics and Vision Lab at UT Dallas. “With that data, we train the AI model so the next time the robot sees the object, it does not need to push it again. By the second time it sees the object, it will just pick it up.”
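One way to picture the “push once, recognize thereafter” idea in Xiang’s quote is a memory of object embeddings built from the pushed views, which later sightings are matched against without further interaction. The sketch below is only an illustration of that idea, not the lab’s model; extract_features and the stored object names are made up.

```python
# Hypothetical sketch: store one embedding per object learned through pushing,
# then recognize later sightings by nearest-neighbor lookup, with no new push.
import numpy as np

def extract_features(rgbd_crop: np.ndarray) -> np.ndarray:
    """Placeholder embedding of an object crop (stand-in for a learned model)."""
    return rgbd_crop.mean(axis=(0, 1))

# "Training": one embedding per object collected during the pushing phase.
memory = {
    name: extract_features(np.random.rand(64, 64, 4))
    for name in ("ketchup_bottle", "water_bottle", "mug")
}

def recognize(rgbd_crop: np.ndarray) -> str:
    """Recognize a previously pushed object from a single new view."""
    query = extract_features(rgbd_crop)
    return min(memory, key=lambda name: np.linalg.norm(memory[name] - query))

print(recognize(np.random.rand(64, 64, 4)))
```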

The robot pushes each item 15 to 20 times. The repeated pushes let the machine capture more images with its RGB-D camera, which includes a depth sensor, and learn about each item in greater detail. This reduces the potential for mistakes.
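The benefit of the extra views can be illustrated with a simple simulation: if each per-frame segmentation is noisy, fusing the masks from many views suppresses per-frame errors. The error rates and voting rule below are invented for illustration and are not the lab’s pipeline.

```python
# Toy demonstration: majority-voting noisy per-view masks drives down the
# pixel error as the number of views grows. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
true_mask = np.zeros((64, 64), dtype=bool)
true_mask[16:48, 16:48] = True            # ground-truth object region

def noisy_prediction(error_rate: float = 0.15) -> np.ndarray:
    """Simulate one per-view mask whose pixels are flipped with some probability."""
    flips = rng.random(true_mask.shape) < error_rate
    return np.logical_xor(true_mask, flips)

for n_views in (1, 5, 20):
    votes = np.mean([noisy_prediction() for _ in range(n_views)], axis=0)
    fused = votes > 0.5                   # majority vote across views
    err = np.mean(fused != true_mask)
    print(f"{n_views:2d} views -> pixel error {err:.3f}")
```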

“To the best of our knowledge, this is the first system that leverages long-term robot interaction for object segmentation,” claims Xiang. “[Our next step] is to improve other functions, including planning and control, which could enable tasks such as sorting recycled materials.”