Vision systems used to guide robots pose different challenges for manufacturers than vision systems used for part inspection. The biggest challenge is guiding robots in 2-1/2D applications, where guidance is used so the camera maintains a proper perspective of a part that lies flat but sits at an unknown height in a stack.

An M-900iA Robot uses vision to pick parts from a conveyor. Photo courtesy FANUC Robotics America Corp.

Vision systems used to guide robots pose different challenges for manufacturers than vision systems used for part inspection. The biggest challenge is guiding robots in 2-1/2D applications. 2D and 3D applications are easier and more straightforward.

“A 2D application involves vision guidance for parts that sit on a flat surface,” says Steve Prehn, senior product manager for vision for FANUC Robotics America Corp. “Whereas 3D guidance is for parts that are tilted or of different heights.”

By contrast, in 2-1/2D applications, guidance is used so the camera maintains a proper perspective of a part that lies flat but sits at an unknown height in a stack, says Prehn. To a 2D camera positioned above the stack, parts near the top of the stack appear larger than identical parts lying flat on a conveyor. The apparent size of the part can be used to calculate the part's height and its distance from the camera so that the robot can move to pick it up.
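To make that scale-to-height idea concrete, the sketch below applies the standard pinhole-camera relationship, in which a part of known size appears larger the closer it is to the lens. It is a generic illustration under assumed names and numbers, not FANUC's implementation.

```python
# Minimal sketch of the 2-1/2D scale-to-height idea using a pinhole-camera
# model. All function names and values are illustrative assumptions, not
# iRVision internals.

def part_distance_mm(real_width_mm, apparent_width_px, focal_length_px):
    """Distance from camera to part, inferred from how large the part appears."""
    # Pinhole model: apparent_px = focal_px * real_mm / distance_mm
    return focal_length_px * real_width_mm / apparent_width_px

def stack_height_mm(camera_height_mm, real_width_mm, apparent_width_px,
                    focal_length_px):
    """Height of the top part above the conveyor, for a downward-looking camera."""
    distance = part_distance_mm(real_width_mm, apparent_width_px, focal_length_px)
    return camera_height_mm - distance

# Example: a 100 mm wide part appears 250 px wide to a camera with a
# 2,000 px focal length mounted 1,200 mm above the conveyor.
if __name__ == "__main__":
    z = stack_height_mm(camera_height_mm=1200.0, real_width_mm=100.0,
                        apparent_width_px=250.0, focal_length_px=2000.0)
    print(f"Top of stack is roughly {z:.0f} mm above the conveyor")
```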

“A more advanced 2D method uses multiple features of a part to calculate not only height, but also tilt. Imagine a piece of paper on a flat surface in front of you that is 8.5 by 11 inches,” says Prehn. “You can tell where the corners are in space. If you raise up two corners and keep the rest flat, you have a trapezoid with two edges that appear to be bigger or farther apart than they were originally.



In 2-1/2D applications, guidance is used so the camera maintains a proper perspective of a part that lies flat but sits at an unknown height in a stack. Graphic courtesy FANUC Robotics America Corp.

“However, you know the sheet didn’t change in size. Instead, you need to understand that there’s a geometric relationship between those opposing corners that tells you that they are tipped at a certain angle relative to your vision. The camera must do the same for the robot in 2-1/2D applications.”
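The same pinhole relationship can illustrate Prehn's sheet-of-paper example. In the hedged sketch below, the apparent lengths of two opposite edges of a rectangle of known size give each edge's distance from the camera, and the height difference over the known edge separation yields the tilt angle. This is generic geometry under assumed values, not the iRVision algorithm itself.

```python
import math

def edge_distance_mm(true_len_mm, apparent_len_px, focal_px):
    # A closer edge of known length appears longer in the image.
    return focal_px * true_len_mm / apparent_len_px

def tilt_from_opposite_edges(true_len_mm, separation_mm,
                             near_edge_px, far_edge_px, focal_px):
    """Tilt (degrees) of a rectangle about the axis of its two parallel edges
    of known length, spaced separation_mm apart on the part."""
    z_near = edge_distance_mm(true_len_mm, near_edge_px, focal_px)
    z_far = edge_distance_mm(true_len_mm, far_edge_px, focal_px)
    # Height difference between the two edges over their known separation.
    return math.degrees(math.asin((z_far - z_near) / separation_mm))

# 8.5 x 11 in sheet (about 216 x 279 mm): the raised 216 mm edge appears longer.
print(tilt_from_opposite_edges(true_len_mm=216.0, separation_mm=279.0,
                               near_edge_px=560.0, far_edge_px=520.0,
                               focal_px=2000.0))
```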

Prehn says several factors can impact the reliability of a vision system used for robot guidance, including lens selection, lighting and vision-software algorithms.

To select the proper lens, a manufacturer needs to know how far the part is from the camera and the required magnification to clearly see the part. These two factors determine the camera’s field of view, says Prehn.

“Also be sure of the part size and its position when photographed by the camera,” says Prehn. “This helps us calculate part distance relative to the camera and determine what distance the robot should move to engage the part.”
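As a rough illustration of that lens math, the sketch below uses the thin-lens approximation, in which field of view scales with working distance divided by focal length. The sensor size, stock focal lengths and margin are assumptions for the example, not recommendations.

```python
# Rough lens-selection sketch: given working distance and sensor size, which
# focal length yields a field of view that covers the part with some margin?

def field_of_view_mm(sensor_dim_mm, working_distance_mm, focal_length_mm):
    """Approximate FOV along one sensor dimension (valid when distance >> f)."""
    return sensor_dim_mm * working_distance_mm / focal_length_mm

def pick_focal_length(part_dim_mm, margin, sensor_dim_mm, working_distance_mm,
                      stock_lenses_mm=(8, 12, 16, 25, 35, 50)):
    """Longest stock lens whose FOV still covers the part plus margin."""
    needed_fov = part_dim_mm * (1.0 + margin)
    usable = [f for f in stock_lenses_mm
              if field_of_view_mm(sensor_dim_mm, working_distance_mm, f) >= needed_fov]
    return max(usable) if usable else None

# Example: a sensor about 7.2 mm wide, camera 800 mm from a 150 mm part,
# 20 percent margin around the part.
print(pick_focal_length(part_dim_mm=150.0, margin=0.2,
                        sensor_dim_mm=7.2, working_distance_mm=800.0))
```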

Lighting is important, but not as critical for guidance as it is for part inspection. Prehn says the lighting needs to provide enough illumination so the camera can clearly detect the part’s edges. He recommends using LED lighting and says lighting needs to be consistent and strong enough to overpower any ambient light that is present.



To a 2D camera, parts in a stack can appear to be a different size than those lying flat on a conveyor. This misperception can be avoided by training the camera to calculate and account for the height of the part and the distance from the camera to the part. Graphic courtesy FANUC Robotics America Corp.

As for vision-system algorithms, Prehn says they must always meet the manufacturer’s goals, which need to be specifically defined. For example, the company needs to know exactly what it wants to measure or locate.

Prehn says iRVision, FANUC’s proprietary vision-system software, features several algorithms that perform both basic and advanced vision tasks. At the most basic level, the software enables the camera to immediately tell the difference between a part and its background.

At a more advanced level, for example, the software features a caliper tool that measures part width. iRVision also has a geometric pattern match tool that helps the camera recognize a part shape and find its location within an image.
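Because iRVision is proprietary, the sketch below only illustrates the same two ideas with the open-source OpenCV library: simple template matching stands in for the geometric pattern match that locates the part, and a one-row scan stands in for the caliper tool that measures width. The file names and brightness threshold are assumptions.

```python
import cv2
import numpy as np

def locate_part(image_gray, template_gray):
    """Return (x, y) of the best template match and its match score."""
    result = cv2.matchTemplate(image_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, score

def caliper_width_px(image_gray, row, threshold=128):
    """Pixels between the first and last bright pixel on one image row."""
    line = image_gray[row, :]
    bright = np.where(line > threshold)[0]
    return int(bright[-1] - bright[0]) if bright.size >= 2 else 0

if __name__ == "__main__":
    img = cv2.imread("conveyor_frame.png", cv2.IMREAD_GRAYSCALE)   # assumed file
    tmpl = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)   # assumed file
    loc, score = locate_part(img, tmpl)
    width = caliper_width_px(img, row=loc[1] + tmpl.shape[0] // 2)
    print("part at", loc, "score", round(score, 2), "width_px", width)
```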

Another way iRVision helps end-users improve vision-system reliability is by performing multi-level calibration. The first level converts pixels to a unit of measure, in this case millimeters. The second level relates pixel locations to the X-Y-Z space where the part is located.
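The sketch below illustrates those two levels for the simplest case, a fixed camera looking straight down at a flat conveyor: a calibration grid of known spacing gives the millimeters-per-pixel scale, and a known camera position maps an image point onto the conveyor plane. It is a deliberately simplified assumption, not FANUC's calibration routine.

```python
# Level 1: pixels -> millimeters, from a calibration grid of known spacing.
def mm_per_pixel(grid_spacing_mm, measured_spacing_px):
    return grid_spacing_mm / measured_spacing_px

# Level 2: an image point -> X-Y-Z in the robot's world frame, assuming the
# part sits on the conveyor plane at a known height directly below the camera.
def pixel_to_world(u, v, image_center, scale_mm_per_px,
                   camera_origin_world_mm, conveyor_z_mm):
    cx, cy = image_center
    x = camera_origin_world_mm[0] + (u - cx) * scale_mm_per_px
    y = camera_origin_world_mm[1] + (v - cy) * scale_mm_per_px
    return (x, y, conveyor_z_mm)

# Example: a 10 mm grid measures 25 px between dots, so 0.4 mm per pixel.
scale = mm_per_pixel(grid_spacing_mm=10.0, measured_spacing_px=25.0)
print(pixel_to_world(u=740, v=420, image_center=(640, 480),
                     scale_mm_per_px=scale,
                     camera_origin_world_mm=(1500.0, -200.0),
                     conveyor_z_mm=0.0))
```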

“Calibration is very important,” says Prehn. “It helps make sure the robot is always moving in the proper direction, even if the camera changes its orientation, like it does when it is mounted on the end of the robot's arm.”