With machine vision, robots can accurately manipulate parts, regardless of how they're positioned.

Henry Ford provided the first quantum leap in modern manufacturing: the assembly line. A half century later, automation brought manufacturing another giant step forward. But even though automation has improved considerably over the years, the basic concept of using specialized, precise repetition to raise output and lower costs has remained unchanged. Today, "blind" automation is reaching the limits of what economies of scale can provide. Manufacturing's next leap forward is intelligent robotics, and it is not surprising that, once again, the future can be found on the automobile assembly line.

Robots have certainly improved output by increasing the speed and precision of assembly. However, the Achilles' heel of the robot has always been its inability to react to change. If the position of a part is off by just a millimeter, the robot may be unable to do its job, requiring costly stops to make corrections.

Today, vision guidance systems are helping robots overcome that limitation. Vision guidance systems give robots the ability to see what they are doing and react, as a human would, to changes in positioning. With 3D visual information provided by a single camera, a computer can tell the robot how to move its six axes (X, Y, Z, pitch, yaw and roll) to pick up and deliver parts precisely. Ford, GM and other auto manufacturers are now using vision-guided robots to handle transmission housings, cylinder heads, intake manifolds, brake rotors and other parts.
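As a rough illustration of the six-axis target such a system might hand to a robot controller, here is a minimal Python sketch. The names, units and rotation convention are illustrative assumptions, not any vendor's actual API:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    """A hypothetical 6-DOF target: three translations (mm) and three
    rotations (degrees). Field names are illustrative only."""
    x: float
    y: float
    z: float
    pitch: float
    yaw: float
    roll: float

def rotation_matrix(p: Pose):
    """Compose roll (about X), pitch (about Y) and yaw (about Z) into a
    3x3 rotation matrix, using the common Rz @ Ry @ Rx ordering."""
    rx, ry, rz = (math.radians(a) for a in (p.roll, p.pitch, p.yaw))
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]

# A zero pose yields the identity rotation; a 90-degree yaw swaps X and Y.
R0 = rotation_matrix(Pose(0, 0, 0, 0, 0, 0))
R90 = rotation_matrix(Pose(0, 0, 0, 0, 90, 0))
```

Real controllers differ in axis ordering and sign conventions, but some such pose target is what the vision computer ultimately delivers.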

Attempts at vision-guided robotics in the early 1980s promised much but could deliver only in laboratory-controlled conditions. Early systems were problematic, not simply because of the limited technology available, but also because of the approach. Stereo imaging was not that accurate, since any visual noise in the system was magnified greatly. Also, given the size of cameras then, mounting and calibrating two cameras at once was physically impractical. Laser imaging could provide only limited information, and it was susceptible to changes in ambient temperature. Finally, computers had less memory and processing power than today's giveaway cell phones. In short, vision guidance was the right idea at the wrong time. No system was reliable enough to be practical on the plant floor.

In contrast, today's vision guidance systems rely on a single compact CCD camera mounted on the robot's end-effector, and software extracts 3D information from a single 2D image. The software's underlying principle is as old as the raindrop: projective distortion. All optical systems, including our eyes, use lenses to form images. Lenses, by their nature, cause varying degrees of distortion. As a result, features or landmarks change in appearance and relative distance and position when either the object or the lens moves. If the actual size and shape of the undistorted object are known, then comparisons to the apparent image can give information on position, distance and orientation.

The human brain uses this principle to read depth into a flat picture. As an example, imagine a book pictured against a white, featureless background. If its outline is rectangular, matching what we know a book actually looks like, we perceive it as flat-on, viewed from a head-on viewpoint. If the shape is more of a parallelogram, we perceive the book to be at an angle. If the size and features of the book are known, then we also have a sense of relative distance and positioning. The human brain makes these calculations almost instantaneously. A vision guidance system uses the same principles to locate an object from visual cues embedded in a digital image. For moving objects, the system can capture and analyze a continuous stream of snapshots, enabling the robot to track parts passing by on an assembly line.
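The book example can be reduced to a toy calculation. Under a simplified parallel-projection (orthographic) assumption, a flat object tilted about its vertical axis appears foreshortened by the cosine of the tilt, so comparing the apparent width in the image against the known physical width recovers the angle. This is only a sketch of the principle, not the actual software, which works with full perspective projection:

```python
import math

def apparent_width(actual_width: float, tilt_deg: float) -> float:
    """Foreshortened width of a flat object tilted about its vertical
    axis, under a simplified orthographic camera model."""
    return actual_width * math.cos(math.radians(tilt_deg))

def recover_tilt(actual_width: float, measured_width: float) -> float:
    """Invert the foreshortening to estimate the tilt angle (degrees)."""
    ratio = max(-1.0, min(1.0, measured_width / actual_width))
    return math.degrees(math.acos(ratio))

# A 200 mm-wide book tilted 30 degrees appears about 173 mm wide;
# comparing that against the known width recovers the tilt.
w_img = apparent_width(200.0, 30.0)
tilt = recover_tilt(200.0, w_img)
```

A real system solves the same kind of inverse problem in full 3D, matching many known landmarks at once, which is why a single 2D image can yield all six axes of the part's pose.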



Seeing Is Believing

The applications and benefits of vision guidance are widespread, and the possibilities are exciting. Retrofitting existing assembly lines with vision-guided robots can significantly decrease equipment costs, downtime and workplace injuries. Because the robots can adjust to changes in positioning, expensive, custom-built fixtures are no longer needed to keep the workpiece positioned within narrow tolerances. Instead, a fork truck or tug can simply drop a bin of parts in front of the robot. With the parts only loosely positioned in a nominal volume, the robot can locate each part accurately in 3D space and then remove it safely and quickly. Before vision guidance, this would have been impossible, and the parts would have had to be loaded manually.

Vision-guided robotics can also help carmakers meet consumer demand for custom vehicles. Because special orders can be accommodated to a considerable degree without the need to pause or reprogram the robot, low production numbers no longer mean higher costs. Many car models share a basic platform, but differ only in trim lines and other options. Robots that can look for those differences and adjust their programs accordingly reduce the need for multiple lines or the need to stop the line for a changeover. A simple example is found in the application of soundproofing sealant for door panels. A carmaker may have six different models, each with a different door shape. With intelligent robotics, all six can be processed on the same line. The robot sees and identifies each door frame, takes spatial measurements, and accurately applies sealant to each.

At Ford's Essex Engine Plant in Windsor, ON, assembling cylinder heads to blocks in different engine configurations presented some challenging problems. Not only were the decking tolerances for each model quite tight, there were also several "no-touch zones" on the newly milled heads. Each head model was packed in expensive, enclosed shipping containers to reduce contamination of the decking surface. With a robotic system provided by ABB Inc. (New Berlin, WI) and powered by Braintech's vision-guidance software, the 50-pound engine heads are now removed from closed, stacked containers, delivered to the correct blocks, and docked with zero contamination of the no-touch zones. Productivity and safety have vastly improved. Contamination has been eliminated. And, the new system requires significantly less floor space. Ford was so impressed with the results that it incorporated vision-guided robotics into its plans for rebuilding its flagship Rouge plant in Dearborn, MI.

Another potential benefit of vision-guided robotics is improvement in supply chain management. Because every part the system sees can be logged to a database, it becomes easy to track efficiency in ordering, delivery and inventory.



The Future of Vision Guidance

From the beginning, the imperative on any assembly line has been to keep the line moving. Vision-guided robots already help meet that requirement by minimizing unnecessary line stoppages. But this is only a start. The real sea change will take place when robots can fully duplicate the hand-eye coordination of human assemblers.

Whether by human or robot, manufacturing on the assembly line is still a series of stop-stations, where the work moves and is developed at each succeeding station. This inherently limits cycle time. Current vision guidance systems haven't changed that basic structure. They operate on a "look and move" paradigm, in which the robot first looks, then calculates and executes its next move. With the ability to track objects in three dimensions, in real time, robots will be freed from fixed stations and can continue to work as the product moves down the line. This will significantly decrease cycle time and enable robots to truly cooperate on complex tasks.
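The difference between the two paradigms can be sketched with a toy one-dimensional conveyor simulation (all speeds, distances and the control loop here are illustrative assumptions). A "look and move" robot aims at where the part was when the single snapshot was taken, so the part's continued travel becomes positioning error; a tracking robot refreshes its target every control tick and is left with only about one tick's worth of belt travel as error:

```python
def chase(belt_speed, robot_speed, dt, steps, snapshots):
    """Simulate a robot chasing a part on a moving belt (positions in mm).

    snapshots = 1     -> the camera fires every tick (continuous tracking)
    snapshots = steps -> the camera fires only once ("look and move")
    """
    part, robot, target = 0.0, -50.0, 0.0
    for k in range(steps):
        if k % snapshots == 0:
            target = part                      # fresh camera measurement
        gap = target - robot
        limit = robot_speed * dt               # max travel per tick
        robot += max(-limit, min(limit, gap))  # rate-limited move to target
        part += belt_speed * dt                # belt keeps moving
    return abs(part - robot)                   # final positioning error

# 100 mm/s belt, 400 mm/s robot, 10 ms control tick, 1 second of motion.
err_look_and_move = chase(100.0, 400.0, 0.01, 100, snapshots=100)
err_tracking = chase(100.0, 400.0, 0.01, 100, snapshots=1)
```

In this sketch the look-and-move error is the full 100 mm the belt travels after the snapshot, while continuous tracking leaves roughly 1 mm, the belt travel in one control tick. Real systems must also contend with image-processing latency, but the structural advantage is the same.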

And, vision is just the beginning. Developments in force-feedback sensors will soon give robots a sense of touch. They will be able to feel texture, sense minute pressure changes and perform more delicate operations. Other senses, such as taste and even smell, will be possible with enough processing power.

For more information on vision-guided robotics, call 604-988-6440 or visit www.braintech.com.