When it comes to vision systems, manufacturers want more cameras for more accuracy. Some have even applied this principle to their robots. For example, several automotive and white goods manufacturers use dual- or single-arm robots equipped with a two-camera vision-guidance system.

The robots are made by Motoman Robotics Div. of Yaskawa America Inc. VisionPro 3D image-processing software, made by Cognex Corp., ensures that the robots accurately locate, pick up and move parts from a pallet onto a conveyor or into a machine for assembly.

Cameras are mounted on each arm or the head of the dual-arm robot, and on each side of the single-arm robot’s end-effector. The cameras connect via Gigabit Ethernet to a vision processing unit, which in turn connects over Ethernet to the robot controller.

“Using several images from each camera, the software calculates real-time, three-dimensional position information for each part,” says Greg Garmann, technology leader of software and controls for Motoman Robotics. “This information then guides the robot arm or arms to consistently perform precise part retrieval and placement.”
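Cognex does not publish the internals of VisionPro 3D, but the core idea of recovering a part’s 3D position from two calibrated camera views is classical triangulation. Here is a minimal sketch in Python with OpenCV; the intrinsics, baseline and pixel coordinates are illustrative placeholders, not values from the Motoman system:

```python
# Minimal triangulation sketch; the projection matrices and pixel
# coordinates are illustrative placeholders, not values from the
# Motoman/Cognex system.
import numpy as np
import cv2

K = np.array([[800.0,   0.0, 320.0],     # assumed camera intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])  # 100 mm baseline (assumed)

# The same part feature located in each camera's image (pixel coordinates).
pt1 = np.array([[360.0], [240.0]])
pt2 = np.array([[280.0], [240.0]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 3D point
X = (X_h[:3] / X_h[3]).ravel()
print("Estimated part position (m):", X)        # ~[0.05, 0.0, 1.0]
```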

Yesterday and Today

“Multicamera vision systems have been used by manufacturers since the late 1970s,” says Phil Arsenault, director of imagery for ATS Automation Systems Inc. “But the concept of multicamera systems can actually be traced back to Hollywood filmmakers and aerial photographers.”

Early multicamera vision systems featured the same components used today: lighting, lenses, fixturing, dumb cameras, controllers and image-processing software. However, early systems had analog cameras, which were slow and produced poor images. As a result, each camera was connected to a separate controller, which was large and expensive.

By the 1980s, digital cameras started to be used, providing manufacturers with better-quality images. End-users also began adding multiplexers and frame grabbers to their systems.

A multiplexer selects among several low-speed digital input signals and forwards them, one at a time, over a single line. A frame grabber captures individual still frames from a digital video stream and transfers the data to a controller.
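In software terms, that division of labor looks something like the toy model below, where a multiplexer object forwards one simulated camera feed at a time and a frame grabber freezes a single frame from it. Every class and feed here is a hypothetical stand-in for the hardware:

```python
# Toy model of a multiplexer plus frame grabber; the classes and the
# simulated camera feeds are hypothetical, for illustration only.
import numpy as np

class Multiplexer:
    """Selects one of several camera feeds and forwards it over a single line."""
    def __init__(self, feeds):
        self.feeds = feeds          # list of callables, one per camera
        self.selected = 0

    def select(self, channel):
        self.selected = channel

    def output(self):
        return self.feeds[self.selected]()   # forward the currently selected feed

class FrameGrabber:
    """Captures a single still frame and hands it to the controller."""
    def grab(self, frame):
        return frame.copy()                  # freeze one frame for processing

# Two simulated cameras, each producing a 480x640 grayscale frame on demand.
cameras = [lambda: np.random.randint(0, 256, (480, 640), dtype=np.uint8)
           for _ in range(2)]

mux = Multiplexer(cameras)
grabber = FrameGrabber()

mux.select(1)                        # switch the shared line to camera 2
still = grabber.grab(mux.output())
print(still.shape)                   # (480, 640): one still frame for the controller
```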

Smart cameras also appeared in the 1980s but are not recommended for multiple-camera systems. These cameras are self-contained, feature an internal processor and are much more expensive than dumb cameras. For these reasons, smart cameras are better suited for single-camera systems; applications where multiple cameras must operate independently or asynchronously; or when vision is needed at multiple inspection points along a production line or within an assembly machine.

“In the 1980s, multiple cameras could be connected to one controller,” says Ben Dawson, director of strategy and development for Teledyne DALSA. “But multiple frame buffers were required to synchronize the cameras’ image acquisition. Modern cameras include a frame buffer so synchronization is not an issue.”

In the early 1990s, digital interfaces were introduced, eliminating the synchronization problem. Simultaneously, the price of digital cameras came down because CCD (charge-coupled device) sensors were replaced with CMOS (complementary metal-oxide semiconductor) and DMOS (double-diffused metal-oxide semiconductor) sensors, which are less expensive to produce.

Another technology used on digital cameras that has improved vision systems is frame buffer memory. It lets users store captured images and retrieve them later, rather than processing each frame the instant it is acquired.
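Conceptually, frame buffer memory behaves like a small ring buffer: frames are stored as they arrive and can be read back later, decoupling capture from processing. A generic sketch, not any vendor’s implementation:

```python
# Generic ring-buffer model of on-camera frame buffer memory; purely
# illustrative, not any vendor's implementation.
from collections import deque
import numpy as np

class FrameBuffer:
    def __init__(self, capacity=8):
        self.frames = deque(maxlen=capacity)   # oldest frames drop off automatically

    def store(self, frame):
        self.frames.append(frame)              # called at capture time

    def retrieve(self, age=0):
        """Fetch a frame captured `age` frames ago (0 = most recent)."""
        return self.frames[-1 - age]

buf = FrameBuffer(capacity=8)
for i in range(10):                            # camera streams in 10 frames
    buf.store(np.full((480, 640), i, dtype=np.uint8))

late = buf.retrieve(age=3)                     # processed later, not at capture time
print(int(late[0, 0]))                         # frame index 6 of 0..9
```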

As the technology of multicamera vision systems has changed, so have the industries that use them. In the 1970s and 1980s, manufacturers of electronics and medical devices were among the first users of these systems.

Since the 1990s, the popularity of multicamera systems has significantly increased as more manufacturers have automated their assembly processes. This is especially true for those in the auto, aerospace and white goods industries.

“The old systems were fine for slow assembly processes or inspections, but verification of assembly in process is the focus now,” says Arsenault. “Manufacturers no longer need to rely on end-of-line inspection, which couldn’t prevent or improve a bad assembly. Today’s systems improve inspection consistency and are smaller, faster, cheaper and better than ever.”

They Are in Control

Several suppliers offer controllers for multiple Gigabit Ethernet (GigE) cameras. Teledyne DALSA makes the GEVA, a dual-core processor with iNspect or Sherlock image-processing software.

Dawson says the GEVA features two dedicated GigE camera ports, each with enough bandwidth to support simultaneous inspection from up to eight 640-by-480-pixel mono cameras. The ports are compatible with mono or color, area and line-scan GigE cameras of varying resolutions. (An area-scan camera features a matrix of pixel sensors, whereas a line-scan camera features a single row.)
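A quick back-of-the-envelope calculation shows why one GigE port can carry eight such cameras. The 30-fps frame rate and 8-bit mono pixel format below are assumptions; only the camera count and resolution come from Dawson’s description:

```python
# Back-of-the-envelope GigE bandwidth check. The 30 fps frame rate and
# 8 bits/pixel mono format are assumptions; only the camera count and
# resolution come from the article.
pixels_per_frame = 640 * 480          # mono VGA camera
bits_per_pixel = 8
fps = 30
cameras = 8

bits_per_second = pixels_per_frame * bits_per_pixel * fps * cameras
print(f"{bits_per_second / 1e6:.0f} Mbit/s")   # ~590 Mbit/s, under GigE's ~1,000 Mbit/s
```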

GEVA also has several external interfaces for system integration. These include a separate Ethernet port for factory protocols, USB ports for setup and run-time control, extensive digital inputs and outputs, and dedicated trigger inputs for inspection timing.

One automaker uses the GEVA to control a three-camera system that verifies the dimensions of a fuel-injector spindle; guides a robot into position to pick up and insert the spindle; and ensures the spindle is properly positioned within the assembled fuel injector.

Each spindle is brought into the workcell by a conveyor. Before assembly begins, a camera images the spindle and the GEVA verifies that its length is 2.5 inches.

If the spindle is too long or too short, it is dropped into a reject bin and replaced with another spindle. If the spindle is the proper length, GEVA triggers the second camera, which is mounted to a robot. This camera guides the robot to the correct areas so it can assemble the injector.
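The accept/reject logic amounts to a simple tolerance check before the second camera is triggered. In the sketch below, only the 2.5-inch nominal length comes from the application; the tolerance value and function name are hypothetical:

```python
# Sketch of the accept/reject decision; the tolerance value and function
# name are hypothetical. Only the 2.5 in. nominal length comes from the
# application described above.
NOMINAL_IN = 2.5
TOLERANCE_IN = 0.01   # assumed tolerance, not from the article

def disposition(measured_length_in):
    """Return the next action for a measured spindle."""
    if abs(measured_length_in - NOMINAL_IN) > TOLERANCE_IN:
        return "reject"           # drop into the reject bin, feed another spindle
    return "trigger_camera_2"     # fire the robot-mounted camera for guidance

print(disposition(2.503))   # trigger_camera_2
print(disposition(2.52))    # reject
```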

Once assembled, the fuel injector is moved to an inspection area where a third camera verifies the presence and orientation of the spindle. Cycle time is less than one minute. All three cameras are small and dumb.

“GEVA not only helps the automaker keep track of the spindle through the process,” says Dawson, “it also tells them why a spindle was rejected.”

Last fall, Keyence Corp. of America introduced its XG-8000 series of controllers, which can handle up to eight area-scan cameras or four line-scan cameras simultaneously. The controller must be paired with one of 14 Keyence cameras, which offer resolutions of 310,000, 2 million or 5 million pixels.

Because of its triple-core processing power, the XG-8000 can quickly compile the successive lines of image data obtained by a line-scan camera into a single 67-megapixel image. Bob Ochiai, machine vision support engineer for Keyence, says these compiled images enable manufacturers to better inspect parts or assemblies that lack sufficient lighting.
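Compiling line-scan output is conceptually just stacking successive one-pixel-high rows as the part moves past the camera. The dimensions below are illustrative rather than the XG-8000’s internals, though 8,192 x 8,192 pixels does work out to roughly 67 megapixels:

```python
# Conceptual sketch of compiling line-scan rows into one 2-D image;
# dimensions are illustrative, not the XG-8000's internals.
import numpy as np

LINE_WIDTH = 8192   # pixels per scan line (assumed)
NUM_LINES = 8192    # lines captured as the part moves past the camera (assumed)

# Each trigger of the line-scan camera yields one single-pixel-high row.
rows = [np.random.randint(0, 256, LINE_WIDTH, dtype=np.uint8)
        for _ in range(NUM_LINES)]

image = np.vstack(rows)   # stack the rows into the full 2-D image
print(image.shape, round(image.size / 1e6), "megapixels")   # (8192, 8192) 67 megapixels
```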

The XG-8000 can also create high-dynamic-range (HDR) images automatically. HDR involves taking multiple images at different exposures and combining them into a single image with a wide range of tones, so it more closely represents what is seen by the human eye.

Ochiai says HDR lets users set parameters such as illumination level increments, image tone values and number of images they wish to capture on each part. As each part is viewed, illumination and tone levels are automatically varied for the number of image captures specified.
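Keyence’s implementation is proprietary, but the general exposure-bracketing technique is available in open-source form. Here is a sketch using OpenCV’s Mertens exposure fusion on three simulated exposure brackets:

```python
# Generic HDR-style exposure fusion with OpenCV's Mertens algorithm;
# a stand-in for Keyence's proprietary implementation, with simulated
# exposure brackets instead of real captures.
import numpy as np
import cv2

base = np.tile(np.linspace(0, 255, 640, dtype=np.uint8), (480, 1))  # test scene
base = cv2.cvtColor(base, cv2.COLOR_GRAY2BGR)

# Three brackets: underexposed, nominal, overexposed.
exposures = [cv2.convertScaleAbs(base, alpha=a) for a in (0.4, 1.0, 2.2)]

fused = cv2.createMergeMertens().process(exposures)   # float image in [0, 1]
result = np.clip(fused * 255, 0, 255).astype(np.uint8)
print(result.shape)   # (480, 640, 3): one image covering the full tonal range
```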

XG-8000’s predecessor, the XG-7000, is a single-core processor that has been used by manufacturers since September 2010. One automaker is using the XG-7000 to control a multicamera system that inspects engine cylinder heads. Up to four cameras are used, and each one takes an image of a particular cylinder head to prevent short shots (insufficient material injected into the mold) and burrs (unwanted raised edges).

Another automaker uses an XG-7000-controlled system to make sure each automobile has the proper hood and doors. Separate cameras take images of the vehicle identification numbers (VINs) marked on the hoods and doors. Keyence’s vision software then uses optical character recognition to translate the images into machine-encoded text and verify that the hood and doors are a match.
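Keyence’s OCR tooling is proprietary, but the verification step can be illustrated with the open-source pytesseract library (assuming the Tesseract engine is installed). The image paths below are hypothetical placeholders:

```python
# Generic OCR-and-match sketch using the open-source pytesseract library
# (requires the Tesseract engine); a stand-in for Keyence's proprietary
# OCR tooling. The image paths are hypothetical placeholders.
import cv2
import pytesseract

def read_vin(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Restrict to VIN-legal characters (VINs omit I, O and Q) to cut misreads.
    config = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHJKLMNPRSTUVWXYZ0123456789"
    return pytesseract.image_to_string(img, config=config).strip()

hood_vin = read_vin("hood.png")    # hypothetical capture from the hood camera
door_vin = read_vin("door.png")    # hypothetical capture from the door camera
print("match" if hood_vin == door_vin else "mismatch")
```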

Systems integrator ATS Automation has been offering its ATS Cortex vision system to outside customers for about a year. Customers can order the ATS Cortex with or without ATS SmartVision image-processing software.

An integrated hardware and software vision system that interfaces with a PLC, the ATS Cortex can trigger up to eight cameras and 12 lights. Discrete I/O ports are programmable and can be used to trigger any combination of camera devices.
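That programmability can be pictured as a mapping from discrete I/O lines to camera-and-light combinations. The sketch below is a generic model, not ATS’s actual configuration interface, and all the names are hypothetical:

```python
# Generic model of programmable discrete I/O triggering; a stand-in,
# not ATS's actual configuration interface. All names are hypothetical.
TRIGGER_MAP = {
    0: {"cameras": [1, 2],    "lights": [1, 2, 3]},   # I/O line 0
    1: {"cameras": [3],       "lights": [4]},         # I/O line 1
    2: {"cameras": [4, 5, 6], "lights": [5, 6]},      # I/O line 2
}

def on_io_pulse(line):
    """Fire the camera/light combination programmed for an I/O line."""
    combo = TRIGGER_MAP[line]
    for light in combo["lights"]:
        print(f"strobe light {light}")
    for cam in combo["cameras"]:
        print(f"trigger camera {cam}")

on_io_pulse(0)   # a PLC pulse on line 0 strobes lights 1-3 and fires cameras 1-2
```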

ATS SmartVision software features several analysis tools, each of which can be used to control one camera in a multicamera system. Each camera can be programmed to perform an independent sequence of actions, such as for part verification or assembly inspection.

One medical device manufacturer uses two ATS Cortex units to operate a nine-camera vision system. One Cortex operates five cameras that assure the quality of plastic parts; the other Cortex operates four cameras, each of which verifies an assembly of two parts or subassemblies.

Vibratory bowls feed five different plastic parts into separate assembly lines at a rate of hundreds per minute. Cameras at the start of each line inspect each part to verify that it has no markings or other imperfections. Bad parts are blown off the line with compressed air.

After a part from line one is placed into a holding device, a part from line two is delivered to the device. The two parts are assembled, and camera one inspects this first subassembly. Bad subassemblies at this and all future stages are blown off the line.

The first subassembly is then moved to a location where part three is delivered and joined, creating a second subassembly. Camera two inspects the second subassembly.

On a separate line, parts four and five are assembled into a third subassembly and inspected by camera three. Next, the second and third subassemblies are bonded with a UV-cure adhesive. Camera four performs a 360-degree inspection of the bond line to verify proper assembly. The finished products are then moved to the shipping area.