Where Are the Parts?
All the automated assembly machines in the world won't help your assembly process if the various parts aren't in the right place at the right time. But machine vision systems, smart cameras and sensors are increasingly playing a key role in verifying that the right part is in the right place at the right time. They are adept at recognizing parts and determining how they are positioned.
Even though it seems simple, locating parts in today's production environment can be extremely challenging. This is because many variables can alter the way a part appears to a machine vision system, which is trained to recognize parts based on a model image of that part. Such variable conditions include part rotation, changes in optical scale, inconsistent lighting conditions and normal variations in part appearance.
Several options are available to ensure correct part location and orientation, including PC-based machine vision systems, smart cameras and sensors.
Before a company can implement any sort of vision system on its assembly line, it is helpful to have a vision application requirement checklist. Answers to the following questions will make vision integration much easier:
- What makes a part good or bad?
- What are the manufacturing tolerances of your parts?
- Are the parts moving or stationary?
- If the parts are moving, what are the speeds?
- Are the parts fixtured, loosely fixtured or not fixtured at all?
- Do parts change color, appearance, size or position?
- How many models are there?
- Are measurements to be in pixels or another unit?
PC-Based Machine Vision
PC-based machine vision systems are more complex than smart cameras or sensors. A separate camera, processor, frame grabber and display are combined into a system directed by machine vision software.
Vision hardware includes the camera that captures an image of the item to be inspected, lighting to enhance the contrast of features of interest, and optics that accurately represent the image to the camera by minimizing distortion and resolution loss. This hardware works with a processor to capture, digitize and display images for analysis. This analysis generates answers, such as whether a part is in the correct location and orientation.
Vision software is the backbone of the processor. By comparing specific features of interest within the image to stored data that comprises a standard or model of the part, these software tools perform image processing and analysis of the captured image. "There is a wide assortment of software tools available for performing many different types of inspection operations that enable vision systems to make decisions about a part," says Cliff Fitzgerald, senior manager of worldwide educational services at Cognex Corp. (Natick, MA).
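The core comparison step can be illustrated with a small sketch. The function names, image layout and matching metric below are illustrative assumptions, not Cognex's actual tools: images are plain 2D lists of gray values, and the "model" is located in the image by minimizing the sum of absolute differences over every candidate position.

```python
# Sketch: locating a part by comparing image regions to a stored model.
# An illustrative stand-in for commercial vision tools, not any vendor's
# actual algorithm. Images are 2D lists of gray values (0-255).

def sad(image, model, top, left):
    """Sum of absolute differences between the model and an image region."""
    return sum(
        abs(image[top + r][left + c] - model[r][c])
        for r in range(len(model))
        for c in range(len(model[0]))
    )

def find_part(image, model):
    """Return (row, col) where the model best matches the image."""
    rows = len(image) - len(model) + 1
    cols = len(image[0]) - len(model[0]) + 1
    return min(
        ((r, c) for r in range(rows) for c in range(cols)),
        key=lambda rc: sad(image, model, rc[0], rc[1]),
    )

# A dark 2x2 part (gray value 10) on a bright background (255),
# positioned with its top-left corner at row 1, column 2.
image = [[255] * 6 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        image[r][c] = 10
model = [[10, 10], [10, 10]]

print(find_part(image, model))  # -> (1, 2)
```

Production systems use far more robust matching (normalized correlation, geometric pattern matching) to cope with the rotation, scale and lighting variations described earlier, but the principle of scoring candidate locations against a stored model is the same.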
For example, blob analysis is a standard task in image processing. Blob analysis allows you to identify connected regions of pixels within an image, then calculate selected features of those regions. The regions are commonly known as blobs.
Blobs are areas of touching pixels that share the same logical pixel state. This state is called the foreground state, while the alternate state is called the background state. Typically, the background has a value of zero and the foreground is everything else, although most tools provide some control to reverse that sense. In many applications, operators are interested only in blobs with features that satisfy certain criteria. Because computation is time-consuming, blob analysis is often performed as an elimination process: only blobs of interest are considered in further analysis.
The steps involved in feature extraction are:
- Analyze the image and exclude or delete blobs that don't meet predetermined criteria.
- Analyze the remaining blobs to extract additional features for further analysis.
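The steps above can be sketched in a few dozen lines. This is a toy illustration of the elimination process under assumed conventions (4-connected neighbors, nonzero pixels as foreground, area as the only feature), not a production vision tool:

```python
# Sketch of blob analysis: label connected foreground regions, compute
# their areas, then keep only the blobs that meet an area criterion.

def find_blobs(image, min_area=1):
    """Return areas of 4-connected foreground blobs (pixels != 0)."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                # Flood-fill one blob with an explicit stack.
                area, stack = 0, [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # Elimination step: discard blobs below the area threshold.
                if area >= min_area:
                    blobs.append(area)
    return blobs

# Two blobs: a 2x2 part (area 4) and a single noise pixel (area 1).
image = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [0, 0, 0, 0, 0],
]
print(find_blobs(image, min_area=2))  # -> [4]
```

Real tools compute many more features per blob (centroid, orientation, perimeter, bounding box), and it is the centroid and orientation that answer the part-location question directly.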
Alternatives to PC-Based Vision
Smart cameras have built-in intelligence to deliver image data directly to the host, bypassing a PC. These devices have enough power for most everyday assembly applications. Some high-end smart cameras even offer many of the functions featured in PC-based systems. "From a functionality standpoint, there isn't a real difference [between a smart camera and a PC-based machine vision system]. The smart camera, which is what all of our products are based on, has the processing capability of a PC embedded into the camera itself. You don't need a separate PC. You can do any type of inspection with a smart camera that you can with a PC-based system," says Bob Settle, director of marketing at DVT Corp. (Duluth, GA).
According to Settle, DVT's smart camera is a complete machine vision system contained in a single, palm-sized unit. An onboard computer resides in the camera itself, communicating via Ethernet with the programmable logic controller and other components on the line. The system can inspect parts, and if it detects a bad part, it can signal a reject mechanism down the line to have that part removed.
Another alternative to PC-based machine vision is the sensor. According to Tom Rosenberg, industry manager at Balluff Inc. (Florence, KY), if the complexity of the part isn't too great and there is a known reference point on the part, a machine vision system might not be needed. "Let's say parts are coming down a conveyor and they're positioned against one edge pretty securely. Oftentimes, there is no real need for a classic vision system. Smart lasers can capture that easily," says Rosenberg. However, if the part is bouncing on the conveyor belt, then a standard sensor would not be a good choice for that application. "You would need an image capturing system and some good processing power to capture that image in space, because you don't know exactly where that image is going to be," he says.
Vision sensors generally require no programming and provide user-friendly interfaces. They are self-contained and don't need a PC or a VME or PCI bus to run vision tools. They can be easily integrated with any machine to provide single-point inspections with dedicated processing. Most vision sensors offer built-in Ethernet communications for networking. "With Balluff's simple sensor systems, everything is contained in one housing. There are no lamps or lights. It has its own light source built in. It's basically a point-and-shoot auto-learn system. There is no real programming involved," says Rosenberg.
Machine Vision Lighting
The correct lighting is important to machine vision. According to Fitzgerald, without a good image, the vision task may be very difficult, if not impossible, to complete.
Lighting position depends on the lighting technique to be achieved. The main goal of lighting is to improve the contrast of the features of interest. This is accomplished by minimizing shadows, freezing the motion of moving objects, increasing the sharpness of edges, removing specular reflection, and making the foreground and background distinctly different gray values.
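The last of these goals, driving foreground and background to distinctly different gray values, is what lets software separate a part from its surroundings with a single threshold. As a hedged sketch, the classic textbook technique is Otsu's method, which picks the threshold that maximizes the between-class variance of the two groups; the values below are invented for illustration:

```python
# Sketch: automatic foreground/background separation via Otsu's method.
# A textbook technique, not any particular vendor's implementation.
# Gray values are 0-255 integers.

def otsu_threshold(pixels):
    """Return the gray value that best splits pixels into two classes."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0
    for t in range(256):
        w_bg += hist[t]           # pixels at or below candidate threshold
        if w_bg == 0 or w_bg == total:
            continue
        sum_bg += t * hist[t]
        w_fg = total - w_bg
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        # Between-class variance: large when the two groups are far apart.
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bright background near 200, dark part near 30: with good lighting the
# two populations are well separated and the threshold lands between them.
pixels = [200, 205, 198, 202, 30, 28, 35, 33]
print(otsu_threshold(pixels))  # -> 35
```

When lighting is poor and the two gray-value populations overlap, no threshold separates them cleanly, which is why the lighting techniques below matter so much in practice.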
When testing a lighting technique, it is always a good idea to test the technique on parts that are in the correct orientation and position, as well as those that are not. Users should also consider environmental factors that may make this technique difficult or impossible to implement. Depending on the positions of the parts, light source and camera, one of the following lighting techniques will be suitable for your needs:
- Backlighting: provides the maximum contrast between the part and the background. It also simplifies the image by providing a silhouette of the part.
- Structured lighting: measures Z-axis height and depth and shows surface profiles on low-contrast parts.
- Diffuse front lighting: provides even illumination from all directions and is good for minimizing shadows on 3D parts.
- Direct front lighting: provides maximum contrast to the image, and its "spot lighting" is easily installed.
- On-axis lighting: creates a bright-field effect. Specular surfaces appear bright; diffuse surfaces appear dark.
- Cloudy day illumination: offers perfectly diffuse illumination over 180 degrees (like a cloudy day) and avoids hot spots and shadows on reflective, curved surfaces.
- Diffusing light: offers softer, more even lighting without glare and shadows, and covers a large area without creating hot spots.
- Collimating light: all rays travel in the same direction, which intensifies the light and maintains a higher output farther from the source.
- Polarizing filters: reduce specular reflection.
- Color filters: enhance contrast in color scenes.
Along with lighting techniques, there are various types of light sources to choose from:
- Halogen: provides a constant output and won't degrade over time.
- Incandescent: provides a direct light source, but the filament breaks down over time.
- Fluorescent: provides a diffuse light source, but it also degrades over time.
- Laser: provides a structured light source. All the rays have the same wavelength and are in phase.
- LED: lasts in excess of 10,000 hours, is monochromatic and can be strobed.
- Xenon flash: provides high-intensity light for short periods. It can be strobed.
The camera or sensor should be positioned at the points where operations are performed on the part. An assembly line may use multiple cameras when parts are inspected at various stages of subassembly and assembly, or a single camera or sensor may suffice.
Cameras can be positioned below, above or to the side of the conveyor. If a robot is involved, the camera or sensor might even be mounted on the robot. In the end, it depends on what kind of image of the part is needed: top view, side view or bottom view. Just about any type of view is possible.