Vision system integration has come a long way. With increased processing power, more powerful algorithms, and lighting and optics designed for more demanding requirements, more applications can be solved today than ever before. Even with these advances, however, deploying a complete inline inspection system can be daunting. Being armed with the right tools will make the process much easier.

An enormous range of vision products is available today from an ever-growing base of manufacturers. For example, some optics manufacturers carry upwards of 300 lens options to cover a wide breadth of applications, and even with this many options they still may not be able to solve every need. To add to the complexity, products from different manufacturers may need to be integrated to produce the best solution.

As consumers of these products, engineers must come to the table prepared with the right information and questions to discern which products and suppliers are best-suited to meet their application needs. This upfront information includes the specific application requirements, the type of short- or long-term support required and, of course, the budget. Often the application requirements will drive the type of support needed.

Application Requirements

The application requirements are the specifications that are directly related to the parts under inspection: the information to be extracted from those parts, and what will be done to the parts or processes after the results are collected. A system spec can be generated from this information. In the long run, how comfortable the engineer is with the system spec will determine the system’s success. The system spec will drive most of the decision-making on the components to be integrated.

To write a good system spec, first put together a complete list of everything the system should be able to inspect, the type of data it should collect, and the accuracy requirements for that data. Next, and more importantly, list the reasons for each area of inspection, as well as how important each inspection is to the desired outcome.

After this information is assembled, engineers must take what may be the hardest step of the application: deciding what they want the system to do and what it needs to do. Of course, everyone wants the system to do as much as possible, but in the long run, adding what seem to be small feature sets that are not really required for ultimate system success can greatly increase cost. It can even jeopardize the accuracy achieved in the critical portions of the application.

At this point, engineers should consult a vision system integrator. Integrators are good at identifying which operations will be costly to perform and which could be deal breakers. For systems that require high levels of measurement accuracy, long working distances, highly reflective parts, complicated part geometry or finishes, or different part sizes on the same line, engineers are well-advised to contact optics and lighting experts. For many applications, the camera and software are the brains of the system, while the optics and lighting are the heart and soul. Both are equally important and must be matched correctly for the system to perform optimally.

After the spec is written, two other critical factors must be considered. First, where will the system physically be placed in the operation and to what extent will it communicate with material handling equipment? More than anything else, these two issues can drive initial costs as well as cost overruns. These areas must be addressed from the start, since they will determine many of the system components.

Finally, consider the budget for the system. All areas of cost savings should be considered to get a good idea of how to justify the cost of the system. Many engineers might be shocked at the cost of a good, robust vision system. In most cases, though, even what appears to be a high price can quickly repay itself through higher throughput, higher reliability, fewer customer returns, increased customer satisfaction, less rework, less downtime and less human interaction with the manufacturing process.

With the prep work done, it’s time to start matching optics, lighting, camera and software to build the system’s backbone. A well-written spec will help engineers wade through the wide variety of products on the market, creating a short list of components needed to build the correct system. At the end of the day, most engineers will want to produce evenly balanced, robust, high-contrast images with enough pixel information to maximize the software’s algorithm capabilities. Many factors must be considered to produce such images.

Lighting

Lighting the object is usually the first thing that needs to be tackled. In many cases, this is the trickiest area. The more complicated the object’s size or geometry, the more types of materials in the object, the wider the range of material characteristics, and the more highly reflective the object, the more difficult it will be for the system to create repeatable, even contrast images.

In many cases, simple objects can be illuminated by basic, cost-effective directional lighting that is simple in design and easy to use and mount. Conversely, complicated objects require multiple lights and more cumbersome mounting. Such setups are more costly and more sensitive to misalignment.

Illumination sources come with a wide range of options. They have different color characteristics, lifetimes, functionalities, variations with temperature and environment, and varying levels of ruggedness. All these issues should be considered when building a system and must be related to the factory environment and not the lab where the system will initially be built.

One final warning on illumination: Changes to the materials used to make the parts can wreak havoc on a vision system's ability to perform after the changeover. Most often, this is related to the illumination used. If the process uses various materials, make the integrator aware of this fact up front. Additionally, if the engineer is looking to change the manufacturing process after a system is deployed, verify with the integrator that everything will still work, or have the integrator update the system before running the new parts. If not, expect a high rejection rate from the vision system as soon as the switch is made.

Cameras

A variety of camera resolutions, imager sizes and features are available. This range of products only grows when the intelligence of the associated image-processing algorithms is also considered.

One of the biggest mistakes made in many systems is not having enough pixels on a given feature to yield accurate and repeatable results. Understanding where algorithms maximize their capabilities will go a long way toward yielding the desired results, and it allows the engineer to determine directly how many pixels the imager needs.

For example, consider an object with a group of dark circles on a light background. It's easy to count how many circles are in a given area even if each circle covers only a 2 x 2 or 3 x 3 pixel area on the imager. Each one will appear as a dark spot on a bright background and will be easily analyzed by the software. Now extend the requirement a bit. What if you wanted to measure the roundness of each circle? Even with the most powerful blob analysis, edge detection and subpixel algorithms, highly accurate and repeatable results will be impossible at that pixel density.
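
To make the distinction concrete, here is a minimal sketch of both operations using OpenCV's Python bindings. The file name and threshold value are hypothetical; counting tolerates coarse images, while the circularity measure only becomes trustworthy with many pixels per dot.

```python
import cv2
import numpy as np

# Load the inspection image as grayscale (path is hypothetical).
img = cv2.imread("dots.png", cv2.IMREAD_GRAYSCALE)

# Dark circles on a light background: invert the binary threshold
# so the dots become white blobs.
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)

# Counting only needs connected regions, even at 2-3 pixels per dot.
# (OpenCV 4.x returns a (contours, hierarchy) pair.)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"Dot count: {len(contours)}")

# Roundness needs far more pixels per dot to be meaningful:
# circularity = 4*pi*area / perimeter^2 (1.0 for a perfect circle).
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter > 0:
        print(f"circularity = {4 * np.pi * area / perimeter**2:.3f}")
```

At 2 x 2 pixels per dot, the perimeter estimate in the circularity calculation is dominated by quantization error, which is exactly why counting succeeds where measurement fails.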

By simply stepping up a level in camera resolution, engineers can increase the number of pixels being analyzed by a factor of four and thus greatly leverage those powerful algorithms. This is one example of why writing a good system spec is so important. Obviously, counting dots is much different from measuring them. Luckily, engineers do not need to figure this all out on their own. The integrator, with the help of the camera or software provider, can provide the optimal camera to produce the best results.
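
A rough pixel budget makes this concrete. The sketch below, with all numbers hypothetical, works backward from field of view and minimum feature size to the sensor resolution required; note that doubling the linear resolution is what quadruples the total pixel count.

```python
# Back-of-the-envelope pixel budget (all numbers hypothetical).
fov_width_mm = 100.0        # horizontal field of view
min_feature_mm = 0.5        # smallest dot to be measured
pixels_per_feature = 10     # measurement may need ~10 px; counting ~3

required_px = fov_width_mm / min_feature_mm * pixels_per_feature
print(f"Minimum horizontal resolution: {required_px:.0f} px")
# -> 2000 px across, so a camera ~1280 px wide falls short, while
# one ~2560 px wide leaves comfortable margin.
```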

Lenses

The last part of the system to be determined is the lens. While every part of the system is critical, choosing the wrong optics can negate all other efforts.

One of the most critical things to remember here is that even if two lenses appear to have the same specifications, they may not be equivalent products. For example, four different lenses could all be listed as 25-millimeter lenses with the same mounting type, same angular field of view and same F-stop settings. One may have been designed for security applications, one for document processing, one for high-end photography and one truly for machine vision.

Returning to our dots example, all four lenses would probably be satisfactory for counting the dots. But to measure the dots for roundness, engineers will find that the lens designed for machine vision will far outperform the others. Again, a well-written spec will lead to the best choice, and engineers are well-advised to consult an optics expert.

Additional Factors

A few other parameters must be considered when choosing optics and tying the entire system together. These include field of view, working distance, depth of field, resolution and contrast.

Field of view (FOV) is the viewable area of the object under inspection. In other words, this is the portion of the object that fills the camera’s sensor. The size of the camera sensor can have dramatic effects on the FOV produced and should be accounted for when putting the system together.

Working distance is the distance from the front of the lens to the object under inspection. The height of the object and the total track of the system can play a large role. Additionally, the lighting for the application can have a great effect on the working distance required for the lens in the system. Some lighting options must be integrated between the optics and the object. These can take up significant space and must be accommodated.
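
As a first-order check, a thin-lens approximation relates sensor size, focal length and working distance to the field of view. The sketch below is only a starting point (it ignores lens thickness and distortion, and all numbers are hypothetical), but it can help narrow the lens short list.

```python
# Thin-lens approximation (all numbers hypothetical).
sensor_width_mm = 8.8        # a 2/3-inch sensor is roughly 8.8 mm wide
focal_length_mm = 25.0
working_distance_mm = 400.0

# Magnification falls as the part moves farther from the lens.
magnification = focal_length_mm / (working_distance_mm - focal_length_mm)
fov_width_mm = sensor_width_mm / magnification

print(f"Magnification: {magnification:.3f}x")
print(f"Horizontal FOV: {fov_width_mm:.0f} mm")
```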

Depth of field (DOF) is the maximum object depth that can be maintained entirely in focus. Equivalently, it is the amount of object movement, toward or away from the lens, that is allowable while maintaining an acceptable focus.
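
A commonly used close-up approximation ties DOF to the f-number, the permissible circle of confusion and the magnification. The sketch below uses hypothetical values; real lens data sheets should take precedence over this estimate.

```python
# Close-up DOF approximation: DOF ~ 2*N*c*(m + 1) / m^2, where N is
# the f-number, c the permissible circle of confusion, and m the
# magnification (all numbers hypothetical).
f_number = 8.0
circle_of_confusion_mm = 0.015   # often taken as a few pixel pitches
magnification = 0.0667           # from the FOV example above

dof_mm = (2 * f_number * circle_of_confusion_mm
          * (magnification + 1) / magnification**2)
print(f"Approximate depth of field: {dof_mm:.1f} mm")
```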

The resolution of a lens is the minimum feature size or detail of the object that can be reproduced by the lens.

Contrast is linked with resolution. Contrast describes how well blacks can be distinguished from whites. In real life, black and white lines will blur to some degree into grays. Noise and blurring of edges cause contrast to go down. Two lenses can have the same resolution but vastly different contrast. The greater the difference in intensity between a light and a dark line, the better the contrast.

This may seem intuitively obvious, but it's more important than it first appears. Contrast is the separation in intensity between blacks and whites, and reproducing object contrast is as important as reproducing object detail, which is essentially resolution.

Let’s take the dot example one more time. If two different lenses can both resolve the spots, but one can do it with a contrast of 10 percent and the other with a contrast of 50 percent, the lens with 50 percent will far outperform the other.
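
One common way to quantify this is Michelson contrast, (Imax - Imin) / (Imax + Imin), computed across an intensity profile. The sketch below uses hypothetical gray-level profiles chosen to land near the 10 percent and 50 percent figures above.

```python
import numpy as np

def michelson_contrast(profile):
    """Contrast across a line profile: (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = np.max(profile), np.min(profile)
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical intensity profiles across the same dot pair imaged
# through two different lenses (0-255 gray levels).
lens_a = np.array([200, 180, 164, 180, 200])   # washed-out edges
lens_b = np.array([230, 120,  77, 120, 230])   # crisp edges

print(f"Lens A contrast: {michelson_contrast(lens_a):.0%}")
print(f"Lens B contrast: {michelson_contrast(lens_b):.0%}")
```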

Consider two dots placed close to each other and imaged through a lens. Because of the nature of light, even a perfectly designed and manufactured lens cannot accurately reproduce an object’s detail and contrast. Even when the lens is operating at the diffraction limit, the edges of the dots will be blurred in the image. When they are far apart—in other words, at a low frequency—the dots are distinct, but as they approach each other, the blurs overlap until the dots can no longer be distinguished. The resolution depends on the imaging system’s ability to detect the space between the dots. Therefore, the resolution of the system depends on the blur caused by diffraction and other optical errors, the dot spacing, and the system’s ability to detect contrast.
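
The diffraction blur described above can be estimated directly. For a lens with no design or manufacturing errors, the Airy disk diameter and the cutoff frequency, beyond which contrast falls to zero, follow from the wavelength and the f-number alone; the numbers below are hypothetical.

```python
# Diffraction sets a hard ceiling on resolution, even for a perfect
# lens (all numbers hypothetical).
wavelength_um = 0.587          # green light, ~587 nm
f_number = 8.0

# Airy disk diameter: the smallest spot a perfect lens can form.
airy_um = 2.44 * wavelength_um * f_number
# Cutoff frequency: the spatial frequency where contrast reaches zero.
cutoff_lp_per_mm = 1000.0 / (wavelength_um * f_number)

print(f"Airy disk diameter: {airy_um:.1f} um")
print(f"Diffraction cutoff: {cutoff_lp_per_mm:.0f} lp/mm")
```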

Another important point to understand about resolution and contrast is that they are not the same for every point in the field. The farther out from the center of the image, the more resolution and contrast will fall off. This can lead to bad parts being passed or good parts being failed.

There are other things that can be done to enhance a lens’s performance within the system. If using only one color, then chromatic aberration is no longer an issue. If the system does not need to be color-corrected over the entire spectrum, the lens design can be simpler. Going monochromatic may also simplify the illumination system, because monochromatic LEDs use less power and create less heat than white light incandescent bulbs. This effect also can be achieved by using color filters with a white light source. Filters can be a low cost way of improving the system’s capabilities. Additionally, monochromatic light sources and filters can be used to perform color analysis.

Distortion is a geometric optical error in which information about the object is misplaced in the image, but not actually lost. Distortion can come in a couple of different forms.

Monotonic distortion is consistently positive or negative from the center of the image out to the edges. Distortion that is not monotonic shifts back and forth between negative and positive values moving from the middle of the field to the edges.

Nonmonotonic distortion can be introduced during lens design to reduce the lens's overall distortion, or it can arise from factors specific to the design type. Whether the distortion is monotonic or not, software can be used to factor it out so accurate measurements can be made.

Using measurement software and a dot target of known size, one can measure the distortion at different distances from the center of the image. Distortion is not linearly correlated to the distance from the center of the field. This is true for monotonic and nonmonotonic designs. After this is done, distortion can either be processed out of the image or taken into account during measurement. Removing distortion from an image and redrawing the image can be a processor-intensive operation.
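
As one way to carry this out, the sketch below fits a simple even-order radial model to hypothetical dot-target measurements with NumPy, then corrects individual measurements rather than resampling the whole image. The model form and all values are illustrative, not a prescribed calibration procedure.

```python
import numpy as np

# Radial distance of calibration dots from the image center, in
# pixels: where the target geometry says they should be vs. where
# they were actually imaged (all values hypothetical).
r_true     = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
r_measured = np.array([100.2, 201.5, 304.8, 411.0, 521.0])

# Percent distortion at each field position; note it does not grow
# linearly with distance from the center.
distortion_pct = (r_measured - r_true) / r_true * 100.0

# Fit a simple even-order radial model, d(r) = k1*r^2 + k2*r^4,
# as a function of the measured radius.
A = np.column_stack([r_measured**2, r_measured**4])
k1, k2 = np.linalg.lstsq(A, distortion_pct, rcond=None)[0]

# Correct an individual measurement instead of redrawing the whole
# image, avoiding the processor-intensive resampling step.
r = 450.0
d = k1 * r**2 + k2 * r**4
print(f"Corrected radius: {r / (1.0 + d / 100.0):.1f} px")
```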

Telecentricity

Perspective errors, also called parallax, are part of everyday human experience. In fact, parallax is what allows the brain to interpret the 3D world. Humans expect closer objects to appear relatively larger than those placed farther away. This phenomenon also is present in conventional imaging systems in which the magnification of an object changes with its distance from the lens. Telecentric lenses optically correct for this occurrence so objects remain the same perceived size independent of their distance, over a range defined by the lens.

For many applications, telecentricity is desirable because it provides nearly constant magnification over a range of working distances, virtually eliminating perspective angle error. This means that object movement does not affect image magnification. Thus the accuracy and repeatability of the system are greatly increased.

In a system with object space telecentricity, movement of the object toward or away from the lens will not result in the image getting bigger or smaller, and an object which has depth or extent along the optical axis will not appear as if it is tilted. For example, a cylindrical object whose cylindrical axis is parallel to the optical axis will appear to be circular in the image plane of a telecentric lens. In a nontelecentric lens, this same object will look like the Leaning Tower of Pisa. The top of the object will appear to be elliptical, not circular, and the sidewalls will be visible.
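
The magnification behavior can be illustrated numerically. Under a thin-lens approximation with hypothetical numbers, a conventional lens changes the apparent size of a feature as the part moves along the axis, while a telecentric lens, within its specified range, does not.

```python
# Apparent size of a 10 mm feature as the part shifts along the
# optical axis (thin-lens approximation; numbers hypothetical).
feature_mm = 10.0
focal_length_mm = 25.0
m_telecentric = 0.0667   # fixed by design within the rated depth range

for wd in (390.0, 400.0, 410.0):   # +/- 10 mm of part movement
    # Conventional (entocentric) lens: magnification varies with
    # working distance, so the feature appears to change size.
    m_conventional = focal_length_mm / (wd - focal_length_mm)
    print(f"WD {wd:.0f} mm: "
          f"conventional {feature_mm * m_conventional:.4f} mm, "
          f"telecentric {feature_mm * m_telecentric:.4f} mm on sensor")
```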

Deployment

With the right camera, lens and lighting in hand, it's time to deploy and test the system on the shop floor. Bear in mind, a system that works fine in the lab may not work as well on the floor. Engineers may need to build enclosures to protect the system against environmental hazards or to eliminate unwanted ambient light.

It will cost a bit of money to get a vision system up and running. Each part of the system has an equal load to bear. Be sure that each gets the time it deserves and that long-term support is sorted out both internally and externally to maximize ROI.

For more information on vision systems, call Edmund Optics at 800-363-1992 or visit www.edmundoptics.com