Machine Vision: Vision Parameters for Photonics Automation
Editor's note: This article was originally published in The Proceedings of the Fiber Optic Automation Conference, December 2002.
Machine vision is critical for aligning photonic components during automated assembly. Vision systems are also important for postproduction inspection. To specify vision systems for automated photonics assembly, it's important to understand the critical parameters that affect their overall performance.
The importance of these parameters was emphasized during a recent automation project. An X-Y-Z gantry robot was used to place photonic components with solder backing onto a gold package at the die bonder. The robot then transported the gold package from the die bonder to an oven to reflow the solder and bond the components to the package. A machine vision camera was mounted to the gantry. The vision system had a resolution of 8 microns per pixel and employed 1/20 subpixelization to achieve a final resolution of 0.4 micron. The vision system had to detect any part movement of more than ±2.5 microns after the parts were transported from the die bonder to the oven.
It is important to establish the baseline performance of the vision system before it can be used to qualify the performance of the motion system. To qualify the vision system, a series of experiments was performed.
Experiment 1: Static Reticle. A chrome-on-glass reticle was used as the workpiece. A chrome square at the upper left corner of the field of view was chosen as the reference fiducial. Similarly, a chrome square at the lower right corner was chosen as the target fiducial. The objective of this experiment was to determine the change in distance between the target fiducial and the reference fiducial when 30 images were captured. The experiment was performed with the chrome-on-glass reticle placed on both the die bonder and the oven. The reticle was not moved between images.
At each location, 30 images were captured, and the positions of the reference and target fiducials in the X and Y directions were determined. The distance between the two fiducials was measured using the average of five pictures, as in the actual assembly process. Since the change of the fiducial-to-fiducial distance (Δ Fid) determined by the vision system was expected to be zero, the mean of the data was assumed to be zero. However, we found that, over the 30 samples, Δ Fid varied by ±1 micron at the die bonder and ±0.77 micron at the oven.
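The measurement procedure above can be sketched in a few lines. The positions here are simulated with Gaussian noise, and the function name, nominal coordinates, and noise level are illustrative only; a real system would read the fiducial coordinates from the vision software.

```python
import random
import statistics

def fid_to_fid_distance(n_pictures=5, noise_um=0.4):
    """Average the reference and target fiducial positions over n pictures,
    then take the distance between the averages. Positions are simulated
    here; a real system would read them from the vision software."""
    ref = [(random.gauss(0.0, noise_um), random.gauss(0.0, noise_um))
           for _ in range(n_pictures)]
    tgt = [(random.gauss(5000.0, noise_um), random.gauss(5000.0, noise_um))
           for _ in range(n_pictures)]
    rx = statistics.mean(p[0] for p in ref)
    ry = statistics.mean(p[1] for p in ref)
    tx = statistics.mean(p[0] for p in tgt)
    ty = statistics.mean(p[1] for p in tgt)
    return ((tx - rx) ** 2 + (ty - ry) ** 2) ** 0.5

random.seed(1)
samples = [fid_to_fid_distance() for _ in range(30)]    # 30 data points
mean_d = statistics.mean(samples)
spread = max(abs(d - mean_d) for d in samples)          # +/- repeatability
print(f"delta Fid varies by +/-{spread:.2f} micron")
```

Averaging five pictures per measurement reduces the random component of the noise by roughly the square root of five, which is why the production process used a five-picture average as well.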
Experiment 2: Transferred Reticle. The second experiment was to determine the performance of the vision system before and after the reticle is transferred from the die bonder to the oven. Again, the distance between the target and reference fiducials was determined with the average of five pictures at the die bonder. The robot then transferred the reticle to the oven, and the distance between the target and reference fiducials was measured again. The difference between the fiducial-to-fiducial distance at the two locations was calculated to form one data point. The same procedure was repeated 30 times to obtain 30 data points.
We found that Δ was ±2.4 microns. This corresponds with a ±1.7-micron deterioration from the Δ observed in Experiment 1.
Experiment 3: Glued Reticle. The same reticle used in the previous experiments was glued onto the gold package used in the actual assembly. An internal corner of the gold package was selected as the new reference fiducial, because it could be viewed within the same field of view as the target chrome square on the reticle. Part inspection was done first at the die bonder and then at the oven. The location of the target fiducial on the reticle was determined with respect to the reference fiducial on the package. The distance between the fiducials was calculated using the average of five pictures at the die bonder and five at the oven. Again, the procedure was repeated 30 times to obtain 30 data points.
This time, Δ was ±2.6 microns. This is a ±0.2-micron deterioration from the results obtained in Experiment 2.
Experiment 4: Production Parts. In this experiment, actual photonic components were used instead of a reticle. A component was glued onto the bottom of each of three gold packages. The robot transported the packages from the die bonder to the oven, one at a time. An exterior corner of each component was used as the target fiducial.
Again, inspections were done at both the die bonder and the oven, using an average of five pictures. The location of the target fiducial on the part was determined with respect to the reference fiducial on the package. The distances between the fiducials at the die bonder and the oven were compared, and the difference was recorded over 30 transfers.
We found that Δ was ±6.2 microns. Since the real component was glued onto the gold package, no part movement was possible. Therefore, we could say that the performance of the vision system deteriorated from ±1 micron in Experiment 1 to ±6.2 microns in Experiment 4.
The results obtained from these experiments raised a big question: What caused the progressive deterioration of the results between the experiments? We found that several factors affected the performance of the vision system.
Critical Vision Parameters
Static repeatability. Static errors occur due to noise inherent in the vision system. The static repeatability of a vision system is defined as how much a fiducial appears to move in the image even though nothing has been intentionally moved. Static repeatability is a measure of how well a vision system can determine the position of a perfect fiducial. It also establishes the baseline, or "best case," of the system's repeatability. The objective of Experiment 1 was to determine the static repeatability of the vision system.
Static repeatability can also be affected by vibration of the platform on which the vision system is installed. Machine vibration will cause camera jitter and lighting fluctuations. The images acquired under these conditions will have poor contrast and poor repeatability.
The target mean for static repeatability is zero, and the system's performance is gauged by how closely the actual mean matches that target. To improve the system's accuracy, engineers can use stationary cameras and isolate the vision system from vibration from the floor and from moving components within the machine.
To illustrate the importance of vibration isolation to vision system performance, we collected data from a vision system under three different levels of vibration isolation. The vibration source was a fan-powered HEPA filter mounted to the top of the machine's guarding structure. HEPA filters are commonly used in photonics automation to achieve a high level of cleanliness. The data was collected from a motion stage equipped with a linear encoder with a 40-nanometer resolution. The stage was mounted on a granite table with a steel frame.
Data was collected with the HEPA filter turned on under three different vibration isolation setups. In the first setup, the guarding was attached to the steel frame, with no vibration isolation. In the second setup, the guarding was attached to the steel frame, and vibration isolation mounts were installed under the granite table. In the third setup, the guarding was detached from the steel frame, and vibration isolation mounts were installed under the granite table.
We observed oscillation in the stage of ±140 nanometers in the first setup, but only ±20 nanometers in the third setup.
If a vision system is installed on a machine with poorly designed vibration isolation, camera jitter will occur, and the system's performance will be greatly affected. However, vibration isolation will add cost to the automated equipment, so it's important to specify the right amount of vibration isolation for the application. We believe that the fiducial-to-fiducial difference of 0.77 to 1 micron obtained in Experiment 1 was partially due to the effect of machine vibration.
Orientation change. Another vision parameter that played a part in our experiments is orientation change, or the change in relative orientation between the camera and the part. In our experiments, if the orientation of the camera or part changed between the two locations for image acquisition, there would be errors in determining the fiducial locations. These errors may be caused by lens distortion or magnification changes; that is, sections of the part move closer or further away from the camera. Orientation changes also decrease the repeatability of the vision system, due to errors in locating the fiducial. In general, if the part is placed into the field of view in a repeatable orientation, the vision system will be more accurate and repeatable.
We believe that in Experiment 2, the orientation of the reticle changed between the picking and placing. Also, since the camera was carried by the gantry robot, the camera's orientation may have changed as well. Either way, there's no doubt that orientation changes account for some of the 1.7-micron discrepancy in Δ between Experiment 1 and Experiment 2.
Contrast. Contrast is defined as the difference between the light and dark features in an image. A high-contrast image has a sharp, but not saturated, intensity gradient between the subject and the background. This allows the vision software to find an edge with precision. This is especially true in gauging applications, where edge detection and interpolation are used to obtain a resolution greater than one pixel, also known as subpixelization.
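The interpolation step can be illustrated with a minimal sketch: a one-dimensional intensity profile is scanned for a threshold crossing, and the edge position is interpolated linearly between the two pixels that straddle it. The profile values, threshold, and function name are illustrative only.

```python
def subpixel_edge(profile, threshold):
    """Locate a dark-to-bright edge in a 1-D intensity profile to subpixel
    precision by linearly interpolating the threshold crossing between the
    two pixels that straddle it. A steeper (higher-contrast) gradient makes
    the interpolated position less sensitive to intensity noise."""
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if a < threshold <= b:                    # rising edge crosses threshold
            return i + (threshold - a) / (b - a)  # fractional pixel position
    return None

# A high-contrast edge: dark glass background (10) to bright chrome (200).
profile = [10, 10, 12, 60, 180, 200, 200]
pos = subpixel_edge(profile, threshold=100)
print(f"edge at pixel {pos:.3f}")
```

With a sharp, unsaturated gradient, the crossing falls on a steep, nearly linear ramp and the interpolated position is stable; with a washed-out edge, small intensity changes shift the crossing by a large fraction of a pixel.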
Nonuniform contrast will hinder the system's ability to accurately locate a fiducial. In some cases, nonuniform lighting can be mapped out by analyzing the background and applying a transform to the image. This approach to correcting illumination problems has limits, however, and should be used only if all attempts at achieving uniform lighting have failed. A high-resolution lens may also improve contrast. The modulation transfer function (MTF) describes, as a percentage, how well the contrast of an image is preserved as it passes through an optical system. A high-MTF lens will not only resolve finer features, it will also improve image contrast. However, the performance of many machine vision systems is limited by the camera itself, and a high-resolution lens can only do so much.
Other factors that affect contrast are filter color, lighting color, lighting angles, camera and lens angles, and the reflectivity of the subject and background surfaces.
In our first two experiments, the chrome-on-glass reticles created very high contrast images because of the high difference in the reflectivity between the chrome squares and the glass background. Once the reticle was replaced with actual parts in Experiments 3 and 4, the effect of contrast started to play a role in the performance of the vision system. Orientation changes also had some effect on the contrast.
Changes in contrast from one image to another can be partially corrected for through online histogram analysis. If the lighting is uniform, but the intensity has been altered, adjusting presets, such as threshold levels and the subpixelizer's dynamic range, can improve vision performance. A comparison between key elements of the original histogram and the current histogram can be used to generate a histogram transform that will enforce the intended threshold levels and overall dynamic ranges.
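A minimal sketch of such a correction, assuming the lighting is uniform but its overall intensity has drifted: the current image's dynamic range is linearly remapped onto the stored reference range, which restores the intended threshold levels. A production system would compare full histograms; the min/max comparison and function name here are simplifications.

```python
def match_dynamic_range(current, ref_lo, ref_hi):
    """Linearly remap pixel intensities so the current image's dynamic
    range matches a stored reference range (ref_lo..ref_hi). Restores
    the intended threshold levels when lighting drifts uniformly."""
    cur_lo, cur_hi = min(current), max(current)
    scale = (ref_hi - ref_lo) / (cur_hi - cur_lo)
    return [ref_lo + (p - cur_lo) * scale for p in current]

# The reference image spanned 10..200; lighting has dimmed the current image.
dimmed = [5, 30, 60, 100]                     # spans 5..100
corrected = match_dynamic_range(dimmed, 10, 200)
print(corrected)                              # spans 10..200 again
```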
Fiducial quality. The quality of the fiducial greatly affects the ability of a vision system to obtain precise, repeatable measurements of its position. In general, high-contrast fiducials with sharp edges are preferred. Also, the size of the fiducial must be proportional to the size of the field of view. The fiducial should be large enough to provide a significant number of data points for image processing. A minimum fiducial size of 20 pixels is preferred.
If the critical edges of a fiducial are not oriented in the same direction as the camera pixels, the performance of the vision system may be affected if the software used is sensitive to the orientation of the fiducial. This is especially true if the quality of the fiducial is not good. It is better to use a simple, high-quality fiducial than a geometrically complex, low-quality fiducial.
The location of the fiducial is critical, as well. A poorly located fiducial could cause cycle time and accuracy issues.
The difference in results between Experiments 2 and 3 was mainly due to the change in the reference fiducial. In Experiment 3, the reference fiducial was a feature of the gold package instead of a chrome square on the reticle. However, since the difference in the results between the two experiments was so small (±2.4 microns vs. ±2.6 microns), we can conclude that the quality of the two reference fiducials was comparable.
On the other hand, the difference in results between Experiments 3 and 4 was mainly due to the significant change in the quality of the target fiducial. The difference was more than 138 percent (±2.6 microns vs. ±6.2 microns). The quality of the target fiducial on the real part was not very high. Rough part edges made it difficult for the vision software to detect the exact location of the two edges to define the part location. In this case, fiducial quality greatly affected the performance of the vision system.
Camera resolution. Resolution is a measure of the camera's ability to distinguish, or resolve, object details. It is related to the number of pixels used to represent the image within the field of view. As the camera resolution increases, the accuracy of the vision system improves as well.
Camera resolution is usually specified based on accuracy requirements and the size of the field of view. In our experiments, the camera was chosen so that the reference fiducial and the target fiducial appeared in the same field of view. After subpixelization, the final vision system resolution was 0.4 micron. We believe that the 0.77 to 1 micron difference observed in Experiment 1 was partially the result of the finite resolution of the vision system.
Camera calibration. Camera calibration involves the determination of the scaling factor for the size of each pixel within the camera's field of view. In some cases, two different scaling factors are needed for the two axes in the field of view. Camera calibration also involves the determination of the transformation in position and orientation between the camera frame and the motion platform frame.
One common method for camera calibration is to obtain a set of image points of an object whose size is known. Another common method is to calibrate the vision system with respect to the motion system. The scaling factor along one axis can be determined by first determining the position of a target fiducial (a high-quality chrome-on-glass reticle is preferred). Then, the camera is moved a known distance along the same axis within the field of view using the motion stage, and the new position of the target fiducial is determined.
The change in position of the fiducial, in pixels, can be correlated to the distance moved by the motion stage to determine the vision system scaling factor for that axis. The accuracy of the motion system and the quality of the image will determine the overall accuracy of the camera calibration.
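With this method, the scaling factor reduces to a one-line ratio of the encoder-reported move to the observed pixel shift. The function name and figures below are illustrative, chosen to match the 8-micron-per-pixel system described earlier.

```python
def scaling_factor_um_per_pixel(stage_move_mm, pixel_shift):
    """Scaling factor along one axis: a known stage move (read from the
    encoder) divided by the observed shift of the target fiducial, in
    pixels. Function name and figures are illustrative."""
    return stage_move_mm * 1000.0 / pixel_shift

# A 10-millimeter move shifts the fiducial by 1,250 pixels.
print(scaling_factor_um_per_pixel(10.0, 1250.0))  # prints 8.0 (microns per pixel)
```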
In our experiments, the vision system was calibrated against the motion system. The motion system has an encoder accuracy of ±3 microns per meter, and the distance moved during calibration was 10 millimeters. The expected error in the scaling factor can therefore be estimated as ±0.03 micron over the 1,250 pixels spanning the 10-millimeter distance, or 2.4 × 10⁻⁵ micron per raw pixel. After 1/20 subpixelization, this amounts to an insignificant 1.2 × 10⁻⁶ micron per subpixel.
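This error budget can be checked numerically using the figures given above:

```python
encoder_accuracy_um_per_m = 3.0    # +/-3 microns per meter
move_mm = 10.0                     # calibration move
pixel_size_um = 8.0                # raw camera resolution

pixels = move_mm * 1000.0 / pixel_size_um                     # 1,250 raw pixels
move_error_um = encoder_accuracy_um_per_m * move_mm / 1000.0  # +/-0.03 micron
scale_error = move_error_um / pixels                          # per raw pixel
print(f"{scale_error:.1e} micron per raw pixel")              # 2.4e-05
print(f"{scale_error / 20:.1e} micron per subpixel")          # 1.2e-06
```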
The objective of the vision inspection in our experiments was to determine the difference between two measured distances. One method we used to further minimize the effect of the calibration error was to express the distance between the reference fiducial and the target fiducial in units of pixels. The scaling factor was not applied until the very last step, when the difference between the two measurements had to be converted from pixels into microns.
How Did the Story End?
To test what we learned, one final experiment was performed. As in Experiment 4, real parts and real gold packages were used. This time, however, no glue was used, and the part was free to move on the package during transportation, as in the actual production situation. As in Experiment 4, the location of the part was determined with respect to a feature of the package.
Assuming a normal distribution, if the fiducial-to-fiducial distances changed more than ±3 sigma, then we can conclude with 99.74 percent certainty that the placed parts did, in fact, move. Instead of the ±6.2 microns calculated for the ±3 sigma number, an additional 13 percent safety factor was added and a value of ±7 microns was used as a go or no-go criterion. If motion was detected at or below the value of ±7 microns, there would not be enough evidence to tell whether the part has moved or not, and the part would pass the test. If the part had a fiducial-to-fiducial difference of greater than ±7 microns, it would be certain that the part had moved and it would be considered a failed part.
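The go/no-go decision described above reduces to a simple threshold test. The function name and sample distances below are hypothetical; the ±7-micron limit is the ±3-sigma repeatability plus the 13 percent safety factor.

```python
def part_moved(d_die_bonder_um, d_oven_um, limit_um=7.0):
    """Go/no-go test: flag the part as moved only when the change in
    fiducial-to-fiducial distance exceeds the +/-7 micron limit. At or
    below the limit, movement cannot be distinguished from vision-system
    noise, so the part passes."""
    return abs(d_oven_um - d_die_bonder_um) > limit_um

print(part_moved(5000.0, 5003.0))   # 3.0 um change, within noise -> pass
print(part_moved(5000.0, 5009.1))   # 9.1 um change -> reject
```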
Out of the 72 samples used for this experiment, one part (less than 1.5 percent) failed the test and was rejected. For the one rejected, the fiducial-to-fiducial difference was 9.1 microns. During the course of the 3-day test, maintenance work was being done on the machine while the experiment was being conducted. We believe that some external vibration from that work must have caused the part to move in the rejected sample.
The machine has been shipped to the customer's site and put into production. It appears that the motion stage is so smooth and stable that it does not disturb the parts on the package at all during normal operation. The customer is happy with the performance of the machine and does not see a need to install vision inspection systems in subsequent machines. Through careful engineering and an understanding of the critical design parameters, a high-performance automation system can be designed with the most cost-effective equipment possible.