Content provided by Pleora

“I want the best of both worlds.” Sammy Hagar likely wasn’t singing about machine vision and AI while fronting Van Halen, but it’s a hot topic for manufacturers.

Undoubtedly, there’s a lot of hype around AI. Trade shows place a heavy emphasis on it, from companies developing AI models through to more intelligent, self-navigating factory floor robotics. Venture capital firms invested $75 billion in AI startups in 2020. Beyond startups, a number of well-established players in the vision industry are now applying their machine vision expertise to new AI solutions.

But often the hype around new technologies is a few leaps ahead of reality for end-users. That’s not unusual; the best technology companies develop solutions in anticipation of our future needs.

For AI in quality inspection, deployment today sits somewhere between that hype and reality. It’s like the evolution of mobile devices toward the iPhone. Where was the keyboard? Why would I need Internet access on a mobile device when I had a laptop in my briefcase?

Those earlier devices were simple to use, comfortable, and did the job we needed them to do. But as more users adopted the next generation of mobile devices, we quickly saw the gaps in our technology choices. You can see maps on your phone? Browse the web? Wait, there’s a camera?

Machine vision and AI are at a similar point. In conversations with a number of manufacturers at a recent event, the general consensus was that “machine vision does a good job.” It’s proven technology, backed by decades of deployments and technology investment. But as manufacturing becomes more complex and end-users increasingly demand perfection, it has capability gaps.

Rules-based inspection excels until manufacturers are producing products with different thresholds for what counts as an error. That may include regional or customized products, or products that are graded for different end-markets. Traditional rules-based inspection also struggles with irregular materials, such as textiles, or metal and glass where reflection is an issue. As a result, manufacturers face downtime and costs as they complete a secondary inspection following an inconclusive machine vision decision.
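To make that limitation concrete, here is a minimal, hypothetical sketch of a rules-based check in Python. The thresholds and function are illustrative, not from any particular inspection library: a fixed pixel-area limit decides pass or fail, and expressing “acceptable for one market, a defect for another” means maintaining a separate hand-tuned rule set per product variant.

```python
import cv2
import numpy as np

# Hypothetical rules-based check: any dark blob larger than a fixed
# pixel area fails the part. The hard-coded threshold has no notion
# of "acceptable for market A, defect for market B" -- that requires
# a separate, hand-tuned rule set per product variant.
MAX_DEFECT_AREA_PX = 50  # largest acceptable blemish, in pixels

def rules_based_inspect(image_gray: np.ndarray) -> bool:
    """Return True if the part passes inspection (expects 8-bit grayscale)."""
    # Binarize: pixels darker than 60 are treated as potential defects.
    _, mask = cv2.threshold(image_gray, 60, 255, cv2.THRESH_BINARY_INV)
    # Measure each connected dark region; fail on the largest one.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max((cv2.contourArea(c) for c in contours), default=0)
    return largest <= MAX_DEFECT_AREA_PX
```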

AI promises to help fill that gap, primarily by bringing a degree of consistency to those subjective decisions. Textile manufacturing is one example, where a certain level of inconsistency is acceptable, or even desirable, depending on the end-market. Hardwood flooring is another: scratches and defects are unacceptable, but a certain amount of grain and inconsistency is desirable. It’s a subjective decision, difficult to program in a rules-based system but well-suited to a more adaptive AI approach.
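As a contrasting sketch, the same decision can be framed as a learned classification with a confidence score, so “acceptable grain” versus “scratch” comes from labeled examples rather than hand-coded rules. The model file, input tensor name, and class labels below are hypothetical placeholders, not a specific product’s API:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical trained classifier that separates acceptable natural
# variation (wood grain) from true defects. "flooring_grader.onnx",
# the "input" tensor name, and the labels are placeholders.
session = ort.InferenceSession("flooring_grader.onnx")
CLASSES = ["acceptable_grain", "scratch", "dent"]

def ai_grade(tile_chw: np.ndarray) -> tuple[str, float]:
    """Classify a preprocessed CHW image tile; return (label, confidence)."""
    tensor = tile_chw.astype(np.float32)[np.newaxis, ...] / 255.0
    logits = session.run(None, {"input": tensor})[0][0]
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    idx = int(np.argmax(probs))
    return CLASSES[idx], float(probs[idx])
```

The confidence score is what makes the grading adjustable: the same model can serve different end-markets by changing the acceptance threshold per market, rather than rewriting rules.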

So back to Van Halen. With a hybrid AI approach, manufacturers can mix “the best of” machine vision with new AI capabilities. This means retaining existing machine vision infrastructure, processing, and end-user applications, while taking advantage of new edge processing capabilities to add AI.

In this scenario, the AI algorithm is trained and deployed to an edge processing device, which acts as an intermediary between the camera and host PC. The embedded device “mimics” the camera for existing applications, automatically acquiring images and applying the required AI skills. Processed data is then sent to the inspection application, which receives it as if it were still connected directly to the camera.
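Conceptually, the edge device runs a loop like the hedged sketch below: acquire a frame, apply the deployed AI skill, and re-stream the result so the host-side application sees what looks like a camera feed. The capture source and the two stub functions are generic stand-ins, not Pleora’s actual interfaces:

```python
import cv2
import numpy as np

def run_ai_skill(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the deployed AI skill; a real device runs an
    optimized model here and attaches or overlays its results."""
    return frame

def forward_to_host(frame: np.ndarray) -> None:
    """Stand-in for re-streaming. On a real edge device this is the
    camera-mimicking interface the existing inspection application
    already connects to."""
    pass

# Edge-device loop: sit between the physical camera and the host PC.
camera = cv2.VideoCapture(0)  # stand-in for the inspection camera
while True:
    ok, frame = camera.read()
    if not ok:
        break
    processed = run_ai_skill(frame)  # apply the deployed AI skill
    forward_to_host(processed)       # host sees a camera-like stream
camera.release()
```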

As a first step, a manufacturer could begin using AI as a secondary inspection tool by processing imaging data with AI skills in parallel to traditional processing tools. If a defect is detected, processed video from the embedded device can confirm or reject the result as a secondary inspection. Images and data can also be used to continue training the AI model until the manufacturer has complete confidence in its results.
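That parallel, secondary-inspection step might look like the following sketch, where disagreements between the two paths are logged as labeled material for further training. Both inspection functions are stubs standing in for the real rules-based and AI pipelines:

```python
import json
import os
import cv2
import numpy as np

def rules_based_inspect(frame: np.ndarray) -> bool:
    """Existing machine vision decision (stub)."""
    return True

def ai_inspect(frame: np.ndarray) -> tuple[bool, float]:
    """Deployed AI skill's decision plus a confidence score (stub)."""
    return True, 0.99

def secondary_inspection(frame: np.ndarray, frame_id: int) -> bool:
    """Run both paths in parallel; keep disagreements as labeled
    material for retraining until the AI's results are trusted."""
    rules_pass = rules_based_inspect(frame)
    ai_pass, confidence = ai_inspect(frame)
    if rules_pass != ai_pass:
        # Disagreement: save the frame and both verdicts for review
        # and for continued training of the model.
        os.makedirs("review", exist_ok=True)
        cv2.imwrite(f"review/frame_{frame_id}.png", frame)
        with open("review/log.jsonl", "a") as log:
            log.write(json.dumps({"frame": frame_id,
                                  "rules": rules_pass,
                                  "ai": ai_pass,
                                  "confidence": confidence}) + "\n")
    # Until confidence is established, the rules-based verdict stands.
    return rules_pass
```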

Alongside advances in edge processing, new “no code” AI algorithm training tools help bypass the traditionally time-consuming and potentially expensive steps of training, optimizing, and deploying models. With a more intuitive drag-and-drop approach, manufacturers can develop computer vision and AI plug-ins without requiring specialized skills.