Machine Vision Comes of Age
When I first started dealing with machine vision technology about 35 years ago, digital cameras were a laboratory curiosity, frame-grabber boards were just arriving as commercial products, and image processing was a black art. As vision technology matured, image sensors grew to multi-megapixel resolution, while frame grabbers grew in sophistication and eventually merged into the camera’s electronics. Finally, cameras absorbed image-processing computers as well. Imaging software—originally a library of utilities for functions called “thresholding,” “blob analysis,” and other arcane operations—morphed into wizards that engineers could use to create useful applications even if they had only a rudimentary understanding of machine vision concepts.
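The “thresholding” and “blob analysis” primitives mentioned above are simple to sketch. Here is a minimal, hypothetical illustration in pure Python (no vision library assumed; the tiny image and the threshold level are invented for the example) that binarizes a grayscale image and counts the connected bright regions—the “blobs”:

```python
# Sketch of two classic machine-vision primitives: thresholding
# (binarize an image) and blob analysis (count connected bright
# regions). Illustrative only; production systems use optimized
# vision libraries.

def threshold(image, level):
    """Return a binary image: 1 where pixel >= level, else 0."""
    return [[1 if p >= level else 0 for p in row] for row in image]

def count_blobs(binary):
    """Count 4-connected regions of 1-pixels using flood fill."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                blobs += 1                     # found a new blob
                stack = [(r, c)]
                while stack:                   # flood-fill its pixels
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and binary[y][x] and not seen[y][x]):
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return blobs

# Two bright spots on a dark background:
image = [
    [10, 200, 210,  12,  11],
    [12, 205,  14,  13, 190],
    [11,  13,  12, 195, 200],
]
print(count_blobs(threshold(image, 128)))  # prints 2
```

Today a wizard hides exactly this kind of logic behind a point-and-click interface.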
Verifying the proper packing of aluminum cans in cartons using laser-based point scanners would require N+1 sensors, where N is the number of cans per case. A single smart camera can do the job.
Today, vision technology is ready to take its place as a major sensing technology. Control engineers are finding it can be an indispensable tool for gathering system status information that can feed automatically into control systems—especially when there is motion involved.
Many vision suppliers have dropped terms like “camera” and “machine vision system” in favor of terms like “vision appliance” and “vision sensor,” which better describe what their offerings can accomplish for engineers. Along the way, emphasis has shifted from system specifications to discovering what vision can do for you. What vision can do is collect huge amounts of real-world data very rapidly and turn it into information concisely describing what engineers need to know.
With the new emphasis on what vision can do, instead of what it is, prices for some units have actually come down. Inexpensive units designed with just enough resources to do a specific class of tasks can now compete on price as well as performance with more traditional point-sensor arrays gathering real-time data in control applications. The advantage is that vision systems can be easier to set up, easier to train, and easier to maintain than complex multi-unit point-sensor arrays.
To be clear on the difference, point sensors measure a single parameter at a single location in space. Examples include thermocouples, inductive proximity sensors, position encoders, and laser beams. Vision systems, on the other hand, use area or line-scan camera technology to acquire data from a large number of points within a clearly defined field of view (FOV). They also cover a wide spectral range, from infrared (IR), through visible light (VIS), to ultraviolet (UV). This image-acquisition technology is backed up by powerful computing resources that extract exactly the information needed by the control system, whether that is measuring the temperature of a certain point on a motor, reporting the statistical distribution of quality-parameter variations, or checking proper placement of components on a printed circuit board.
Harley engineers catch the vision
The idea that vision, when used cleverly, can replace a surprising number of sensors is not new. Seven years ago I wrote an article on what was then a new robotic inspection system at the Harley-Davidson motor assembly plant. The system had been built by a Midwestern system integrator, whom I interviewed for the article.
Three ways to verify a threaded fastener: observe proper seating, count threads protruding from the assembly, or observe deformation of a lock washer.
Initially, the system was installed to verify correct installation of the Twin Cam 88 engine timing chain. Needless to say, word spread fast among the engineers at Harley that a vision system was being installed, and nearly everyone had an idea on how they would like to use it. By the time installation was complete, the system was programmed to perform 30 tests, including checking to make sure the sprockets were fully tightened down.
Ensuring that a fastener is tightened does not seem the sort of thing a vision system would be good at, but it is. There are at least three ways vision can verify that a nut is torqued in a manufacturing environment:
Verify proper seating — When an automated system installs a nut, it drives the nut home against something else, such as a washer or a captured part that the nut is intended to retain. A machine vision system can easily determine whether the nut is seated against that surface: if it is not, a dark shadow will expose the gap where there should be only a thin line between the nut and the surface it is pressed against.
Count threads — Today’s precise and repeatable automated manufacturing systems make every component the same, including the number of threads on any threaded fastener. By measuring the amount of thread protruding from a nut, a vision system can actually observe how much a bolt stretches when properly torqued.
Observe deformation — If the nut captures a split-ring or star lock washer, a wave washer, or even a captured deformable component, a vision system can see whether and by how much the captured part has deformed due to compression.
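The thread-counting check lends itself to a simple sketch. Assume the camera yields a one-dimensional intensity profile sampled along the protruding bolt axis, with thread crests reading bright and roots reading dark; counting dark-to-bright transitions then counts threads. The profile data and threshold below are invented for illustration:

```python
# Hypothetical sketch: count protruding threads from a 1-D
# intensity profile sampled along the bolt axis. Crests are
# bright, thread roots are dark.

def count_threads(profile, level=128):
    """Count rising edges (dark-to-bright transitions) in the profile."""
    bright = [p >= level for p in profile]
    return sum(1 for a, b in zip(bright, bright[1:]) if not a and b)

# Synthetic profile: three thread crests above the nut face.
profile = [30, 40, 200, 210, 50, 45, 190, 205, 60, 35, 195, 200, 40]
print(count_threads(profile))  # prints 3
```

Comparing the count against the number expected for a properly torqued bolt flags under- or over-tightened fasteners.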
Since writing that article, I’ve seen a large number of applications where engineers used vision to solve difficult sensing problems by thinking outside the box. The box, for machine vision, is of course automated inspection.
Seeing outside the box
Everyone knows that you can use machine vision to inspect assemblies, gauge dimensions, and read logos. It’s out-of-the-box applications that are the most interesting, and potentially the most useful for control engineers.
For example, last year I ran across an application where an electric utility wanted to monitor the level in oil-filled insulators atop high-voltage transformers. To transport large amounts of electric power efficiently, utilities boost the voltage into the megavolt range. Dissipative losses in transmission lines scale with the current carried, and the current required to transport a given amount of power is inversely proportional to the voltage. A transmission line operating at, say, 1.2 megavolts can carry 10,000 times the power per ampere that a line operating at 120 V can before exceeding its safe current level. To boost the voltage, or return it to service level, utilities use transformers.
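The scaling argument can be made concrete with a little arithmetic. For a fixed delivered power P, line current is I = P/V, so the resistive loss I²R falls with the square of the voltage. A quick sketch using the article’s 1.2 MV vs. 120 V comparison (the delivered power and line resistance are illustrative values, not from the article):

```python
# Resistive line loss for fixed delivered power P over a line of
# resistance R: I = P / V, loss = I**2 * R, so loss scales as 1/V**2.

def line_loss(power_w, volts, resistance_ohm):
    current = power_w / volts              # amperes in the line
    return current ** 2 * resistance_ohm   # watts dissipated in the line

P, R = 1.0e6, 10.0                         # 1 MW delivered, 10-ohm line
low  = line_loss(P, 120.0, R)              # service-level voltage
high = line_loss(P, 1.2e6, R)              # transmission-level voltage
print(low / high)                          # loss ratio: (1.2e6 / 120)**2, i.e. 1e8
```

The 10,000:1 voltage ratio gives a 10,000:1 advantage in power per ampere, and a 100,000,000:1 reduction in resistive loss for the same delivered power.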
High voltage transformers, however, are subject to power-robbing internal discharges. To prevent such discharges, transformer builders fill all the interior spaces with high-purity oil, which is a non-conductor. In addition to filling all the spaces, liquid oil resists damage from electrical discharges, so it has a much longer service life than solid insulators, such as epoxy or even glass.
The utility realized that the transformer oil picks up heat from the current-carrying copper transformer windings and transports it out to the case. If the oil level is low, the case will not be completely filled. The side wall will be warm up to the oil level, and cooler above it. The utility used an infrared camera to measure the height at which this abrupt temperature change occurred, and thereby measured the oil-fill level remotely without having to go past the safety fence enclosing the substation. Nothing had to be shut down, and a single picture could measure levels in many transformers. While the particular application was not automated at the time, it is easy to see how such a technique could be automated and used to control the level of any warm or cool liquid.
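Automating the oil-level trick could be as simple as taking a vertical temperature profile from the IR image (one reading per pixel row, bottom to top) and reporting the row with the steepest temperature drop. The temperature values below are invented for illustration:

```python
# Hypothetical sketch: locate the oil level as the row with the
# steepest temperature drop in a bottom-to-top IR profile of the
# transformer case wall.

def oil_level_row(temps_c):
    """Index of the largest drop between adjacent rows (bottom to top)."""
    drops = [temps_c[i] - temps_c[i + 1] for i in range(len(temps_c) - 1)]
    return max(range(len(drops)), key=drops.__getitem__)

# Warm below the oil line, cool above it:
profile = [55.2, 55.0, 54.8, 54.7, 31.0, 30.5, 30.2]
print(oil_level_row(profile))  # prints 3: steepest drop is between rows 3 and 4
```

Converting that row index to a physical height, via the camera’s known geometry, would give a level reading a control system could act on.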
Because machine vision can measure any variable whose value translates into a difference in UV/VIS/IR emission or reflection, and observe up to millions of locations simultaneously, its range of applicability is limited only by the imagination of the engineers using it.
Today’s machine vision systems leverage advanced computer technology, including compact, powerful processors, advanced software algorithms, sophisticated user interfaces, and high-speed networks to acquire data, reduce it to the information needed, and deliver it to the control system on an as-needed basis. Packaged as “smart vision” products, these systems offer control engineers flexible sensing technology at a competitive cost-of-ownership level.
C.G. Masi is a senior editor with Control Engineering. Contact him by email at email@example.com