Warehouse, collaborative robotics expand vision-guided robotics universe

Vision-guided robots have evolved into tools that can be used to automate all kinds of new tasks.

By Winn Hardin, AIA August 10, 2017

The evolution of vision-guided robotics (VGR), especially applications that can "see" in 3-D, is allowing manufacturers and distribution centers to precisely automate tasks ranging from inspection at multiple distances to bin picking of random parts.

"General customer demands in VGR boil down to speed, accuracy, and reliability in the sense that the systems need to be able to pick or guide without error from the machine vision side," said David Dechow, staff engineer for intelligent robotics/machine vision with Fanuc America Corp. Robotics suppliers also are meeting users’ expectations for lower costs as even the most complex 3-D vision guidance systems are coming down in price.

In terms of system deployment, traditional VGR applications such as machining, machine tending, and assembly, particularly in the automotive industry, remain strong. Meanwhile, "we are seeing heavy growth in the cutting-edge fields of aerospace, warehousing and distribution, and order fulfillment," Dechow says.

Dechow credits increased deployment in these areas to technological improvements: higher camera resolution, better algorithms for locating and differentiating objects, and databases of known objects that support part tracking and mixed-part picking.

One segment of the warehouse environment adopting VGR is the palletizing and depalletizing of structured and unstructured loads of boxes and even the unloading and loading of trailers. Furthermore, while machine vision has been guiding robots in the industrial space to pick parts from a bin and place them on a conveyor, the technology is now staking its claim in large distribution centers where the speed/accuracy/reliability triad is critical in the movement of high-volume products.

"VGR is used for individual part or mixed-bin picking to fulfill random, on-demand orders," Dechow says. "Of course that involves some 3-D vision for robotics and other technologies associated with robotics, such as gripping technology and force sensing to accurately and reliably pick parts for those types of applications.

"While not every robot guidance application requires 3-D vision, bin picking is nearly impossible without it. The biggest challenge in 3-D robotic guidance for bin picking in order fulfillment isn’t the vision itself but rather finding a gripper that is suited for the wide mix of parts and the broad way they will be presented to the system, Dechow says. "Grasping a teddy bear is dramatically different than grasping a box containing a USB drive."

Simplifying the user experience

Machine vision integrator Artemis Vision is fielding fewer requests for picking and placing parts on conveyors for assembly and inspection. Instead, interest in VGR is coming from customers looking to drive instrumentation to specific positions for precise inspection and measurement. "Customers also are asking to guide robots for cutting, deburring, or other actions taken on a part," says Tom Brennan, president of Artemis Vision.

One thing helping companies such as Artemis develop VGR applications is the availability of more off-the-shelf 3-D machine vision options than there were even five years ago. "These tools make it a lot easier to test the system because you can essentially set things up in the lab and see how effectively the camera generates a 3-D point cloud," Brennan says. "You’re not setting up your own light, your own camera, and your own lens and trying to calibrate everything to make the package work."
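The kind of bench test Brennan describes can be as simple as loading a scan and checking its density and noise before committing to a design. Below is a short sketch assuming the open-source Open3D library and a hypothetical scan.ply file; the specific camera and tooling an integrator uses would differ.

```python
# Quick lab check of an off-the-shelf 3-D camera's output: how dense and
# clean is the point cloud it generates? Assumes Open3D and a file named
# "scan.ply" -- both illustrative choices.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")
print(f"raw points: {len(pcd.points)}")

# Statistical outlier removal gives a rough read on sensor noise: a large
# fraction of rejected points suggests poor lighting or difficult reflectance.
clean, kept = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
print(f"kept after denoising: {len(kept)} ({100 * len(kept) / len(pcd.points):.1f}%)")

o3d.visualization.draw_geometries([clean])  # visual check for coverage and holes
```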

On the hardware side, Brennan sees liquid lens technology as a valuable tool to enable flexible configurability in VGR because it allows inspections at multiple object distances in a way that’s quicker and more robust than using a lens with a motorized focus. "A camera with a liquid lens mounts on the robot and performs inspections all around a part, whether the object distance is 2 inches or 8 inches," he says.
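A quick back-of-the-envelope calculation shows why this works: with the sensor fixed, refocusing is just a change in optical power, which a liquid lens applies electrically rather than by moving glass. The sketch below applies the thin-lens equation to the 2-inch and 8-inch distances mentioned above; the 12 mm image distance is an assumed value for illustration only.

```python
# Thin-lens equation 1/f = 1/d_o + 1/d_i, with the lens-to-sensor distance
# held fixed. The 12 mm image distance is an assumption for illustration.
IN_TO_M = 0.0254
d_i = 0.012  # assumed lens-to-sensor distance in meters

def required_power(d_o_m: float, d_i_m: float = d_i) -> float:
    """Optical power in diopters needed to focus an object at d_o_m meters."""
    return 1.0 / d_o_m + 1.0 / d_i_m

for inches in (2, 8):
    p = required_power(inches * IN_TO_M)
    print(f"object at {inches} in -> {p:.1f} diopters (f = {1000 / p:.2f} mm)")
# The roughly 15-diopter swing between the two distances is within the
# tuning range of typical liquid lenses, with no moving focus mechanism.
```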

Other developments in VGR hardware and software aim to simplify the end-user experience. In September 2017, Fanuc will introduce a robot controller called the R-30iB Plus. iRVision, the company’s robotic vision software, incorporates the camera interface hardware into the controller’s computer architecture, and the full iRVision software suite runs in the robot controller’s processing environment alongside the teach pendant and PC interfaces. Additionally, iRVision’s general-purpose tools, from pattern matching and blob analysis to inspection, are continually being updated and improved, according to Dechow.
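For readers unfamiliar with the term, blob analysis finds connected regions in a thresholded image and reports their size and position, one of the oldest tools for 2-D part location. The generic OpenCV sketch below illustrates the technique only; it is not Fanuc’s iRVision implementation or API.

```python
# Generic blob analysis: threshold an image of parts (assumed bright on a
# dark background), then report each connected region's area and centroid.
import cv2

img = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
for i in range(1, n):          # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    if area < 200:             # drop specks of noise
        continue
    cx, cy = centroids[i]
    print(f"blob {i}: area {area} px, centroid ({cx:.1f}, {cy:.1f})")
```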

Applications will be driven by a wizard that walks the engineer or end user through some of the difficult aspects of robotic guidance, such as camera setup, calibration, and initial application configuration. Furthermore, Fanuc is making changes and additions to some of its 3-D VGR tools and components, as well as enhancements to existing algorithms.
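Calibration here means mapping pixels to real-world units by imaging a known target, which is the step such wizards automate. Below is a generic sketch using OpenCV and a chessboard target, purely to show what a wizard does on the user’s behalf; Fanuc’s own targets and tooling differ.

```python
# Intrinsic camera calibration from several views of a 9x6 chessboard.
# The square size and file names are assumptions for illustration.
import glob
import cv2
import numpy as np

pattern = (9, 6)               # inner corners of the chessboard
square = 0.025                 # square size in meters (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):   # several views of the target
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
print(f"reprojection error: {rms:.3f} px\nintrinsics:\n{K}")
```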

Dechow expects that such integrated functionality will someday be commonplace in a robot controller. "Machine vision, along with other sensing technologies and capabilities, ultimately will be standard embedded parts of any complete robotic system, although the exact architecture of such a system might not be something that we can completely envision today," he says.

Let’s work together

The arrival of collaborative robots on the factory floor also presents new opportunities for vision-guided projects. In collaborative applications, robots and humans work safely side by side, without the fences or light curtains that separate people from traditional industrial robots bolted to the floor.

Factories are deploying collaborative robots for a variety of applications, including multimachine inspection. "Collaborative robots allow end users to have flexible systems where they can bring the camera to many positions without having to provide all the safety mechanisms or purchase tons of cameras," says Artemis Vision’s Brennan.

To create a flexible collaborative work cell, "vision-guided robotics is the technology to enable it [to perform a task]," Dechow says. "The natural progression would be having the collaborative robot see what is going on in its entire space."

The nature of collaborative applications, however, presents some challenges when deploying VGR. In a fixed cell, where the robot is guarded and the camera constrained, "we can set the lighting and imaging so it doesn’t change dramatically," Dechow says. But in a collaborative work cell with more freedom of movement, a human can interfere with those parameters and affect the reliability of the robotic processes.

"As we provide better and easier-to-use machine vision in those environments, particularly in application-based processes, we will better overcome those collaborative challenges," Dechow says.

While collaborative robots are promoted as intrinsically safe, manufacturers are still on the learning curve with this relatively new technology. "The biggest challenge goes back to safety because that familiarity is not there yet, and there hasn’t been a definition of how to implement it around a collaborative robot," says Shelley Fellows, vice president of operations for machine vision integrator Radix Inc.

The goal of the integrator, Fellows says, is to work closely with the customer to establish a comfort level for the operator while maximizing the efficiency of the collaborative robot.

"Perhaps the biggest obstacle in implementing VGR is the misconception among customers that the technology is hard to use. That comes from the need for more information," Dechow says. "We need to better train engineers on how to use vision-guided robotics and how to use it efficiently to make it very reliable. If we have a robot that can see and work in a more flexible environment, that makes the customer more productive and the application more efficient."

Winn Hardin is contributing editor for AIA. This article originally appeared on Vision Online. AIA is a part of the Association for Advancing Automation (A3). A3 is a CFE Media content partner. Edited by Carly Marchal, content specialist, CFE Media, cmarchal@cfemedia.com. 

Original content can be found at www.visiononline.org.