Robotic vision systems: What is doable?

Although robotic vision is constantly evolving, a technological gap remains between how a robot interprets what it sees and how a human does. And what about 3-D versus 2-D vision?

By Mathieu Bélanger-Barrette June 1, 2016

When someone is inexperienced with robotics, it’s easy to misunderstand a robot’s vision capabilities. How the robot sees things is probably the most complex part of the robotic process. Robotic technology is closing the gap in flexibility and grasping ability, but there is still a large technological gap between what a robot can see and interpret and what a human can. With robotic vision constantly evolving, where does this part of the equation currently stand in industrial robotics?

Evaluating robotic vision systems

A few years ago, customers hesitated whenever robotic vision came up. Industrial cameras were not as advanced as they are today, and robotic logic was somewhat unreliable, so most of the "dream applications" customers were considering were not technically possible. Now, with the democratization of camera technology (thanks to smartphones) and the advancement of user-friendly "smart cameras," vision technology is easier than ever to introduce into a robotic cell. Some limitations remain:

  • Adaptability: Most robotic vision applications rely on well-defined tasks with pre-programmed features. They can detect a particular pattern, and detect it really well (see the pattern-matching sketch after this list). However, if something really unusual passes in front of the camera, the application might miss it. A good example of this is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) in Figure 1, where simple letters that are normally easy to detect are slightly deformed so that no vision or character-recognition system can reliably read them. Although this is a problem at the moment, it is only a matter of time before robotic vision systems overcome this obstacle.
  • Detecting trends: Unless the vision system has been programmed to detect trends or patterns, it won’t find them. Humans are really good at interpreting and associating things; robotic vision systems struggle with associations. Each detected feature is often treated individually and listed in a report for a human to analyze. In a quality-check application, for example, a list of errors is shown to a human worker, who can analyze them and determine whether a machine somewhere in the manufacturing process has a problem. As of right now, however, a vision system cannot conclude on its own that the milling machine has a broken tool and then stop the production line.
  • Reliability: The main advantage of a vision system is its consistency and reliability. If a robotic vision system is looking in the right place, it will always see when something is wrong. Unlike human eyes, it does not get tired and always applies the same parameters, whereas a human worker becomes more error-prone over the course of a shift as fatigue sets in and attention fades. One of the reasons manufacturers are introducing robots is their consistency and accuracy, so it only makes sense to fit them with a vision system that has the same attributes.
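
To make the adaptability limitation concrete, here is a minimal Python sketch of the kind of pre-programmed pattern detection described above, using OpenCV template matching. The file names and the 0.8 score threshold are illustrative assumptions: the trained pattern scores high, while a slightly deformed instance falls below the threshold and is simply missed.

    # pattern_match.py - minimal sketch of pre-programmed pattern detection.
    # Assumes OpenCV (pip install opencv-python); file names are illustrative.
    import cv2

    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # camera snapshot
    pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)  # trained feature

    # Normalized cross-correlation scores how well the pattern fits at each position.
    scores = cv2.matchTemplate(scene, pattern, cv2.TM_CCOEFF_NORMED)
    _, best, _, location = cv2.minMaxLoc(scores)

    # A fixed threshold works well for the trained pattern, but a slightly
    # deformed instance (the CAPTCHA case above) scores lower and goes undetected.
    if best >= 0.8:  # 0.8 is an assumed, application-specific threshold
        print(f"Pattern found at {location} (score {best:.2f})")
    else:
        print(f"No match (best score {best:.2f}) - unusual input goes undetected")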

Robotic vision camera location

Depending on the application, the vision system will be placed in different locations in the robotic cell. With all the different types of robots, cameras, and applications out there, there is an almost unlimited number of choices for where a camera can be placed and what to do with it. These are the principal ways to set up the camera:

  • End of arm: Since different applications need to monitor what the robot is grasping, some robot manufacturers are embedding cameras directly in the robot wrist. This allows the camera to move in various directions in space, locate a part, and, using the robot’s kinematics, grasp it. Since the camera is often close to the gripper, it can also monitor whether the part has been grabbed properly or dropped during manipulation. Placing a camera on the end of a robot’s arm means it’s constantly moving; if a picture of the grasping area is required, the robot must stop in the right position and hold the camera steady before the snapshot is taken. If an application requires a really short cycle time, this option may need to be rethought.
  • Scene application: Another type of vision system is fixed and constantly watches a scene; for example, one where parts are presented in different positions and orientations on a conveyor. Once a part passes in front of the camera, a snapshot is taken and analyzed to determine the part’s position and orientation relative to the robot, as shown in Figure 2 (see the part-location sketch after this list). The robot can then move in and grasp it.
  • Cell monitoring: There is also a type of robotic vision used for safety. A camera or a set of cameras can be installed on the robot, or aimed at it, to detect whether a human is entering the robot’s workspace. Since most collaborative robots do not have external safety guarding, this method can be used to regulate the robot’s speed according to the distance between the robot and the worker. Note that each robotic cell integration requires a risk assessment according to regional rules and regulations.
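
As a concrete illustration of the scene-application setup, here is a minimal Python/OpenCV sketch that locates a part in a snapshot, measures its orientation, and maps its pixel position into robot coordinates. The binarization threshold and the pixel-to-robot matrix are illustrative assumptions; a real cell would derive that transform from a hand-eye calibration routine.

    # part_location.py - sketch of locating a part in a fixed-camera snapshot.
    # Assumes OpenCV 4; threshold and calibration values are illustrative.
    import cv2
    import numpy as np

    img = cv2.imread("conveyor.png", cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)  # assumed backlit part

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    part = max(contours, key=cv2.contourArea)                  # largest blob = the part

    # minAreaRect returns ((cx, cy), (w, h), angle): position and orientation in pixels.
    (cx, cy), _, angle = cv2.minAreaRect(part)

    # Hypothetical hand-eye calibration: a 2x3 affine map from pixels to robot mm,
    # normally obtained by jogging the gripper to known points in the image.
    pix_to_robot = np.array([[0.5, 0.0, -320.0],
                             [0.0, 0.5, -240.0]])
    rx, ry = pix_to_robot @ np.array([cx, cy, 1.0])
    print(f"Grasp at robot ({rx:.1f}, {ry:.1f}) mm, gripper rotated {angle:.1f} deg")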

What is the difference between 2-D and 3-D cameras?

Vision technology is evolving quickly. In recent years, 3-D movies have grown in popularity because of the realism they add to the viewing experience. The same is happening in industrial applications: 3-D vision technology is a step up in complexity and can provide more information about an object the robot has to manipulate. However, improvements still need to be made.

Two-dimensional vision systems have a better track record and are simpler to use. Many 2-D cameras with really good feature sets are available, and the price of high-quality 2-D cameras is dropping dramatically because of their widespread use, particularly in smartphones and other high-tech applications.

Two-dimensional vision systems have problems with shadows. Most 2-D vision applications use a flash or lights oriented the same way as the camera to limit shadows. Other applications place parts on illuminated tables so the camera can clearly see the contour of the part. Popular 2-D vision applications include the following (a minimal code-reading sketch follows the list):

  • Inspection
  • Part location
  • QR code reading
  • Barcode reading.
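
As an example of the code-reading applications above, the following minimal Python sketch uses OpenCV’s built-in QRCodeDetector. The file name is an illustrative assumption.

    # read_qr.py - sketch of QR code reading, one of the 2-D applications above.
    # Assumes OpenCV (pip install opencv-python); file name is illustrative.
    import cv2

    img = cv2.imread("label.png")
    detector = cv2.QRCodeDetector()

    # detectAndDecode returns the decoded text, the code's corner points,
    # and a rectified image of the code itself.
    data, points, _ = detector.detectAndDecode(img)

    if data:
        print(f"Decoded: {data}")
    else:
        print("No QR code found - check lighting and shadows (see above)")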

Three-dimensional vision is the next big thing, but it is still not technologically ready for widespread use in industry. Because it is more complex to use and program, introducing 3-D vision into a robotic cell still requires a very good understanding of the technology. Furthermore, very few 3-D vision libraries and camera models are available on the market, so these cameras remain more expensive and less proven than their 2-D counterparts. Three-dimensional vision systems are generally used for scene observation, part location, part modeling, and complex vision applications.
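
To make the "more information" point concrete, here is a minimal Python sketch of what a 3-D camera adds: every pixel carries a depth value that, combined with the camera’s intrinsic parameters, becomes a full 3-D point. The intrinsic values and the random depth map below are illustrative assumptions, not data from any specific camera.

    # depth_to_points.py - sketch of turning a depth image into 3-D points.
    # Intrinsic parameters (fx, fy, cx, cy) are illustrative assumptions.
    import numpy as np

    fx, fy = 600.0, 600.0   # assumed focal lengths in pixels
    cx, cy = 320.0, 240.0   # assumed optical center

    depth = np.random.uniform(0.4, 0.8, (480, 640))  # stand-in for a real depth map (m)

    # Pinhole model: back-project each pixel (u, v) with depth z to camera x, y, z.
    v, u = np.indices(depth.shape)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.dstack((x, y, depth))  # 480 x 640 x 3: one 3-D point per pixel

    print(points.shape)  # the extra spatial data a 2-D camera cannot provide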

Machine vision system integration

Integrating a vision system depends on what the device needs to do. Here are some tips for figuring out what to take into account when introducing a vision system into a robotic cell:

  • Sit down with the operator and document the actual process. If "blowing on the part" is not written anywhere in the operation guidelines, but the operator does it every day because the parts are always dusty when he/she receives them, then an air blower could be integrated into the automated system. 
  • If the parts’ normal surface appearance is variable, keep in mind that the robot won’t be as adaptable as humans are. Consider reordering some process steps to make the vision process easier for the robot. 
  • Define the boundary between what is acceptable and what is not numerically: the maximum defect length, the accepted color, and so on (see the threshold sketch after these tips). When a boundary cannot be defined numerically, more examples will be needed to train the system.
  • Include the operator in the automation process. Operators are knowledgeable when it comes to a part’s specifications, defect definitions, and other variables that will influence the appearance of the part; use their knowledge. 
  • Train the system with parts from various batches, made on different days, to capture multiple "normal" surface appearances. The vision system will then be adjusted to take some variation into account. Keep in mind that the goal is a system that is accurate yet flexible, without being desensitized to the point of missing real defects.
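
As a minimal sketch of the numeric boundary from the third tip, the following Python snippet shows that once "acceptable" is expressed in numbers, the check itself is a simple comparison. The tolerance values here are illustrative assumptions, not real specifications.

    # acceptance_check.py - sketch of a numeric pass/fail boundary (third tip above).
    # Tolerance values are illustrative assumptions, not real specifications.
    MAX_DEFECT_LEN_MM = 2.0        # longest defect the part spec tolerates
    ACCEPTED_HUE_RANGE = (20, 40)  # accepted surface color band (HSV hue)

    def part_is_acceptable(defect_lengths_mm, measured_hue):
        """Return True when every measured value falls inside the defined boundary."""
        lo, hi = ACCEPTED_HUE_RANGE
        if not (lo <= measured_hue <= hi):
            return False
        return all(length <= MAX_DEFECT_LEN_MM for length in defect_lengths_mm)

    # Example: one 2.4 mm scratch fails the check even though the color passes.
    print(part_is_acceptable([0.8, 2.4], measured_hue=31))  # -> False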

Finally, it all comes down to the robot’s function. How much variability and flexibility are you prepared to accept in the application? Start small and build from experience. Walk around the shop floor, look at existing applications, and choose the one that is easiest to automate with a vision system. Once that application is running, you can learn from errors and move on to more complex applications. Always work with the operator of the robotic cell to make sure every part of the process has been identified. And most importantly, remember to control a process before automating it.

Mathieu Bélanger-Barrette is a production engineer at Robotiq. Edited by Emily Guenther, associate content manager, CFE Media, Control Engineering, eguenther@cfemedia.com.

MORE ADVICE

Key Concepts

  • Limitations of vision technology and robotic cells
  • How to integrate robotic vision systems
  • The benefits of robotic vision systems.

Consider this

Do the benefits of integrating robotic vision systems create more solutions or raise more questions?