Robots need to understand and think more
Robots need to know why they are doing a job if they are to work effectively and safely alongside people. In simple terms, machines need to understand intent the way humans do, rather than performing tasks blindly, without context.
According to an article by the National Centre for Nuclear Robotics, based at the University of Birmingham, U.K., this could herald a profound, but necessary, change for the world of robotics.
Lead author Dr. Valerio Ortenzi at the University of Birmingham argued the shift in thinking will be necessary as economies embrace automation, connectivity and digitization and levels of human-robot interaction increase dramatically.
The paper explores the issue of robots using objects. “Grasping” is an action perfected long ago in nature, but one that represents the cutting edge of robotics research.
Most factory-based machines are “dumb,” blindly picking up familiar objects that appear in pre-determined places at just the right moment. Getting a machine to pick up unfamiliar objects, randomly presented, requires the seamless interaction of multiple, complex technologies. These include vision systems and advanced AI so the machine can see the target and determine its properties. Potentially, sensors in the gripper are required so the robot does not inadvertently crush an object it has been told to pick up.
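The pipeline described above can be sketched in a few lines. This is a minimal, illustrative sketch only: the class, function names, widths, and thresholds are all assumptions invented for this example, not part of any real robot API.

```python
# Illustrative sketch: a vision stage proposes grasp candidates, and a
# hardware feasibility check stands in for the gripper-side safeguards
# (e.g. force limits from tactile sensing) that keep fragile objects intact.
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    width_mm: float   # gripper opening the grasp requires
    quality: float    # vision-model confidence, 0..1

def propose_grasps() -> list:
    """Stand-in for a vision system plus AI model that proposes grasps
    for an unfamiliar, randomly presented object."""
    return [GraspCandidate(width_mm=32.0, quality=0.9),
            GraspCandidate(width_mm=80.0, quality=0.7)]

def feasible(g: GraspCandidate, max_width_mm: float = 60.0) -> bool:
    """Reject grasps the hardware cannot execute."""
    return g.width_mm <= max_width_mm

# Pick the highest-confidence grasp the gripper can actually perform.
best = max((g for g in propose_grasps() if feasible(g)),
           key=lambda g: g.quality, default=None)
```

Even this toy version shows why the interaction of vision, AI, and sensing has to be seamless: a grasp that looks good to the vision model can still be infeasible or unsafe for the hardware.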
Context is critical
Even when all this is accomplished, researchers highlighted a fundamental issue: what has traditionally counted as a “successful” grasp for a robot might be considered a real-world failure because the machine does not take into account what the goal is and why it is picking up an object.
The paper cites the example of a robot in a factory picking up an object for delivery to a customer. It successfully executes the task, holding the package securely without causing damage. Unfortunately, the robot’s gripper obscures a crucial barcode, which means the object cannot be tracked, and the firm has no idea if the item has been picked up or not; the whole delivery system breaks down because the robot does not know the consequences of holding a box the wrong way.
Ortenzi and his co-authors give other examples, involving robots working alongside people. “Imagine asking a robot to pass you a screwdriver in a workshop. Based on current conventions the best way for a robot to pick up the tool is by the handle. Unfortunately, that could mean that a hugely powerful machine then thrusts a potentially lethal blade towards you, at speed. Instead, the robot needs to know what the end goal is: to pass the screwdriver safely to its human colleague,” Ortenzi said.
“What is obvious to humans has to be programmed into a machine, and this requires a profoundly different approach. The traditional metrics used by researchers over the past 20 years to assess robotic manipulation are not sufficient. In the most practical sense, robots need a new philosophy to get a grip.”
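The screwdriver and barcode examples share one underlying idea: grasp selection should score candidates against the task goal, not just against grasp stability. The sketch below makes that concrete; the tasks, weights, and scores are invented for illustration and do not come from the paper.

```python
# Hedged sketch of goal-aware grasp selection: each candidate is scored on
# both classic stability and how well it suits the end goal (use the tool,
# hand it over safely, or keep a barcode face unobstructed).

TASK_PREFERENCE = {
    "use_tool": "handle",  # robot uses the screwdriver itself
    "handover": "shaft",   # leave the handle free for the human colleague
    "ship_box": "sides",   # keep the barcode visible for tracking
}

def score_grasp(grasp_part: str, stability: float, task: str) -> float:
    """Blend grasp stability with task suitability (equal weights, assumed)."""
    task_fit = 1.0 if grasp_part == TASK_PREFERENCE[task] else 0.2
    return 0.5 * stability + 0.5 * task_fit

def choose_grasp(candidates, task):
    """candidates: list of (part_to_grasp, stability) tuples."""
    return max(candidates, key=lambda c: score_grasp(c[0], c[1], task))

candidates = [("handle", 0.95), ("shaft", 0.80)]
print(choose_grasp(candidates, "use_tool"))  # handle: stable and apt
print(choose_grasp(candidates, "handover"))  # shaft: safer despite lower stability
```

With a stability-only metric, the handle grasp always wins; adding the task term is what flips the choice for a handover, which is exactly the shift in evaluation the researchers are arguing for.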
The research was carried out in collaboration with the Centre of Excellence for Robotic Vision at Queensland University of Technology, Australia, Scuola Superiore Sant’Anna, Italy, the German Aerospace Center (DLR), Germany, and the University of Pisa, Italy.
Suzanne Gill is editor, Control Engineering Europe. This article originally appeared on the Control Engineering Europe website. Edited by Chris Vavra, production editor, Control Engineering, CFE Media, firstname.lastname@example.org.
Keywords: robotics, programming, artificial intelligence
As robots evolve, their ability to think and react to context must evolve as well.
A robot might be able to complete a task, but if it’s done in the wrong context, the act is meaningless and possibly dangerous.
Researchers are working to program robots so they act in a more nuanced and human way.
Anticipating human variability will be robots’ greatest challenge.