Artificial intelligence’s impact on the robotics industry

Researchers and manufacturers are using artificial intelligence (AI) to teach robots how to learn and handle complex tasks, but the technology’s actual capabilities fall well short of what many people believe robots can achieve.

By Tanya M. Anandan, RIA April 4, 2018

Researchers and entrepreneurs with decades of work in artificial intelligence (AI) are trying to help people better understand its elusive nature. They’re working to reduce the confusion and misconceptions around AI and show how it’s being used in robotics for industrial applications.

"I think the biggest misconception is how far along it is," said Rodney Brooks, chairman and CTO of Rethink Robotics. "We’ve been working on AI, calling it AI since 1956 (when the father of AI, John McCarthy, coined the term "artificial intelligence"), so roughly 62 years. But it’s much more complicated than physics, and physics took a very long time. I think we’re still in the infancy of AI."

Brooks believes much of the AI hype comes from recent press covering jaw-dropping demonstrations of anthropomorphic and animal-inspired robots, or spectator sports pitting AI systems against humans playing chess, Jeopardy!, ping-pong, and Go. AI is here, but it is taking baby steps.

Some of the misunderstanding stems from equating machine performance with competence. When we see a human perform a certain task, we can assume a general competence—skills and talent—the person must possess to perform that task. It’s not the same with AI.

"An AI system can play chess fantastically, but it doesn’t even know that it’s playing a game," Brooks said. "We mistake the performance of machines for their competence. When you see how a program learned something that a human can learn, you make the mistake of thinking it has the richness of understanding that you would have."

Knowing what AI is and isn’t

AI has become a marketing buzzword. Like "robot" before it, seemingly everything is now AI-powered. Even the experts hesitate to identify definitively what is and isn’t AI. As Brooks noted, what was considered AI in the 1960s is now taught in the very first course on computer programming. But it’s not called AI.

"It’s called AI at some point," Brooks said. "Then later it just becomes computer science."

Machine learning and its variations, including deep learning, reinforcement learning, and imitation learning, are subsets of AI.

"AI was a very narrow field for a while. Some people saw it very specifically around a set of search-based techniques," said Ken Goldberg, a professor and distinguished chair in industrial engineering and operations research at the University of California (UC) Berkeley. "Now AI is widely seen as an umbrella term over robotics and machine learning, so now it’s being embraced as a whole range of subfields."

Advanced forms of computer vision also qualify as AI.

"If you’re just inspecting whether a screw is in the right place, we’ve had that since the ’60s. It would be a stretch to call that AI," Goldberg said. "But at the same time, a computer vision system that can recognize the faces of workers, we generally do think of that as AI. That’s a much more sophisticated challenge."

Lack of context

An important distinction between human intelligence and machine intelligence is context. As humans, we have a greater understanding of the world around us. AI does not.

"We’ve been working on context in AI for 60 years and we’re nowhere near there," Brooks said. "That’s why I’m not worried that we’re going to have super intelligent AI. We’ve been successful in some very narrow ways and that’s the revolution right now, those narrow ways. Certainly speech understanding is radically different from what we had a decade ago. I used to make the joke that speech understanding systems were set up so that you press or say ‘2’ for frustration. That’s no longer true."

He cited Amazon’s Alexa as an example. Google’s Assistant and Apple’s Siri are two more.

"You say something to Alexa and it pretty much understands it, even when music is playing, even when other people in the room are talking," Brooks said. "It’s amazing how good it is, and that came from deep learning. So some of these narrow fields have gotten way better. And we will use those narrow pieces to the best advantage we can to make better products.

"When I started Rethink Robotics, we looked at all the commercial speech understanding systems. We decided at that point it was ludicrous to have any speech recognition in robots in factories. I think that’s changed now. It may make sense. It didn’t in 2008."

Speech recognition converts spoken audio into the right strings of words. Brooks said accurate word strings are good enough to do a lot of things, but the system is not as smart as a person.

"That’s the difference," he said. "Getting the word strings is a narrow capability. And we’re a long way from it being not so narrow."

These narrow capabilities have become the basis for many wildly optimistic predictions about AI, predictions that are correspondingly pessimistic about the role humans will play in that future.

AI research in the real world

Goldberg stresses multiplicity over singularity, noting the importance of diverse combinations of people and machines working together to solve problems and innovate. This collaboration is especially important as AI’s applications exit the lab and enter the real world.

Pieter Abbeel, a professor in the department of electrical engineering and computer sciences at UC Berkeley, who is working to bring AI to the industrial world as president and chief scientist of Embodied Intelligence, also stresses the importance of humans and machines working together.

"That’s part of the challenge," Abbeel said. "How are humans able to use this technology and take advantage of it to make themselves smarter, rather than just have these machines be something separate from us? When the machines are part of our daily lives, what we can leverage to make ourselves more productive, that’s when it gets really exciting."

While Abbeel is excited about AI’s prospects, he thinks some caution is warranted.

"I think there is a lot of progress, and as a consequence, a lot of excitement about AI," he said. "In terms of fear, I think it’s good to keep in mind that the most prominent progress, like speech recognition, machine translation, and recognizing what’s in an image, are examples of what’s called supervised learning."

Abbeel said it’s important to understand the different types of AI being built. In machine learning, there are three main types of learning: supervised learning, unsupervised learning, and reinforcement learning.

"Supervised learning is just pattern recognition," Abbeel said. "It’s a very difficult pattern to recognize when going from speech to text, or from one language to another language, but that AI doesn’t really have any goal or any purpose. Give it something in English, and it will tell you what it is in Chinese. Give it a spoken sentence, and it will transcribe it into a sequence of letters. It’s just pattern matching. You feed it data-images and labels-and it’s supposed to learn the pattern of how you go from an image to a label.

"Unsupervised learning is when you feed it just the images, no labels," Abbeel continued. "You hope that from just seeing a lot of images that it starts to understand what the world tends to look like and then by building up that understanding, maybe in the future it can learn something else more quickly. Unsupervised learning doesn’t have a task. Just feed it a lot of data.

"Then there’s reinforcement learning, which is very different and more interesting, but much harder. (Reinforcement learning is credited for advancements in self-driving car technology.) It’s when you give your system a goal. The goal could be a high score in a video game, or win a game of chess, or assemble two parts. That’s where some of that fear can be justified. If AI has the wrong goal, what can happen? What should the goals be?"

It’s important humans and artificial intelligence don’t evolve in a vacuum from each other. As we build smarter and smarter machines, our capabilities as humans will be augmented.

"What makes me very excited about what we’re doing right now at Embodied Intelligence is that the recent events in artificial intelligence have given AI the ability to understand what they are seeing in pictures," Abbeel said. "Not human-level understanding, but pretty good. If a computer can really understand what’s in an image, then maybe it can pick up two objects and assemble them. Or maybe it can sort through packages. Or pick things from shelves. Where I see a big change in the near future are tasks that rely on understanding what a camera feed is giving you."

Deep learning for robot grasping

Goldberg’s Autolab has been focused on AI for over a decade and has been applying it to projects in cloud robotics, deep reinforcement learning, learning from demonstrations, and robust robot grasping and manipulation for warehouse logistics, home robotics, and surgical robotics.

The lab’s Dexterity Network (Dex-Net) project has shown AI can help robots learn to grasp objects of different sizes and shapes by feeding millions of 3-D object models, images, and associated grasp metrics to a deep-learning neural network. Previously, robots learned how to grasp and manipulate objects by practicing with physical objects over and over, a time-consuming process. By using synthetic point clouds instead of physical objects to train the neural network to recognize robust grasps, the latest iterations of Dex-Net are much more efficient, achieving a 99% precision grasping rate.
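Dex-Net itself is a substantial research system, but the core recipe (train a network on synthetically generated grasp examples, then use it to rank candidate grasps) can be sketched compactly. Everything below, including the toy labeling rule, is illustrative rather than the actual Dex-Net pipeline:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def synthetic_grasp_example():
    """Hypothetical stand-in for synthetic training data: a flattened depth
    patch around a candidate grasp plus the gripper pose, labeled robust (1)
    or not (0) by an analytic grasp-quality rule."""
    depth_patch = rng.normal(0.5, 0.1, 16)        # 4x4 depth patch, flattened
    pose = rng.uniform(-1, 1, 3)                  # grasp center offset and angle
    robust = int(np.linalg.norm(pose[:2]) < 0.5)  # toy rule: centered grasps succeed
    return np.concatenate([depth_patch, pose]), robust

# Generate a (tiny) synthetic training set; Dex-Net uses millions of examples.
data = [synthetic_grasp_example() for _ in range(2000)]
X = np.array([features for features, _ in data])
y = np.array([label for _, label in data])

# Train a small grasp-quality network on synthetic data only.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)

# At run time: sample candidate grasps on a sensed object and execute the
# one the network predicts is most likely to be robust.
candidates = np.array([synthetic_grasp_example()[0] for _ in range(100)])
scores = net.predict_proba(candidates)[:, 1]
print(f"best predicted grasp robustness: {scores.max():.2f}")
```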

In the long term, Goldberg hopes to develop highly reliable robot grasping across a wide variety of rigid objects such as tools, household items, packaged goods, and industrial parts. He’s also very interested in algorithms that can work across robot types.

Deep-learning collaborative robots

Rethink Robotics’ Intera 5 software is designed to make Baxter and Sawyer collaborative robots smarter. Brooks said there’s a lot of AI in the robots’ vision and training capabilities.

"Traditional industrial robots don’t have much intelligence," Brooks said. "But going forward, that’s what we’re doing. We’re putting deep learning into the robots. We’re trying to deal with variation because we think that’s where 90% of manufacturing is with (robots) working in the same space as humans."

Sawyer and Baxter robots have a train-by-demonstration feature that puts AI to work.

"When you’re training it by demonstration, you show it a few things by moving its arm around and it infers a program called a behavior tree," Brooks said. "It writes a program for itself to run. You don’t have to write a program."

Intera 5 includes a graphical programming language. Brooks said users can view the inferred behavior tree, modify it, or write a behavior tree program from scratch, bypassing the automatic inference entirely.

"That means someone working on the factory floor who is not a programmer can get the robot to do something new," Brooks said. "It infers what they are asking it to do and then writes its own program."

AI changes robot programming

AI is changing the way robots are programmed. Abbeel and his team at Embodied Intelligence are harnessing the power of AI to help industrial robots learn new, complex skills. Their work evolved from Abbeel’s research at UC Berkeley, where his group had a major breakthrough in using imitation learning and deep reinforcement learning to teach robots to manipulate objects. The startup uses a combination of sensing and control to teleoperate a robot. For the sensing, an operator wears a virtual reality (VR) headset that shows the robot’s view through its camera.

On the control side, VR systems come with handheld controllers that the operator holds. As the operator’s hands move, that motion is tracked, and the tracked coordinates and orientations are fed to a computer that drives the robot. That way the operator has direct control, like a puppeteer, over the motions of the robot grippers.

"We allow the human to embed themselves inside the robot," Abbeel said. "The human can see through the robot’s eyes and control the robot’s hands."

He said humans are so dexterous that there’s no comparison between robot grippers and our hands. By working through the VR system, the operator is forced to follow the robot’s constraints.
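The puppeteer loop Abbeel describes can be sketched as follows. The controller and robot interfaces are hypothetical placeholders, since the real stack depends on the VR hardware and robot in use; note that each tick also logs a pose and gripper state, which is exactly the demonstration data the learning phases described next consume.

```python
import time
from dataclasses import dataclass

@dataclass
class Pose:
    """6-DOF pose: position in meters plus roll/pitch/yaw in radians."""
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

def read_controller_pose() -> Pose:
    # Placeholder: a real system would query the VR runtime here.
    return Pose(0.4, 0.0, 0.3, 0.0, 0.0, 0.0)

def send_end_effector_target(pose: Pose, gripper_closed: bool) -> None:
    # Placeholder: a real system would stream this to the robot controller,
    # so the operator's tracked hand directly drives the gripper.
    print(f"target={pose} gripper_closed={gripper_closed}")

def teleop_step(log: list) -> None:
    """One tick of the puppeteer loop: hand pose in, robot target out."""
    pose = read_controller_pose()       # tracked handheld controller
    gripper_closed = False              # would come from a trigger button
    send_end_effector_target(pose, gripper_closed)
    log.append((pose, gripper_closed))  # recorded as demonstration data

demonstration = []
for _ in range(3):                      # real loops run at 60-90 Hz
    teleop_step(demonstration)
    time.sleep(0.02)
```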

"You teach the essence of the skill to the robot by giving demonstrations," Abbeel said. "It doesn’t mean that it will be robotically fast at that point. It will do it at human pace, which is slow for most robots. That’s the first phase (imitation learning). You teach the robot through demonstrations. Then in phase two, the robot will run reinforcement learning, where it learns from its own trial and error. The beauty here is that the robot has already learned the essence of the task. Now the robot only has to learn how to speed it up. That’s something it can learn relatively quickly through reinforcement learning."

Abbeel said their technology is particularly suited to challenging vision and manipulation tasks that are currently too complex for traditional software programming techniques. Applications include working with deformable objects that change shape during handling, such as wires, cables, and textiles. Bin picking is another potential application.

"We’re changing the way robots are programmed," Abbeel said. "We write code for imitation learning, and we write code for reinforcement learning. Once that code is in place, when you want a new deployment, we don’t write new code. Instead, we collect new data. So the paradigm shifts from new software development for a new deployment, to new data collection for new deployment. Data collection is generally easier. It’s a lower bar than new software engineering."

Eventually, Embodied Intelligence will let other people use this software to reprogram their robots by doing their own demonstrations. This will allow any company, large or small, to quickly redeploy robots for different tasks.

AI’s brain in the cloud

Emerging technological trends such as Industrie 4.0 and the smart home are increasingly interdependent. Advances in deep learning for image classification and speech recognition have relied heavily on huge datasets with millions of examples. AI requires vast amounts of data, more than can reside on most local systems, and cloud robotics can help deliver that data to AI-powered robots.

Cloud robotics enables information sharing so that intelligence, basically learned skills, can be collectively shared across all the robots in a connected environment. It also allows for collaboration so two or more remote robots, or human-robot teams, can work together to perform a task, even when miles apart.
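A minimal sketch of that sharing pattern, with an in-memory dictionary standing in for a networked cloud service:

```python
import json

# In-memory stand-in for a cloud skill registry; a real deployment would use
# a networked store shared by every connected robot.
cloud_skills: dict[str, dict] = {}

def publish_skill(robot_id: str, skill_name: str, params: dict) -> None:
    """A robot uploads a learned skill (e.g., trained policy parameters)."""
    cloud_skills[skill_name] = {"learned_by": robot_id, "params": params}

def fetch_skill(skill_name: str) -> dict | None:
    """Any connected robot can download the skill instead of relearning it."""
    return cloud_skills.get(skill_name)

# Robot A learns to grasp a new part and shares the result.
publish_skill("robot_a", "grasp_widget_v1", {"weights": [0.5, -0.2, 0.1]})

# Robot B, possibly miles away, picks up the same skill immediately.
print(json.dumps(fetch_skill("grasp_widget_v1"), indent=2))
```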

As more AI-powered robots enter the market, they will need to be connected to a common platform. CloudMinds Technology, a startup founded in 2015, wants to be that common platform, branding itself the world’s first cloud robot operator.

"The reason we wanted to create CloudMinds is because we feel this is an opportunity to apply our telecommunications background to robotics and AI," said Robert Zhang, cofounder and president. "We want to be the operators for robots."

CloudMinds plans to be at the forefront when the new wave of mobile manipulators goes online. The company envisions a community of millions of robots sharing what they’ve learned in the cloud.

Our collective potential

Cloud robotics, machine learning, computer vision, speech recognition—all the facets of AI are making progress, and at times remarkable strides in specific areas. However, AI still has nothing on humans.

Even if robots, with the help of AI and human engineering, are someday able to approach our dexterity, they may never truly grasp the world around them in all of its fragility and potential. Context and ingenuity will remain in the realm of humans. Technology is neither bad nor good; it’s how we use it. With AI and robotics, we humans have tremendous potential for good.

Tanya M. Anandan is contributing editor for the Robotic Industries Association (RIA) and Robotics Online. RIA is a not-for-profit trade association dedicated to improving the regional, national, and global competitiveness of the North American manufacturing and service sectors through robotics and related automation. This article originally appeared on the RIA website. The RIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, Control Engineering, CFE Media, cvavra@cfemedia.com.

MORE ANSWERS

KEYWORDS: robotics, artificial intelligence

Artificial intelligence (AI) is developing, but it isn’t as smart as humans.

Humans can use machine learning to teach robots new, complex skills.

Cloud robotics can assist collaborative robots; AI requires large quantities of data.

Consider this

What particular skills could robots be taught that would have a major impact in manufacturing and industrial automation?

Original content can be found at www.robotics.org.