Robotic intelligence expected to improve with faculty grant

Faculty members at the University of Arizona have received a five-year, $3 million grant to expand research on a self-teaching robot.

November 10, 2011

Building on earlier success developing a self-learning robot, a University of Arizona (UA) team of researchers is working under a five-year, $3 million grant to improve what the robot is teaching itself.

Computer scientists, engineers and other researchers have trained robots to cook, clean lab surfaces, deliver supplies, play table tennis, dance, and even mimic human expressions.

But the elusive goal has been to develop a self-learning robot that does not rely on extensive programming – in effect, building robots that are both autonomous and cognitively aware.

To create such a robot, Paul Cohen and Ian Fasel of the University of Arizona School of Information: Science, Technology, and Arts, or SISTA, won a five-year, $3 million grant from the Defense Advanced Research Projects Agency.

The newly funded project, "Robotoddler: Grounded Language Learning for Shared Human-robot Tasks," is aimed at developing a robot that can learn simple language and be instructed to perform tasks in that language.

"The challenge is, really, very simple, deceptively simple – that robots should learn the meanings of a few hundred words so that we can communicate with them in ordinary language," said Cohen, who directs SISTA and is an expert in artificial intelligence.

"But that’s easy to say and hard to do," said Cohen, who has earned other grants from the agency, DARPA, in recent years. "It’s not that the tasks are difficult, but understanding concepts like stacking or putting something in a container – that stuff is all so hard."

And yet, Fasel – Cohen’s primary collaborator and co-principal investigator – and his graduate students initiated a project that has had some compelling results.

Working under a seedling grant from DARPA last year, Fasel’s team built a self-learning robot that "pokes at the world and acts much like a scientist doing little experiments," Fasel said.

The robot, which has a 3-D vision system, an arm and the ability to move about, serves as the basis for the current project. Now, the newly funded team – which will include a group of postdoctoral researchers – will expand upon what the robot already has taught itself.

"Importantly, the robot is learning in an intelligent way, though we haven’t hard-coded everything about how or what to learn," said Fasel, an assistant professor for SISTA and a specialist in the area of robot learning, specifically child-like cognitive development. "The novel thing is that we are not putting the meanings in its mind. It constructs its own meanings from experience," he added.

Such a feat is a far cry from how human-to-robot communication is handled today. Getting a robot to do anything at all currently requires extensive programming.

"If you have a robot that delivers supplies in a hospital, it is entirely programmed," Cohen said. "There is no way for anyone to communicate with that robot. It might as well be a train on tracks."

But the robot being developed is not a "blank slate," Fasel said.

"It is intended to be analogous to human learning. When toddlers are picking stuff up, putting it in their mouths, poking at it, they’re not just being random," Fasel said. "They may look like lunatics to us, but there is a purpose: They are finding out about the world."

Same with the team’s robot.

The team has taken a machine-learning approach, offering the robot some general strategies for learning. The robot’s responsibility, then, is to figure out exactly what it should be doing to best learn about the objects around it.

At first, the robot was not familiar with any of the objects – boxes, cups, toys, balls, metal pipes and other items – or categories in its environment. It uses its sensors to grasp, shake and drop objects, "listening" to the sound those objects make before placing them in one of 10 categories.

Thus, the robot is able to "formulate beliefs about what kind of object it is" while simultaneously learning concepts and how to conduct experiments, Fasel said.
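As a rough illustration of that kind of unsupervised category building, consider the hypothetical Python sketch below: each grasp-shake-drop trial yields a small sound-feature vector, and the robot either assigns it to the nearest existing category or starts a new one. The feature names, the distance threshold and the nearest-centroid updates are assumptions made for illustration, not the UA team's actual method.

# Hypothetical sketch: grouping objects into categories from the sounds they
# make when grasped, shaken or dropped. Feature names, threshold and the
# nearest-centroid updates are illustrative assumptions, not the UA team's code.

import math

NUM_CATEGORIES = 10           # the robot sorts objects into 10 categories
NEW_CATEGORY_THRESHOLD = 0.5  # distance beyond which a new category is created


def sound_features(trial):
    """Stand-in for real audio processing: return a small feature vector
    (loudness, dominant pitch, decay time) for one shake-or-drop trial."""
    return [trial["loudness"], trial["pitch"], trial["decay"]]


def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


class CategoryLearner:
    """Builds object categories incrementally, without labels."""

    def __init__(self):
        self.centroids = []  # one running-average feature vector per category
        self.counts = []     # number of trials folded into each centroid

    def observe(self, features):
        """Assign a trial to the nearest category, or start a new one."""
        if self.centroids:
            dists = [distance(features, c) for c in self.centroids]
            best = min(range(len(dists)), key=dists.__getitem__)
            if dists[best] < NEW_CATEGORY_THRESHOLD or len(self.centroids) >= NUM_CATEGORIES:
                # Fold the new trial into that category's running average.
                n = self.counts[best]
                self.centroids[best] = [(c * n + f) / (n + 1)
                                        for c, f in zip(self.centroids[best], features)]
                self.counts[best] += 1
                return best
        # Nothing similar seen before: start a new category.
        self.centroids.append(list(features))
        self.counts.append(1)
        return len(self.centroids) - 1


if __name__ == "__main__":
    learner = CategoryLearner()
    # Simulated trials: a metal pipe rings loudly, a plush toy barely makes a sound.
    trials = [
        {"loudness": 0.90, "pitch": 0.80, "decay": 0.70},  # metal pipe
        {"loudness": 0.10, "pitch": 0.20, "decay": 0.10},  # plush toy
        {"loudness": 0.85, "pitch": 0.75, "decay": 0.65},  # another metal pipe
    ]
    for trial in trials:
        print("assigned to category", learner.observe(sound_features(trial)))
    # Expected output: categories 0, 1, 0 - the two pipes grouped together, the toy apart.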

The next step is to expand what the robot knows about objects and the differing ways of describing those items – something Fasel calls "affordances," or the varying types of actions one object could afford.

"So, if I ask the robot to help fill this container with water, it should understand that it needs to find an object that is hollow, has a solid bottom, is not porous, and is of a size that a human can get their hands around," Fasel said.

"Then it can figure out the sequence of actions to solve the task using the objects in the room," he also said. "To me that’s the most important goal of this project: To allow robots to solve tasks without us having to tell them exactly how at every step."

Such a capability would represent a major advance in research on human-robot communication.

"People speak in ambiguous ways. If I say, ‘Go get me a bucket,’ that doesn’t mean that I wanted only a specific bucket, which is why communicating with robots can be very difficult," Fasel said.

"We’d like the robot to understand us to the degree that we don’t have to literally say, ‘Point your body in this particular direction. Move your body this distance. Now more your arms at this distance and grab this particular item,’" he said.

"No one talks that way but a computer scientist," he added. "But we would like people who are not computer scientists to be able to communicate with robots."

School of Information: Science, Technology, and Arts (SISTA), University of Arizona: https://sista.arizona.edu

University of Arizona, Tucson: www.arizona.edu

– Edited by Chris Vavra, Control Engineering, www.controleng.com