Lab teaches robot to put down objects properly
Cornell University researchers, using a 3D camera, have taught a robot how to put down simple household objects accurately in office and home scenes.
For robots, picking up objects is easy. Putting them down – in proper context – is not so simple.
Researchers at Cornell’s Personal Robotics Laboratory, led by Ashutosh Saxena, assistant professor of computer science, have found that placing objects is harder for robots because there are many options. Drinking cups, for example, go upright when placed on a table, but must be placed upside down in a dishwasher.
In early tests, the researchers placed a plate, mug, martini glass, bowl, candy cane, disc, spoon and tuning fork on a flat surface, on a hook, in a stemware holder, in a pen holder and on several different dish racks.
The research robot surveyed its environment with a 3D camera, then tested randomly chosen spots as candidate placement locations. For some objects, the robot tests for “caging” – the presence of vertical supports that would hold an object upright. It also gives priority to “preferred” locations: a plate goes flat on a table, but upright in a dishwasher.
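The search described above can be sketched as follows. This is an illustrative sketch only: the one-dimensional scene representation, the 0.05 caging radius, and the scoring weights are invented for the example, not the lab's actual model.

```python
import random

def caging_score(location, scene):
    """Count nearby vertical supports that would hold an object
    upright at this location (the 'caging' cue)."""
    return sum(1 for support in scene["vertical_supports"]
               if abs(support - location) < 0.05)

def preference_score(location, preferred):
    """Bonus when the candidate falls inside a preferred region,
    e.g. flat on a table for a plate."""
    lo, hi = preferred
    return 1.0 if lo <= location <= hi else 0.0

def best_placement(scene, preferred, samples=1000, seed=0):
    """Randomly sample candidate locations and keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(samples):
        loc = rng.uniform(0.0, 1.0)  # candidate position along one axis
        score = caging_score(loc, scene) + preference_score(loc, preferred)
        if score > best_score:
            best, best_score = loc, score
    return best, best_score

# Two vertical supports close together, e.g. the slots of a dish rack
scene = {"vertical_supports": [0.30, 0.32]}
loc, score = best_placement(scene, preferred=(0.25, 0.40))
```

With enough random samples, the winning candidate lands between the supports and inside the preferred region, mirroring how the robot favors locations that both cage the object and match its usual placement.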
After training, the Saxena lab’s robot placed objects correctly 98 percent of the time when it had seen the objects and environments previously. When working with new objects in new environments, the robot placed the objects correctly 92 percent of the time.
But first, the robot has to find the dish rack. Just as humans assess a room when walking in, the robot must take stock of its surroundings. The Cornell researchers (Saxena and colleague Thorsten Joachims, Cornell associate professor of computer science) have developed a system that enables a robot to scan a room and identify its objects.
Pictures from the robot’s camera are stitched together to form a 3D image of the entire room, which is then divided into segments.
The researchers trained a robot by giving it 24 office scenes and 28 home scenes in which they had labeled most objects. The computer examines such features as color, texture and what is nearby and decides what characteristics all objects with the same label have in common. In a new environment, it compares each segment of its scan with the objects in its memory and chooses the best fit.
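The labeling step described above can be sketched as a nearest-neighbor comparison: each training segment is reduced to a feature vector with a label, and a new segment gets the label of its closest match. The feature names and values below are invented for illustration and are not the researchers' actual features.

```python
import math

# Hypothetical labeled training segments: color and texture statistics
# plus a context flag for what is nearby (all values are made up).
train = [
    ({"color": 0.9, "texture": 0.2, "near_desk": 1.0}, "monitor"),
    ({"color": 0.1, "texture": 0.8, "near_desk": 1.0}, "keyboard"),
    ({"color": 0.5, "texture": 0.1, "near_desk": 0.0}, "wall"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def classify(segment):
    """Return the label of the closest training segment (1-NN)."""
    return min(train, key=lambda ex: distance(ex[0], segment))[1]

print(classify({"color": 0.2, "texture": 0.7, "near_desk": 1.0}))  # → keyboard
```

Note the `near_desk` feature: including context (what is nearby) in the feature vector is what lets a system like this disambiguate objects that look similar in isolation.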
In tests, the robot correctly identified objects about 73 percent of the time in home scenes and 84 percent in offices. In a final test, the robot successfully located a keyboard in an unfamiliar room. Again, context gives this robot an advantage. The keyboard shows up as only a few pixels in the image, but the monitor is easily found, and the robot uses that information to locate the keyboard. Saxena will present some of this research at the Neural Information Processing Systems conference in Granada, Spain, in December 2011.
- Edited by Chris Vavra, Control Engineering, www.controleng.com