Robot ethics, movement key to future technology developments
Researchers at several universities are working on technological developments that are bringing robots into ever-closer interaction with humans, raising questions about what robots can ethically do for people.
Disruption will come in many forms as robots permeate our daily lives; it’s not just technological. Social, ethical, legal and economic issues will raise concerns about privacy, liability, potential job loss, continued learning, and social conventions. One university is taking a closer look at the societal impact of robotics innovation.
At the heart of Corvallis, a city in central western Oregon about 50 miles from the Pacific Coast, we find a hidden gem. Part of the Willamette River Valley, the soil is very fertile here. Fertile ground for a rising star in the robotics field.
Oregon State University (OSU) is the city’s largest employer and home to the Collaborative Robotics and Intelligent Systems (CoRIS) Institute. Established in 2017 by OSU’s College of Engineering, CoRIS is mobilized to advance the design, development, and deployment of robots and intelligent systems able to interact seamlessly with people.
“We’re moving away from the idea that robots are over there behind the fence and people are on this side,” said Kagan Tumer, director of CoRIS and a professor in the School of Mechanical, Industrial and Manufacturing Engineering at Oregon State. “We’re interacting with robots everywhere, from factories to workplaces to homes, where we’re now starting to see consumers buying AI and robots. Whether it’s a simple vacuum-cleaning robot or a home-care-level talking robot, there are a lot of questions about what it means to interact with one.”
OSU researchers strive to address these questions through a strong collaborative research culture that is the hallmark of CoRIS. Multiple disciplines come together under one roof. There is also a unique focus on ethics and policy.
“That’s something we take very seriously,” Tumer said. “Usually institutions like this have a research director and an academic director. We specifically have a policy and ethics director for the deployment side because we think it’s critical. We are one of the only places I know that have graduate-level robot ethics courses. We want our graduates to not only be technologically savvy, but also understand the implications of the robotics technology they put out into the world.”
Oregon State’s CoRIS emphasizes the human element of robotics and AI. Researchers explore the ethical, political and legal implications of robotics to understand the scope and scale of the social and technological disruption, and its impact on the future of science, technology and society.
Robotic legged locomotion
Ethics and policy become more important as robots begin to share the same spaces as humans. Soon they will walk among us.
Cassie, a bipedal robot developed in the labs at Oregon State, garners a lot of attention as it strolls around campus. The robot may resemble a pair of ostrich legs, but biomimicry was not the mission. Cassie’s developers simply wanted to create the most stable legged platform for varied terrain and unpredictable environments.
Cassie’s arrival at OSU was no accident. In an effort to recruit top robotics talent, Tumer sought out Jonathan Hurst, who holds a doctorate in robotics from Carnegie Mellon and became the first Oregon State faculty member devoted to robotics.
Hurst’s passion is legged locomotion, specifically the passive dynamics of mechanical systems. He established the Dynamic Robotics Laboratory, and his group designed and built ATRIAS, an early precursor to Cassie. ATRIAS gets its passive dynamics from series-elastic fiberglass springs, which act both as a suspension system and as a means of mechanical energy storage. The technology is based on the spring-mass model, a theory associated with the energy-efficient bouncing gait of animals. Imagine jumping on a pogo stick. Energy is stored in the spring when it’s compressed. When it expands, the energy is released and you are thrust upward.
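The pogo-stick exchange can be sketched numerically. The following toy simulation of a vertical spring-mass hopper (mass, stiffness, and touchdown speed are illustrative values, not ATRIAS’s actual parameters) shows kinetic energy flowing into the spring during compression and back out at liftoff:

```python
# Minimal sketch of the spring-mass ("pogo stick") model: during stance,
# the spring stores energy (E = 1/2 k x^2) as it compresses, then returns
# it, launching the mass back upward. All parameter values are assumptions
# chosen for illustration.

def stance_phase(m=32.0, k=8000.0, v_touchdown=-1.5, g=9.81, dt=1e-4):
    """Simulate the stance phase of a vertical spring-mass hopper.

    x is spring compression in meters (positive when compressed); v is the
    vertical velocity of the mass (negative = downward). The phase ends
    when the spring returns to its rest length. Returns the liftoff
    velocity (m/s) and the peak energy stored in the spring (joules).
    """
    x, v = 0.0, v_touchdown
    peak_energy = 0.0
    while True:
        a = (k * x) / m - g        # spring pushes up, gravity pulls down
        v += a * dt                # semi-implicit Euler step
        x -= v * dt                # moving down (v < 0) compresses spring
        peak_energy = max(peak_energy, 0.5 * k * x * x)
        if x <= 0.0:               # spring back at rest length: liftoff
            return v, peak_energy
```

Because the stance is lossless in this idealized model, the mass leaves the ground at roughly its touchdown speed; real legs recover only part of that energy, which is why elastic storage matters for efficiency.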
“ATRIAS was a science experiment,” Tumer said. “It was never meant to be a robot in the real world. It was testing the idea of the models and the way that the passive dynamics of the robot works, and whether you can actually design a robot with very simple principles that would duplicate animal gait. Cassie is the outcome of that experiment.”
With control over two more joints in each of its legs compared to ATRIAS, Cassie is able to maintain its balance even when standing still or crouching. Full range of motion in the hips enables Cassie to steer. It’s also half the weight of its predecessor but twice as powerful and more energy efficient. A sealed system allows it to operate in rain and snow. Many of Cassie’s components were custom-developed in OSU’s lab when the team was unable to find off-the-shelf components that were small enough or had the required performance.
Oregon State spinoff Agility Robotics is marketing Cassie as a robust bipedal research platform for academic groups working on legged locomotion. The California Institute of Technology and University of Michigan are testing algorithms on Cassie to develop next-gen prosthetics and exoskeletons for persons with paraplegia. Beyond personal/assistive robotics, Tumer says the creators envision a career path for Cassie in package delivery and search and rescue applications.
“We’re not that far now from having driverless vehicles,” Tumer said. “If you can imagine a delivery truck that drives itself to your neighborhood, how do you handle that last 100 to 300 feet? That’s when legged robots pop out of the truck, deliver the package to your door, go back to the truck and drive to the next stop.”
Cassie’s creators are working on arm-like appendages that would let the robot carry those packages and right itself after a fall. Because Cassie will eventually need “eyes” to see your front door, vision and other sensors are on the agenda.
“If you look at the area around any house, from the curb to the sidewalk, to the slight slope of the driveway, to one or two steps in front of the house, it’s a hazard course for any type of wheeled robot,” Tumer said. “When you can pair a legged robot with a self-driving truck, you’re done. Being able to walk in environments designed for humans is going to be a big thing.”
Another significant research area for OSU is multi-robot coordination, Tumer’s main focus. He says many interesting real-world scenarios require multiple robots, or humans and robots, to work together. Search and rescue operations are one example.
“You might have unmanned aerial vehicles (UAV) looking for debris. You might have unmanned ground vehicles (UGV) moving around. You may have legged robots. You will have a lot of components doing a lot of different operations,” explains Tumer. “The critical aspect is how we determine what each one of those robots should be doing so the team does what you want it to do. Determining the objectives that you need to provide to all of these different robots is a key part of our research.”
Tumer said the different robots in a multi-robot team would need to have some level of awareness of the task they are trying to achieve, so they can determine how to best contribute to the team. His group is trying to impart that high level of coordination capability to robots.
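The coordination problem Tumer describes can be illustrated with a toy example (this is not OSU’s actual method; the task names, capability scores, and exhaustive search are all assumptions for illustration): given a shared team objective, each robot’s task choice should be scored by what it adds to the team, not just by what it does well individually.

```python
# Toy multi-robot task allocation for a search-and-rescue scenario:
# pick one task per robot so the *team* objective is maximized.
from itertools import product

# Hypothetical task importance weights and per-robot capability scores.
TASKS = {"survey_debris": 1.0, "ground_sweep": 0.8, "reach_rubble": 0.6}
CAPABILITY = {
    "uav":    {"survey_debris": 0.9, "ground_sweep": 0.2, "reach_rubble": 0.1},
    "ugv":    {"survey_debris": 0.1, "ground_sweep": 0.8, "reach_rubble": 0.3},
    "legged": {"survey_debris": 0.1, "ground_sweep": 0.4, "reach_rubble": 0.9},
}

def team_objective(assignment):
    """Team score: each task counts once, at the best assigned capability.

    Piling every robot onto one task adds nothing beyond the best
    performer, which is what pushes the team to spread out.
    """
    best = {t: 0.0 for t in TASKS}
    for robot, task in assignment.items():
        best[task] = max(best[task], CAPABILITY[robot][task])
    return sum(TASKS[t] * best[t] for t in TASKS)

def best_assignment(robots):
    """Exhaustively search task choices for the highest team score."""
    return max(
        (dict(zip(robots, combo)) for combo in product(TASKS, repeat=len(robots))),
        key=team_objective,
    )
```

Here the optimum sends each robot to its specialty: the UAV surveys debris, the UGV sweeps the ground, and the legged robot reaches into rubble. Exhaustive search only works for tiny teams; scaling this kind of credit assignment to many robots is exactly what makes the research hard.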
Tumer’s research in multi-robot coordination may also apply to underwater robots. Oregon State has a strong oceanography department that collaborates with CoRIS, particularly through OSU professor Geoff Hollinger, who focuses on underwater autonomy.
“There’s a lot of underwater science that we do with robots,” Tumer said. “This is all about the health of the ocean, looking at how rivers bring the water and sediment, and how they propagate. There are a lot of research questions about how our environment is affected by everything we do, from runoff from rivers, to algae, to everything else. We have teams of intelligent gliders out there trying to collect information for our scientists.”
These “intelligent gliders,” or autonomous underwater vehicles (AUV), look like small torpedoes but have no engine. They glide with the water currents rather than being self-propelled. On-board sensors collect data on water salinity, temperature, nutrients, and oxygen concentrations at various depths. The gliders can autonomously change their buoyancy to submerge to depths of up to 1,000 meters and then surface hours later to broadcast their data and location via satellite. They repeat this process every six hours or so, collecting data 24 hours a day for weeks at a time.
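The dive-and-report cycle above implies a substantial data yield per deployment. A rough back-of-the-envelope sketch (the cycle length, maximum depth, and sampling interval are taken from the figures in the text; the 10-meter depth binning is an assumption):

```python
# Rough sketch of a glider deployment as described: dive to depth,
# sample on the way, surface every ~6 hours to transmit via satellite,
# repeated for weeks. Depth-bin spacing is an illustrative assumption.

def deployment_profiles(weeks=3, cycle_hours=6, max_depth_m=1000, bin_m=10):
    """Yield one profile per dive: (dive_index, list of sampled depths in m).

    A real glider would attach salinity, temperature, nutrient, and
    oxygen readings to each depth bin; here we only enumerate the bins.
    """
    n_dives = int(weeks * 7 * 24 / cycle_hours)
    depths = list(range(0, max_depth_m + 1, bin_m))
    for i in range(n_dives):
        yield i, depths
```

At one dive every six hours, a three-week deployment yields 84 full-depth profiles, which is why even a small fleet of gliders can keep oceanographers supplied with data continuously.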
Oregon State researchers are also equipping undersea gliders with bioacoustic sensors to identify different kinds of marine animals using their unique acoustical signatures. This helps scientists study the distribution of predators and prey, and their relationship to oceanic conditions.
Advanced control algorithms developed by Hollinger and the Robotic Decision Making Laboratory allow the gliders and other AUVs to more efficiently navigate strong currents and environmental disturbances, and respond to environmental cues. Enabling intelligent AUVs to gather information in environments outside the reach of human divers has long-term benefits for sustaining the fishing industry, protecting marine life, and understanding climate change.
As more robotic systems enter our waterways, streets and homes, researchers say we will need formal means of validation and testing to support deployment on a larger scale.
Robotics validation and testing
Martial Hebert, director of the Robotics Institute at Carnegie Mellon University, thinks the not-so-exciting, but perhaps most critical, research area for robotics over the next 5 to 10 years will be integration, validation and testing. This is especially critical as human-robot interaction becomes a part of our daily lives. To illustrate his point, Hebert draws an analogy to the aircraft industry.
“The flying public feels safe in a plane because we have 150 years of experience with a technology that has been validated and tested,” he said. “We don’t yet have those tools for AI and robotics. How do we do this for systems that learn over time, that adapt, whose behavior depends on the data they use to learn? How do we do this for a system that has complex interactions with people? For this relatively new field of robotics, we don’t yet have those engineering tools that allow us to guarantee performance, guarantee behavior of those systems. It’s what we need to be able to really use them in everyday applications. It’s this collection of best practices and formal tools that we need to get to a system you can actually trust.”
Trust will play a critical role in the acceptance of intelligent autonomous systems: systems we can entrust to care for our loved ones and our most vulnerable populations, our children and our elderly; robots that will perhaps share our most intimate spaces; systems that will have access to our private data and the details of our everyday activities, and be privy to our conversations; robotic systems to which we will relinquish control. For that, they will need to earn our trust. Our bright future with robots depends on it, and researchers are helping us realize that future.
Tanya M. Anandan is contributing editor for the Robotic Industries Association (RIA) and Robotics Online. RIA is a not-for-profit trade association dedicated to improving the regional, national, and global competitiveness of the North American manufacturing and service sectors through robotics and related automation. This article originally appeared on the RIA website. The RIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, Control Engineering, CFE Media, firstname.lastname@example.org.
Original content can be found at www.robotics.org.