Machine vision systems pushing automotive industry towards full autonomy

Automotive companies are turning to machine vision companies to master the deep learning algorithms that let autonomous cars safely navigate public roadways, and to handle the heavy processing demands of the data those vehicles generate.

By Winn Hardin, AIA October 31, 2017

The race is on to make self-driving vehicles ready for the road. More than 2.7 million passenger cars and commercial vehicles equipped with partial automation already are in operation, enabled by a global automotive sensors market estimated to reach $25.56 billion by 2021.

Estimates about the arrival of fully autonomous vehicles vary. The research firm BCG expects that vehicles designated by SAE International as Level 4 (high automation)—in which the car makes decisions without the need for human intervention—will appear in the next five years.

Most automotive manufacturers plan to make autonomous driving technology standard in their models within the next two to 15 years. Tesla, whose Autopilot system features eight cameras that provide 360 degrees of visibility at up to 250 m, hopes to reach Level 5 full autonomy in 2019.

Carmakers are building upon their advanced driver-assistance systems (ADAS), which include functions such as self-parking and blind-spot monitoring, as the foundation for developing self-driving cars. The core sensors that enable automated driving—camera, radar, lidar, and ultrasound—are well developed but continue to improve in size, cost, and operating distance.

The industry still must overcome other technological challenges, however. These include mastering the deep learning algorithms that help cars navigate the unpredictable conditions of public roadways and handling the heavy processing demands of the data those systems generate. To carve a path toward full autonomy, automakers are turning to machine vision software companies, which have become important players in the marketplace.

Algorithms get smarter

The machine vision industry is no stranger to the outdoor environment, with years of experience developing hardware and software for intelligent transportation systems, automatic license plate readers, and border security applications. While such applications require sophisticated software that accounts for uncontrollable factors like fog and sun glare, self-driving vehicles encounter and must process many more variables of far greater complexity.

"Autonomous driving applications have little tolerance for error, so the algorithms must be robust," said Jeff Bier, founder of Embedded Vision Alliance, an industry partnership focused on helping companies incorporate computer vision into all types of systems. "To write an algorithm that tells the difference between a person and tree, despite the range of variation in shapes, sizes, and lighting, with extremely high accuracy can be very difficult."

Algorithms have reached a point where, on average, "they’re at least as good as humans at detecting important things," Bier said. "This key advance has enabled the deployment of vision into vehicles."
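To make the person-versus-tree problem Bier describes concrete, the sketch below shows the shape of a deep-learning classifier at inference time. It is a minimal, hypothetical example—the tiny architecture, class names, and two-way output are illustrative only, and production perception networks are vastly larger and trained on millions of labeled road scenes.

```python
# Minimal, hypothetical sketch of a binary image classifier of the
# kind Bier describes (person vs. tree). Not any vendor's real model.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # robust to input size
        )
        self.head = nn.Linear(32, 2)          # logits: [person, tree]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyClassifier().eval()
frame = torch.rand(1, 3, 224, 224)            # stand-in camera frame
with torch.no_grad():
    probs = torch.softmax(model(frame), dim=1)
print(f"P(person)={probs[0, 0]:.2f}, P(tree)={probs[0, 1]:.2f}")
```

Robustness to the variation Bier cites—shape, size, lighting—comes not from the code structure but from the breadth of the training data, which is why so much of the industry's effort goes into collecting and labeling road imagery.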

AImotive is one software company bringing deep learning algorithms to fully autonomous vehicles. Its hardware-agnostic platform uses neural networks to make decisions in any type of weather or driving condition. It comprises four engines, whose interaction is sketched in the code after this list:

  • The recognition engine uses camera images as the primary input.
  • The location engine supplements conventional map data with 3-D landmark information.
  • The motion engine takes the positioning and navigation output from the location engine to predict the movement patterns of surrounding objects.
  • The control engine guides the vehicle through low-level actuator commands such as steering and braking.
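The following sketch shows one plausible dataflow through such a four-engine pipeline. All class and method names here are hypothetical paraphrases of the article's description, not AImotive's actual API, and the stubbed logic stands in for real neural-network inference.

```python
# Hypothetical one-tick dataflow for a four-engine driving stack:
# sense -> localize -> predict -> act. Illustrative only.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str           # e.g., "pedestrian", "vehicle"
    position: tuple      # (x, y) ahead of the vehicle, meters

@dataclass
class Pose:
    x: float
    y: float
    heading: float       # radians

class RecognitionEngine:
    def detect(self, camera_frame) -> list[Detection]:
        # Camera images are the primary input.
        return [Detection("pedestrian", (12.0, 1.5))]

class LocationEngine:
    def localize(self, detections, map_landmarks) -> Pose:
        # Supplement conventional map data with 3-D landmarks.
        return Pose(x=104.2, y=88.7, heading=0.05)

class MotionEngine:
    def predict(self, pose, detections) -> list[tuple]:
        # Predict where surrounding objects will be shortly.
        return [(d.position[0] - 0.8, d.position[1]) for d in detections]

class ControlEngine:
    def actuate(self, pose, predictions) -> dict:
        # Emit low-level actuator commands (steering, braking).
        slow = any(x < 15.0 for x, _ in predictions)
        return {"steering": 0.0, "brake": 0.4 if slow else 0.0}

recog, loc, motion, ctrl = (RecognitionEngine(), LocationEngine(),
                            MotionEngine(), ControlEngine())
dets = recog.detect(camera_frame=None)
pose = loc.localize(dets, map_landmarks=None)
preds = motion.predict(pose, dets)
print(ctrl.actuate(pose, preds))   # pedestrian closing in -> brake
```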

For an automated vehicle to make critical decisions based on massive volumes of real-time data coming from multiple sensors, processors have had to become more computationally powerful while consuming less power. Software suppliers in this space are developing specialized processor architectures "that easily yield factors of 10 to 100 times better efficiency to enable these complex algorithms to fit within the cost and power envelope of the application," Bier said. "Just a few years ago, this degree of computational performance would have been considered supercomputer level."

To make safe, accurate decisions, sensors need to process approximately 1 GB of data per second, according to Intel. Waymo, Google’s self-driving car project, is using the chipmaker’s technology in its driverless, camera-equipped Chrysler Pacifica minivans, which are currently shuttling passengers around Phoenix as part of a pilot project.
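A back-of-the-envelope calculation makes Intel's figure plausible. The sensor suite below is an assumption for illustration, not a published vehicle configuration:

```python
# Rough sanity check on the ~1 GB/s figure. All parameters are
# assumed; real sensor suites vary widely.
cameras = 8                  # surround-camera rig (assumed)
width, height = 1280, 960    # pixels per frame (assumed)
bytes_per_pixel = 1          # raw 8-bit mono (assumed)
fps = 30

camera_rate = cameras * width * height * bytes_per_pixel * fps
radar_lidar_rate = 100e6     # rough allowance for radar + lidar (assumed)

total = camera_rate + radar_lidar_rate
print(f"camera data: {camera_rate / 1e9:.2f} GB/s")   # ~0.29 GB/s
print(f"total:       {total / 1e9:.2f} GB/s")         # ~0.39 GB/s
# Higher resolutions, color imaging, faster frame rates, and denser
# lidar push this well past 1 GB/s.
```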

However, the industry still needs to determine where the decision-making should occur. "In our discussions with manufacturers, there are two trains of thought as to what these systems will look like," said Ed Goffin, marketing manager for Pleora Technologies. "One approach is analyzing the data and making a decision at the smart camera or sensor level, and the other is feeding that data back over a high-speed, low-latency network to a centralized processing system."
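The two approaches Goffin describes differ mainly in where inference runs and what crosses the network. The sketch below contrasts them; the class names and threshold logic are hypothetical stand-ins for real perception code.

```python
# Hypothetical contrast of the two topologies: decide at the sensor
# vs. stream raw video to a central processor. Illustrative only.
from abc import ABC, abstractmethod

class PerceptionNode(ABC):
    @abstractmethod
    def handle_frame(self, frame: bytes) -> str: ...

class SmartCamera(PerceptionNode):
    """Decision made at the sensor; only the result crosses the net."""
    def handle_frame(self, frame: bytes) -> str:
        # A few bytes of result, low network bandwidth.
        return "brake" if frame and frame[0] > 200 else "continue"

class CentralProcessor:
    def infer(self, frame: bytes) -> str:
        return "brake" if frame and frame[0] > 200 else "continue"

class StreamingCamera(PerceptionNode):
    """Raw video crosses a high-speed, low-latency link; a central
    processor protected inside the vehicle makes the decision."""
    def __init__(self, central: CentralProcessor):
        self.central = central

    def handle_frame(self, frame: bytes) -> str:
        return self.central.infer(frame)   # full frame, high bandwidth

print(SmartCamera().handle_frame(bytes([255])))                  # brake
print(StreamingCamera(CentralProcessor()).handle_frame(bytes([10])))
```

The design trade-off follows directly: the centralized scheme keeps the expensive intelligence inside the vehicle, so a damaged StreamingCamera can be swapped in the field without losing the processing, as Goffin notes below.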

Pleora’s video interface products already play in the latter space, particularly in image-based driver systems for military vehicles. "In a military situational awareness system, real-time high-bandwidth video is delivered from cameras and sensors to a central processor, where it is analyzed and then distributed to the driver or crew so they can take action or make decisions," Goffin said. "Designers need to keep that processing intelligence protected inside the vehicle. Because cameras can be easily knocked off the vehicle or covered in dust or mud, they need to be easily replaceable in the field without interrupting the human decision-making process." 

Off the beaten path

While the self-driving passenger car dominates media coverage, other autonomous vehicle technology is quietly making its mark. In September 2016, Volvo began testing its fully autonomous FMX truck 1,320 m underground in a Swedish mine. Six sensors, including a camera, continuously monitor the vehicle’s surroundings, allowing it to avoid obstacles while navigating rough terrain within narrow tunnels.

Meanwhile, vision-guided vehicles (VGVs) from Seegrid have logged more than 758,000 production miles in warehouses and factories. Unlike traditional automated guided vehicles (AGVs), which rely on lasers, wires, magnets, or floor tape to operate, VGVs use multiple on-vehicle stereo cameras and vision software to recognize existing facility infrastructure and use it to locate themselves for navigation.
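Stereo cameras let a VGV recover the depth of facility landmarks from the standard pinhole-stereo relation Z = f · B / d. The camera parameters below are illustrative assumptions, not Seegrid's actual specifications:

```python
# Depth from stereo disparity: Z = f * B / d. Parameters assumed.
focal_px = 700.0      # focal length in pixels (assumed)
baseline_m = 0.12     # separation between the two cameras (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Depth (m) of a feature seen disparity_px apart in the two images."""
    return focal_px * baseline_m / disparity_px

# A rack leg matched at 35 px of disparity is about 2.4 m away; the
# vehicle localizes itself against many such landmark depths.
print(f"{depth_from_disparity(35.0):.2f} m")
```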

As Bier pointed out, the Roomba robotic vacuum cleaner—equipped with a camera and image processing software—falls under the category of autonomous vehicles. Whether operating in the factory or on the freeway, self-driving vehicles promise to transport goods and people in a safe, efficient manner. Debate persists over when fully autonomous cars will hit the road in the U.S. As the industry overcomes technical challenges, governmental safety regulations and customer acceptance will affect the timing of autonomous vehicles’ arrival.

In the meantime, automakers and tech companies continue to pour billions of dollars into research and development. Each week seems to bring a new announcement, acquisition, or milestone in the world of self-driving vehicles. And vision companies will be there for the journey.

Winn Hardin is contributing editor for AIA. This article originally appeared in Vision Online. AIA is a part of the Association for Advancing Automation (A3), a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, cvavra@cfemedia.com.

www.controleng.com | Keywords: machine vision, autonomous vehicles

  • Automotive companies are relying on machine vision companies to help develop smarter autonomous vehicles.
  • Automated vehicles need to make critical decisions based on massive volumes of real-time data coming from multiple sensors.
  • Companies are developing smarter algorithms and processors that use less power to analyze data.

Consider this

What other issues do the automotive and machine vision industries need to consider as they develop autonomous vehicles?

Original content can be found at www.visiononline.org.