AI Winter for the Autonomous Car?

Self-driving cars are headed toward an AI roadblock

There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems can reliably avoid accidents. As self-trained systems grapple with the chaos of the real world, experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.”

Over the past ten years, deep learning — a method that uses layered machine-learning algorithms to extract structured information from massive data sets — has driven almost unthinkable progress in AI and the tech industry.
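
To make “layered” concrete, here is a minimal sketch in Python of the stacked structure these systems share: each layer transforms the output of the layer before it, so later layers can build on the features extracted by earlier ones. The sizes, class count, and random weights are invented stand-ins; a real network learns its weights from the massive data sets discussed below.

    # A minimal sketch of a layered network's forward pass. Weights here are
    # random stand-ins; in a real system they are learned from training data.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, w, b):
        # One layer: a learned linear map followed by a ReLU nonlinearity.
        return np.maximum(0.0, x @ w + b)

    x = rng.normal(size=(1, 64))                      # stand-in input features
    w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)
    w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
    w3, b3 = rng.normal(size=(16, 4)), np.zeros(4)

    h1 = layer(x, w1, b1)    # first layer: low-level features
    h2 = layer(h1, w2, b2)   # second layer: features built on h1
    scores = h2 @ w3 + b3    # final layer: scores over 4 hypothetical classes
    print(scores.shape)      # (1, 4)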

It powers Google Search, the Facebook News Feed, conversational speech-to-text algorithms, and champion Go-playing systems. Outside the internet, we use deep learning to detect earthquakes, predict heart disease, and flag suspicious behavior on a camera feed, along with countless other innovations that would have been impossible otherwise.

But deep learning requires massive amounts of training data to work properly, incorporating nearly every scenario the algorithm will encounter. Systems like Google Images, for instance, are great at recognizing animals as long as they have training data to show them what each animal looks like. Marcus describes this kind of task as “interpolation,” taking a survey of all the images labeled “ocelot” and deciding whether the new picture belongs in the group.
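
A toy version of that survey-and-compare task, with invented feature vectors standing in for a real image pipeline: each label’s training images cluster in feature space, and a new picture is assigned to whichever labeled group it sits closest to.

    # Nearest-centroid "interpolation": survey the labeled groups and decide
    # which one a new picture belongs to. Features and labels are made up.
    import numpy as np

    rng = np.random.default_rng(1)

    # Pretend embeddings: each label's images cluster around a centroid.
    training = {
        "ocelot":   rng.normal(loc=0.0, scale=0.3, size=(100, 8)),
        "housecat": rng.normal(loc=2.0, scale=0.3, size=(100, 8)),
        "jaguar":   rng.normal(loc=-2.0, scale=0.3, size=(100, 8)),
    }
    centroids = {label: feats.mean(axis=0) for label, feats in training.items()}

    def classify(x):
        # The answer can only ever be a label present in the training data.
        return min(centroids, key=lambda lbl: np.linalg.norm(x - centroids[lbl]))

    new_image = rng.normal(loc=0.0, scale=0.3, size=8)  # ocelot-like features
    print(classify(new_image))                          # "ocelot"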

Engineers can get creative in where the data comes from and how it’s structured, but that dependence on training data places a hard limit on how far a given algorithm can reach. The same algorithm can’t recognize an ocelot unless it has seen thousands of pictures of an ocelot, even if it has seen pictures of housecats and jaguars and knows ocelots are somewhere in between. Making that kind of leap, a process called “generalization,” requires a different set of skills.
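
The same gap shows up in a tiny curve-fitting exercise, sketched here with an arbitrary function and ranges: a flexible model fit on one region of data is accurate inside that region (interpolation) and wildly wrong outside it, where generalization would be needed.

    # Interpolation vs. extrapolation with a flexible polynomial fit.
    # The target function, noise level, and ranges are arbitrary stand-ins.
    import numpy as np

    rng = np.random.default_rng(2)

    x_train = np.linspace(0.0, 1.0, 50)       # all training data lies in [0, 1]
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.05, size=50)

    coeffs = np.polyfit(x_train, y_train, deg=7)   # flexible, well-fit model

    inside, outside = 0.5, 3.0
    print(np.polyval(coeffs, inside), np.sin(2 * np.pi * inside))    # close
    print(np.polyval(coeffs, outside), np.sin(2 * np.pi * outside))  # far off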

“Driverless cars are like a scientific experiment where we don’t know the answer,” Marcus says. We’ve never been able to automate driving at this level before, so we don’t know what kind of task it is. To the extent that it’s about identifying familiar objects and following rules, existing technologies should be up to the task. But Marcus worries that driving well in accident-prone scenarios may be more complicated than the industry wants to admit. “To the extent that surprising new things happen, it’s not a good thing for deep learning.”
