Google designed its self-driving cars to follow all road and traffic rules. Now, however, it is teaching them to drive more like humans, so they can cut corners, edge into intersections and cross double-yellow lines when needed, according to a report from the Wall Street Journal.
Google “humanizing” its cars
Google's robot cars assume the worst: as soon as their “digital” eyes spot a potential hazard, they tap the brakes, which sometimes prompts other drivers to stop abruptly. According to the WSJ, one of the company's autonomous vehicles tapped its brakes at seemingly odd times during a recent test drive.
At a conference in July, Chris Urmson, who leads Google's driverless-car effort, said the cars are more cautious than they need to be, and so the company is trying to make them drive more “humanistically.” Google plans to commercialize self-driving cars, and recently hired auto-industry veteran John Krafcik as chief executive of its car project.
Making cars that have already logged more than a million miles on public roads blend more seamlessly with human drivers is a big challenge for the company and for Krafcik. For now, Google is studying awkward driving moments, along with general driving behaviors, and trying to capture them in an algorithm that would make its cars drive more “humanistically.”
Deep learning techniques to help
Since 2009, Google’s self-driving cars have been involved in 16 minor accidents, and in 12 of those the vehicle was rear-ended. Though this accident rate is higher than the national average, Google says national statistics do not include many minor accidents similar to those its cars experienced. Google has clarified that it was not at fault in any of the crashes. Still, some believe the driverless vehicles’ habit of braking to avoid real but marginal risks could be responsible for the higher accident rate, says the WSJ.
Nvidia Corporation designs the powerful graphics processors that help Google’s cars recognize objects. “Why is it getting rear-ended? It drives like a computer,” said Nvidia CEO Jen-Hsun Huang. Huang notes that Google is using “deep learning” techniques that let its computers recognize images and objects, an ability that will improve over time. The companies are hopeful that the software’s deep learning of human driving behaviors will solve the problem.