Tech Talk: Google Cars Are Learning "Human" Driving

Let's say that, four years or so from now, you, dear reader, live on a narrow dead-end street. You want to visit a friend downtown, so you fire up an app and ask for a self-driving car. It's waiting outside your door in a matter of minutes, and a handy notification lets you know it has arrived. You get in, and the car proceeds to jar and jerk you through a fast, brutally efficient three-point turn that frightens you out of your skin and threatens to give you whiplash. Not cool, right? Google agrees, and they're working on teaching their cars to avoid doing things like that.

The challenge for self-driving cars doesn't boil down to just avoiding collisions and obeying traffic laws; as this month's report from Google points out, even a perfectly behaved car will still get hit by human drivers. This month, for example, a yielding Google car was rear-ended at low speed by somebody who didn't spot the nearly stopped vehicle in time. Getting rear-ended, T-boned, and otherwise knocked about is sometimes just a reality of the road. Instead, Google's autonomous autos are broadening their education by learning to drive naturally, in a fashion that syncs up better with the expectations and comfort zone of their human passengers. That means not only driving safely, but doing it in a way that accounts for the human body, for what feels scary, and for other variables like the usual flow of traffic on a given street.

This is why Google is teaching the cars not only to drive in a manner that's comfortable for the human passengers along for the ride, but also to take in their environments in detail. The cars can already identify a huge range of obstacles, rivaling most human drivers' ability to spot hazards on the road, and can figure out how best to get around them. The real issue is comfort; the cars are programmed to drive slowly in autonomous mode, but quick, hard turns can still jar passengers, and sudden braking can do the same. Google has been teaching the cars to avoid these manually, mostly by letting them "watch" while they're driven in manual mode and adapt to the style of driving they see from the human behind the wheel. That training lets them prioritize slower turns, softer merges, and more careful braking in their daily driving.
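Google hasn't published the details of how that adaptation works, but a minimal sketch of the general idea, using logged human driving to define what "gentle" means and then checking the robot's own plans against it, might look something like this (every name, field, and threshold below is hypothetical):

```python
# Hypothetical sketch: derive "comfortable" driving limits from logs of
# human (manual-mode) driving, then use them to flag planned robot
# trajectories that would feel jerky to a passenger. The structure and
# numbers are invented for illustration, not Google's actual system.

from dataclasses import dataclass
from statistics import quantiles


@dataclass
class DrivingSample:
    accel: float          # longitudinal acceleration, m/s^2 (negative = braking)
    lateral_accel: float  # lateral acceleration in turns, m/s^2


def learn_comfort_limits(human_samples: list[DrivingSample]) -> dict[str, float]:
    """Estimate comfort thresholds as roughly the 95th percentile of what
    human drivers actually did while the car was in manual mode."""
    braking = [abs(s.accel) for s in human_samples if s.accel < 0]
    lateral = [abs(s.lateral_accel) for s in human_samples]
    # quantiles(..., n=20)[18] is approximately the 95th percentile
    return {
        "max_braking": quantiles(braking, n=20)[18],
        "max_lateral": quantiles(lateral, n=20)[18],
    }


def is_comfortable(planned: list[DrivingSample], limits: dict[str, float]) -> bool:
    """Reject any planned trajectory that brakes or corners harder than
    the human drivers it learned from."""
    return all(
        abs(s.accel) <= limits["max_braking"]
        and abs(s.lateral_accel) <= limits["max_lateral"]
        for s in planned
    )
```

The real pipeline is surely far more elaborate, but the principle is the same: recorded human driving sets the bar for how hard the car should ever brake or corner.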

A self-driving car is replete with sensors, cameras, and other data-gathering apparatus, and it has the full knowledge of Google Maps and Street View behind it. The car uses a combination of machine learning and rules its engineers have programmed in to figure out how best to approach any obstacle while driving. In their most recent report, Google makes it clear that this isn't the limit of a self-driving car's driving knowledge, and they're working to expand it further. Take the three-point turn mentioned above: a self-driving car's "gut instinct", so to speak, would be to approach it as quickly and efficiently as possible, probably doing most of the turn in reverse along the shortest possible arc. That alone might make a human passenger a bit nervous, and if the robot really followed its "instincts", meaning the objective lessons it has learned about making such a maneuver quickly and efficiently, it might whip around the turn sharply enough to be unpleasant for anybody inside.
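To make that tradeoff concrete, here's a hypothetical way a planner could score candidate three-point-turn maneuvers, weighing time efficiency against passenger comfort; the weights, fields, and candidate numbers below are invented purely for illustration:

```python
# Hypothetical sketch: pick among candidate maneuvers by trading off
# speed against passenger comfort. A pure "efficiency" planner would
# minimize duration alone; adding jerk and braking penalties steers it
# toward the gentler plan. All numbers are illustrative only.

from dataclasses import dataclass


@dataclass
class ManeuverPlan:
    name: str
    duration_s: float    # how long the maneuver takes
    peak_jerk: float     # m/s^3, how abruptly acceleration changes
    peak_braking: float  # m/s^2, hardest braking in the plan


def cost(plan: ManeuverPlan, comfort_weight: float = 0.0) -> float:
    """Lower is better. With comfort_weight=0 the fastest plan wins;
    raising it penalizes abrupt, whiplash-inducing maneuvers."""
    discomfort = plan.peak_jerk + plan.peak_braking
    return plan.duration_s + comfort_weight * discomfort


candidates = [
    ManeuverPlan("tight, fast three-point turn", duration_s=8.0,
                 peak_jerk=6.0, peak_braking=4.5),
    ManeuverPlan("wider, gentler three-point turn", duration_s=12.0,
                 peak_jerk=1.5, peak_braking=1.2),
]

fastest = min(candidates, key=lambda p: cost(p, comfort_weight=0.0))
gentlest = min(candidates, key=lambda p: cost(p, comfort_weight=2.0))
print(f"efficiency-only choice: {fastest.name}")
print(f"comfort-weighted choice: {gentlest.name}")
```

With the comfort weight at zero, the tight, fast turn wins; raise it and the wider, gentler turn wins instead, which is roughly the tradeoff the report describes in plain English.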

The cars have plenty of sensors, road experience, and logic telling them how to complete a turn quickly and efficiently, but figuring out how to approach that turn in a way that's comfortable for a human passenger isn't something a robot can learn on its own without plenty of help from actual humans. Even with over 2 million miles logged in autonomous mode and more than 1 million in manual, these cars still have a ways to go before they're ready for primetime, but the fact that they keep getting better at all speaks volumes about Google's passion for machine learning and their aptitude for injecting it into all of their products.

