If you keep up with tech news or with news about self-driving cars, you've likely heard by now that a Tesla Model S operating on Autopilot was recently involved in a fatal accident in Florida. The car failed to recognize another vehicle, a tractor trailer, because it was white against a pale sky. While the tractor trailer suffered minimal damage, the driver of the Model S was killed. The Model S was determined to be at fault, and regulators had no choice but to begin scrutinizing not only Tesla, but self-driving cars as a concept. Tesla fulfilled its legal obligation by reporting the crash to the National Highway Traffic Safety Administration, but the damage was done.
While Autopilot on Tesla vehicles carries a disclaimer that the driver must pay attention and that the vehicle is not fully autonomous, the initial crash, as well as two others, stemmed from errors that, for all intents and purposes, no system claiming to be fully autonomous should make; that detail matters, because Tesla is working towards fully autonomous cars. Some may contend that the drivers in the latter two accidents made mistakes themselves and are blaming Autopilot to take advantage of the publicity around the Florida crash, but it is entirely possible that they're telling the truth. Without concrete data on whether Autopilot was engaged, there is no way to know. Tesla's disclaimer, along with the fact that the two more questionable crashes may not have involved Autopilot at all, may get the company out of hot water, but wider scrutiny is set to fall on self-driving cars in general as a result of those crashes. Logically, this is not only the correct course of action for authorities to take, but an event that was inevitable. No matter how capable self-driving cars eventually become, they still need a comprehensive legal framework, one that puts in solid terms what happens in a crash or system failure, allows them to be used at all in some places, and defines what would qualify an autonomous system for full self-driving operation. Anthony Foxx, the US Secretary of Transportation, was supposed to be working with lawmakers to get a framework in place at the federal level by July 14, but as that date rapidly approaches, it's becoming quite clear that there will be a delay.
With that being the case, Google had the foresight to bring on legal counsel for their self-driving car project in the form of former Climate Corporation attorney Ken Vosen. With Vosen on board, Google now has a legal barrier between their self-driving cars and the mob of prosecutors, lawmakers, news outlets, and consumers that would inevitably pounce if their self-driving systems ever suffered a catastrophic failure. While Google's systems are being tested far more carefully than Tesla's, which simply handed the feature to consumers with a disclaimer that it wasn't fully ready, the possibility of a failure still exists. The fact that people just can't seem to stop crashing into Google's self-driving cars, even when the cars are driving in an objectively correct manner, should also be taken into consideration. Tim Papandreou of the San Francisco Municipal Transportation Agency's Office of Innovation is also on board to help with partnerships, which suggests that Google plans to put their technology into automakers' hands once it's ready. That could leave their self-driving technology subject to the whims of a conventional car's computing systems, potentially causing failures and even crashes. Google will work closely with all of their partners on integration, much as they worked with Chrysler to produce a fleet of 100 vehicles for testing, and as they are working with Ford. While this will help integration efforts, it is no guarantee of perfect function; perfection, however, is exactly what's required. Google's system is designed to know when it's in over its head and ask a human to take over, but catastrophic failure remains a possibility, one the company is preparing for in the wake of the Tesla crashes and the widespread scrutiny that came of them.
While lawmakers haven't yet said anything to other self-driving car makers, such scrutiny is likely inevitable. With so many players in the game, however, each using different systems, creating the legal framework they need to operate freely will only be that much tougher. China's Baidu is working on self-driving tech, Apple is reportedly in the game, and a large number of automakers are developing their own autonomous tech, or at least semi-autonomous features for human-driven cars. All of these players will have to consider the implications of Tesla's recent misfortunes as they build out their systems, and will likely retain legal counsel of their own as they work with lawmakers toward seeing their technology eventually reach consumers in one form or another, even if consumers don't own the cars and simply hail them.