According to Google, its self-driving cars have been involved in 11 minor accidents since the project started six years ago. That is a remarkable safety record, especially considering that the cars have covered nearly 2 million miles in that time. Google goes on to claim that none of the 11 accidents was a direct result of its autonomous vehicles. Three of the eleven occurred between September 2014 and May 2015, which is still negligible considering there are currently 48 of Google's autonomous cars on California roads. But the question on many minds is: who is to blame when an autonomous vehicle causes a fatal accident? This is bound to happen sooner or later if Google achieves its goal of putting self-driving cars on the road within five years.
Who is to blame when a fatal accident occurs is a difficult question to answer, especially when an autonomous vehicle is involved. It is a question that needs an answer, considering that there are over 5.5 million accidents in the U.S. every year, resulting in nearly 33,000 deaths. With these statistics in mind, the answer is of utmost importance. In the case of a fatality, is the owner of the car to blame, even though he or she was not in control of the vehicle at that moment? Is the self-driving car's AI (artificial intelligence) to blame? Is the company that developed the vehicle to blame? Or is it the software developer who wrote the program that enables the vehicle to make decisions? Accidents caused by robots are on the increase, with hundreds of robot-related accidents each year, some of them fatal. In 1983 a court ruled in favor of the family of Robert Williams, an assembly-line worker at a Ford plant who was killed instantly when a robot's arm crushed him while they were working side by side. The court awarded his family $10 million, placing the blame on the company.
Another question that arises is whether the self-driving car's AI made the right decision, giving rise to the age-old trolley problem. A runaway trolley is rushing down a set of railway tracks. Ahead, on the same tracks, five people are tied up and unable to move. You have access to a lever; if you pull it, you can switch the trolley to another set of tracks. But there is another person on this alternate track who is unaware of the danger. There are two options: (a) do nothing, and let the trolley kill the five people on the current track; or (b) pull the lever, diverting the trolley to the side track where it will kill one person. Which is the correct choice? This gives rise to many scenarios where the robot has to make decisions that may put the 'driver' at risk to save the lives of others. Will the manufacturers of autonomous vehicles agree on the rules and standards to program into the cars? For example, whose safety will the car prioritize?
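To make the dilemma concrete, the choice rules above can be sketched as a toy decision function. This is purely illustrative: the names (`Outcome`, `choose_action`, `protect_occupant`) are hypothetical and do not come from any real autonomous-vehicle system; real systems are vastly more complex and no manufacturer has published such a rule.

```python
# Hypothetical sketch of the trolley-style choice: pick the action with the
# fewest expected casualties, optionally ruling out actions that endanger
# the car's own occupant. Names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str           # e.g. "stay_course" or "swerve"
    casualties: int       # expected number of people harmed
    risks_occupant: bool  # whether the car's passenger is endangered

def choose_action(outcomes, protect_occupant=False):
    """Return the outcome with the fewest expected casualties.

    If protect_occupant is True, options that endanger the passenger are
    ruled out first -- showing how a single flag encodes a contested
    moral choice (whose safety does the car prioritize?).
    """
    candidates = outcomes
    if protect_occupant:
        safe = [o for o in outcomes if not o.risks_occupant]
        if safe:
            candidates = safe
    return min(candidates, key=lambda o: o.casualties)

# The trolley scenario: do nothing (five casualties) or act (one),
# where acting puts the occupant at risk.
options = [
    Outcome("stay_course", casualties=5, risks_occupant=False),
    Outcome("swerve", casualties=1, risks_occupant=True),
]

print(choose_action(options).action)                         # utilitarian rule
print(choose_action(options, protect_occupant=True).action)  # occupant-first rule
```

The two calls give opposite answers from the same inputs, which is exactly the standardization problem: until manufacturers agree on which rule to ship, identical situations could be resolved differently by different cars.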
This is a difficult question to answer; there may be no correct answer, and it will create new problems for the law and industry alike. Take personal vehicle insurers, for example: since these are self-driving vehicles, there could be pressure to dismantle laws that require auto insurance, and that entire revenue stream would dry up, perhaps replaced by auto manufacturers self-insuring their liability. Whatever the case, a solution to this question is needed to help ease the transition.