There is quite a lot of work to be done before self-driving cars are ready to take over the wheel from human drivers, both in technology and in building a proper ethical foundation upon which self-driving car "behavior" should be developed. The latter topic was recently explored in greater detail in Science, in a new study called "The Social Dilemma of Autonomous Vehicles", which tries to figure out how people would like their self-driving cars to behave in certain situations, specifically when faced with a moral decision that could result in the loss of human life. This is a relatively old thought experiment called the "trolley problem", and while human drivers may rarely face such dilemmas in reality, self-driving cars will, and should take them into consideration. That turns the trolley problem from a thought experiment into a real ethical question that needs to be addressed before self-driving vehicles become the norm.
The main benefit of building a self-driving car infrastructure is the reduction of traffic accidents. Generally speaking, humans are prone to make errors while driving due to a variety of factors including, but not limited to, stress and fatigue. Fully autonomous vehicles are meant to eliminate these problems, but that is not to say that such vehicles will be spared difficult decision-making. On the contrary, self-driving vehicles could be faced with moral decisions, such as the aforementioned trolley problem detailed in the screenshot below. Here we are given an example where a self-driving vehicle is faced with a traffic situation involving "imminent unavoidable harm". In the given case, the car must decide between (A) killing one passerby or several pedestrians; (B) killing one pedestrian or its own passenger; and (C) killing several pedestrians or its own passenger. Jean-François Bonnefon, a psychological scientist at the National Center for Scientific Research in France, explains that in these types of scenarios human drivers "may not even be aware that they [are facing a moral situation], and cannot make a reasoned decision in a split-second", which is why it is difficult to compare human drivers with autonomous cars. Humans cannot be programmed, but self-driving vehicles can be and are, which is why scientists have to figure out an ethical answer to the trolley problem.
Interestingly enough, the study shows that participants are generally in favor of reducing the total number of deaths, i.e., taking a "utilitarian approach" to the moral dilemma: participants are oftentimes of the opinion that a self-driving car should be programmed to sacrifice itself and its passengers in order to save the lives of pedestrians and/or other drivers. However, when asked whether they would actually buy a self-driving vehicle programmed to prioritize the lives of pedestrians, the study shows that participants would rather own a car that puts their personal safety ahead of others'. Regardless of whether self-driving cars are ready from a purely technological standpoint, humans may have to answer very difficult ethical questions before they will be comfortable using the technology to its full potential.
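The tension the study describes can be made concrete with a toy sketch of the two programming policies. This is purely illustrative and not from the study: the class names, policy functions, and casualty numbers are all hypothetical, and a real system would face far messier inputs than a clean list of outcomes.

```python
# Toy model of two dilemma-resolution policies for an autonomous vehicle.
# All names and numbers here are hypothetical, chosen only to illustrate
# the contrast between utilitarian and self-protective programming.
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str
    pedestrian_deaths: int
    passenger_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.pedestrian_deaths + self.passenger_deaths

def utilitarian(options):
    """Minimize total deaths, regardless of who dies."""
    return min(options, key=lambda o: o.total_deaths)

def self_protective(options):
    """Protect the car's own passengers first, then minimize other deaths."""
    return min(options, key=lambda o: (o.passenger_deaths, o.pedestrian_deaths))

# Hypothetical version of scenario (C): several pedestrians vs. the passenger.
scenario_c = [
    Outcome("swerve into barrier (kill passenger)", 0, 1),
    Outcome("stay on course (kill pedestrians)", 5, 0),
]

print(utilitarian(scenario_c).label)      # sacrifices the passenger
print(self_protective(scenario_c).label)  # protects the passenger
```

The survey finding maps directly onto these two functions: participants endorsed `utilitarian` behavior for cars in general, but preferred to ride in a car running something closer to `self_protective`.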