Tech Talk: Questions Still Loom Over Autonomous Vehicles


Whether the average consumer is ready to embrace driverless technologies has become an urgent question as experts predict an accelerated timeline for self-driving vehicles. That urgency is only heightened by industry leaders and conglomerates that continually press against the boundaries of what automation can accomplish. The question, which generally centers on ethics, security, safety, and trust, has probably begun to feel mundane to many people. Yet while those concerns certainly aren’t new and the discussion has been ongoing since the first conceptualization of an A.I.-driven automobile, they are increasingly important for consumers in the U.S. Although the questions and concerns surrounding the technology are not new, they have remained mostly unanswered, or at least not answered to the satisfaction of many consumers. Furthermore, there are infrastructure problems that will need to be addressed, bringing in entirely new issues associated with government spending.

Meanwhile, many experts believe the U.S. will be the de facto front-runner in breaking through the barriers that currently prevent the industry from going mainstream, largely because the country’s government has taken a decidedly light-handed approach to regulating the industry. The industry itself appears to be looking to ease the vehicles into mainstream consciousness through travel and mobility services, including ride-hailing and product or package delivery. That may make the transition easier and, if everything goes smoothly, may at the very least solve the trust problem for the majority of customers. Having said that, the move isn’t likely to address the other concerns quite so easily, even if it does dampen or suppress them.

For starters, the ethical and safety questions are likely to become less pressing at the outset. With human-driven automobiles, the rate of accidents and deaths associated with vehicular travel is already very high in the U.S. In fact, the CDC lists auto accidents among the leading causes of death in the country, claiming roughly 36,000 American lives every year. Given where the technologies behind autonomous vehicles stand, that number is likely to drop significantly as those vehicles go mainstream. However, the issue will almost certainly be thrust back into the public consciousness with every accident or death caused by, or even just involving, a self-driving vehicle, because A.I. will not be perfect. Moreover, because the vehicles will be driven by programming, there will be complex questions about where to place blame, how the circumstances could have been avoided, and more. It stands to reason that an A.I. should attempt to avoid accidents wherever possible, but there will be circumstances under which a death occurs regardless of how the system reacts.

Making matters worse, even under the best circumstances, A.I.-driven vehicles are going to malfunction, break down, and behave in ways they were not programmed to. Human-driven vehicles already have a similar problem, and it isn’t well understood how those failures affect the number of accidents and deaths that occur on the road every year. Since the systems are made far more complicated by additional software, hardware, and operating mechanisms, there is also much more room for error. Because the vehicles will effectively be computer systems on wheels, there is also a substantial risk from the near-inevitability of malicious software alterations and attacks on those systems. Vulnerabilities and bugs exist in effectively every computerized system, and often they aren’t recognized until well after an attack or security breach happens. Worse still, there isn’t any comprehensive way to prevent that. Although each company makes every attempt to find and patch security holes as quickly as possible, lapses happen with astounding regularity.

Moving beyond those issues, the advent of a self-driving future will require substantive changes to infrastructure. Some of those changes are obvious, such as rethinking road signs and traffic lights so that a given A.I. system can determine the appropriate action to take, whether through machine learning, machine vision, algorithmic programming, or a combination of those. There will almost certainly be unforeseen changes and infrastructure build-outs needed to ensure that autonomous vehicles work as intended. In any case, each solution will likely require considerable investment to enact on a nationwide scale. Bearing that in mind, government spending has been a focus of political discussion for more than a few years. It has become a charged issue among politicians and constituents alike, regardless of the subject under consideration, and often with splits drawn along party lines. There’s no guarantee the discussion surrounding self-driving vehicles will be left out of that fray.