The self-driving vehicle efforts of Intel and Mobileye are taking some heat after the companies recently began a massive autonomous vehicle test in Jerusalem, with criticism centered on their trial method. The issue isn’t simply that the testing process and onboard A.I. rely on only one sensor at a time. Rather, critics have concerns from the ground up with several assumptions the companies have made, and it’s arguable that those concerns aren’t wholly unfounded. A central tenet of the Israel trials is that the A.I. can learn to drive with camera sensors and Lidar each tested individually. Moreover, those results would be backed by mathematical proofs augmented by the number of miles the system drives. In short, the companies effectively claim that one system error for every 30,000 hours of driving can equate to one error every 1 billion miles, extending the figure to account for Lidar as a backup sensor. From that claim, the companies conclude that they only need to test each system individually.
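The numbers above appear to rest on an independence argument: if each standalone subsystem fails roughly once per 30,000 hours, and the failures of the two subsystems never coincide except by chance, the joint failure rate is the product of the individual rates. A minimal sketch of that arithmetic follows; the 30,000-hour figure is from the claim itself, the independence assumption is exactly what critics question, and none of this is Mobileye's actual math.

```python
# Sketch of the independence argument, not Mobileye's actual derivation.
mtbf_single_hours = 30_000            # one error per subsystem per 30,000 hours (claimed)

# If camera-only and Lidar-only failures were truly independent, the chance
# of both failing in the same hour is the product of the individual rates.
rate_single = 1 / mtbf_single_hours
rate_joint = rate_single ** 2         # assumes statistical independence

mtbf_joint_hours = 1 / rate_joint     # ~900 million hours between joint failures
print(f"joint MTBF: {mtbf_joint_hours:,.0f} hours")
```

Multiplying the two rates is what turns a modest 30,000-hour figure into a billion-scale one; the entire result hinges on the two subsystems never sharing a failure mode.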
Ordinarily, companies making headway in self-driving vehicles test every component of the system in tandem over many millions of miles. While that approach is both time-consuming and expensive, the tests being conducted in Jerusalem basically shirk it entirely. Mobileye representatives have said that separated testing allows for better error resolution and easier data validation. The problem is that testing that way does nothing to ensure there are no overlapping issues once the system attempts to use all of its onboard sensors simultaneously. Although each sensor on an autonomous car or truck is read, and its data points fed in, individually, that data still interacts on the software side. As a rule, introducing more systems and complexity compounds the number of errors that are likely to occur. For example, in those situations where an error is present, the system will need to resolve which data point is wrong and respond appropriately. It’s unclear how it can accomplish that without significantly more road time during which the system uses both or all onboard sensors simultaneously.
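The arbitration problem described above can be illustrated with a toy example. The function below is entirely hypothetical; the names, threshold, and fallback policy are illustrative assumptions, not anything from Mobileye's software. It shows why disagreement between sensors forces a policy decision that can only really be validated with both sensors running together on the road.

```python
# Hypothetical illustration of cross-sensor arbitration; none of these
# names or thresholds come from Mobileye or Intel.

def fuse_distance(camera_m: float, lidar_m: float, tolerance_m: float = 2.0) -> float:
    """Return a single distance estimate to a leading object.

    When the two subsystems agree within `tolerance_m`, average them.
    When they disagree, there is no ground truth to say which one erred,
    so this sketch falls back to the more conservative (closer) reading.
    """
    if abs(camera_m - lidar_m) <= tolerance_m:
        return (camera_m + lidar_m) / 2
    # Disagreement: without a third source or joint road testing, the
    # system can only guess; taking the minimum errs toward caution.
    return min(camera_m, lidar_m)

print(fuse_distance(40.0, 41.0))   # agreement -> 40.5
print(fuse_distance(40.0, 15.0))   # conflict  -> 15.0 (conservative)
```

Testing each sensor in isolation exercises neither branch of that conflict logic, which is the crux of the critics' objection.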
Meanwhile, there’s the problem of backing up the claims with real-world driving hours. It’s easy enough to build a mathematical model that shows only one error in either onboard system, but the figures aren’t necessarily true. For starters, as mentioned above, there needs to be some consideration of how the A.I. will handle both sensors’ data simultaneously. Going far beyond that, however, the next wave of road-ready A.I.-driven vehicles is a technology that needs to earn trust. Mathematical models are a great tool for whittling away at a problem and sharpening predictions over time, and they tend to get more accurate as more data is fed in. But they aren’t always correct and sometimes require more data than expected. Since this kind of proof hasn’t really been tried for this application, there’s no way to be sure the models are as accurate as they need to be. That’s made all the more pressing by the fact that human lives depend on their accuracy.
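There is a standard statistical way to put numbers on how much real-world driving it takes to back a reliability claim, independent of any one company's model. The classic "rule of three" says that after n failure-free trials, the 95% upper confidence bound on the failure rate is roughly 3/n, derived from solving (1 - p)^n = 0.05. A short sketch:

```python
import math

# Back-of-the-envelope check: how many failure-free miles are needed to
# claim, with a given confidence, that the true per-mile failure rate is
# below a target? Follows from solving (1 - p)^n = 1 - confidence for n,
# with the small-p approximation n ≈ -ln(1 - confidence) / p.

def miles_needed(target_rate_per_mile: float, confidence: float = 0.95) -> float:
    """Failure-free miles required so the upper confidence bound on the
    per-mile failure rate drops below `target_rate_per_mile`."""
    return -math.log(1 - confidence) / target_rate_per_mile

# Demonstrating one failure per billion miles at 95% confidence takes
# roughly 3 billion failure-free miles:
print(f"{miles_needed(1e-9):,.0f} miles")
```

By that yardstick, a one-error-per-billion-miles claim can be mathematically asserted long before it could ever be empirically demonstrated, which is precisely why critics want the models validated against far more road data.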
It goes without saying, of course, that none of that guarantees the method being pursued by Mobileye and Intel is going to fail. The two companies could very well be onto something, despite their competitors having an enormous head start. There is an argument to be made that giving the A.I. a framework of rules pertaining to safe following distance, right of way, and caution around occluded objects is a great place to start. However, that may be all it is. A great starting point doesn’t equate to a perfect testing method, one free of issues of its own or able to easily sidestep extensive real-world testing.