
Quantifying Theoretical Obstacles to Achieving Safe Fully Autonomous Driving

Description:

The driving public greatly anticipates the claimed availability of fully autonomous vehicles (AVs) in the not-too-distant future. DOTs have undertaken programs to promote and support AV research, citing the expectation that the technology will reduce accident deaths [1][2]. However, there is also an increasing level of concern voiced by academic experts over whether safe autonomy will ever be achieved given the limits of the mathematics underlying Artificial Intelligence (AI) [7]. Ultimately the concerns extend to whether any combination of software and hardware will be capable of even safe assisted driving, because partial autonomy still requires the self-driving system to reliably detect accident-prone situations [4]. The scientific issue is that while it is not known whether safe autonomous vehicles will ultimately be possible, if the experts’ concerns are borne out, the goal is currently out of reach. Providing scientific evidence of whether such concerns are valid is the crux of this proposal.

Objective:

Although some of the current information concerning obstacles to safe autonomous driving is found in academic journals, for example the recent publications describing the susceptibility of neural network based Machine Learning (ML) to adversarial attacks [5][6], much of the concern about the viability of AVs has been expressed in blog entries [4][7][8][9] and newspaper articles [3]. To be useful, the concepts presented in such outlets must be formalized into information upon which engineering, investment, and policy decisions can be based.
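To make the adversarial-attack concern concrete, the following is a minimal sketch, not taken from the cited papers [5][6]: a fast-gradient-sign-style perturbation against a toy linear classifier. All weights, inputs, and the class interpretation are hypothetical placeholder values.

```python
# Sketch of an adversarial perturbation against a linear classifier,
# in the spirit of the attacks surveyed in [5][6]. Toy values only.

def predict(weights, x):
    """Return 1 (e.g. 'stop sign present') if the score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(weights, x, eps):
    """Fast-gradient-sign-style step: for a linear model the gradient of
    the score with respect to x is the weight vector itself, so the
    worst-case perturbation of size eps moves each input against it."""
    return [xi - eps * sign(w) for xi, w in zip(weights, x)]

weights = [0.4, -0.3, 0.8, 0.1]   # hypothetical trained model
x = [1.0, 0.2, 0.9, 0.5]          # hypothetical input, classified as 1

adv = fgsm_perturb(weights, x, eps=1.0)
print(predict(weights, x))    # original input: class 1
print(predict(weights, adv))  # bounded perturbation flips it: class 0
```

The point is not the toy arithmetic but the structure: the same mechanism, scaled up to deep networks, lets a bounded, visually minor change to an image (such as stickers on a stop sign [6]) flip the classifier's output.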

Knowing whether it is possible, or not, to base AV functionality on an existing mathematical framework is vitally important to the DOT engineers and policy makers. There are a small number of overlapping research questions that could bring clarity to the prospects of achieving any level of safe autonomy.

First, how mathematicians distinguish between ML and AI [7] is important. The former is the existing collection of statistical methods for recognizing patterns in structured data (such as images). AI introduces a level of human-like reasoning, for example, the ability to forecast the evolution of a traffic situation far enough into the future to either automatically avoid an accident or to give the driver sufficient time to assess the situation and execute the proper response. The ability to emulate human reasoning is needed for a vehicle to respond correctly, based solely on previous information, to situations that have never been encountered before. This is known as the ability to generalize [8][10]. At this time AI is an aspiration [7] that is far from certain, and it is known that ML will never rise to the level of AI [10]. Thus a question of intense interest is: “Is there evidence that autonomous driving can be achieved without AI?”
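The generalization gap can be illustrated with a deliberately simple sketch: a 1-nearest-neighbour "driver" that can only echo the response stored for the most similar previously seen situation. The features, values, and responses below are hypothetical, chosen only to show the failure mode.

```python
# Toy illustration of why pure pattern matching does not generalize:
# a 1-nearest-neighbour policy maps every situation, however novel,
# onto whichever stored pattern is numerically closest.

# (speed_kmh, obstacle_distance_m) -> stored response (hypothetical)
seen = {
    (50.0, 100.0): "maintain",
    (50.0, 10.0):  "brake",
}

def respond(situation):
    """Return the response stored for the closest previously seen situation."""
    nearest = min(seen, key=lambda s: sum((a - b) ** 2
                                          for a, b in zip(s, situation)))
    return seen[nearest]

# A familiar situation retrieves the sensible stored answer...
print(respond((52.0, 9.0)))    # -> "brake"

# ...but a novel one (much higher speed, moderate distance) is silently
# matched to an old pattern and gets that pattern's answer.
print(respond((120.0, 60.0)))  # -> "maintain"
```

A system that could generalize would reason that 120 km/h with an obstacle 60 m ahead leaves little stopping margin; the pattern matcher merely reports which memorized case the numbers happen to resemble.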

Second, without AI, two other software strategies have been the basis for attempts at implementing AV functionality: neural network based ML, a pattern-matching strategy, and what is called “good old-fashioned AI” (GOFAI), a rule-based paradigm. With regard to neural network based ML, Brandom [4] quotes Professor Gary Marcus to argue that ML is inadequate:

...“Driverless cars are like a scientific experiment where we don’t know the answer,” Marcus says. We’ve never been able to automate driving at this level before, so we don’t know what kind of task it is. To the extent that it’s about identifying familiar objects and following rules, existing technologies should be up to the task. But Marcus worries that driving well in accident-prone scenarios may be more complicated than the industry wants to admit. “To the extent that surprising new things happen, it’s not a good thing for deep [machine] learning.”

“Surprising new things” tend to be difficult to represent as a pattern, otherwise they wouldn’t be “surprising.”

Autonomous car companies are apparently shifting towards the rule-based GOFAI in an attempt to surmount limitations imposed by neural network based ML. As with the problems that Marcus sees with “surprising new things” that AVs must be able to handle, Brandom [4] describes a deeper problem:

The experimental data we have comes from public accident reports, each of which offers some unusual wrinkle … Each accident seems like an edge case, the kind of thing engineers couldn’t be expected to predict in advance. But nearly every car accident involves some sort of unforeseen circumstance, and without the power to generalize [from the situations that resulted in previous accidents], self-driving cars will have to confront each of these scenarios as if for the first time. The result would be a string of [flukey] accidents that don’t get less common or less dangerous as time goes on.

From a mathematical point of view, Brandom is raising the prospect that the number of unique situations that could lead to a crash is either very large but finite or, worse, infinite. If the upper bound is large enough, neither ML nor GOFAI would provide a means to implement autonomy. Thus, the question that must be answered is “What is the upper bound on the number of accident-prone situations?”
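A back-of-envelope calculation suggests why that upper bound is worrying even under very coarse assumptions. The factors and category counts below are hypothetical placeholders, not a validated taxonomy; the point is only the multiplicative growth.

```python
# Hypothetical, coarsely quantized factors of a driving situation and
# the number of categories assumed for each. Not a validated taxonomy.
factors = {
    "weather":          8,   # clear, rain, snow, fog, glare, ...
    "road_geometry":    20,
    "surface_state":    6,
    "traffic_density":  10,
    "actor_types":      15,  # pedestrians, cyclists, animals, debris, ...
    "actor_behaviours": 25,
    "lighting":         5,
    "signage_state":    12,
}

# Independent factors multiply, so the scenario count is their product.
combinations = 1
for count in factors.values():
    combinations *= count

print(f"{combinations:,}")  # 216,000,000
```

Even eight coarsely quantized factors yield over 2 x 10^8 distinct scenarios, and refining any one factor, or adding a ninth, multiplies the total. This is the combinatorial pressure behind doubting that a complete rule-based (GOFAI) enumeration is feasible.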

Benefits:

If safe autonomy is proved to require AI, achieving it will depend on a revolutionary breakthrough that is currently unanticipated. That alone could raise doubts about the advisability of investing in technology to support autonomous vehicles, since there is no way to estimate when such a breakthrough might occur. This research could enable engineers and decision makers to base decisions on scientific evidence as to what is possible, or not, given the state of current mathematics.

DOTs and policy makers could also benefit from understanding the consequences of forging ahead in the absence of safe autonomy. For example, Jordan [7], who argues that AI is required for full autonomy, offers a prognosis of what the transportation system will need to evolve into:

... consider self-driving cars. For such technology to be realized, a range of engineering problems will need to be solved that may have little relationship to human competencies (or human lack-of-competencies). The overall transportation system ... will likely more closely resemble the current air-traffic control system than the current collection of loosely-coupled, forward-facing, inattentive human drivers. It will be vastly more complex than the current air-traffic control system, specifically in its use of massive amounts of data and adaptive statistical modeling to inform fine-grained decisions. It is those challenges that need to be in the forefront, and in such an effort a focus on human-imitative AI may be a distraction.

In other words, AVs might not have a driver, but they will not be completely autonomous until AI is available as an implementation medium.

This research could serve as a counterweight to the benefits of AVs claimed by parties with an interest in promoting autonomous vehicles [11]. One argument is that AVs will improve over time [12]. A second is that the sooner AVs are introduced, the more lives will be saved [13]. Theoretically, arguments such as these depend on the number of situations that could lead to a crash being small enough to be completely enumerated.

Finally, a set of clear, accurate, and accessible descriptions of the drawbacks and benefits of AVs based on scientific analysis would make possible a vigorous debate of all the alternatives, ensuring that the choices that best serve the clients of the transportation system are made. These alternatives range from investing in dedicated lanes, new agencies, and other technology to support AVs to, for example, other mass transit options such as light rail.

Related Research:

[1] Florida Department of Transportation, Florida Automated Vehicles Program, automatedfl.com, http://www.automatedfl.com/

[2] Virginia Department of Transportation, Virginia Connected and Automated Vehicle Program, virginiadot.org, https://www.virginiadot.org/programs/connectedandautomated_vehicles.asp

[3] Siddiqui, Faiz (2019, July 17). Tesla floats fully self-driving cars as soon as this year. Many are worried about what that will unleash, The Washington Post, www.washingtonpost.com/technology/2019/07/17/tesla-floats-fully-self-driving-cars-soon-this-year-many-are-worried-about-what-that-will-unleash/

[4] Brandom, Russell (2018). Self-Driving Cars are Headed Toward an AI Roadblock, theVerge.com, www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber

[5] Akhtar, Naveed, and Mian, Ajmal (2018). Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, arXiv.org, arxiv.org/pdf/1801.00553.pdf

[6] Eykholt, Kevin, Evtimov, Ivan, et al. (2018). Robust Physical-World Attacks on Deep Learning Visual Classification, arXiv.org, arxiv.org/abs/1707.08945

[7] Jordan, Michael (2018). Artificial Intelligence ― The Revolution Hasn’t Happened Yet, medium.com, medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7

[8] Marcus, Gary (2018). Deep Learning: A Critical Appraisal, arXiv.org, arXiv:1801.00631v1

[9] Nield, Thomas (2019). Is Deep Learning Already Hitting its Limitations?, towardsdatascience.com, towardsdatascience.com/is-deep-learning-already-hitting-its-limitations-c81826082ac3

[10] Judd, J. Stephen (1988). Neural Network Design and the Complexity of Learning, PhD thesis, California Institute of Technology, authors.library.caltech.edu/26705/1/88-20.pdf

[11] Litman, Todd (2019). Autonomous Vehicle Implementation Predictions: Implications for Transport Planning, Victoria Transport Policy Institute, www.vtpi.org/avip.pdf

[12] Martin, Jeremy (2019). When Will Autonomous Vehicles be Safe Enough? An interview with Professor Missy Cummings, Union of Concerned Scientists Blog, blog.ucsusa.org/jeremy-martin/when-will-autonomous-vehicles-be-safe-enough-an-interview-with-professor-missy-cummings

[13] Groves, David G., and Kalra, Nidhi (2017). Enemy of Good: Autonomous Vehicle Safety Scenario Explorer, RAND Corporation, www.rand.org/pubs/tools/TL279.html

Relevance:

This research could be conducted under the auspices of the NCHRP program. It is also appropriate for PhD thesis projects.

Sponsoring Committee: AFB80, Geospatial Data Acquisition Technologies in Design and Construction
Research Period: 24 - 36 months
Research Priority: High
RNS Developer: Stephen J. Bespalko
Source Info: Presentations at the Transportation Research Board 2018 Annual Meeting:
• P18-20629 Daniel H. Baxter, Testing and Integration of Autonomous Vehicle Systems
• P18-20654 Tyler Weldon, Autonomous TMA Truck
• P18-20656 Monali Shah, Mapping Used in Navigation Solutions For Vehicles Transitions from 2-D to 3-D
• P18-20607 Thomas Fisher, Evolution of Automobile Technologies
• P18-20615 Kevin P. Dopart, Digital Infrastructure and Status of Technologies Through FHWA’s Perspective
• P18-20620 Key Systems for Autonomous Vehicles and Technologies: Flash Lidar, Cameras, and Radar
AFB80 2018 Summer Workshop, Sacramento CA
AFB80 2019 Summer Workshop, Daytona Beach FL
Date Posted: 10/31/2019
Date Modified: 11/11/2019
Index Terms: Future, Level 5 driving automation, Autonomous vehicles, Artificial intelligence, Theory
Cosponsoring Committees: 
Subjects    
Highways
Design
Planning and Forecasting
Safety and Human Factors
Vehicles and Equipment
