This blog post was written by Paul Hightower, the CEO of Instrumentation Technology Systems. He discusses a range of technology and raises some interesting questions regarding autonomous vehicles in general.
Autonomous vehicles are coming to a street near you, very soon. AUVSI’s 2017 Automated Vehicle Symposium was a great resource for exploring the current landscape of the industry. Topics ranged from new sensor developments and mapping software to connected vehicles, ADAS, proving-ground testing, regulation, and liability. It is all coming fast and furiously, but there seem to be a lot of unanswered questions when looking at this new transportation technology as a whole.
I listened to speakers from Baidu, Intel, NVIDIA, the DOT and more. I spoke with developers from all sectors of the industry and gained valuable insight that sparked some questions. I learned that thousands of test AVs are on the streets right now, gathering experience that will be used to deliver competent autonomous vehicles (AVs). I also learned that qualifying and testing AVs will be quite different from what we have been doing for the past 100 years. The difference? AVs will be decision makers, not just extensions of human arms and legs. I also learned that AVs, the software they run, and the sensors they see, hear and feel with will all change, and that none of this is mature.
Today, auto and tech companies are racing to market with products integrating this exciting autonomous technology. While many of the technologies needed to deliver self-driving vehicles are in use or in demonstration phases, regulatory issues are not resolved. For example, while listening to the head of the Colorado DOT speak on the complications of rule-making, the chief of the state police posed an interesting question: “How do I pull over an autonomous vehicle?” A techie quickly answered, “Why would you need to? The autonomous vehicle always follows the vehicle code.” Not surprisingly, the response from the chief was, “What if the person inside broke parole or robbed a store or stole the vehicle? How do I pull it over?”
This wasn’t just a technical question; it was a legal question. When an officer signals you to pull over, you can comply voluntarily or fail to yield. If a pursuit of an AV ensues, however, what would its nature be? The AV is personal property; does the officer have the authority to take control of it? I never heard any clear answers, but that exchange sure gives pause for thought.
In listening to the technical presentations by Baidu, Intel and NVIDIA, I discovered that while much has been achieved, there is a long way to go. Even the operating system on which all of the software will run has not been determined. Intel said it has made its solutions highly flexible because “Intel knows for sure the platform envisioned today will be entirely different only five years from now.”
Realization of AVs relies on artificial intelligence (AI) coupled with machine learning (ML). Combined, AI and ML transform automation into decision making. In automation, the exact process and decision tree that a computer follows is prescribed, consistent and known in advance. Conversely, an AI engine makes decisions based on the probability of success in achieving an objective (go from here to home, for example). That probability is modified based on experience captured in the ML and analysis of the current conditions perceived by the AI element. The framework on which the ML resides is generally a neural network. Neural networks provide a multipath scheme to save and recall information in a manner similar to how our brains work.
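To make the distinction concrete, here is a minimal Python sketch (mine, not anything presented at the symposium) contrasting prescribed automation with probability-driven decision making. The action names and probability values are invented for illustration.

```python
# Automation: the decision tree is prescribed, consistent and known in advance.
def automation_step(obstacle_ahead: bool) -> str:
    return "brake" if obstacle_ahead else "proceed"

# AI-style decision: score candidate actions by an estimated probability of
# success toward the objective; the estimates shift as experience accumulates.
def ai_step(success_estimates: dict) -> str:
    return max(success_estimates, key=success_estimates.get)

# Hypothetical learned estimates for "get home safely" in the current scene.
estimates = {"proceed": 0.72, "slow_down": 0.91, "change_lane": 0.64}
print(automation_step(obstacle_ahead=False))  # always the same answer
print(ai_step(estimates))                     # -> "slow_down", for now
```

The automation function returns the same answer forever; the AI function’s answer changes whenever experience changes the estimates.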
The decisions an AI system makes from instant to instant will depend on its perception of the local environment (objects in “view”, road conditions, the condition of the vehicle, the load the vehicle is carrying, local rules of the road, and the experience accumulated since it was put into service). That is a lot of variables. So many that even the same vehicle may not behave the same way when traversing from A to B. In other words, autonomous cars will have a personality of sorts.
For AVs, a starting experience is loaded into the neural network at the time of production. This experience consists of objective outcomes under a wide range of circumstances, embedded with probabilities of success based on historical information derived from experts (human drivers). Once on the street, more experience is accumulated that may change those probabilities and add variances that change the decisions made by the AI engine. The ML has to be able to add new conditions not previously considered.
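As a rough illustration of how accumulated experience might shift such a probability, consider this sketch, which assumes a simple success/failure count per maneuver; production ML systems are of course far more elaborate than this.

```python
# A minimal sketch: a per-maneuver success probability that starts from
# factory-loaded "expert" experience and is nudged by on-road outcomes.
class ManeuverExperience:
    def __init__(self, prior_successes: int, prior_failures: int):
        # Starting experience derived from expert (human) drivers.
        self.successes = prior_successes
        self.failures = prior_failures

    def record(self, succeeded: bool) -> None:
        # Experience accumulated once the vehicle is on the street.
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def p_success(self) -> float:
        return self.successes / (self.successes + self.failures)

merge_in_rain = ManeuverExperience(prior_successes=980, prior_failures=20)
print(round(merge_in_rain.p_success, 4))  # 0.98 from the factory
merge_in_rain.record(succeeded=False)     # one bad outcome on the road
print(round(merge_in_rain.p_success, 4))  # the estimate drifts down
```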
Even though AVs will have different experiences, they will have the same “mind”. Does that mean every car (Toyotas, Fords, Chevys, RAM Trucks, Nissans, Ferraris, and Maseratis) will behave the same way? I don’t think so.
Vehicle makers need differentiation. Of course, part of the differentiation might be luxury, entertainment, driving range, etc. But vehicles have different purposes: hauling, commuting, pleasure, basic transport, status, expression of self, and so on. What could, or will, be done with the AI/ML systems to provide differentiation? A driving style may be crafted for a manufacturer or model to attract different consumers. Perhaps there will be selectable styles, much like current vehicles permit you to set transmission shifting profiles and suspension characteristics. The result is a myriad of AI/ML behaviors, or personalities.
How does one evaluate AVs then? Would you trust the manufacturer, who has a vested interest in getting to market and avoiding liability? Moreover, in driving there are many variables that can impact success (getting there). The objective may not be just ‘getting there’; in real life it is ‘getting there at a specific time’. The time objective adds parameters that vary with the travel itself. Just getting there is a path. Getting there on time may be a different path that is selected or “recalculated” due to heavy traffic, weather, road conditions, construction, accidents, parades, or countless other external, uncontrolled circumstances. Some inputs will change a path (drive around something), stop progress (traffic), or perhaps be ignored (a rat crossing the road). These circumstances may influence the ML database to add experience that affects the probability of achieving an objective.
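That “recalculation” can be pictured as a shortest-travel-time search whose edge costs change with conditions. The toy road graph and delay factor below are invented for illustration.

```python
import heapq

# Base travel times (minutes) for a toy road graph; invented for illustration.
base_times = {
    "A": {"B": 5, "C": 9},
    "B": {"D": 4},
    "C": {"D": 2},
    "D": {},
}
delays = {("A", "B"): 3.0}  # heavy traffic triples the A->B segment

def travel_time(u: str, v: str) -> float:
    return base_times[u][v] * delays.get((u, v), 1.0)

def fastest_path(start: str, goal: str):
    # Dijkstra's algorithm over the current (condition-adjusted) travel times.
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt in base_times[node]:
            heapq.heappush(frontier,
                           (cost + travel_time(node, nxt), nxt, path + [nxt]))
    return float("inf"), []

# Without traffic the route runs A->B->D (9 min); with it, A->C->D wins.
print(fastest_path("A", "D"))  # -> (11.0, ['A', 'C', 'D'])
```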
Since the vehicle is making the pathway, segment-speed, obstruction-avoidance, collision-avoidance, travel-time decisions and more, testing an AV is not just about braking performance or how a vehicle crushes in a collision. One must evaluate how the vehicle responds to its environment under pressure from its objectives and the desired driving style of the chief occupant1. Testing the decision-making capability and appropriateness of a vehicle is a whole new world. It should be a necessary practice to ensure vehicle performance and safety across the extremely wide range of conditions that AVs will be subjected to.
Another question is longevity. After the vehicle has been in service for a number of years, could the size of the learning database outgrow the ML’s capacity? If it could, what experience can be “thrown out” to make room for the new? How does the AI/ML engine prioritize what can be discarded? Will these decisions be the right ones?
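To see why those discard decisions are hard, here is a deliberately naive sketch of a capacity-limited experience store that evicts by a made-up usefulness score; the scoring rule and entries are entirely illustrative.

```python
# A naive eviction policy for a capacity-limited experience store.
class ExperienceStore:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = {}  # situation -> (usefulness, last_reinforced)

    def add(self, situation: str, usefulness: float, now: float) -> None:
        self.entries[situation] = (usefulness, now)
        while len(self.entries) > self.capacity:
            # Evict the lowest score: usefulness plus a small recency bonus.
            victim = min(self.entries, key=lambda s:
                         self.entries[s][0] + 0.01 * self.entries[s][1])
            del self.entries[victim]

store = ExperienceStore(capacity=2)
store.add("merge_in_rain", usefulness=0.9, now=1.0)
store.add("deer_at_dusk", usefulness=0.4, now=2.0)
store.add("black_ice", usefulness=0.8, now=3.0)
print(list(store.entries))  # "deer_at_dusk" was discarded; was that right?
```

A rare but safety-critical experience can easily score as ‘least useful’ under a rule like this, which is exactly the worry raised above.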
Will a manufacturer that is cost sensitive, responsibility averse, and in a hurry to compete provide the tools to independently evaluate incidents, to help identify better decision paths in the future and improve the safety and performance of AVs?
At present, Event Data Recorder regulations only require data to be collected for a short time (less than a second) around an impact sufficient to deploy the airbag system. However, I can think of many incidents in my life that never deployed an airbag. Those incidents were caused by human error. In the future, we can replace that cause with AI decision error (although hopefully there will be far fewer). Since humans may be passive and not even paying attention, it will likely be important to include an independent witness of events to understand the circumstances of an incident. If the AV has human controls (steering wheel, brake, accelerator), which had control at the time of the accident? The witness may need to independently detect lesser incidents and keep a record for future analysis.
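One way to picture such an independent witness is a ring buffer that continuously holds recent samples and freezes a window around any trigger, not just an airbag deployment. This is a minimal sketch with invented sample contents and sizes.

```python
from collections import deque

class WitnessRecorder:
    def __init__(self, pre_samples: int, post_samples: int):
        self.pre = deque(maxlen=pre_samples)  # rolling pre-trigger history
        self.post_needed = post_samples
        self.post_got = 0
        self.capturing = False
        self.event = []

    def sample(self, frame: dict) -> None:
        if self.capturing:
            self.event.append(frame)
            self.post_got += 1
            if self.post_got >= self.post_needed:
                self.capturing = False  # event record complete
        else:
            self.pre.append(frame)

    def trigger(self, reason: str) -> None:
        # Freeze the history and keep capturing through the aftermath.
        self.event = list(self.pre) + [{"trigger": reason}]
        self.post_got = 0
        self.capturing = True

rec = WitnessRecorder(pre_samples=300, post_samples=150)  # e.g. 10 s at 30 Hz
for t in range(400):
    rec.sample({"t": t, "speed_mps": 20.0, "in_control": "AI"})
rec.trigger("hard_braking")  # also answers: who had control at that instant?
```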
A testing body2 may need to observe vehicle performance in a manner that is independent of the AI/ML system in control. After a major incident or string of incidents, collection of data or independent testing by the NHTSA may be required to certify an AV’s readiness to be sold and put on the road or to determine causes and corrective actions, separate from AI/ML interpretation.
An independent witness compiling vehicle sensor information, video and control data would be valuable to evaluate whether the AI/ML system is properly interpreting sensor data. Additionally, the independent witness could help determine whether the decisions made are at least within the acceptable range defined by expert drivers.
Even with semi-autonomy, how is control handed off between the AI/ML driver and the human? Do both see the same danger and select the same corrective action? Which has the liability if the corrective action is insufficiently effective? How do you know who (AI/ML or human) had control at the instant of the incident? Scenarios of this type have been investigated by the University of Leeds in the UK. In situations such as these, who has the liability for injury and property damage? Did the human take control away from the AI processor? Did the AI processor override human input? Or was the ultimate response to a situation a combination of both? If control was handed back to the human, were they paying attention and given enough time to respond?
It would seem to me that testing AVs is far more complex than testing the vehicles we now drive. Test ranges will need to be instrumented differently and will likely need more out-of-vehicle equipment to observe and evaluate behavior. The rules issued by departments of transportation should require more test independence to ensure that manufacturers can’t cheat the system (read: emissions testing) by putting not only polluting vehicles on the street, but underdeveloped ones as well.
Many of the building blocks are being tested in vehicles now. In my 2015 car, the system uses a front-view camera to read speed limit signs and tell me whether I am speeding, determine if I am closing in on the car in front of me and warn me to pay attention, and even “see” a pedestrian and start braking the car to avoid contact. These Advanced Driver Assistance Systems are elements that are needed to make an AV work, but right now they don’t take control away from you; they warn you and suggest action. It is quite clear you are still in charge and responsible. More systems are needed. After all, the AI/ML system has to evaluate a scene, evaluate what the vehicle is doing at every instant in time, and then make decisions, just as you do. It will come, but this is a process, not an event. As such, whatever is released soon (2020?) may still be a teenager, so to speak.
The sensors that replace your eyes, ears and butt to tell you what is going on around you and what your car is doing are many. They range from an array of cameras, radars, sonar and laser imagers to accelerometers, the on-board GPS, and even the clock. The data coming from this collection of sensors must be fused together to form a real-time equivalent of what you see and understand in any instant of a driving situation. Doing so keeps the AI processor very busy.
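One of the chores in that fusion is aligning sensor streams that arrive at different rates onto a single moment in time. The sketch below does just that for three invented streams, picking the sample nearest a common timestamp.

```python
import bisect

def nearest(samples, t):
    # samples is a time-ordered list of (timestamp, reading) pairs.
    times = [s[0] for s in samples]
    i = bisect.bisect_left(times, t)
    candidates = samples[max(0, i - 1):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

camera = [(n / 30.0, f"frame{n}") for n in range(90)]  # 30 Hz video
radar = [(n / 15.0, f"sweep{n}") for n in range(45)]   # 15 Hz radar
gps = [(float(n), f"fix{n}") for n in range(3)]        # 1 Hz GPS

t_now = 1.234  # one instant of the driving situation
snapshot = {name: nearest(stream, t_now)
            for name, stream in (("camera", camera),
                                 ("radar", radar),
                                 ("gps", gps))}
print(snapshot)  # the "scene" the AI processor reasons about at t_now
```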
Classification (mailbox, rat, human, tree, debris) is a vital part of the AI decision-making (ignore, avoid, protect, watch, unknown) process. Distance and relative velocities are also part of the decision tree (moving away, moving toward, intersecting course, divergent course, intersect imminent, intersect probable or not probable). What happens if a sensor malfunctions? What backup is there should one or more sensors fail and the AI/ML system be partially blinded? What happens if a sensor is displaced (functioning or not) and the information needed by the AI processor is thus flawed or missing altogether? What happens during a collision, when sensors may progressively be damaged or destroyed while the AI processor is trying to navigate the vehicle to safety? Can it still construct and properly classify objects and predict trajectories? If it determines that the probability of failure is high, what does it do? If there are no human controls, does it just stop? How does it find a safe place?
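The classification-to-action mapping and the degraded-sensing questions can be sketched like this; the categories, thresholds and fallback action are all invented for illustration.

```python
ACTION_FOR_CLASS = {
    "mailbox": "ignore",
    "rat": "ignore",
    "debris": "avoid",
    "human": "protect",
    "unknown": "watch",
}

def decide(detected_class: str, confidence: float,
           sensors_healthy: bool) -> str:
    if not sensors_healthy:
        # Partially blinded: assume the worst and seek a safe stopping place.
        return "minimal_risk_maneuver"
    if confidence < 0.6:
        # A low-confidence classification is watched, never ignored.
        return ACTION_FOR_CLASS["unknown"]
    return ACTION_FOR_CLASS.get(detected_class, "watch")

print(decide("human", 0.95, sensors_healthy=True))    # -> protect
print(decide("rat", 0.40, sensors_healthy=True))      # -> watch
print(decide("debris", 0.90, sensors_healthy=False))  # -> minimal_risk_maneuver
```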
Understanding the order of events, the viability of sensors and the responses of the AI processor will be necessary to produce good outcomes in the future. This process will lead us to mature drivers… later. In an accident, how could you know what went wrong without an independent black box that collects the raw, unprocessed sensory information being sent to the AI/ML system?
What a quandary: this black box will cost more than any manufacturer will want. A black box is not likely to add only a dollar to the cost of a vehicle. A good black box that collects enough video and data to really understand what is going on and what happened will have to be clearly defined. In order to have a level playing field (in cost and specifications), it will likely have to be a part of the legislation.
All of this suggests that the nature of vehicle testing is due for a huge change. I can also expect that all of the AI/ML systems will be different as vehicle manufacturers try to find their competitive edge and satisfy the demographics of their target marketplace. AI processors from different manufacturers will not likely be equal. If so, all of these scenarios need to be evaluated for each brand, perhaps for each model, and for each personality available for each model.
This all builds a case for an onboard independent witness that can not only provide evidence in an accident investigation, but can also lead to the development of new criteria to improve AI processors and the experience and skill loaded into the ML of new vehicles. It also builds a case that the instrumentation at vehicle test ranges may not be adequate to evaluate AVs, certify a design as ready for release, and test changes and modes.
I went to this symposium to see what role my company’s capabilities and products could play in helping to evaluate and mature AV transportation systems in general. I learned that our company could deliver products that independently capture what the sensors give to the AI processor. We currently have products that can capture and record data synchronized with video. In doing so, we can capture each picture taken by the cameras, each sound bite, each radar image, each action taken by the AI processor, each response by the vehicle systems (brakes, steering, engine) and each input by the chief occupant. All of it, collected with each picture, will help us humans, the experts, form our own picture of what happened. Such information is particularly important in fault analysis. We can evaluate the decisions made by the AI processor and how the chief occupant’s interaction contributed to the incident. Armed with this information, all stakeholders can learn the cause of an incident and plot a path to prevent it from happening again.
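As a rough picture of what such synchronized capture might look like, each video frame could be written alongside one time-keyed record of sensor, AI, vehicle and occupant data. The field names below are invented for illustration and do not describe the layout of any ITS product.

```python
import json

def witness_record(frame_id: int, timestamp: float) -> str:
    # One record per video frame, everything keyed to the same timestamp.
    return json.dumps({
        "frame_id": frame_id,
        "timestamp": timestamp,
        "sensor_summary": {"radar_tracks": 3, "sonar_range_m": 8.2},
        "ai_action": {"steer_deg": -1.5, "brake": 0.0, "throttle": 0.2},
        "vehicle_response": {"speed_mps": 19.8, "yaw_rate": 0.01},
        "occupant_input": {"steering_wheel": None, "brake_pedal": None},
    })

# Three frames of a 30 Hz log; who (AI or occupant) acted is in every record.
log = [witness_record(i, 1000.0 + i / 30.0) for i in range(3)]
print(log[0])
```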
At test ranges, independent data/video collection would also serve as a tool to independently evaluate whether the data from all of the sensors and cameras were correctly integrated to form an “image” of the scene similar to, or even better than, the one a human would have assembled. But this comparison should be made through the eyes of the sensors themselves, so that expert evaluators can see what information and imagery was presented to the AI processor to create its interpretation of its physical environment. Similar to the black box, external monitoring of vehicle behavior may be an important tool to qualify, test and improve AI performance.
1 I use the term chief occupant in place of driver. The chief occupant in this context means the person who sets the destination, time of departure/arrival, and driving style, and can issue other commands.
2 Perhaps Insurance Institute for Highway Safety (IIHS) or National Highway Traffic Safety Administration (NHTSA) New Car Assessment Program (NCAP).
Contact us for more information on the equipment offered by ITS for AI.