Ushering in an Era of Safe Self-Driving Cars with Behavioral Contracts

Note: the ideas and work presented below were developed in collaboration with Tung Phan-Minh, Soon-Jo Chung, and Richard M. Murray at Caltech. The specific papers can be found in the References section.
Imagine 15 years in the future: we see on our Twitter (or maybe our TikTok) feed that a self-driving car has caused a human fatality. A few days later, another fatality. And then yet another. News of each crash obliterates human trust in self-driving cars and instantly ushers us into a self-driving car winter: a time when people no longer want to invest in this technology at all.
How do we avoid such an ominous fate? We propose an approach that will require universal adoption across all self-driving car companies. Instead of each company defining its own cars' behavior, there should be a fundamental set of rules that defines how all self-driving cars behave. We give this fundamental set of behavioral rules a name: a behavioral contract.
But how in the world do we go about defining this behavioral contract? We look to human drivers for inspiration. Humans are able to interact with one another in highly dense, complex, interactive settings like the one shown in the following video. If we look closely at the video, we can see that most of the time, humans are following a small set of simple rules: maintain distance, yield to others, follow traffic rules, and a few more.
In our research, we define a behavioral contract for self-driving cars in a discrete-game setting. The contract specifies the set of rules each self-driving car uses to select the action it will take. A car that selects actions according to the contract we propose behaves much like a human driver: it maintains distance, yields to other agents that have priority, and follows traffic rules. It should be noted that our work assumes all cars take actions according to the contract, and that every car can minimally communicate with other cars nearby.
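To make this concrete, here is a minimal Python sketch of what rule-based action selection in a discrete grid game could look like. The rule names, the rule ordering, and the progress tie-breaker are illustrative assumptions on our part, not the exact formulation from the papers.

```python
# Illustrative sketch of contract-based action selection on a grid.
# The specific rules and their encoding are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    dx: int
    dy: int


@dataclass
class AgentState:
    x: int
    y: int
    goal: tuple  # (x, y) destination cell


def violates_safety(state, action, occupied):
    """Rule 1: never move into a cell another agent occupies."""
    return (state.x + action.dx, state.y + action.dy) in occupied


def violates_priority(state, action, claimed):
    """Rule 2: yield cells claimed by higher-priority agents."""
    return (state.x + action.dx, state.y + action.dy) in claimed


def progress(state, action):
    """Tie-break: prefer actions that reduce distance to the goal."""
    gx, gy = state.goal
    nx, ny = state.x + action.dx, state.y + action.dy
    return -(abs(gx - nx) + abs(gy - ny))


def select_action(state, candidates, occupied, claimed):
    # Filter by the rules in order of importance, then optimize progress.
    legal = [a for a in candidates
             if not violates_safety(state, a, occupied)
             and not violates_priority(state, a, claimed)]
    if not legal:
        return Action(0, 0)  # stay put if every move breaks a rule
    return max(legal, key=lambda a: progress(state, a))
```

The key design point the contract enforces is the ordering: a car first discards every action that breaks a safety or yielding rule, and only then optimizes progress toward its own goal.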
Most importantly, when agents select actions according to our behavioral contract, we can formally guarantee that collisions will never occur. And then, with some additional assumptions on traffic density, we can also formally guarantee that all self-driving cars will be able to make it to their respective destinations. These guarantees show the power of designing rules for a collective set of agents, rather than designing the behavior of one agent at a time. These results are shown in the simulated animations below:
Unfortunately, human drivers—unlike the cars that we program—are prone to losing their attention and violating rules, so this paradigm doesn’t quite yet capture humans in-the-loop. In the near future, we hope to extend this behavioral contract paradigm to capture human behaviors as well.
Nevertheless, we have proposed this idea of defining some fundamental set of rules all self-driving cars should follow and we have presented an initial design of what these rules should be. This top-down approach of focusing on the collective, instead of the individual, is one we hope all self-driving car companies will embrace together. In doing so, we can successfully usher in an era where humans and self-driving cars operate seamlessly with one another.
- Cai, K. X., Phan-Minh, T., Chung, S. J., and Murray, R. M. (2021). Rules of the Road: Safety and Liveness Guarantees for Autonomous Vehicles. arXiv preprint arXiv:2011.14148. https://arxiv.org/pdf/2011.14148v2.pdf
- Phan-Minh, T., Cai, K. X., and Murray, R. M. (2019, December). Towards Assume-Guarantee Profiles for Autonomous Vehicles. In 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 2788-2795.