Photo by Jose Castillo on Unsplash

Use game theory to predict autonomous vehicle collision outcome

Almas Myrzatay
8 min read · Jan 18, 2024


In this article, we will apply game theory concepts to autonomous vehicles (AVs) to address the “self-driving crash dilemma” [1]. In a nutshell, the dilemma is that a crash is imminent and the AV must decide what to do next. We will use game theory to pit against each other AV manufacturers that have adopted different autonomous driving philosophies, and try to predict the outcome.

Table of Contents

  1. Introduction
  2. Motivation & Impact
  3. The goal
  4. Experimental set up
  5. Scenario 1: Single AV on the collision course
  6. Scenario 2: Two AVs with same collision philosophies
  7. Scenario 3: Two AVs with different collision philosophies
  8. Conclusion & further questions
  9. Resources

Introduction

Every autonomous vehicle (AV) on the road is a decision maker. At all times, it must make technical decisions such as speed and direction. You might argue that an AV doesn’t have cognitive ability (yet), and hence doesn’t fully qualify as an “economic agent”, and you would be right. But I would argue that, as far as economic theory is concerned, an AV is a ‘player’ that evaluates benefit, uncertainty, and risk given its environment [2]. An AV may be choosing from a pre-programmed list of options, but the selection from that pool of choices is a (semi-)stochastic one. So if a human is a fully stochastic agent, the AV is a (semi-)stochastic agent. Furthermore, for the purpose of this article, we are not comparing humans vs. machines: all our actors are the same, all (semi-)stochastic.

In addition, I am not imposing rationality (Rational Choice Theory) on AVs [3]. The argument is confined to the simple scenario where a crash is imminent and an economic agent must decide what to do. There is no assumption about whether an agent is rational or not. The only assumption is that the AV is an agent that is either fully or partially stochastic (as opposed to fully deterministic).

Motivation & Impact

The US AV market is estimated at $14.79 billion in 2024 and is projected to reach $37.56 billion by 2029 [5]. Given its size and expected growth, this topic will have an impact on job creation and markets.

Undoubtedly, this issue matters most because it involves human lives, and that is the main motivation for this article; the economic impact is secondary.

The goal

All the possible choices in the self-driving crash dilemma have some merit. Philosophers and theoreticians can speculate about all the edge cases, and each will have some rationale behind it. Given the open-ended nature of the moral dilemma, this article will not attempt to conclusively determine the most ethical choice. Rather, the goal is to apply economic reasoning to philosophical thinking, in a way that is thought-provoking and generates discussion.

Experimental set up

More concretely, consider the following diagram, where V1 is an autonomous vehicle facing obstacles in all four directions (denoted D1-D4), as shown in Figure-1. We will also define two important terms used in our payout strategy:

  • Chance of driver injury — probability of the driver being physically injured as a result of a collision
  • Chance of incidental damage — probability of the driver causing damage to property and/or people as a result of a collision

In real life, this could be a street where the AV is about to collide with someone in front of it, with buildings on both sides of vehicle V1.

Figure-1.0: Experimental set up

The diagram presented in Figure-1 will be the foundation of our discussion. For example, direction-1 (D1) might be defined as [100, 100] from V1’s perspective, representing a 100% chance of driver injury and a 100% chance of incidental damage.
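To make this concrete, the per-direction payouts can be encoded as pairs of [chance of driver injury, chance of incidental damage]. The sketch below uses hypothetical numbers of my own choosing; they are illustrative assumptions, not values from any real AV system.

```python
# Hypothetical payouts for V1, keyed by direction.
# Each value: (chance of driver injury %, chance of incidental damage %).
# All numbers are illustrative assumptions.
payouts = {
    "D1": (100, 100),  # head-on: certain driver injury and incidental damage
    "D2": (50, 80),
    "D3": (80, 20),
    "D4": (60, 60),
}

# Sanity check: every entry is a valid probability pair.
assert all(0 <= v <= 100 for pair in payouts.values() for v in pair)
```

With payouts in this shape, any collision philosophy reduces to a rule for picking one direction out of the dictionary.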

We will change the payout matrix depending on the auto-maker’s collision philosophy. Consider that a driver-first approach will seek to minimize impact on the driver, whereas a minimize-casualty philosophy might choose to collide head-on with an obstacle if it means multiple lives will be saved. Since the collision philosophy differs per auto-maker, so will the payout matrix.

In this article, for the sake of simplicity, we will use a 2D matrix, which makes the math easier.

Figure-1.1: Simplified experimental set up

For simplicity of the math, we will set up the table with two outcomes.

Table-1.0: Payout matrix set up

The filled-out payout matrix will look as follows:

Table-1.1: Payout matrix

In Table-1.1, we illustrate the payout for each player in our simplified simulation. In this case, Vehicle-1 will avoid injury to its driver by selecting choice D1, but this will result in 100% incidental damage. The same is true for Vehicle-2 when it picks D1.

One interesting observation is the importance of asymmetric information. Selecting D1 might seem like the best choice for either driver, but in reality they will crash into each other with 100% probability. The table above doesn’t capture this, because neither will find out it is the worst-case scenario until after the crash.

Experiments:

In the next sections, we will evaluate different scenarios, pitting two different economic agents against each other and applying game theory [4].

Scenario 1: Single AV on the collision course

The diagram in Figure-1 by itself isn’t very exciting, because knowing the AV’s manufacturing philosophy answers the question of what it will do in this scenario. For example, Tesla announced back in 2016 that it would focus on minimizing casualties [8]. On the other hand, Mercedes-Benz said it would prioritize the driver [9].

Each auto-maker that creates and designs algorithms for AVs chooses a ‘philosophy’ that serves as a guiding principle. As a result, given an imminent crash, we can predict that each car will act in accordance with the provided guidelines.

Scenario 2: Two AVs with same collision philosophies

Now, let’s assume two AVs from the same auto-maker are on an imminent collision course. We will use Figure-1 as a base and create two vehicles with a symmetrical set up. The details are illustrated in Figure-2.

Figure-2: Two AVs with symmetrical paths

We will use Table-1.3 to calculate the outcome. Given that both share the same philosophy of saving the driver, each will opt to minimize driver injury and choose the top-left box. This might seem optimal, but if we factor in asymmetric information, we notice that D1 leads to a collision of both vehicles, perhaps even implicating pedestrians.

Table 1.3
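We can sketch this symmetric game as a small two-player matrix. The numbers are again hypothetical stand-ins for Table-1.3; the point is only that for a driver-first agent, D1 minimizes its own injury no matter what the other car does, so both cars converge on the very cell the table fails to flag as a head-on crash:

```python
# Hypothetical symmetric 2x2 game (illustrative numbers only).
# Each cell: ((V1 injury, V1 incidental), (V2 injury, V2 incidental)).
matrix = {
    ("D1", "D1"): ((0, 100), (0, 100)),  # as tabulated; ignores the head-on crash
    ("D1", "D2"): ((0, 100), (50, 20)),
    ("D2", "D1"): ((50, 20), (0, 100)),
    ("D2", "D2"): ((50, 20), (50, 20)),
}

def best_response(matrix, player, other_choice):
    """Driver-first best response: minimize own injury given the other's move."""
    choices = ("D1", "D2")
    if player == 0:  # V1 picks the row
        return min(choices, key=lambda c: matrix[(c, other_choice)][0][0])
    return min(choices, key=lambda c: matrix[(other_choice, c)][1][0])

# D1 is a dominant strategy for both driver-first AVs:
assert best_response(matrix, 0, "D1") == best_response(matrix, 0, "D2") == "D1"
assert best_response(matrix, 1, "D1") == best_response(matrix, 1, "D2") == "D1"
```

Both vehicles end up at (D1, D1), the mutually worst outcome, exactly because each one’s table omits the joint consequence of the other’s identical reasoning.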

Scenario 3: Two AVs with different collision philosophies

In this scenario, the two AVs are from different auto-makers. Consider Figure-2 as the set up, but with the modified payout illustrated in Table-1.4.

In this experiment, two different AV philosophies will be used: the driver-first vs. minimize-casualty approaches. V2 will be left as is and treated as driver-first. V1 will be modified to minimize casualties, which means avoiding the pedestrians. The assumption is that V1 anticipates that a collision with V2 will result in fewer casualties, since it is hitting a vehicle as opposed to hitting unprotected pedestrian(s) or the elderly, for example.

Table-1.4: Different AV philosophies

In Table-1.4, we have decreased the probability of incidental damage by half for V1, so that V1 will try to avoid pedestrians. Nothing has changed from the perspective of V2, but V1 has shifted its choice, which results in a different outcome. Based on our payout set up, the V1 driver will continue head-on and leave the accident without hitting anyone or anything, but V2, which uses the driver-first philosophy, will collide with the pedestrians.
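The asymmetric-philosophy game can be sketched the same way. Below, V1’s incidental-damage numbers are halved (mirroring the Table-1.4 modification) and V1 now minimizes incidental damage while V2 still minimizes its driver’s injury; all figures remain illustrative assumptions:

```python
# Hypothetical asymmetric game: V1's incidental-damage column halved.
# Each cell: ((V1 injury, V1 incidental), (V2 injury, V2 incidental)).
matrix = {
    ("D1", "D1"): ((0, 50), (0, 100)),
    ("D1", "D2"): ((0, 50), (50, 20)),
    ("D2", "D1"): ((50, 10), (0, 100)),
    ("D2", "D2"): ((50, 10), (50, 20)),
}

def choice(matrix, player, objective, other):
    """Best response for one player.

    objective indexes the player's payout pair: 0 = driver injury
    (driver-first), 1 = incidental damage (minimize-casualty).
    """
    opts = ("D1", "D2")
    if player == 0:  # V1 picks the row
        return min(opts, key=lambda c: matrix[(c, other)][0][objective])
    return min(opts, key=lambda c: matrix[(other, c)][1][objective])

v1 = choice(matrix, 0, 1, "D1")  # minimize-casualty V1 picks D2 (incidental 10)
v2 = choice(matrix, 1, 0, "D2")  # driver-first V2 sticks with D1 (injury 0)
print(v1, v2)
```

Unlike the symmetric case, the two vehicles now select different directions: V1’s changed objective alone is enough to break the (D1, D1) deadlock.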

Note: as you have probably realized, the payout matrix numbers and set up guide the outcome and our interpretation of what happens during the accident. These numbers are highly subjective, and determining them involves both art and science. They might be influenced by how much one values human life, or by economic outcomes such as the total cost of the accident, for example.

Conclusion & Further questions

The scenarios above are not all-encompassing, nor do they answer every question about what to do when more than one autonomous vehicle is at play. This article barely scratches the surface of ethical AI and AVs. We can ask ourselves many more questions. For example, what happens if AVs could communicate ahead of a crash? Is there an impact when the payouts vary? What about rational vs. irrational players? Or fully stochastic agents, like humans, against AVs? These are just some of the questions in the realm of ethics and decision making.

The aim of this article was to start a discussion and ensure that we, as a society, are aware of the way AI will be engineered and applied. I believe AI will be complementary to humans, but only when the ethics of AI creation is factored in during development. The main goal was to use an economics tool, game theory, to help address an ethical question.

Thanks for reading my article!

Check out my other stories and make sure to follow me for more beginner-friendly tech content!

If you liked it, or have any comments/questions, let me know! Feel free to connect on social media: Instagram and LinkedIn
