Why the Trolley Problem Is Irrelevant in the Context of Self-Driving Cars
Real-World Self-Driving Cars Don't Face the Trolley Problem's Simplistic Moral Dilemmas

The Trolley Problem for Self-Driving Cars
Whenever I talk to people about self-driving cars, a few familiar topics always come up. We might chat about Waymo or Tesla's Full Self-Driving (FSD), but more often than not, the conversation shifts to the Trolley Problem.
The Trolley Problem is a classic ethical dilemma that forces a person to make an impossible moral choice between two outcomes. Here is the scene: a runaway trolley is barreling down a track, and you are standing next to a lever. If you do nothing, the trolley will continue on its path and kill five people who are tied to the tracks. If you pull the lever, the trolley will be diverted to another track where only one person is tied up.
So, what do you do? Do you choose to kill one person in an attempt to save five, or do you do nothing?

An illustration of the trolley problem
The Trolley Problem is a deep ethical and philosophical question with no concrete answer.
In the context of self-driving cars, however, the dilemma takes on a practical edge: how should autonomous vehicles be programmed to handle life-or-death scenarios?
Should a self-driving car prioritize the lives of its passengers or those of pedestrians? Should the autonomous vehicle hit a cyclist or a bystander in the crosswalk? What if the cyclist wasn't wearing a helmet or the bystander was jaywalking?
To fully understand these hypothetical Trolley Problems—and to explain why they're irrelevant in the context of autonomous vehicles—we first need to break down how self-driving cars make decisions in real time.
How Do Self-Driving Cars Make Decisions in Real Time?

Many self-driving cars use some form of Model Predictive Control (MPC), a control strategy widely used in robotics and other autonomous systems.
To appreciate how MPC works, imagine trying to navigate through a crowded room after piling your plate to the brim at a buffet. You constantly adjust your path every few seconds to avoid bumping into people or spilling food, making split-second decisions while keeping your ultimate goal in mind: reaching your table safely.
In the context of self-driving cars, MPC uses a mathematical model of the vehicle to re-solve an optimization problem many times per second, scoring each candidate trajectory and choosing the control inputs that best balance the objectives while respecting constraints like speed limits, collision avoidance, and lane boundaries. This makes MPC well suited to self-driving cars, which must navigate complex, ever-changing environments while keeping both passengers and pedestrians safe.
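To make that concrete, here is a minimal receding-horizon sketch in Python. Everything in it is an assumption chosen for illustration: the one-dimensional point-mass model, the cost weights, the obstacle position, and the acceleration bounds. A production planner would use a far richer vehicle model and a real-time solver, but the core loop is the same: optimize over a short horizon, apply only the first control input, then re-plan.

```python
import numpy as np
from scipy.optimize import minimize

DT = 0.1           # planning timestep (s)
HORIZON = 10       # steps optimized ahead (a one-second lookahead)
V_TARGET = 10.0    # desired cruising speed (m/s)
V_MAX = 13.0       # soft speed-limit constraint (m/s)
OBSTACLE_X = 40.0  # predicted position of an obstacle ahead (m); invented value

def rollout(x0, v0, accels):
    """Simulate the point-mass model forward under a candidate control sequence."""
    x, v, xs, vs = x0, v0, [], []
    for a in accels:
        v += a * DT
        x += v * DT
        xs.append(x)
        vs.append(v)
    return np.array(xs), np.array(vs)

def cost(accels, x0, v0):
    xs, vs = rollout(x0, v0, accels)
    tracking = np.sum((vs - V_TARGET) ** 2)                  # stay near target speed
    effort = 0.1 * np.sum(np.square(accels))                 # penalize harsh inputs
    speeding = 1e3 * np.sum(np.maximum(vs - V_MAX, 0) ** 2)  # soft speed limit
    clearance = np.maximum(OBSTACLE_X - xs, 0.1)             # distance to obstacle
    avoidance = np.sum(100.0 / clearance)                    # grows as we get close
    return tracking + effort + speeding + avoidance

def mpc_step(x0, v0):
    """Optimize the whole horizon, but execute only the first control input."""
    result = minimize(cost, np.zeros(HORIZON), args=(x0, v0),
                      bounds=[(-4.0, 2.0)] * HORIZON)  # braking/accel limits (m/s^2)
    return result.x[0]

print(f"commanded acceleration: {mpc_step(x0=0.0, v0=8.0):+.2f} m/s^2")
```

The key design choice is that the plan is thrown away and recomputed every cycle, which is exactly what lets the car react to a pedestrian who appears mid-plan.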
Exposing the Trolley Problem in the Context of AVs

Now that we understand how self-driving cars weigh candidate trajectories in real time, here's a classic variation of the Trolley Problem for autonomous vehicles:
“A self-driving car is driving down a road when, suddenly, two people, a baby and a grandma, enter the path of the vehicle. If the vehicle continues, it will kill both of them.
Should the vehicle swerve to hit the grandma or the baby?”
Let's break down the first issue with this scenario: "A self-driving car is driving down a road when, suddenly, two people…"
Self-driving cars are equipped with a range of sensors, including LiDAR, radar, cameras, ultrasonic sensors, and, in some cases, even thermal (infrared) cameras. These sensors are deliberately redundant: if one fails or is fooled, another will almost always pick up the obstacle.
In addition, self-driving cars are excellent at mapping their intended path while estimating the positions of nearby objects. If the autonomous system 'thinks' that a person might jump out at any moment, the car will slow down considerably to account for that possibility.
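As a toy illustration of those two ideas, redundancy and preemptive caution, here is a small sketch. The sensor names, confidence fields, health flags, and the proportional slow-down rule are all invented for the example; real perception stacks fuse raw sensor data in far more sophisticated ways.

```python
# Toy sketch of redundant detection plus occlusion-aware caution.
# All names, values, and rules here are invented for illustration.

def pedestrian_detected(readings, threshold=0.5):
    """Redundant voting: a single failed sensor cannot mask a detection."""
    return any(r["pedestrian_conf"] > threshold
               for r in readings if r["healthy"])

def cautious_speed(base_speed, occlusion_risk):
    """Scale speed down as the chance of a hidden pedestrian grows."""
    return base_speed * max(0.2, 1.0 - occlusion_risk)

readings = [
    {"sensor": "lidar",  "healthy": True,  "pedestrian_conf": 0.10},
    {"sensor": "radar",  "healthy": False, "pedestrian_conf": 0.00},  # failed
    {"sensor": "camera", "healthy": True,  "pedestrian_conf": 0.80},  # covers the gap
]

print(pedestrian_detected(readings))             # True: camera compensates for radar
print(cautious_speed(13.0, occlusion_risk=0.6))  # 5.2 m/s near a blind bush
```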
But for argument's sake, let's say that both the baby and the grandma pop out from behind a bush at the last possible moment.
Accepting that, let's move on to the next part of the scenario: "...a baby and a grandma enter the path of the vehicle…"
How can the autonomous vehicle be so sure that there is a baby or a grandma in front of it? What classifies a baby or a grandma? How do we know that the baby is indeed a baby, not just a doll?
These are tricky questions, made even more complicated by the fact that they must be answered within a few milliseconds.
Given the sheer number of possibilities, hypotheticals, and edge cases the autonomous system would need to reason through, a tremendous amount of computational power would be required—hardware that isn't yet fast or compact enough to ride along in a car.
And even if this technology were magically made available, the algorithm's reliability would need to be nearly 100%, since human lives are at stake.
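To see why that reliability bar is so high, consider a toy classifier output. The labels, scores, and confidence threshold below are invented for the example; the point is that a planner forced to choose a victim would have to act on shaky probabilities within milliseconds.

```python
# Hypothetical classifier output; labels, scores, and threshold are invented.
detections = {"baby": 0.55, "doll": 0.41, "unknown": 0.04}

REQUIRED_CONFIDENCE = 0.999  # the near-certainty a life-or-death choice demands

label = max(detections, key=detections.get)
if detections[label] < REQUIRED_CONFIDENCE:
    # The system cannot ethically act on "probably a baby", yet it has only
    # a few milliseconds before the decision window closes.
    print(f"ambiguous: best guess '{label}' at {detections[label]:.0%} confidence")
```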
However, even if such technology existed and were 100% accurate, the final issue with this scenario is the assumption that the autonomous vehicle can successfully swerve at the last millisecond.
“...If the vehicle continues, it will kill both of them. Should the vehicle swerve to hit the grandma or the baby?”
Several factors influence how a car can react in a split second: the depth of the tire tread, recent rainfall, the type of pavement, even loose gravel under the tires. With so many dynamically changing variables, it's impossible to design an algorithm that can consistently execute a last-instant swerve. If even one variable shifts slightly—like a decrease in the coefficient of friction because the ground is a bit damp—the outcome could be drastically different. In a realistic scenario, a successful swerve is at best a coin toss, and more often than not, the odds are far worse than 50/50.
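A quick back-of-the-envelope calculation shows how sensitive the outcome is to just one of those variables, the coefficient of friction. The friction values below are typical textbook figures, not measured data, and the idealized formula d = v² / (2μg) ignores reaction time and load transfer entirely.

```python
# Back-of-the-envelope friction sensitivity; mu values are textbook estimates.
G = 9.81  # gravitational acceleration (m/s^2)

def braking_distance(speed_ms, mu):
    """Idealized stopping distance: d = v^2 / (2 * mu * g)."""
    return speed_ms ** 2 / (2 * mu * G)

v = 13.9  # roughly 50 km/h, in m/s
for surface, mu in [("dry asphalt", 0.9), ("damp asphalt", 0.6), ("wet leaves", 0.3)]:
    print(f"{surface:12s} (mu={mu}): {braking_distance(v, mu):5.1f} m")
# dry asphalt  (mu=0.9):  10.9 m
# damp asphalt (mu=0.6):  16.4 m
# wet leaves   (mu=0.3):  32.8 m
```

Merely going from dry to damp asphalt stretches the stopping distance by half again; a last-millisecond swerve is at the mercy of exactly the same physics.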
So, If the Trolley Problem Isn't Really Applicable to Self-Driving Cars, Why Is It So Popular?
Sensational media is partly to blame.

Headlines are crafted to generate clicks, as more clicks lead to increased site traffic and, ultimately, more profit. Media outlets often use buzzwords like 'kill' and 'die' to capture attention.
These fear-inducing terms also play on emotions, especially since many people find concepts like machine learning and probabilistic algorithms hard to understand. As a result, these technologies are often perceived as more dangerous or mysterious than they really are.
To Learn More
MIT has developed a tool called the Moral Machine, which presents users with various Trolley Problem scenarios and asks them to choose who should die in each situation.

Sample question from MIT’s Moral Machine
Also, check out these: