Exploring Ethics and Leadership from a Global Perspective

AI and the Trolley Problem

The trolley problem has long been a thought experiment used to explore the complexities of ethical decision-making. It presents two scenarios involving a runaway trolley and the choice between allowing harm to a larger group or causing harm to a smaller one. In recent years, this moral quandary has taken on new relevance in the context of self-driving cars. As autonomous vehicles become more prevalent, they too must grapple with difficult choices in potential accidents. This has sparked debates about the responsibility of programmers and manufacturers, and about the moral implications of AI-driven decision-making. In this case study, we delve into the trolley problem’s evolution and its connection to the ethical considerations surrounding artificial intelligence and driverless cars.

Transcript

If you recall, when we discussed the trolley problem we talked about two scenarios in which a runaway trolley was about to hit a group of five people. In the first scenario, you had the choice to pull a switch that would divert the trolley onto another track, where it would hit and kill only one person. In the second scenario, there was no switch; to stop the trolley and save the group of five, you would have to push a person into its path, killing the person you pushed.

Nearly everybody chooses to divert the trolley with the switch, and nearly everybody objects to pushing a person into its path. This dichotomy highlights the importance of proximity in people’s decision-making: how close we are to a given context, and how personal it feels, can alter our decisions completely.

In recent years, the trolley problem has morphed into other dilemmas that have become popular in the news and the media. This is especially true for AI and self-driving cars. With autonomous vehicles on the horizon, self-driving cars will have to handle choices about accidents, such as causing a small accident to prevent a larger one.

So, this time, for our hypothetical scenario, instead of a runaway trolley, think of a self-driving car, and instead of a switch to redirect the car, the “switch” is the self-driving car’s “programming”. For example, imagine a self-driving car traveling at high speed with two passengers when three pedestrians suddenly step into the crosswalk in front of it. The car has no chance of stopping. Should the car hit the three pedestrians, who would likely be killed, or crash into a concrete barrier, which would likely kill the two passengers?

Now imagine you are a passenger in the car. What would your answer be then? And which car would you ultimately buy: one that saves you, the passenger, at all costs in any scenario, or one that minimizes harm to all, even though that may ultimately come at your expense?

If there were no self-driving vehicle and you were the driver, whatever happened would be understood as a reaction, a panicked decision rather than something deliberate. In the case of a self-driving vehicle, however, a programmer has developed software so that the vehicle will make a certain type of decision depending on the context. So in an accident where people are harmed, is the programmer responsible for that? Is the car manufacturer responsible? Or who is? Is there even an answer to what a self-driving car should do?
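To make concrete what it means for the “switch” to be the car’s programming, here is a minimal, purely hypothetical sketch of what an explicitly coded harm-minimizing rule could look like. It does not describe any real vehicle’s software; the Outcome class, the choose_action function, and the numbers are invented for illustration, and whether such a rule should be written at all is exactly the question at issue.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        # One possible action the car could take and its expected human cost.
        action: str                # e.g. "stay on course" or "swerve into barrier"
        expected_fatalities: int   # number of people likely to be killed

    def choose_action(outcomes: list[Outcome]) -> Outcome:
        # A purely utilitarian rule: pick the action expected to kill the fewest
        # people. Deliberately simplistic; a real system would also have to weigh
        # uncertainty, law, and much else.
        return min(outcomes, key=lambda o: o.expected_fatalities)

    # Hypothetical encoding of the scenario described above.
    scenario = [
        Outcome("continue and hit the three pedestrians", expected_fatalities=3),
        Outcome("swerve into the concrete barrier, sacrificing the two passengers",
                expected_fatalities=2),
    ]
    print(choose_action(scenario).action)

The point of the sketch is simply that, unlike a panicked human reaction, this choice is written down in advance, which is why the question of who bears responsibility for it becomes so pressing.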

Now researchers at MIT, the Massachusetts Institute of Technology, further revived this moral quandary in 2016. They created a website called the Moral Machine, through which respondents around the world were asked to decide what a self-driving vehicle should do in various scenarios, such as whether to kill an old man or an old woman, an old woman or a young girl, the car’s passengers or pedestrians, and many other similar questions. Since its launch, the experiment has collected millions of decisions, and an analysis of the data was presented in a paper in the scientific journal Nature in 2018.

The study sparked a great deal of debate about ethics in technology, which is precisely the purpose of this book. So given that, we’d like to ask you a few questions.

Notes on AI and the Trolley Problem:

It is important to note that the trolley problem is fundamentally about showing how we process information and highlighting blind spots in our decision-making. Doing so hopefully helps us improve our choices by demonstrating the need for morality and for a sense of responsibility to humanity in our decision-making. To the extent that we think morality, emotion, and humanity are important and worth developing, you could say that by linking AI and driverless cars to the trolley problem, we may be doing the opposite of what the thought experiment intended and missing the point altogether, possibly to our mutual disadvantage. We should be wary of making the whole conversation less proximate.

Discussion Questions

  • Whom should we trust? Should we trust AI, or should we trust humans?
  • Who is responsible if something bad happens? In the context of an autonomously driven vehicle, is the car manufacturer responsible? Is the software programmer responsible? Or is it another stakeholder?
  • What is the role of culture in all of this? Let’s consider these questions together.

Additional Readings

Related Videos
