The trolley problem has long served as a thought experiment for exploring the complexities of ethical decision-making. It presents two scenarios involving a runaway trolley and a choice between allowing harm to a larger group or causing harm to a smaller one. In recent years, this moral quandary has taken on new relevance in the context of self-driving cars. As autonomous vehicles become more prevalent, they too must grapple with difficult choices in potential accidents. This has sparked debates about the responsibility of programmers and manufacturers, and about the moral implications of AI-driven decision-making. In this case study, we delve into the trolley problem’s evolution and its connection to the ethical considerations surrounding artificial intelligence and driverless cars.
Transcript
If you recall, when we discussed the trolley problem we talked about two scenarios in which a runaway trolley was about to hit a group of five people. In the first scenario, you had the choice to divert the trolley with a switch, sending it onto another track where it would hit and kill a single person. In the second scenario, there was no switch; instead, you would have to push a person into the trolley’s path to stop it – thus saving the group of five – but killing the person you pushed.
Nearly everybody chooses to divert the trolley with the switch, and nearly all object to pushing a person into its path. Now this dichotomy highlights the importance of proximity in our decision-making: how close we are to a given situation, or how personal it feels, can alter our decisions completely.
In recent years, the trolley problem has morphed into other dilemmas that have become prominent in the news and the media, especially around AI and self-driving cars. With autonomous vehicles on the horizon, these cars will have to handle choices about accidents – like causing a small accident to prevent a larger one.
So, this time, for our hypothetical scenario, instead of a runaway trolley, think of a self-driving car, and instead of a switch to redirect the car, the “switch” is the self-driving car’s “programming”. For example, imagine a self-driving car traveling at high speed with two passengers when, suddenly, three pedestrians step into the crosswalk in front of it. The car has no chance of stopping. Should the car hit the three pedestrians, who would likely be killed? Or should it crash into a concrete barrier, likely killing the two passengers?
Now imagine you are a passenger in the car: what would your answer be then? And what car would you ultimately buy? One that saves you, the passenger, at all costs in any scenario, or one that minimizes harm to everyone – even if that ultimately puts you at risk?
If there were no self-driving vehicle and you were the driver, whatever happened would be understood as a reaction, a panicked decision rather than something deliberate. In the case of a self-driving vehicle, however, a programmer has developed software so that the vehicle will make a certain type of decision depending on the context. So in an accident where people are harmed, is the programmer responsible? Is the car manufacturer responsible? Or who is? Is there even an answer to what a self-driving car should do?
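To make the idea of a pre-programmed decision concrete, here is a minimal, purely hypothetical sketch of what such logic could look like. The class, the field names, and the simple “fewest people at risk” rule are illustrative assumptions for discussion, not a description of how any real autonomous vehicle is actually programmed.

```python
# Purely hypothetical sketch: a pre-programmed "fewest people at risk" rule.
# The class, field names, and the rule itself are illustrative assumptions,
# not how any real autonomous-vehicle system is designed.
from dataclasses import dataclass


@dataclass
class Option:
    """One possible maneuver and the people it would likely put at risk."""
    name: str
    passengers_at_risk: int
    pedestrians_at_risk: int


def choose_maneuver(options: list[Option]) -> Option:
    # The trolley problem's "switch", written down in advance:
    # pick whichever maneuver puts the fewest people at risk in total.
    return min(options, key=lambda o: o.passengers_at_risk + o.pedestrians_at_risk)


# The crosswalk scenario from the transcript, encoded as data.
scenario = [
    Option("continue straight", passengers_at_risk=0, pedestrians_at_risk=3),
    Option("swerve into barrier", passengers_at_risk=2, pedestrians_at_risk=0),
]

print(choose_maneuver(scenario).name)  # -> swerve into barrier
```

The point of the sketch is not the particular rule but the fact that the trade-off is written down ahead of time: whatever the car does in the crosswalk scenario is a deliberate, reviewable choice rather than a split-second human reaction, which is exactly why the questions of responsibility above become so pressing.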
Now researchers at MIT, the Massachusetts Institute of Technology, further revived this moral quandary back in 2014. They created a website called the Moral Machine, through which respondents around the world were asked to make decisions in various self-driving-vehicle scenarios, such as whether the car should kill an old man or an old woman, an old woman or a young girl, the car’s passenger or pedestrians, and many other similar questions. Since its launch, the experiment has gathered millions of decisions, and an analysis of the data was published in the scientific journal Nature in 2018.
The study sparked a great deal of debate about ethics in technology, which is precisely the purpose of this book. With that in mind, we’d like to ask you a few questions.
Notes on AI and the Trolley Problem
It is important to note that the trolley problem is fundamentally about showing how we process information and highlighting blind spots in our decision-making. Doing so hopefully helps us improve our choices by demonstrating the need for morality and a sense of responsibility to humanity in our decision-making. To the extent we think that morality, emotion, and humanity are important and worth developing, you could say that by linking AI and driverless cars to the trolley problem, we may be doing the opposite of what was intended and missing the point altogether, possibly to our mutual disadvantage. We should be wary of making the whole conversation less proximate.
Discussion Questions
- Who should we trust? Should we trust AI, or should we trust humans?
- Who is responsible if something bad happens? In the context of an autonomously driven vehicle, is the car manufacturer responsible? Is the software programmer responsible? Or is it another stakeholder?
- What is the role of culture in all of this? Let’s consider these questions together.
Additional Readings
- Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J. F., & Rahwan, I. (2018). The Moral Machine Experiment. Nature, 563, 59–64. Retrieved from https://www.nature.com/articles/s41586-018-0637-6 (paywall)
- Huang, E. (2018). The East and West Have Very Different Ideas On Who To Save In A Self-Driving Car Accident. Quartz. Retrieved from https://qz.com/1447109/how-east-and-west-differ-on-whom-a-self-driving-car-should-save/
- Hao, K. (2019). Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/612764/giving-algorithms-a-sense-of-uncertainty-could-make-them-more-ethical/
- Sage, A., Bellon, T., & Carey, N. (2018). Self-driving car industry confronts trust issues after Uber crash. Reuters. Retrieved from https://www.reuters.com/article/us-autos-selfdriving-uber-trust/self-driving-car-industry-confronts-trust-issues-after-uber-crash-idUSKBN1GY15F
- Kaur, K., & Rampersad, G. (2018). Trust in Driverless Cars: Investigating Key Factors Influencing the Adoption of Driverless Cars. Journal of Engineering and Technology Management, 48, 87–96. Retrieved from https://doi.org/10.1016/j.jengtecman.2018.04.006
- Verger, R. (2019). What will it take for humans to trust self-driving cars? Popular Science. Retrieved from https://www.popsci.com/humans-trust-self-driving-cars
- Baram, M. (2018). Why the Trolley Dilemma for Safe Self-Driving Cars is Flawed. FastCompany. Retrieved from https://www.fastcompany.com/90308968/why-the-trolley-dilemma-is-a-terrible-model-for-trying-to-make-self-driving-cars-safer
- Ryall, J. (2019). Japan edges closer towards brave new world of self-driving cars but hard questions remain. South China Morning Post. Retrieved from https://www.scmp.com/news/asia/east-asia/article/2180828/japan-edges-closer-towards-brave-new-world-self-driving-cars
- Bogost, I. (2018). Who Is Liable for a Death Caused by a Self-Driving Car? The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2018/03/can-you-sue-a-robocar/556007/
- Hao, K. (2018). Should a self-driving car kill the baby or the grandma? Depends on where you’re from. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/
- Maxmen, A. (2018). Self-Driving Car Dilemmas Reveal That Moral Choices Are Not Universal. Nature. Retrieved from https://www.nature.com/articles/d41586-018-07135-0