Exploring Ethics and Leadership from a Global Perspective

AI and the Trolley Problem - Cultural Differences and Biases

Transcript

One of the most interesting things to come out of the MIT study was the way various cultural differences and biases surfaced, and the potential implications for programming AI. Media coverage of the study focused on the cultural implications of this Moral Machine. Basically, different cultures prioritize life differently.

The study found that Chinese respondents were more likely to choose to hit pedestrians in the street rather than put the car's passengers in danger, and were more likely to spare the old over the young. People from Western countries tended to prefer inaction, letting the car continue on its path, kind of like inertia, while Latin Americans preferred to save the young.

How do you feel about the results? Would you agree with these findings?

While cultural preferences and even biases clearly exist, the bigger question is what we should do with them, especially when programming new technology that affects people's daily lives. We both know that one of the challenges facing lawmakers, ethicists like us, or companies trying to create a moral code for their employees is writing a code that permeates cultures and crosses country lines.

For example, you have a company based in China that does business all over the world. You write a code of conduct in Beijing, and then you have to apply it everywhere. But what is right in Beijing may be questionable in a different cultural context. Or it may be "right" from a legal or moral perspective, but communicated in a way that doesn't resonate with local people.

So, if we transport that into AI, or technology in general, the automobile industry is a global industry. We have car manufacturers in China, Japan, Korea, the United States, and a whole host of other places. So a programmer who sits somewhere in Asia, with a certain cultural context, programs a particular type of AI into a vehicle, perhaps drawing on results like those from the MIT study. Then the vehicle is imported to the United States, and that cultural context bleeds into how the vehicle operates in a different cultural setting. Then that vehicle interacts on the road with other vehicles that carry a different cultural influence. How do those all interact? It's a microcosm of the greater host of challenges that AI will bring to the forefront.

So, is it possible, or should it even be a goal from an AI and technology standpoint, for us to create a uniform sense of morality?

 

Discussion Questions

  • Should there be a goal to create a uniform standard of morality regarding self-driving cars and their decision-making? Why or why not? 
