Exploring Ethics and Leadership from a Global Perspective

AI and the Trolley Problem - Trust and Proximity

Transcript

David Lee asked David Bishop a tricky question. 

“David, you’ve been a passenger in a car that I’ve driven before. So, who do you trust more, me or the autonomous vehicle?” 

Now think about this question yourself. Who do you trust more: yourself, your friend driving the car, or an autonomous vehicle? 

This is a question that many people find disconcerting. Potentially thousands of these completely autonomous, non-human actors will be out there, roaming around in large vehicles.  

The reality is, even if we want to trust ourselves or our friends who are good drivers, the empirical evidence shows it is a lot safer to ride in an autonomous vehicle. 

But why are so many people so resistant to that? Do we over-trust ourselves? Do we under-trust technology? Or is it the other way around? 

As humans, we tend not to trust things that we don’t understand. I don’t understand exactly how this works, so I’m going to distance myself from it, or I’m going to be suspicious of it until I do understand how it works. There is also the idea of overconfidence. As humans, we tend to be more confident in our own abilities than we probably should be. The combination of those two things creates this situation where people resist the change.  

Remember the principle of cultural lag that we discussed earlier? A lot of people are going to be very comfortable continuing in that perceived safe method of travel, even though the numbers don’t bear that out. They would rather persist in being the driver than get into a potentially safer autonomous vehicle.  

This leads to another interesting point, which is a common criticism of the trolley problem: it presents a binary, almost illogical situation where you have to choose between one person dying or five people dying. That is not the case at all in real life. 

In terms of autonomous vehicles and AI, one of the real reasons people should trust them is that they can communicate with each other seamlessly and simultaneously.  

Historically, even in the earliest visions of autonomous driving, the idea was that these vehicles would not be independent. They would somehow be in sync with each other to make driving much more efficient. So the more advanced forms of autonomous driving will be a linked network of vehicles that collectively gauge risk and, holistically, make things safer.  
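The idea of a linked network collectively gauging risk can be sketched in a few lines. Everything here is an illustrative assumption, not any manufacturer's actual protocol: the message format, the location labels, and the confirmation threshold are all invented for the example.

```python
# Hypothetical sketch: vehicles pool hazard reports over a shared
# link, so each car can react to risks it cannot see itself.

def collective_hazards(reports, min_confirmations=2):
    """Merge per-vehicle hazard reports.

    reports: list of (vehicle_id, hazard_location) tuples.
    A hazard is acted on once at least `min_confirmations`
    distinct vehicles have reported the same location.
    """
    seen = {}
    for vehicle_id, location in reports:
        seen.setdefault(location, set()).add(vehicle_id)
    return {loc for loc, ids in seen.items() if len(ids) >= min_confirmations}

# Made-up reports: two cars confirm debris at km 12, one sees
# something at km 30 that is not yet confirmed.
reports = [
    ("car_a", "km_12"),
    ("car_b", "km_12"),
    ("car_c", "km_30"),
]
print(collective_hazards(reports))  # -> {'km_12'}
```

The point of the sketch is only that a networked fleet can surface risks no single human driver could perceive, which is part of why the collective system can be safer.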

Coming back to the false dichotomy of binary options, we tend to assume situations where the answer is either zero or one. But if you talk to people who operate in the autonomous-vehicle space, whether an auto manufacturer moving into autonomous vehicles or the people developing the software, they will tell you, almost uniformly, that it is never binary. 

Based on what we discussed earlier about AI, machine learning, and deep learning, these systems run through many iterations over historical data as well as data coming in live. They predict multiple outcomes and select a solution according to their safety protocol.
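The "predict multiple outcomes, select by safety protocol" step can be made concrete with a toy example. The maneuver names, probabilities, and the probability-times-harm risk model below are deliberate simplifications invented for illustration; a real planner is far more sophisticated.

```python
# Minimal sketch: score candidate maneuvers by predicted risk and
# pick the one a simple safety protocol ranks best.

def select_maneuver(candidates):
    """Return the candidate with the lowest predicted risk.

    Each candidate is (name, collision_probability, expected_harm).
    Risk here is modeled as probability * harm -- an assumed,
    simplified safety protocol, not any vendor's real one.
    """
    def risk(candidate):
        _, p_collision, harm = candidate
        return p_collision * harm
    return min(candidates, key=risk)

# Three predicted outcomes for the same situation (numbers made up):
candidates = [
    ("brake_hard",     0.10, 2.0),   # risk 0.10 * 2.0 = 0.20
    ("swerve_left",    0.05, 8.0),   # risk 0.05 * 8.0 = 0.40
    ("maintain_speed", 0.30, 5.0),   # risk 0.30 * 5.0 = 1.50
]
print(select_maneuver(candidates)[0])  # -> brake_hard
```

Notice that the system is not choosing between "one person dies" and "five people die"; it is ranking a continuous spread of predicted outcomes, which is exactly why practitioners say the decision is never binary.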

 

Discussion Questions

  • Would you trust a self-driving car? Why or why not?
  • Do you think your opinion will change as technology advances? 
