Transcript
Let’s go back to the basic framework that we introduced earlier for identifying potentially dangerous models.
- Is the model opaque, so that its decisions cannot be explained?
- Can it scale to affect a large number of people?
- Does it have the potential to be unfair in a way that will negatively impact or even destroy people’s lives?
In the example of a home loan, the answer is definitely yes to all three.
First, banks will not willingly open up their AI or decision-making models to tell you how a decision was made. Second, the scale is quite large: imagine some of the largest banks in the world, with hundreds of thousands of clients being impacted. And lastly, for each of those individual customers, the impact on their life could be huge. The difference between having and not having a home – what could be more fundamental to a person’s well-being and psychological stability than the opportunity to purchase a home?
So do we trust humans, even though we have biases? Or do we trust the AI, even though that also has some level of bias?
We think we need both.
One of the big draws of AI is efficiency: it frees us from the repetitive tasks that take up so much of our time. But in our pursuit of this greater efficiency, we still need to sacrifice a little of it to keep a human element.
In the home loan example, it would be great if banks continued to have somebody there to explain, review and follow up on rejected cases. Not many organisations may be willing to do that, but some will, as they reconsider the balance of responsibility and trust among the different stakeholders involved.
To look at it another way: although financial institutions wouldn’t willingly let people see their algorithms or data, could governments or regulatory bodies intervene? Consider patents. When you’re granted a patent, you must publicly disclose the creation and various components of that particular device. We wonder whether some form of public or private governmental disclosure could be required to monitor biases within AI systems.
This is not just us thinking out loud. On a broader level, lawmakers and politicians are talking about this in the context of large technology companies. These companies have grown so much, and become such a big part of our lives, that they shouldn’t be treated as ordinary companies. Discussions have arisen around whether they should be regulated like a financial company or even a public utility. We will discuss this further in the next chapter.