Exploring Ethics and Leadership from a Global Perspective

Mortgage Application - Trust

Transcript

Let’s think about a scenario. Imagine that you went to a bank and applied for a loan for a home purchase. You submitted all the paperwork, and a few days later you were rejected. So you went back to the loan officer and asked why, and they said their AI decision-making software screened your application and said, unfortunately, no. What would you do?

It’s tricky, right? It’s already hard enough to communicate with banks as it is, and now they’re moving it into this completely amoral space where, essentially, the software is going to be making the decision. It’s not clear you would have any recourse, would you? They’re not going to give you access to the algorithm, and they’re not going to show you exactly why you were rejected. It just seems like one step further away from a balanced negotiation between you and the service provider.

This raises another fundamental question about what’s fair. If your rejection by the bank was based on some level of latent discrimination, rooted in biased data or other forms of bias in that AI process, then there are real issues of fairness if you can’t rectify that.

You may be wondering: why should banks or other financial institutions think about this now? Isn’t this a problem for the future?

Just like many other problems with technology, the longer we wait, the more difficult it will be to implement cleaner AI built on cleaner, filtered data. This is partly because data compounds. Troves of data are produced every day, and if we’re not aware of how that compounding works, the negative inputs that are already there could become a bigger problem. That is particularly true because much of today’s data is built on historical data that we know incorporates the biases of its time, since societal norms in the 1960s or 1970s were different from what they are today.
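
As a rough sketch of that compounding effect (the groups, starting rates, and adjustment factor below are all hypothetical numbers, not from the transcript): each year’s automated decisions become next year’s training data, so a small historical skew can widen over time instead of washing out.

```python
# Toy illustration of bias compounding through a retraining feedback loop.
# All values are hypothetical; this is a conceptual sketch, not a real model.

rates = {"group_a": 0.50, "group_b": 0.45}  # small initial approval-rate skew

for year in range(1, 6):
    gap = rates["group_a"] - rates["group_b"]
    # A model retrained on last year's outcomes sees more positive examples
    # for group_a and fewer for group_b, nudging the gap wider each cycle.
    rates["group_a"] = min(1.0, rates["group_a"] + 0.10 * gap)
    rates["group_b"] = max(0.0, rates["group_b"] - 0.10 * gap)
    print(f"year {year}: gap = {rates['group_a'] - rates['group_b']:.3f}")
```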

Well, can’t they just clean it up and make it neutral somehow? 

In certain cases, you might be able to do that. But in many situations, the process of cleaning up the data affects other features of the data that you may need to rely on. So this becomes a catch-22: fixing one problem creates another, as the sketch below illustrates.
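
To make that catch-22 concrete, here is a minimal sketch assuming a hypothetical lending dataset: the protected attribute is removed before training, but a correlated feature such as zip-code region still carries its signal, and dropping that feature too would discard information the lender legitimately relies on. Every column name and number below is invented for illustration.

```python
# Hypothetical sketch of proxy bias: removing the protected attribute does
# not remove the bias, because a correlated feature stands in for it.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0 or 1); the model never sees it.
group = rng.integers(0, 2, n)

# Zip-code region correlates with group (e.g., historical segregation)
# but also carries legitimate signal (local property values).
zip_region = 0.8 * group + rng.normal(0.0, 0.3, n)
income = rng.normal(50.0, 10.0, n) + 5.0 * zip_region

# Historical approval labels encode past bias against group 1.
approved = (income - 8.0 * group + rng.normal(0.0, 5.0, n)) > 52.0

# "Cleaned" training set: protected attribute dropped, proxy kept.
features = np.column_stack([income, zip_region, np.ones(n)])

# Fit a simple least-squares scorer on the biased historical labels.
weights, *_ = np.linalg.lstsq(features, approved.astype(float), rcond=None)
scores = features @ weights

# Approval rates by the (hidden) group show the bias survived the cleanup;
# dropping zip_region as well would remove the proxy but also real signal.
for g in (0, 1):
    rate = (scores[group == g] > 0.5).mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")
```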

So where do you fall on this line? We know that humans are very imperfect and will discriminate based on race, gender, nationality, and a whole host of other factors. But on the flip side, we are now potentially entering this area of complete amorality, built on the back of existing historical data, which could introduce a whole new set of biases or, even worse, entrench existing biases into these decision-making processes.

So what do you trust more: do you trust the human bias that is inherent in existing systems, or do you trust the potential bias in the datasets and AI technologies that are increasingly being used?
