Transcript
In the last section, we explored AI, particularly in relation to autonomous vehicles, and considered important questions around trust, accountability, and the impact of culture. Next, we'll look at AI bias, specifically in the context of AI assisting human decision-making.
Since data is so critical to AI, as well as to many of the other technologies that underpin FinTech, it is important not only that the right data is used but also that the data is free of bias. The phrase "garbage in, garbage out" has probably never been more apt, or more important, than when describing AI.
Bias can find its way into AI in a few ways. Let's take a simple example: if a computer model is trained on data that is already contaminated by some level of discrimination, then its output will inevitably be prejudiced as well. Say, for instance, your AI relies on data from apartheid-era South Africa; chances are that data reflects the widespread racist policies of that era. Obviously, this will lead to less-than-ideal outcomes.
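To make that concrete, here is a minimal sketch in Python, using entirely synthetic data. The feature names, weights, and scikit-learn setup are all assumptions for illustration; the point is simply that a model trained on discriminatory historical decisions reproduces the discrimination:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: one legitimate creditworthiness signal plus a
# group label that should be irrelevant to the decision.
income_score = rng.normal(0, 1, n)   # legitimate feature
group = rng.integers(0, 2, n)        # hypothetical protected attribute

# "Historical" approvals: past decision-makers penalized group 1
# regardless of creditworthiness, so the training labels are contaminated.
historical_approval = (income_score - 1.5 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([income_score, group])
model = LogisticRegression().fit(X, historical_approval)

# The model learns a large negative weight on the group attribute:
# garbage in, garbage out.
print("learned weights [income, group]:", model.coef_[0])

# Two applicants with identical creditworthiness but different groups
# receive very different approval probabilities.
applicants = np.array([[0.5, 0], [0.5, 1]])
print("P(approve):", model.predict_proba(applicants)[:, 1])
```

Note that simply dropping the group column would not fully fix this in practice, because other features often act as proxies for group membership.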
Even assuming your data is free of bias, there are other ways bias can creep into AI. For example, cultural biases and norms can inadvertently be programmed in, because a programmer from one culture might value certain characteristics differently than a programmer in another part of the world. We'll explore this further when we revisit the trolley problem.
There are other potential issues related to bias as well. AI is driven by algorithms and models. In her thought-provoking book Weapons of Math Destruction, Harvard-trained mathematician Cathy O'Neil identifies three characteristics of a potentially dangerous model, which she refers to as a "WMD":
First, the model is opaque. The system is what's called a "black box", and it's difficult for those on the outside to understand what is going on behind the scenes.
Second, the model is scalable and can be applied across large populations. This has been a key theme of what we've talked about so far: AI and other forms of technology can scale beyond anything we've seen before.
Third, the model has the potential to be unfair in ways that damage or even destroy people's lives. For example, if AI is used to determine who can get a mortgage or who has access to credit, an unfair denial can have a significant negative impact on someone's life.
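As a rough illustration of how these three traits can interact, here is a hypothetical sketch, again with synthetic data and assumed names, of a credit model that is opaque (an ensemble of a hundred trees), scalable (it scores a whole population in one pass), and unfair (it approves one group at a much lower rate, even without ever seeing the group label directly):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 50_000  # scalable: one model scores an entire population in a single pass

# Hypothetical features: a genuine repayment-ability signal and a proxy
# feature (think zip code) that correlates with a protected group.
ability = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)
proxy = rng.normal(0, 1, n) + 0.8 * group

# Historical repayment labels contaminated through the proxy.
repaid = (ability - 0.7 * proxy + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([ability, proxy])  # note: group itself is NOT a feature
model = GradientBoostingClassifier().fit(X, repaid)  # opaque: 100 trees

# Unfair at scale: the same model yields disparate approval rates by group.
approved = model.predict(X)
for g in (0, 1):
    print(f"group {g} approval rate: {approved[group == g].mean():.2%}")
```

The troubling part of this sketch is the combination: no single loan officer ever decided to discriminate, yet the opaque model applies the proxy-driven pattern uniformly to everyone it scores.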
So despite all the good that will certainly accompany the rise of AI, it's also clear that biased data, combined with suspect models, has the potential to create new risks, unfairness, and inequality. That is why it's important to be aware of their impact and to invest time now, before the technology fully matures and permeates our lives, in thinking about how to prevent such problems.
Discussion Questions
- What are some examples you can think of that show the negative impact of AI misuse?