Transcript
We’ve talked about regulation both in the context of this module, as well as in previous modules. But more broadly, should people be informed that they are being recorded and that their images are being analysed, processed, stored and used in other ways? Is that something that regulation should be concerned about?
The easy answer is yes, of course. If someone is recording you, you will want to know it, and most laws around the world do already have some level of notification requirement, unless the law carves out a journalistic exception.
But here’s the problem. With anything that’s ubiquitous, meaning it’s around us all the time, we become so desensitised, even to warnings, that we just tend to ignore them. Can you recall the last time you actually noticed a sign that said “CCTV in operation”? They are everywhere, and they are probably there just to satisfy a legal requirement. And if the operators were to use that video recording against you, perhaps in a court of law, they could say: we were authorised to do so, because we met this bare-minimum requirement.
If we think about the Taylor Swift example, very few people are going to read through the terms and conditions when they buy a ticket to a concert. We don’t, and we are lawyers. We are sure the same is probably true for you. On a daily basis, we click “I Accept” on so many notifications that the sheer ubiquity desensitises us to the fact that these are real legal notices.
As a society, if we’re going to take this stuff seriously, we have to start thinking about not only the moral but also the legal implications, in a very practical context, so that we take these notifications seriously and actually understand what rights we’re giving away. Because the reality is, every single day, we’re giving away pretty significant rights.
There is a large area of technology applications that is still completely unregulated, which is what we’re dealing with now in the context of AI. If you are the one who processed or analysed a recording, what legal obligations do you owe to the person you actually recorded? In many places in the world, this is completely unsettled, so much so that there are people and companies who can profit from selling these data to third parties.
In module 1, we talked about cultural lag – the idea that it often takes time for the culture within a society to catch up to the rapid changes in technology. One example of that is in David Bishop’s classroom. When asked if they have brought a camera to class, most of his students would remain silent for a few seconds before realising they all have a camera on their smartphones.
If you think about that from a cultural lag perspective, these technologies change so quickly that we have them on our person at all times. This means that we as individual citizens are also the ones surveilling those around us. Go on YouTube and you’ll see footage of an auto accident where dashcams in everyday cars are filming everything that’s going on. You’ll see that when individuals get into a fight or an altercation, bystanders automatically whip out their phones.
These technologies are expanding so fast that they really are around us all the time. We have to take some time to evaluate, from a cultural perspective, how we expect these things to evolve. Because if we don’t, companies, through various forms of capitalism, are going to make those decisions for us.
Discussion Questions
- From a cultural lag perspective, how do you think the fast-developing facial recognition technology will impact our lives?
- What are some of the potential benefits and risks?
- Would you ever consider wearing makeup or a mask that covers part of your face to hide from facial recognition cameras?