What we can learn about AI’s ethical issues from Disney’s Fantasia

Ethical issues related to AI are becoming more and more important. Discover the lessons we can learn from a certain iconic animated film.

Pranita Tamang
5 MIN | May 12, 2020
[Illustration: woman with magic broom]

As technologies like machine learning proliferate across every aspect of our lives, they’ll also appear more and more across the business technology landscape. So, now is as good a time as any to explore two questions every kind of organisation will need an answer to: why is it important that AI is ethical? And what, specifically, are the ethical dilemmas associated with AI? To answer them, I’ll draw on a source you may not expect: one of the most iconic animated films of all time. But first things first: before we get into the details of these ethical issues, let’s explore what the ‘ethics of AI’ really means.

The Terminator lied to you

It’s vital that future applications of AI do good for humanity. In popular culture, we’ve often looked at AI as something that’s either intrinsically good or evil in terms of its intent. Often, it’s a sinister digital being that seeks mankind’s downfall: Skynet, the Matrix, Megatron, HAL 9000, and so on.

This idea of AI having good or bad intentions is a red herring – at least at the technology’s current level. We’re still a long way off machines with sentience or sentiment. AI is still very much a tool, with no intent of its own beyond what we programme into it. Terms like ‘good’ and ‘bad’ are better applied to the end results of AI’s actions. AI may be programmed with the intent to serve us well, but the road to hell, as they say, is paved with good intentions.

The Sorcerer’s Apprentice

There’s a story I often bring up when talking about the dangers of AI: The Sorcerer’s Apprentice. It originally appeared in Johann Wolfgang von Goethe’s 18th-century poem ‘Der Zauberlehrling’, but you might have seen it in Disney’s extravaganza of animation and classical music: Fantasia.

The sorcerer’s apprentice, played by Mickey Mouse in the film, is tired of cleaning the sorcerer’s home, so he enchants a broom to do the work for him. This is AI fulfilling the basic mission statement of all technology, right back to stone tools and the wheel: we create machines to do work that saves us time and energy.

So far, so good. However, the enchanted broom is so relentless at its job that the place is soon flooded with water. Poor Mickey didn’t programme it to stop cleaning, or set the right parameters for what ‘clean’ means. All the broom knows is that it was told to clean. The situation quickly spirals out of control.

This is the danger that AI really poses for us, right now. Not an evil robot wanting to take over the world, but a tool that’s very good at the task we’ve given it – when the instructions we’ve given it are flawed. Or, in the case of AI that learns how to make decisions and do a job by itself, a tool that has learned the wrong lessons. AI is a great student; we just have to ensure we’re good teachers.
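To make the broom’s problem concrete, here’s a minimal, purely illustrative Python sketch of a misspecified objective – every name in it is hypothetical, and no real AI system works quite like this. The ‘broom’ is only ever told to clean, with no notion of ‘clean enough’, so it never stops fetching water:

```python
# A toy illustration of a misspecified objective: the 'broom' was told
# to clean, with no stopping condition and no definition of 'clean'.
# All names here are hypothetical; this is not a real AI system.

def broom_policy(floor_dirtiness: float) -> str:
    # The instruction was simply 'clean', so the broom always fetches water,
    # regardless of how dirty (or spotless) the floor already is.
    return "fetch_water"

def run_workshop(steps: int = 10) -> None:
    dirtiness = 1.0    # how dirty the floor is (1.0 = filthy, 0.0 = spotless)
    water_level = 0.0  # how flooded the workshop is

    for step in range(steps):
        action = broom_policy(dirtiness)
        if action == "fetch_water":
            dirtiness = max(0.0, dirtiness - 0.2)  # mopping helps...
            water_level += 0.5                     # ...but the water keeps coming
        # Nothing in the objective says 'stop when the floor is clean',
        # so the broom carries on and the workshop floods.
        print(f"step {step}: dirtiness={dirtiness:.1f}, water={water_level:.1f}")

if __name__ == "__main__":
    run_workshop()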

It’s a matter of trust

Trust is hugely important when it comes to AI. Popular culture has already sown some distrust – all those portrayals of evil robots in the movies. But, in reality, we don’t connect those images with the many everyday instances of AI making our lives easier: Alexa, Google, Snapchat filters, Amazon and Netflix recommendations. We already trust AI to do so much for us.

As time goes by, we’ll be trusting AI with even more important matters: whether your self-driving car decides to speed up or slow down; whether it decides that shape in the road is a plastic bag or a pedestrian; whether an AI checking medical records spots the signs of disease. You want to be able to trust that it’s making the right decisions, because they could be matters of life and death.

Explain yourself, AI!

This need to trust AI is where a concept called ‘explainability’ comes into play. If your mortgage application has been turned down by an AI, you’re going to want to know why – or at least know that somebody, a human somebody, can understand why. The AI’s reasoning should be explainable in terms we understand, so that we can say “OK, fair enough”.

The problem is, the smarter AI gets, the more it’s able to look at data and draw its own conclusions. That’s kind of the whole point: we don’t want to be constantly supervising and teaching AI; we want to let it learn its job from the data it’s given. But the smarter AI becomes, the harder it is for us to follow its thinking. It makes connections we never would, because it has access to more information than we can handle, and it sees patterns – in both the big picture and the tiny details – that we simply can’t.
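To give a flavour of what explainability can look like in practice, here’s a minimal sketch using scikit-learn. The mortgage-style features and data are entirely made up, and real explainability tooling (SHAP, LIME and the like) goes much further – but with a simple linear model, each coefficient tells us which features pushed a decision towards approval or rejection:

```python
# A toy example of an interpretable model. Feature names and data are
# invented for illustration; this is not a real lending system.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# Toy ground truth: approvals favour income and job stability, penalise debt.
y = (1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient is a human-readable explanation: how strongly that
# feature pushed decisions towards approval (+) or rejection (-).
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

For a simple linear model this works; the catch, as the next section explains, is that the most powerful models don’t come with coefficients you can read off like this.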

What’s in the (black) box?

This lack of explainability is referred to as the “black box of AI”: AI decision-making as a closed box that we cannot see into, and therefore cannot trust. A machine intelligence different to our own, which we cannot count on to look after our best interests and act for good. This is how the villain of popular culture manifests itself in modern AI – not as an evil robot, but as a machine trying to do a good job for us, a dog keen to fetch the sticks we throw, yet such an advanced learner that its decision-making is beyond our understanding and may not be making the right choices for us.

Explainability poses huge ethical issues in AI research, and it’s a safeguard that AI developers are working to build into their software. As AI becomes more and more widespread throughout our lives, the public will increasingly demand these safeguards in the digital tools they come into contact with.

The next hot-button ethical issue?

When an AI developer puts ethical AI at the core of its research, it’s committing to an aspect of AI that’s likely to be increasingly in demand. It’s an issue that already affects us all right now, and its importance is set to skyrocket in the coming weeks, months and years. With the advent of data protection regulations like the European Union’s General Data Protection Regulation (GDPR), we’ve already seen data protection and cybersecurity become hot-button tech issues of our times, and it’s likely that AI ethics will become another.

Responsible tech companies – and those trusted by the public – will be the ones who learn the lesson of The Sorcerer’s Apprentice. Rather than getting blindly carried away with AI’s potential to work for us ever more efficiently, we must make sure it’s working for us in ways we can trust.

If your business needs to communicate corporate social responsibility messages about AI ethics, data protection or sustainability in tech, Fifty Five and Five can help. We understand the issues and the technology, and we have the experience and expertise to tell your stories and make your selling points shine.