Climbing into the trolley: Cinema’s use of AI to extend moral and ethical dilemmas

Since Fritz Lang’s Metropolis, film-makers have given AI human characteristics in order to create the kinds of moral dilemmas typified by the infamous ‘trolley problem’ thought experiment. But what does this say about the important ethical decisions we need to make in our relationship with AI technology?

It is not hard to see why AI is an interesting starting point for a movie. Beyond the obvious storylines that explore the threat to our perception of being the dominant intelligent species, AI has become a lens through which to consider more existential questions – a way to interrogate the very condition of ‘being human’.

A.I.: ARTIFICIAL INTELLIGENCE (2001)

In order to do this, a persistent habit in cinema has been to cast AI in the form of a human body. Whether in examples as far back as Fritz Lang’s Metropolis (1927), with Maria’s robot double, or as recent as the childlike android David in Steven Spielberg’s A.I. Artificial Intelligence (2001), the question of what it is to be human is explored through the decision-making of a more-than-human. But what do these embodiments of artificial intelligence tell audiences about our own moral and ethical condition?

Before we dive into cinema’s role in presenting these issues, it is worth noting that cinema still struggles to cast AI into gendered forms without falling into stereotype. In most cases, manifestations of AI in a male form express a desire to exert power and assert intellectual superiority. Female embodiments may explore the same issues but carry an added dimension of sexualisation, a trait that mirrors the biases embedded in some large-scale datasets.

The ‘trolley problem’

While cinema audiences of the 1960s were contemplating the power of Alpha 60, the sentient computer that has complete control of the city of Alphaville in the Jean-Luc Godard film of the same name, or HAL 9000, the onboard computer in Stanley Kubrick’s 2001: A Space Odyssey that prioritises its own ‘life’ and the spacecraft’s mission over the lives of the crew, academics were developing thought experiments to explore moral and ethical dilemmas. Of the many experiments that emerged, the ‘trolley problem’ resonates with many of the cinematic plots through which audiences explore human deliberation and the logic of machines.

The trolley problem is relatively simple. A runaway trolley (or train) is heading towards five people tied to the tracks. On a sidetrack, one person is also tied down. You stand at a lever beside the track and face two options: do nothing and allow the trolley to continue on its path and kill five people, or pull the lever, diverting it onto the sidetrack and killing only one person.
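Reduced to pure arithmetic, the dilemma is trivial for a machine: a strictly consequentialist agent simply minimises expected deaths. A minimal sketch (the function name and scenario values are illustrative, not drawn from any real system):

```python
# A purely utilitarian reading of the trolley problem:
# pick whichever action leaves fewer people dead.

def choose_action(deaths_if_nothing: int, deaths_if_pull: int) -> str:
    """Return the action a strictly consequentialist agent would take."""
    if deaths_if_pull < deaths_if_nothing:
        return "pull the lever"
    return "do nothing"

# The classic scenario: five people on the main track, one on the sidetrack.
print(choose_action(deaths_if_nothing=5, deaths_if_pull=1))  # pull the lever
```

The point of the thought experiment, of course, is that many humans hesitate to act on this arithmetic at all, and that gap between the calculation and the hesitation is exactly what cinema dramatises.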

Image by McGeddon

As AI has crept into our lives, this thought experiment has become less abstract. In the hands of scientists, it has been aligned with the grand challenge to “help [the scientists] learn how to make machines moral”. Studies such as Moral Machine, developed by the Scalable Cooperation group at the MIT Media Lab, place viewers in a series of scenarios in which the trolley is swapped for an autonomous vehicle. The moral dilemma is complicated by introducing more information about the consequences of a decision: you might kill subjects of different ages, genders, states of physical health and species (human or cat).

Cinematic narrative as trolley problem

Of course, these dilemmas make for good movie plots, immersing the viewer in a moral quandary in which the decision-making of an AI in human form conflicts with a human protagonist or the community they represent. Most recently we see this in the Netflix film Outside the Wire, which pairs a human with an AI in what initially appear to be collaborative circumstances. As the story unfolds, the scriptwriters put the duo into increasingly fraught moral dilemmas in which the AI and the human take differing views.

The opening scenes see our human hero, Harp, a drone pilot based in a ground control station in the US, face the first of a series of these dilemmas. He is monitoring an incident involving American peacekeeping troops stationed in Eastern Europe, fighting pro-Russian insurgents. Harp decides to disobey his commanders and deploys a Hellfire missile, killing both American and Russian ground troops but ending the incident. During the subsequent military trial, Harp justifies his actions by stating, “There were 40 men on the ground, and I saved 38.”

Harp is punished for ignoring a direct order to hold fire, and is sent into the field, where he is assigned to Captain Leo, an advanced AI masquerading as a human officer. The scriptwriters construct a moral bond between the pair as Captain Leo asserts that Harp made the right decision at the time, revealing that Harp had more data about the circumstances of the incident than either the troops on the ground or the senior officers in command. Tension builds throughout the story as the pair are placed in situations that stress the relationship between the human and the AI, and as moral decisions shift with the politics of each scene.

However, as the story moves towards its conclusion, the intentions behind Captain Leo’s decisions become more clouded and Harp struggles to follow the logic. As we approach the final dilemma, the audience and Harp come to understand the reasoning behind Leo’s decision-making: he sees his own kind (autonomous robots) as an inevitable cause of future conflict, and concludes that the correct moral action is to launch a nuclear warhead at the USA to prevent it from using AIs in the future. The film literally targets its American audience with a moral dilemma that places them on the railway tracks of the ‘trolley problem’. Harp pleads with Leo, arguing that humanity must learn to design better AI in order to avoid the unnecessary deaths of millions of innocent people. I’ll let you watch the movie to find out what our all-American hero does next.

Outside the Wire may not be a great movie, but the scriptwriters’ decision to place the responsible development of AI in the hands of the viewer is particularly interesting. It suggests that AI won’t be going away anytime soon, and that we will likely have to play a part in an increasing number of the moral and ethical decisions needed to manage its outcomes.

