{"id":2316,"date":"2021-03-15T20:12:41","date_gmt":"2021-03-15T20:12:41","guid":{"rendered":"https:\/\/chrisspeed.net\/?p=2316"},"modified":"2021-03-24T12:19:35","modified_gmt":"2021-03-24T12:19:35","slug":"climbing-into-the-trolley-cinemas-use-of-ai-to-extend-moral-and-ethical-dilemmas","status":"publish","type":"post","link":"https:\/\/chrisspeed.net\/?p=2316","title":{"rendered":"Climbing into the trolley: Cinema\u2019s use of AI to extend moral and ethical dilemmas"},"content":{"rendered":"\n<p>Since Fritz Lang\u2019s Metropolis, film-makers have given AI human characteristics in order to create the kinds of moral dilemmas typified by the infamous \u2018trolley problem\u2019 thought experiment. But what does this say about the important ethical decisions we need to make in our relationship with AI technology? <\/p>\n\n\n\n<p>It is not hard to see why AI is an interesting starting point for a movie. Beyond the obvious storylines that explore the threat to our perception of being the dominant intelligent species, AI has become a lens through which to consider more existential questions \u2013 a way to interrogate the very condition of \u2018being human\u2019.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img decoding=\"async\" src=\"https:\/\/chrisspeed.net\/wp-content\/uploads\/2021\/03\/Screenshot-2021-03-15-at-20.08.51-1024x574.png\" alt=\"\" class=\"wp-image-2194\"\/><figcaption>A.I.: ARTIFICIAL INTELLIGENCE (2001)<\/figcaption><\/figure>\n\n\n\n<p>In order to do this, a persistent habit in cinema has been to cast AI in the form of a human body. Whether as far back as Fritz Lang\u2019s Metropolis (1927), with Maria\u2019s robot double, or as recently as the childlike android David in Steven Spielberg\u2019s A.I. Artificial Intelligence (2001), the question of what it is to be human is explored through the decision-making of a more-than-human. 
But what do these embodiments of artificial intelligence tell audiences about our own moral and ethical condition?<\/p>\n\n\n\n<p>Before we dive into cinema\u2019s role in presenting these issues, it is worth noting that cinema is still struggling to overcome significant challenges in <a href=\"https:\/\/www.wired.com\/2015\/04\/ex-machina-turing-bechdel-test\/\">casting AI into gendered forms<\/a>. In most cases, manifestations of AI in a male form demonstrate a desire to exert power and seek intellectual superiority. Female embodiments may seek to explore the same issues but come with an added dimension of sexualisation, a trait which exemplifies the biases that lie behind some <a href=\"https:\/\/www.nature.com\/articles\/s41746-020-0288-5\">large-scale datasets<\/a>.<\/p>\n\n\n\n<p><strong>The \u2018trolley problem\u2019<\/strong><\/p>\n\n\n\n<p>While cinema audiences of the 1960s were contemplating the power of Alpha 60, a sentient computer system that has complete control of the city of Alphaville in the Jean-Luc Godard film of the same name, or HAL 9000 in Stanley Kubrick\u2019s 2001: A Space Odyssey, the onboard computer that prioritises its own \u2018life\u2019 and the spacecraft\u2019s mission over the lives of the crew, academics were developing thought experiments to explore moral and ethical dilemmas. Of the many experiments that emerged, the \u2018trolley problem\u2019 resonates with many of the cinematic plots through which audiences explore human deliberation and the logic of machines.<\/p>\n\n\n\n<p>The trolley problem is relatively simple. There is a runaway trolley (or train), ahead of which there are five people tied to the tracks. On a sidetrack is one person who is also tied down. 
You stand beside a lever that controls the switch and are faced with two options: do nothing and allow the trolley to continue on its path, killing five people, or pull the lever, diverting the trolley onto the sidetrack and killing only one person.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/chrisspeed.net\/wp-content\/uploads\/2021\/03\/Screenshot-2021-03-24-at-12.18.49-1024x345.png\" alt=\"\" class=\"wp-image-2328\" width=\"574\" height=\"193\" srcset=\"https:\/\/chrisspeed.net\/wp-content\/uploads\/2021\/03\/Screenshot-2021-03-24-at-12.18.49.png 1024w, https:\/\/chrisspeed.net\/wp-content\/uploads\/2021\/03\/Screenshot-2021-03-24-at-12.18.49-300x101.png 300w, https:\/\/chrisspeed.net\/wp-content\/uploads\/2021\/03\/Screenshot-2021-03-24-at-12.18.49-768x259.png 768w, https:\/\/chrisspeed.net\/wp-content\/uploads\/2021\/03\/Screenshot-2021-03-24-at-12.18.49-1536x518.png 1536w, https:\/\/chrisspeed.net\/wp-content\/uploads\/2021\/03\/Screenshot-2021-03-24-at-12.18.49-2048x691.png 2048w\" sizes=\"auto, (max-width: 574px) 100vw, 574px\" \/><figcaption>Image by <a href=\"https:\/\/en.wikipedia.org\/wiki\/Trolley_problem#\/media\/File:Trolley_Problem.svg\">McGeddon<\/a><\/figcaption><\/figure>\n\n\n\n<p>As AI has crept into our lives, this thought experiment has become less abstract. In the hands of scientists, it has been aligned with the grand challenge to \u201chelp [the scientists] learn how to make machines moral\u201d. Studies such as <a href=\"https:\/\/www.moralmachine.net\/\">Moral Machine<\/a>, developed by the Scalable Cooperation group at the MIT Media Lab, place viewers in a series of scenarios in which the trolley is swapped for an autonomous vehicle. 
The moral dilemma is complicated through the introduction of more information about the consequences of a decision: that you might kill subjects of different ages, genders, states of physical health and species (human or cat).<\/p>\n\n\n\n<p><strong>Cinematic narrative as trolley problem<\/strong><\/p>\n\n\n\n<p>Of course, these dilemmas make for good plots in movies involving AI, immersing the viewer in a moral quandary where the decision-making of an AI in human form is in conflict with a human protagonist or the community they represent. Most recently, we see it used in the Netflix film <a href=\"https:\/\/www.youtube.com\/watch?v=u8ZsUivELbs\">Outside the Wire<\/a>, which places a human alongside an AI in what initially appear to be collaborative circumstances. As the story unfolds, the scriptwriters put the duo in increasingly contradictory moral dilemmas where the AI and the human hold differing views.<\/p>\n\n\n\n<iframe loading=\"lazy\" src=\"https:\/\/www.youtube.com\/embed\/u8ZsUivELbs\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen=\"\" width=\"560\" height=\"315\" frameborder=\"0\"><\/iframe>\n\n\n\n<p>The opening scenes see our human hero Harp, a drone pilot based in a ground control station in the US, in the first of a series of these dilemmas. He is monitoring an incident involving peacekeeping American troops stationed in Eastern Europe, fighting pro-Russian insurgents. Harp decides to disobey his commanders and deploys a Hellfire missile, killing American and Russian ground troops but ending the incident. During the subsequent military trial, Harp justifies his actions by stating, \u201cThere were 40 men on the ground, and I saved 38.\u201d<\/p>\n\n\n\n<p>Harp is punished for ignoring a direct order to hold fire, and is sent into action where he is assigned to Captain Leo, an advanced AI masquerading as a human officer. 
The scriptwriters construct a moral bond between the pair as Captain Leo asserts that Harp had made the right decision at the time, revealing that Harp had more data about the circumstances of the incident than either the troops on the ground or the senior officers in command. Tension builds throughout the story as the human and the AI are put in situations that place stress on their relationship, with moral decisions shifting according to the politics of each scene.<\/p>\n\n\n\n<p>However, as the story moves towards its conclusion, the intentions that inform Captain Leo\u2019s decisions become more clouded, and Harp struggles to follow the logic. As we approach the final dilemma, the audience and Harp are led to understand the reasoning behind Leo\u2019s decision-making &#8211; that he sees his kind (autonomous robots) as an inevitable cause of future conflict, and that the correct moral action is to launch a nuclear warhead at the USA to prevent the country from using AIs in the future. The scriptwriters are literally targeting American audiences with a moral dilemma that places them on the railway tracks of the \u2018trolley problem\u2019. Harp pleads with Leo, arguing that humanity must learn to design better AI in order to avoid the unnecessary deaths of millions of innocent people. I\u2019ll let you watch the movie to find out what our all-American hero does next.<\/p>\n\n\n\n<p>Outside the Wire may not be a great movie. But what is particularly interesting is the decision of the scriptwriters to place the responsible development of AI in the hands of the viewer. 
It suggests that AI won\u2019t be going away anytime soon, but it\u2019s likely we will have to play a part in an increasing number of moral and ethical decisions to manage its outcomes.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Since Fritz Lang\u2019s Metropolis, film-makers have given AI human characteristics in order to create the kinds of moral dilemmas typified by the infamous \u2018trolley problem\u2019 thought experiment. But what does this say about the important ethical decisions we need to make in our relationship with AI technology? It is not hard to see why AI [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[],"class_list":["post-2316","post","type-post","status-publish","format-standard","hentry","category-articles"],"_links":{"self":[{"href":"https:\/\/chrisspeed.net\/index.php?rest_route=\/wp\/v2\/posts\/2316","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/chrisspeed.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/chrisspeed.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/chrisspeed.net\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/chrisspeed.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2316"}],"version-history":[{"count":7,"href":"https:\/\/chrisspeed.net\/index.php?rest_route=\/wp\/v2\/posts\/2316\/revisions"}],"predecessor-version":[{"id":2330,"href":"https:\/\/chrisspeed.net\/index.php?rest_route=\/wp\/v2\/posts\/2316\/revisions\/2330"}],"wp:attachment":[{"href":"https:\/\/chrisspeed.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2316"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/chrisspeed.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2316"},{"taxonomy":"post_tag","embeddable":true,"hre
f":"https:\/\/chrisspeed.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2316"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}