Have you seen this video?

You’ll find it on this page at the MIT website, http://moralmachine.mit.edu/: ‘A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.’ You can choose from ten languages.

There are scores of posts about this topic on the web. In August 2016 The Verge wrote:

The Moral Machine adds new variations to the trolley problem: do you plow into a criminal or swerve and hit an executive? Seven pregnant women (who are jay-walking) or five elderly men (one of whom is homeless) plus three dogs? It’s basically a video game, and you’re trying to min-max human life based on which people you think most deserve to live and how active you are willing to be in their death… A serious question: what is the intended use for this information? The website describes it as ‘a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas,’ but the information that’s actually being gathered is more unsettling. Any output from this test will produce some kind of ranking of the value of life (executive > jogger > retiree > dog, e.g.), and since the whole test is premised on self-driving tech, it seems like the plan is to use that ranking to guide the moral decision-making of autonomous cars?
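As a thought experiment (and not a description of how the Moral Machine is actually built), the scenario variables The Verge lists could be captured in a very simple data structure. The field names and example values below are invented for illustration only.

```python
# A hypothetical sketch of how a single Moral Machine-style dilemma could be
# represented. Field names and values are invented for illustration; the
# actual platform's data model is not described in the sources quoted here.
from dataclasses import dataclass
from typing import List

@dataclass
class Character:
    species: str        # "human" or "animal"
    age_group: str      # e.g. "child", "adult", "elderly"
    role: str = "none"  # e.g. "executive", "criminal", "jogger"

@dataclass
class Dilemma:
    stay_course: List[Character]  # who is killed if the car does not swerve
    swerve: List[Character]       # who is killed if the car swerves
    jaywalking: bool = False      # are the pedestrians crossing illegally?

# One of The Verge's examples: a criminal versus an executive.
example = Dilemma(
    stay_course=[Character("human", "adult", "criminal")],
    swerve=[Character("human", "adult", "executive")],
)
```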

This week, The Verge wrote on this topic again: ‘If self-driving cars become widespread, society will have to grapple with a new burden: the ability to program vehicles with preferences about which lives to prioritize in the event of a crash. Human drivers make these choices instinctively, but algorithms will be able to make them in advance. So will car companies and governments choose to save the old or the young? The many or the few?’

The headline of The Guardian’s 24 October 2018 post reads: ‘Who should AI kill in a driverless car crash? It depends who you ask: responses vary around the world when you ask the public who an out-of-control self-driving car should hit.’

The article begins, ‘Responses to those questions varied greatly around the world. In the global south, for instance, there was a strong preference to spare young people at the expense of old – a preference that was much weaker in the far east and the Islamic world. The same was true for the preference for sparing higher-status victims – those with jobs over those who are unemployed. When compared with an adult man or woman, the life of a criminal was especially poorly valued: respondents were more likely to spare the life of a dog (but not a cat).’

In a 24 October 2018 post, PBS NewsHour writes:

On Wednesday, the team behind the Moral Machine released responses from more than two million people spanning 233 countries, dependencies and territories. They found a few universal decisions — for instance, respondents preferred to save a person over an animal, and young people over older people — but other responses differed by regional cultures and economic status.

The findings are important as autonomous vehicles prepare to take the road in the U.S. and other places around the world. In the future, car manufacturers and policymakers could find themselves in a legal bind with autonomous cars. If a self-driving bus kills a pedestrian, for instance, should the manufacturer be held accountable?

The study’s findings offer clues on how to ethically program driverless vehicles based on regional preferences, but the study also highlights underlying diversity issues in the tech industry — namely that it leaves out voices in the developing world.

Nature’s news site posted an article (24 October 2018) about the publication of the paper.

‘…People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules,’ says Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge and a co-author of the study.

‘The survey, called the Moral Machine, laid out 13 scenarios in which someone’s death was inevitable. Respondents were asked to choose who to spare in situations that involved a mix of variables: young or old, rich or poor, more people or fewer…’  The post includes this video:

The research was published (24 October 2018) in the journal Nature (volume 563, pages 59–64, 2018) and can be read online at the link.

Abstract: ‘With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.’
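To make concrete what ‘summarizing global moral preferences’ from millions of pairwise choices might involve, here is a minimal, hypothetical sketch. It simply ranks character types by how often respondents chose to spare them; the responses below are invented, and the published study relies on far more sophisticated statistical analysis than this.

```python
# A toy illustration (not the study's actual method) of how pairwise
# "who to spare" choices could be aggregated into a preference ranking.
# All data below are invented for demonstration purposes.
from collections import defaultdict

# Each response records which character type the respondent chose to spare
# in a head-to-head dilemma: (spared, not_spared).
responses = [
    ("child", "adult"),
    ("adult", "dog"),
    ("child", "dog"),
    ("adult", "child"),
    ("dog", "cat"),
    ("adult", "cat"),
]

wins = defaultdict(int)         # times a character type was spared
appearances = defaultdict(int)  # times it appeared in any dilemma

for spared, not_spared in responses:
    wins[spared] += 1
    appearances[spared] += 1
    appearances[not_spared] += 1

# Rank character types by the fraction of dilemmas in which they were spared.
ranking = sorted(appearances, key=lambda c: wins[c] / appearances[c], reverse=True)
print("Spare-rate ranking:", ranking)
```

On this invented data the ranking comes out as adult > child > dog > cat, which is the kind of ‘ranking of the value of life’ The Verge worried about above; the real study reports preferences per attribute (age, species, number of lives and so on) rather than a single league table.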

At first sight, this topic may not seem directly related to my blog’s focus, ‘Vision: Tech in IB Schools’, but it will be in the near future. The moral implications of teaching AI to make decisions that IB learners and teachers can support should be discussed in school communities now, and often.