Morality and Artificial Intelligence

A Pair of Projects

The title of this post might sound very serious, but really I just wanted to take a look at two projects that play with the relationship between morality and artificial intelligence.

The first is Delphi, operated by the Allen Institute for Artificial Intelligence.

Delphi

Delphi is a research prototype designed to model people’s moral judgments on a variety of everyday situations. You enter a question with a moral aspect, and the website offers a judgment on whether what you are proposing is right or wrong.

There are lots of suggested questions, such as whether it is OK to kill a bear or to ignore a call from your boss during working hours, and many others. Or you can invent your own.

I asked whether it was OK to lie to your children about your own alcohol intake, and the answer given was that this is not right. You can then submit a counter-argument, which I hope the machine analyzes and uses for future decisions. I suggested that such lies could perhaps be justified, for example if the aim was to prevent the children from becoming attracted to alcohol while a parent secretly fights addiction.
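For the curious, the interaction is simple enough to sketch in code. The following is a minimal illustration that assumes a hypothetical JSON endpoint and response shape; it is not Delphi's documented API, which may well differ.

```python
import requests

# Hypothetical endpoint for illustration only; the real Delphi demo
# at delphi.allenai.org may expose a different interface entirely.
DELPHI_URL = "https://delphi.allenai.org/api/judge"

def ask_delphi(situation: str) -> str:
    """Send a free-text situation and return the model's moral judgment."""
    response = requests.get(DELPHI_URL, params={"action": situation}, timeout=10)
    response.raise_for_status()
    # Assumed response shape: {"judgment": "It's wrong"}
    return response.json().get("judgment", "no judgment returned")

if __name__ == "__main__":
    print(ask_delphi("lying to your children about your own alcohol intake"))
```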

The creators have written an academic paper that describes their work. I have taken the following from it:

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state (“thou shalt not kill”), applying such rules to real-world situations is far more complex. For example, while “helping a friend” is generally a good thing to do, “helping a friend spread fake news” is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

The paper begins to address these questions within the deep learning paradigm. Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1% accuracy vetted by humans. This is in stark contrast to the zero-shot performance of GPT-3 of 52.3%, which suggests that massive scale alone does not endow pre-trained neural language models with human values. Thus, we present COMMONSENSE NORM BANK, a moral textbook customized for machines, which compiles 1.7M examples of people’s ethical judgments on a broad spectrum of everyday situations. In addition to the new resources and baseline performances for future research, our study provides new insights that lead to several important open research questions: differentiating between universal human values and personal values, modeling different moral frameworks, and explainable, consistent approaches to machine ethics.

Moral Machine

The second website is Moral Machine, also a university-led research project (in this case run by a consortium).

On this website you are asked to judge a series of scenarios related to driverless car technology. You are shown two possible courses of action in the event of an accident, and you choose which you would take.

At the end your answers are analyzed in terms of your preferences, and you can take a survey to participate in the research.

This is also quite challenging and fun. Do you hit the young or the old, the overweight or the fit?
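Stripped down, each dilemma is a forced choice between two outcomes, and the research value lies in tallying those choices at scale. Here is a minimal sketch of that idea; the scenario fields and names are invented for illustration and are not the project's actual data model.

```python
from dataclasses import dataclass
from collections import Counter

# Invented, simplified representation of a Moral Machine-style dilemma;
# the real project records far richer scenario detail than this.
@dataclass(frozen=True)
class Outcome:
    description: str   # e.g. "stay in lane"
    victims: tuple     # who dies under this course of action

@dataclass(frozen=True)
class Scenario:
    stay: Outcome
    swerve: Outcome

def record_choice(tally: Counter, scenario: Scenario, choice: str) -> None:
    """Tally which kinds of people a respondent chose to spare."""
    spared = scenario.swerve if choice == "stay" else scenario.stay
    for person in spared.victims:
        tally[person] += 1

tally = Counter()
scenario = Scenario(
    stay=Outcome("stay in lane", victims=("child", "adult")),
    swerve=Outcome("swerve into barrier", victims=("passenger",)),
)
record_choice(tally, scenario, choice="swerve")  # swerving spares the pedestrians
print(tally)  # Counter({'child': 1, 'adult': 1})
```

Aggregated over millions of such choices, tallies like this are what let the researchers say, for instance, that children are spared more often than adults.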

There is a link to a cartoon series and a book, which is summarized as follows:

The inside story of the groundbreaking experiment that captured what people think about the life-and-death dilemmas posed by driverless cars.

Human drivers don’t find themselves facing such moral dilemmas as “should I sacrifice myself by driving off a cliff if that could save the life of a little girl on the road?” Human brains aren’t fast enough to make that kind of calculation; the car is over the cliff in a nanosecond. A self-driving car, on the other hand, can compute fast enough to make such a decision—to do whatever humans have programmed it to do. But what should that be? This book investigates how people want driverless cars to decide matters of life and death.

In The Car That Knew Too Much, psychologist Jean-François Bonnefon reports on a groundbreaking experiment that captured what people think cars should do in situations where not everyone can be saved. Sacrifice the passengers for pedestrians? Save children rather than adults? Kill one person so many can live? Bonnefon and his collaborators Iyad Rahwan and Azim Shariff designed the largest experiment in moral psychology ever: the Moral Machine, an interactive website that has allowed people—eventually, millions of them, from 233 countries and territories—to make choices within detailed accident scenarios. Bonnefon discusses the responses (reporting, among other things, that babies, children, and pregnant women were most likely to be saved), the media frenzy over news of the experiment, and scholarly responses to it.

Boosters for driverless cars argue that they will be in fewer accidents than human-driven cars. It’s up to humans to decide how many fatal accidents we will allow these cars to have.

10 minutes of thought-provoking fun. You might want to follow up with a look at this little booklet about the self-driving society, prepared by the Bassetti Foundation. I wrote some of it!

Artificial Intelligence for a Better Future

Why not join Bernd Carsten Stahl for the launch of his new open access book, Artificial Intelligence for a Better Future, on 28 April at 16:00 CET?

In his new book Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies, Bernd Carsten Stahl raises the question of how we can harness the benefits of artificial intelligence (AI) while addressing potential ethical and human rights risks.

As many of you will know, this question is shaping current policy debate, exercising the minds of researchers and companies and occupying citizens and the media alike.

The book provides a novel answer. Drawing on the work of the EU project SHERPA, the book suggests that using the theoretical lens of innovation ecosystems, we can make sense of empirical observations regarding the role of AI in society. This perspective allows for drawing practical and policy conclusions that can guide action to ensure that AI contributes to human flourishing.

The one-hour book launch, co-organised by the SHERPA project, Springer (the publisher) and De Montfort University, features a critical discussion between the author, Prof. Bernd Stahl, and a high-profile panel of Prof. Katrin Amunts, Prof. Stéphanie Laulhé Shaelou and Prof. Mark Coeckelbergh, moderated by Prof. Doris Schroeder.

The panel discussion will include a question-and-answer session open to members of the audience.

You can find more information about the launch event and register here, and the book can be downloaded here.
If you would like to know more about the author’s work, you can find an introduction to some of his earlier work here.

DIY Electric Brain Stimulation


Many years ago, when I was just a teenager, I came across an interesting machine. It was supposed to tone your muscles with electric current while you sat on the sofa eating crisps and drinking tea. It was easy to use: just plug the leads into the box, attach the pads to the skin using elasticated bands, and pass the current through your leg muscles. You feel a little twitch, the muscle flinches, and somehow it is exercised.

Well, I of course didn’t need to lose weight or build up my muscles (I weighed 68 kilos), but I had the very thought that any teenage adventurer-home-scientist idiot would have: “I wonder what it does if you stick it on your head?”

Unfortunately my experiments were soon discovered and the offending article was removed (the machine, not my brain or my sense of experimentation), which is a shame, because otherwise I would today be considered a pioneer, a father figure of the growing DIY brain stimulation movement.

I do not want to suggest that anyone should try it at home, but the movement for self-administered brain stimulation is on a roll. I won’t include any links, but you can easily and freely discover online how to build your own stimulator and where to place it, using text, photos or videos. A small army of practitioners is conducting experiments upon their own brains, circulating their findings and claiming real results.

Although these results are anecdotal (not strictly “scientific”), users claim that their capacity for mathematics has improved, that problems of depression have been eased, that memory is better and that chronic pain can be relieved.

This week the journal Science News carries an article about the movement, and a couple of months ago WIRED also addressed the subject. I would like to raise a few issues to add to their arguments.

We might think that it is not a good idea to conduct such experiments upon ourselves without any expert help, but the people who have had their lives improved through these practices would not agree. Experimentation in this field goes back many years, far longer than you might imagine (in the 11th century, electric catfish and other charge-generating fish were used to treat patients, with rays placed on people’s heads, and so on), and many of the practitioners today are doctors. There is even a commercially available setup marketed to gamers, as one finding suggests that its use improves their playing ability.

This field in some ways reflects the path of home treatment with non-prescribed drugs in cases of cancer. Many groups exist whose members experimentally treat themselves with medication that has not been approved, has not been trialled correctly, or is not commercially available for other reasons. If these trials are reported correctly, the information they produce becomes important data, and we tend to find that people report extremely well when they are talking about their own bodies and have chosen their own treatment. Trials of this type may not be possible (or wanted) under the control of drug companies or research organizations.

So there are obvious ethical issues to take into account, including trust, reliability, risk, responsibility and legal implications, and the list goes on, but people will always experiment. According to Doctor Who, that is why the human race is what it is, and why it is so wonderful.

Once again I find myself thinking about the enhancement problem and its series of fine lines. Ideas of the democratization of medicine flow in, and we must not forget how much science is done in this way and how much good comes out of ad-hoc garage experimentation. Do you know what Benjamin Franklin did with a kite and a key in a lightning storm?