Why not join Bernd Carsten Stahl for the launch of his new Open Access book on Artificial Intelligence for a Better Future on 28 April, at 16:00 CET?
In his new book Artificial Intelligence for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies, Bernd Carsten Stahl raises the question of how we can harness the benefits of artificial intelligence (AI) while addressing potential ethical and human rights risks.
As many of you will know, this question is shaping current policy debate, exercising the minds of researchers and companies, and occupying citizens and the media alike.
The book provides a novel answer. Drawing on the work of the EU project SHERPA, it suggests that, through the theoretical lens of innovation ecosystems, we can make sense of empirical observations regarding the role of AI in society. This perspective allows for drawing practical and policy conclusions that can guide action to ensure that AI contributes to human flourishing.
The one-hour book launch, co-organised by the SHERPA project, Springer (the publisher) and De Montfort University, features a critical discussion between author Prof. Bernd Stahl and a high-profile panel of Prof. Katrin Amunts, Prof. Stéphanie Laulhé Shaelou and Prof. Mark Coeckelbergh, moderated by Prof. Doris Schroeder.
The panel discussion will include a question-and-answer session open to members of the audience.
You can find more information about the launch event and register here, and the book can be downloaded here.
Many years ago, when I was just a teenager, I came across an interesting machine. It was supposed to tone your muscles with electric current while you sat on the sofa eating crisps and drinking tea. Easy to use: just plug the leads into the box, attach the pads to the skin using elasticated bands, and pass the current through your leg muscles. You feel a little twitch, the muscle flinches, and somehow it is exercised.
Well, I of course didn’t need to lose weight or build up my muscles (I weighed 68 kilos), but I had the very thought that any teenage adventurer/home-scientist idiot would have: “I wonder what it does if you stick it on your head?”
Unfortunately my experiments were soon discovered and the offending article was removed (the machine, not my brain or sense of experimentation), which is a shame, because otherwise I would today be considered a pioneer, the father figure of the growing DIY brain stimulation movement.
I do not want to suggest that anyone should try it at home, but the movement for self-administered brain stimulation is on a roll. I won’t include any links, but you can discover how to build your own stimulator and where to place it via text, photos or videos easily and freely available online. A small army of practitioners is conducting experiments upon their own brains, circulating their findings and claiming real results.
Although these results are anecdotal (not totally “scientific”), users claim that their capacities for mathematics have improved, that symptoms of depression have been eased, that memory is better and that chronic pain can be relieved.
We might think that it is not a good idea to conduct such experiments upon ourselves without any expert help, but the people who have had their lives improved through these actions would not agree. Experimentation in this field goes back many years, far longer than you would imagine (in the 11th century, electric catfish and other charge-generating fish were proposed to treat patients, with rays placed on people’s heads, and so on), and many of the practitioners today are doctors. There is even a commercially available setup marketed to gamers, as one finding suggests that its use improves their playing capacity.
This field in some way reflects the path of home treatment using non-prescribed drugs in cases of cancer. Many groups exist that experimentally treat themselves with medication that has not been approved, has not been trialled correctly, or is not commercially available for other reasons. If these trials are reported correctly, the information they produce becomes important data, and we tend to find that people report extremely well when they are talking about their own bodies and have chosen their own treatment. And trials of this type may not be possible (or wanted) under the control of drug companies or research organizations.
So there are obvious ethical issues to take into account, including trust, reliability, risk, responsibility, legal implications, and the list goes on. But people will always experiment. According to Doctor Who, that is why the human race is what it is, why it is so wonderful.
Once again I find myself thinking about the enhancement problem and its series of fine lines. Ideas of the democratization of medicine flow in, and we must not forget how much science is done in this way and how much good comes out of ad-hoc garage experimentation. Do you know what Benjamin Franklin did with a kite and a key in a lightning storm?
Facebook are back in the news again, this time for conducting research without the consent of their users. Although maybe that is a false statement: users may well have signed those rights away without realizing it.
All Facebook did was “deprioritiz[e] a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012). Nobody’s posts were ‘hidden,’ they just didn’t show up on some loads of Feed. Those posts were always visible on friends’ timelines, and could have shown up on subsequent News Feed loads”. This is the explanation offered by the author of the report about the experiment. Read the full text here.
Simply speaking, they wanted to adjust the type of information a user was exposed to, to see if it affected their mood. So if a user receives lots of positive news, what will happen to them? What will they post about?
Some studies have suggested that heavy Facebook use tends to lead to people feeling bad about themselves. The logic is simple: all my friends post about how great their lives are, the good side, we might say. I, who have a life with both ups and downs, am not exposed to their downs, so I feel that I am inadequate.
This sounds reasonable. I am not a Facebook user, but the odd messages I get are rarely about arguing with partners, tax problems, getting locked out of the house, flat tyres, missed meetings or parking tickets. I presume Facebook users do not suffer from these issues; they always seem to be smiling.
So, in order to test the hypothesis, they applied a little manipulation to the news feed: show more positive or more negative words, and then look to see how users’ posts are affected. Statistically, the theory above does not seem to hold water, although bearing in mind the methodology (and who conducted the study) I take the claims with a pinch of salt. More positive words tend to lead to more positive posts in response.
Hardly rocket science we might say.
I have a degree in sociology and an MA in Applied Social Research, and I work in the field. Conducting experiments of this type is not allowed in professional circles: it is considered unethical, there is no informed consent, rights are infringed upon, and the list goes on. What if somebody had done something serious during the experiment?
Of course “The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product”.
If readers are interested in looking at a few other fun experiments that might be considered ethically dubious, I can offer a few. Check out the Stanley Milgram experiment, where people administered (fake) electric shocks to other people who got the answers to their questions wrong. Yale University here, not some fringe department of psychology. Researchers were investigating reactions to authority, and the results are very interesting, but you couldn’t do it today.
Or how about the so-called Monster Study? The Monster Study was a stuttering experiment on 22 orphan children in Davenport, Iowa, in 1939, conducted by Wendell Johnson at the University of Iowa. After placing the children in control and experimental groups, research assistant Mary Tudor gave positive speech therapy to half of the children, praising the fluency of their speech, and negative speech therapy to the other half, belittling the children for every speech imperfection and telling them they were stutterers. Many of the normally speaking orphan children who received negative therapy in the experiment suffered negative psychological effects, and some retained speech problems for the rest of their lives. The University of Iowa publicly apologized for the Monster Study in 2001.
Terrible as these experiments may sound, they were conducted in the name of science, and their results may have proved useful. Facebook (along with 23andMe and other commercial entities) are behaving in the way they are because they want to make more money; their interest lies solely there (even if they dress it up as better user experience). And in the case of Facebook, they have access to 1.3 billion users, and a mandate to do whatever they like with them.