Facebook’s Social Research Experiment

Facebook are back in the news again, this time for conducting research without the consent of their users. Although maybe that is not quite accurate: users may well have signed those rights away without realizing it.

All Facebook did was “deprioritizing a small percentage of content in News Feed (based on whether there was an emotional word in the post) for a group of people (about 0.04% of users, or 1 in 2500) for a short period (one week, in early 2012). Nobody’s posts were ‘hidden,’ they just didn’t show up on some loads of Feed. Those posts were always visible on friends’ timelines, and could have shown up on subsequent News Feed loads”. This is the explanation offered by the author of the report on the experiment. Read the full text here.

Simply speaking, they wanted to adjust the type of information a user was exposed to and see whether it affected their mood. So if a user receives lots of positive news, what will happen to them? What will they post about?
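To make that mechanism a little more concrete, here is a rough, purely hypothetical sketch in Python of the kind of filtering being described: for a tiny sample of users, some posts containing “emotional” words are skipped on a given load of the feed (the published description mentions both positive- and negative-word conditions; only one is sketched here). The word list, function names and drop rate are my own inventions, not Facebook’s.

```python
# A rough, purely hypothetical sketch of the kind of filtering described above.
# The word list, names and drop rate are mine; Facebook's actual code is not public.
import random

NEGATIVE_WORDS = {"sad", "angry", "awful", "terrible", "hate"}

def in_experiment(user_id):
    """Roughly 1 in 2500 users (0.04%), assuming numeric user ids."""
    return user_id % 2500 == 0

def load_feed(posts, user_id):
    """Skip some emotionally negative posts on this load of the feed only."""
    if not in_experiment(user_id):
        return posts
    shown = []
    for post in posts:
        words = set(post.lower().split())
        if words & NEGATIVE_WORDS and random.random() < 0.1:
            continue   # deprioritized on this load; still visible on the friend's timeline
        shown.append(post)
    return shown
```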

Some studies have suggested that heavy Facebook use tends to leave people feeling bad about themselves. The logic is simple: all my friends post about how great their lives are, the good side, we might say. I, whose life has both ups and downs, am never shown their downs, so I feel that I am inadequate.

This sounds reasonable. I am not a Facebook user, but the odd messages I get are rarely about arguments with partners, tax problems, getting locked out of the house, flat tyres, missed meetings or parking tickets. I presume Facebook users do not suffer from these issues; they always seem to be smiling.

So to test the hypothesis, the news feed was manipulated a little, towards more positive or more negative words, and the researchers then looked to see how users’ own posts were affected. The result: more positive words tend to lead to more positive posts in response, which does not appear to support the theory above, although bearing in mind the methodology (and who was conducting it) I take the claims with a pinch of salt.

Hardly rocket science, we might say.

I have a degree in sociology and an MA in Applied Social Research, and I work in the field. Conducting experiments of this type is not allowed in professional circles: it is considered unethical, there is no informed consent, rights are infringed upon, and the list goes on. What if somebody had done something serious during the experiment?

Of course, “The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product”.

If readers are interested in looking at a few other fun experiments that might be considered ethically dubious, I can offer a few. Check out the Stanley Milgram experiment, in which people administered (fake) electric shocks to other people who got the answers to their questions wrong. This was Yale University, not a fringe department of psychology. Researchers were investigating obedience to authority, and the results are very interesting, but you couldn’t do it today.

Or how about the so-called Monster Study? The Monster Study was a stuttering experiment on 22 orphan children in Davenport, Iowa, conducted in 1939 by Wendell Johnson at the University of Iowa. After placing the children in control and experimental groups, research assistant Mary Tudor gave positive speech therapy to half of the children, praising the fluency of their speech, and negative speech therapy to the other half, belittling the children for every speech imperfection and telling them they were stutterers. Many of the normally speaking orphan children who received negative therapy suffered negative psychological effects, and some retained speech problems for the rest of their lives. The University of Iowa publicly apologized for the Monster Study in 2001.

Terrible as these experiments may sound, they were conducted in the name of science, and their results may have proved useful. Facebook (along with 23andMe and other commercial entities) are behaving in the way they are because they want to make more money; that is where their interest solely lies (even if they dress it up as better user experience). And in the case of Facebook, they have access to 1.3 billion users, and a mandate, it seems, to do whatever they like with them.

Yawn Free Coffee


EDITOR NOTE: This is Jonny’s 100th post! From his humble beginnings writing about elective amputation, Jonny has taken Technology Bloggers by storm! Jonny started as a contributor, soon after earning himself author status, and he has recently been awarded editor status. Congratulations and thank you from me and the rest of the community Jonny, you deserve it. Here is to the next 100! 😉 – note by Christopher

Oh, I am rather tired this morning, like many others. I need my daily coffee. Sometimes I imagine a world where my surroundings understand me, my needs and my wishes. I had a Teasmade once, which was the closest I ever came to the automated good life, but times have moved on.

Face recognition software offers the dream of a newly serviced life. And the dream is here already; well, not here exactly, but in South Africa.

Yes, coffee producer Douwe Egberts have built a coffee machine that uses a camera and software that can read your face. When it sees a person yawn, it automatically produces a free cup of coffee for them. Check out this video on YouTube, or get a free coffee by yawning next time you pass through O.R. Tambo International Airport.
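Out of curiosity, here is a minimal sketch of how such yawn detection might work, using a webcam, OpenCV, dlib and the standard 68-point facial landmark model: when the mouth opens wide relative to its width, we call it a yawn. The threshold, the dispense_coffee() stub and the model file path are my own assumptions; Douwe Egberts have not published how their machine actually works.

```python
# A minimal sketch of yawn-triggered dispensing, assuming a webcam, OpenCV, dlib
# and the standard 68-point facial landmark model. Threshold, stub and model path
# are guesses, not the real machine's logic.
import cv2
import dlib
from scipy.spatial import distance

PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed local model file
YAWN_THRESHOLD = 0.6   # mouth-aspect-ratio above this counts as a yawn (a guess)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def mouth_aspect_ratio(shape):
    """Mouth opening height divided by width, from the inner-lip landmarks (60-67)."""
    p = [(shape.part(i).x, shape.part(i).y) for i in range(60, 68)]
    vertical = (distance.euclidean(p[1], p[7]) +
                distance.euclidean(p[2], p[6]) +
                distance.euclidean(p[3], p[5])) / 3.0
    horizontal = distance.euclidean(p[0], p[4])
    return vertical / horizontal

def dispense_coffee():
    print("Yawn detected - enjoy your free coffee!")  # placeholder for the machine

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 0):
        if mouth_aspect_ratio(predictor(gray, face)) > YAWN_THRESHOLD:
            dispense_coffee()
```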

This is of course all done for publicity, but it does open up a train of thought that leads into science fiction.

This is not my first post about face recognition software. I wrote one earlier this year about Verizon’s project to fit it to set-top cable boxes, and the year before about mobile recognition apps, and since then there have been a few developments that I would like readers to note.

Researchers have been working on identifying individual animals using the same kind of software. Cameras are often used to count wildlife in studies, but the problem often arises of determining which animals may have been counted twice. This problem could be overcome if the software could recognize the individual beasts, and scientists at Leipzig Zoo have been working on such a project.

Do you know this one?


They have 24 chimpanzees to work with, and have designed a system that recognizes individual animals with up to 83% accuracy. The difficulty, though, is getting good photos in the wild, and in dim light the accuracy quickly drops, so the researchers have been designing new parameters to improve recognition in broader conditions.

Check out the article here to learn more.
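For anyone curious about why individual recognition helps with counting, here is a toy illustration: if each sighting can be reduced to a numeric “fingerprint” (an embedding), sightings whose fingerprints are very close together can be merged into one animal. The embeddings, threshold and function below are entirely made up; the Leipzig system’s internals are not described here.

```python
# Toy illustration of de-duplicating wildlife sightings by similarity of
# hypothetical face embeddings. All numbers below are made up.
import numpy as np

def count_individuals(embeddings, threshold=0.4):
    """Greedily merge sightings whose embeddings are within `threshold` of each other."""
    individuals = []                      # one representative embedding per animal
    for emb in embeddings:
        if not any(np.linalg.norm(emb - rep) < threshold for rep in individuals):
            individuals.append(emb)       # new animal, not seen before
    return len(individuals)

# Five sightings, but only three distinct "animals" in this made-up data.
sightings = [np.array([0.10, 0.90]), np.array([0.12, 0.88]),
             np.array([0.80, 0.20]), np.array([0.81, 0.19]),
             np.array([0.50, 0.50])]
print(count_individuals(sightings))       # -> 3
```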

On a slightly less positive note, Facebook are again at the centre of recognition privacy concerns. Once again, proposed changes to their privacy policy mean that information already uploaded is to be used differently.

Facebook has indicated that it will now reserve the right to add user profile pictures to its facial recognition database. Currently, only photos that a Facebook friend uploads and tags with a user’s name go into the facial recognition system. By opting out of the tag suggesting feature and declining to allow friends to tag him or her, a user can avoid being included in the social network’s facial recognition database.

No more might this be the case!

The change would mean that every user, of a population of a billion, whose face is visible in his or her profile photo would be included in the database. To sidestep the new feature, users will have to avoid showing their faces in their profile photos and delete any previous profile photos in which their faces are visible.

Facebook have, however, had problems implementing their recognition policies in Europe; in fact the system was turned off there in August of last year, but the new policy seems to be another attempt at opening the door. See this article for a review of the arguments.

Regardless of whether you as an individual take these precautions, millions will not, and the database will grow massively overnight. That will be worth a lot of money to somebody somewhere down the line, and it will have implications for all of us.

How Much Freedom Does the Internet Bring You?

On the surface, Internet living seems to bring a great deal of freedom to many different parties. Last month, for example, I posted from the USA, Italy and the UK; we can work from home, buy direct and have access to all kinds of information.

This might make us feel that the web itself creates freedom, or that we are free to operate on it as we wish. I am not so sure that this is the whole story, however, and others agree.

How much freedom of speech really exists?


Last week security technologist Bruce Schneier gave a talk as part of the TEDx Cambridge series. Schneier is very interested in security and perceptions of security, as this previous TED video shows, but last week’s talk was different.

He took the problem of Internet freedom as his topic, and raised some very interesting arguments. The following quotes are taken from his speech as reported on our local Boston.com website:

“Which type of power dominates the coming decades? Right now it looks like traditional power. It’s much easier for the NSA to spy on everyone than it is for anyone to maintain privacy. China has an easier time blocking content than its citizens have getting around those blocks.”

We can see that there is some evidence to support this case if we look at this article, which appeared in the Huffington Post a couple of years ago. It recounts the tale of Google pulling out of China because it no longer wanted to censor its searches. Google chose to redirect users to its uncensored search engine based in Hong Kong. The Chinese government managed to block the results anyway, so users were left in the same position as before: no access to the information.

If we take a broader look, though, we find that it is not just China: other countries are making repeated requests for Google to censor content too. CNN report the revelations of the recent Google Transparency Report, in which Canada, France, the UK and the USA feature strongly in the league table of requested censorship. The report is here; it is easy to follow, and a five-minute thumb-through might change your ideas about freedom and regulation on the web.

Just yesterday LinkedIn announced that they are challenging the US government over data requests. US organizations are allowed to publish the total number of data requests they receive, but cannot break the figure down to reveal the number made by the security services. LinkedIn say this legal situation makes no sense, and many other companies agree. Read about it here.

“Cyber criminals can rob more people more quickly than real-world criminals, digital pirates can make more copies of more movies more quickly than their analog ancestors. And we’ll see it in the future. 3D printers mean control debates are soon going to involve guns and not movies.”

Just this week The Independent ran a story about Europe’s criminal intelligence agency, which is fighting unprecedented levels of crime across several fronts as gangs capitalise on new technology. We are not talking about a few individuals hacking into the odd bank account here and there; we are looking at a new form of organized crime, a multi-billion-dollar industry in Europe alone.

The gun reference is, of course, to the distribution of plans for a 3D-printed gun. Read about it here.

Caution in cases of political dissent


Much has been written about how Facebook and other platforms have the power to democratize society, and about their potential to promote revolution. The so-called Arab Spring is often given as an example, but as well as dissidents using Facebook to organize protests, the Syrian and other governments also used Facebook to identify and arrest dissidents.

There are plenty of examples. Here is an article about three Moroccan activists who were arrested for comments criticizing their governments at the time. One used a WikiLeaks-type platform, another Facebook and the third YouTube. They were all arrested and charged with various, and sometimes unrelated, crimes.

I wonder where they are now?