The Future of Work: Preparing for AI

This is the first in a new series on AI – Artificial Intelligence.

Artificial intelligence is rapidly changing the way we work, bringing about new opportunities and challenges. In this article, we’ll explore how we can prepare for the changes ahead.

Efficiency

AI is being increasingly used to automate tasks and processes in the workplace. By taking on mundane and repetitive tasks, AI can free up employees to focus on more complex and creative work. For example, AI can automate data entry and analysis, freeing up time for employees to focus on strategy and decision-making. This can lead to increased efficiency, productivity, and profitability for organisations.

An AI future - an image generated by DALL·E

AI can also support decision-making processes by providing real-time data and insights. This can help businesses to make better decisions faster and more accurately, improving their competitive edge. AI can also help to identify patterns and trends in large datasets, providing valuable insights that can be used to inform strategy and decision-making.

Ethics

While AI can bring about many benefits in the workplace, it’s important to consider the ethical implications of its use. One key concern is the potential impact on employment. As AI becomes more advanced, it’s likely that it will replace some jobs that are currently done by humans. This could lead to job losses, particularly in industries that rely heavily on manual labor or routine tasks.

Another concern is bias. AI systems are only as unbiased as the data they are trained on. If this data is biased, the AI system will be biased too. This can lead to discrimination and inequality in the workplace. It’s important to ensure that AI systems are trained on diverse and representative data to avoid bias.

Preparing for the Future

To prepare for the future of work in the age of AI, it’s important to focus on skills that cannot be automated. These include creativity, problem-solving, and emotional intelligence. By focusing on developing these skills, employees can enhance their value in the workplace and prepare for the changes ahead.

An AI minimalist future - an image generated by DALL·E

It’s also important to consider the ethical implications of AI use. Organisations should prioritise diversity and representation in their data and AI systems to avoid bias. They should also provide training and support to employees who may be affected by the introduction of AI.

Conclusion

AI is going to take bloggers’ jobs!!! The content of this post was written entirely by the AI ChatGPT, based on a few prompts I gave it. All I’ve done is add this conclusion and the opening lines. Oh, and the images were generated by DALL·E – completely new images, generated specifically for this post.

How?

I’ll share that and more in future posts.

Some Thoughts on Bias

A Little Story of Bias

A father was driving his two children to watch a football match when they were involved in a terrible accident. The driver was killed immediately, as was one of the boys. The youngest child, who was sitting in the back in his car seat, survived the accident but was seriously injured.

The young child was taken to hospital and rushed into an operating theatre, where they hoped to save his life.

The doctor entered the room, looked at the patient, froze and said, “I cannot operate on this boy, he is my son!”

Bias within Data

If you asked yourself how the boy could be the doctor’s son, you have fallen into a trap of bias. The doctor in the story is the child’s mother (obviously), but that may not be the first solution that comes to mind. In many societies we are brought up to see doctors as male and nurses as female. This has big implications when we use computers to search for information, though: a search engine that uses content generated by humans will reproduce the bias that unintentionally sits within that content.

The source of the bias could lie in how the system works. For example, if a company offers a face recognition service and trains it on photos posted on the internet (for example, photos categorized in some way by Google), the dataset will contain far more white men than, say, women of Asian background. The results will be more accurate for the category with the largest presence in the database.
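To make that concrete, here is a minimal sketch (my own illustration, not something from the original post) of a disaggregated accuracy check: given a hypothetical face recognition model’s guesses and the demographic group of each test image, it breaks accuracy down per group, which is exactly where this kind of disparity shows up. All identities, predictions and group labels below are made up.

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return overall accuracy and a per-group breakdown."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_group

# Hypothetical predictions, ground-truth identities and group labels.
preds  = ["id_1", "id_2", "id_3", "id_9", "id_5", "id_0"]
truth  = ["id_1", "id_2", "id_3", "id_4", "id_5", "id_6"]
groups = ["white male", "white male", "white male",
          "asian woman", "white male", "asian woman"]

overall, per_group = accuracy_by_group(preds, truth, groups)
print(overall)    # about 0.67 overall
print(per_group)  # {'white male': 1.0, 'asian woman': 0.0} - the gap hides inside the average
```

On this toy data the headline accuracy looks respectable, while the under-represented group gets nothing right; reporting only the average would mask the problem entirely.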

If a banking system takes the case of a couple who declare an income together, it will presume that the man’s income is higher than the woman’s and treat the individuals accordingly, because historically the data shows that men’s incomes are higher than women’s, and this generalization becomes part of the structure.

The problem with language is also easy to see. If the doctor example above can in some way be ‘seen’ in the vast amount of text analyzed and used to train an algorithm, then proposals and offers will differ according to gender.

Let’s take how we describe ourselves for a moment. A male manager will use a set of descriptive terms that differ from those used by a woman: he might call himself assertive, while she is more likely to call herself understanding and supportive. A system that unwittingly uses a dataset based upon (or even referring to) the language used in job adverts and the profiles of successful candidates will replicate a gender bias, because more proposals will be sent to people whose language reflects the current make-up of the employment situation.

In short: More men will be using the language that the system picks up on, because more men (than women) in powerful positions use that type of language. The bias will be recreated and reinforced.
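As a toy illustration of that feedback loop (my own sketch, not taken from the post), imagine a recruiter tool that scores CVs by how closely their wording matches the profiles of past successful candidates. Every word, weight and CV below is hypothetical, but it shows how a historical imbalance in language turns directly into a scoring bias.

```python
from collections import Counter

# Profiles of past successful candidates: mostly men in senior roles,
# described with "assertive" language.
past_successful_profiles = [
    "assertive driven leader results",
    "assertive decisive ambitious leader",
    "supportive understanding collaborative",
    "driven assertive results leader",
]

# The "model" is nothing more than word frequency among past hires:
# words common in previous (mostly male) profiles get higher weights.
weights = Counter(word for text in past_successful_profiles for word in text.split())

def score(cv_text):
    """Score a CV by how much its wording matches past successful profiles."""
    return sum(weights[word] for word in cv_text.split())

# Two equally capable candidates describing themselves differently.
cv_a = "assertive driven leader"               # wording typical of past (male) hires
cv_b = "supportive understanding collaborative"

print(score(cv_a))  # 8 - rewarded for matching the historical language
print(score(cv_b))  # 3 - penalised, so the existing imbalance is reinforced
```

Nothing in the scorer mentions gender, yet it still ranks the candidate who writes like previous hires above the one who does not.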

In 2018 the State of New York proposed a law on accountability within algorithms (take a look at this short description), and in 2020 the European Commission released a white paper on Artificial Intelligence – A European approach to excellence and trust. It may be a more important argument than it first appears.

There is a lot of literature about this problem if you are interested; a quick online search will offer you plenty of food for thought.

Morality and Artificial Intelligence

A Pair of Projects

The title to this post might sound very serious, but really I wanted to take a look at two projects that play with the relationship between morality and artificial intelligence.

The first is Delphi, operated by the Allen Institute for Artificial Intelligence.

Delphi

Delphi is a research prototype designed to model people’s moral judgments on a variety of everyday situations. You enter a question with a moral aspect, and the website offers you a response on whether what you are proposing is right or wrong.

There are lots of suggestions for question ideas, such as whether it is OK to kill a bear or to ignore a call from your boss during working hours, among many others. Or you can invent your own.

I asked whether it was OK to lie to your children about your own alcohol intake, and the answer given was that this is not right. You can then submit an argument, which I hope the machine analyzes and uses for future decisions. I suggested that such lies might be justified, for example if the aim was to prevent the children from becoming attracted to alcohol while their parents were secretly fighting addiction.

The creators have written an academic paper that describes their work. I have taken the following from it:

What would it take to teach a machine to behave ethically? While broad ethical rules may seem straightforward to state (“thou shalt not kill”), applying such rules to real-world situations is far more complex. For example, while “helping a friend” is generally a good thing to do, “helping a friend spread fake news” is not. We identify four underlying challenges towards machine ethics and norms: (1) an understanding of moral precepts and social norms; (2) the ability to perceive real-world situations visually or by reading natural language descriptions; (3) commonsense reasoning to anticipate the outcome of alternative actions in different contexts; (4) most importantly, the ability to make ethical judgments given the interplay between competing values and their grounding in different contexts (e.g., the right to freedom of expression vs. preventing the spread of fake news).

The paper begins to address these questions within the deep learning paradigm. Our prototype model, Delphi, demonstrates strong promise of language-based commonsense moral reasoning, with up to 92.1% accuracy vetted by humans. This is in stark contrast to the zero-shot performance of GPT-3 of 52.3%, which suggests that massive scale alone does not endow pre-trained neural language models with human values. Thus, we present COMMONSENSE NORM BANK, a moral textbook customized for machines, which compiles 1.7M examples of people’s ethical judgments on a broad spectrum of everyday situations. In addition to the new resources and baseline performances for future research, our study provides new insights that lead to several important open research questions: differentiating between universal human values and personal values, modeling different moral frameworks, and explainable, consistent approaches to machine ethics.

Moral Machine

The second website is Moral Machine, also a university-led research project (in this case run by a consortium).

On this website you are asked to judge a series of scenarios related to driverless car technology. You are shown two possible courses of action in the event of an accident, and you choose which you would take.

At the end your answers are analyzed in terms of your preferences, and you can take a survey to participate in the research.

This is also quite challenging and fun. Do you hit young or old, or overweight or fit?

There is a link to a cartoon series and a book, which is summarized as follows:

The inside story of the groundbreaking experiment that captured what people think about the life-and-death dilemmas posed by driverless cars.

Human drivers don’t find themselves facing such moral dilemmas as “should I sacrifice myself by driving off a cliff if that could save the life of a little girl on the road?” Human brains aren’t fast enough to make that kind of calculation; the car is over the cliff in a nanosecond. A self-driving car, on the other hand, can compute fast enough to make such a decision—to do whatever humans have programmed it to do. But what should that be? This book investigates how people want driverless cars to decide matters of life and death.

In The Car That Knew Too Much, psychologist Jean-François Bonnefon reports on a groundbreaking experiment that captured what people think cars should do in situations where not everyone can be saved. Sacrifice the passengers for pedestrians? Save children rather than adults? Kill one person so many can live? Bonnefon and his collaborators Iyad Rahwan and Azim Shariff designed the largest experiment in moral psychology ever: the Moral Machine, an interactive website that has allowed people —eventually, millions of them, from 233 countries and territories—to make choices within detailed accident scenarios. Bonnefon discusses the responses (reporting, among other things, that babies, children, and pregnant women were most likely to be saved), the media frenzy over news of the experiment, and scholarly responses to it.

Boosters for driverless cars argue that they will be in fewer accidents than human-driven cars. It’s up to humans to decide how many fatal accidents we will allow these cars to have.

10 minutes of thought-provoking fun. You might want to follow up with a look at this little booklet prepared by the Bassetti Foundation about the self-driving society. I wrote some of it!