Longevity: Now Available in Cans!

Through my work at the Bassetti Foundation (a Technology Bloggers partner) I have been fortunate enough to lecture at universities and schools about responsibility in innovation. At the Foundation we have a concept that we call Poiesis-intensive innovation, and I try to put this idea into practice during my lessons. Poiesis can be thought of as the art or craft of being able to do something. It resides within an individual as well as an institution: it might be the ability to use a machine or piece of technology in a way it was not designed for, or to apply skills drawn from what could be seen as a different field.

With Angelo Hankins as collaborator, I use my theatre training and secondary school teaching experience in a lecture called Longevity: Now Available in Cans! The lecture aims to get students thinking about the role of technology design in future-making, based on the idea that technological development steers society and, as a result, the way we behave and experience life. We only have to think of the internet, and its commercial evolution from an initial military role, to see how our lives have been changed by a few individuals who built the system we now use every day.

And I would say that they crafted these developments, or that they are crafting them as they develop.

During the lecture we present a (near future) drink called Longevity. The drink contains nanobots, a form of nanotechnology. The nanobots are essentially switches that can be turned on and off, and these switches stimulate the body to produce different levels of adrenaline. The user downloads an app through which they control their own adrenaline levels: levels can be lowered at night so that sleeping patterns become regular, and once the user is asleep, lowered further still so that the body enters a form of hibernation. This allows the body to rest more, offering the chance to live 30% longer!

The presentation raises lots of topics for discussion about how the introduction of such a product might affect society. Will it be fairly distributed? How will it change demographics? What questions does it raise about marketing and claims about truth, values and life itself?

After the product launch, we perform a sketch in which a great-grandchild comes home to their great-grandparent and discovers that the great-grandparent, currently 107 years old, no longer wants to take the drink. They say it is unnatural, and that all of their friends, including their partner, have died. It also means they can no longer look after the great-great-grandchildren, which causes a conflict in the house. Are they just being selfish? What are the societal and familial expectations?

The students then play with the props (pictured above) and improvise conversations, before reporting back to the class. The idea is that the design process becomes visible, and that decision-making moments can be talked about.

This game is not limited to schools and universities, though: it also makes a great party game. We have published an article, free to download here, that explains everything. It includes a description of how to make the props, a mock video of the company announcing its discovery, and notes so that anyone can run the game anywhere. Everything is open access and free to use.

And I didn’t even mention the Happiness: Now Available in Cans! version. Dopamine on demand. With adrenaline!

So why not take a look and play it with your friends?

Responsible Algorithm Use: The Dutch National and Amsterdam City Algorithm Registers

Artificial intelligence systems rely on algorithms to instruct them on how to analyze data, perform tasks, predict patterns, evaluate trends, calculate accuracy, optimize processes and make decisions. The Dutch government wants its own governmental departments to use algorithms responsibly. People must be able to trust that algorithms comply with society’s values and norms. And there must be an explanation of how algorithms work.

The government does this by checking algorithms before use, both for how they work and for possible discrimination and arbitrariness, in the belief that when it is open about algorithms and their application, citizens, organizations and the media can follow along and check whether the algorithms (and their use) follow the law and the rules.
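To make that kind of check concrete, here is a minimal sketch of one common fairness test, the demographic parity gap: the difference in favorable-outcome rates between groups. This is only an illustration of the sort of test an auditor might run, not the Dutch government’s actual procedure, and all the names and numbers in it are made up.

```python
# Illustrative sketch: the "demographic parity gap" between groups.
# A common fairness metric, NOT the Dutch government's audit procedure.

from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest gap in favorable-decision rates, plus per-group rates.

    decisions: list of 0/1 outcomes from the algorithm (1 = favorable).
    groups:    list of group labels (same length as decisions).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions from a screening algorithm for two groups.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

A large gap does not prove discrimination on its own, but it is exactly the kind of signal that would prompt a closer look before an algorithm is deployed.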

According to the government, the following processes, among others, contribute to responsible algorithm use:

  1. The Algorithm Register helps to make algorithms findable, to explain them better and to make their application and results understandable.
  2. The Algorithm Supervisor (the Dutch Data Protection Authority) coordinates the control of algorithms: do the government’s algorithms comply with all the rules that apply to them? Learn more about the regulator.
  3. The Ministry of the Interior and Kingdom Relations is working on the ‘Use of Algorithms’ Implementation Framework. This makes it clear to governments what requirements apply to algorithms and how they can ensure that their algorithms meet those requirements.
  4. Legislation: there will be a legal framework for the transparency of algorithms. This was announced in the letter to parliament dated December 2022.

Find out more at The Algorithm Register of the Dutch government.

The City of Amsterdam also has an AI Algorithm Register 

The Algorithm Register offers an overview of the artificial intelligence systems and algorithms used by the City of Amsterdam. Through the register, anyone can browse quick overviews of the city’s algorithmic systems or examine more detailed information, according to their interests. Individuals can also give feedback, and so participate in building human-centered AI in Amsterdam. At the moment the register is still under development and does not yet contain all the algorithms that the City of Amsterdam uses.
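To give a feel for what such a register records, here is a hypothetical entry sketched as a small data record. The field names and the example system are illustrative assumptions on my part, not Amsterdam’s actual schema or data.

```python
# A hypothetical register entry. Field names and values are illustrative
# assumptions, not the City of Amsterdam's actual schema or data.

from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    name: str              # public-facing name of the system
    department: str        # which city department operates it
    purpose: str           # plain-language account of what it is for
    decision_role: str     # e.g. "advisory" vs "fully automated"
    human_oversight: str   # how a person can review or override outcomes
    contact: str           # where citizens can ask questions or give feedback
    tags: list[str] = field(default_factory=list)

entry = RegisterEntry(
    name="Report triage for housing inspections",
    department="Housing",
    purpose="Prioritises citizen reports for follow-up by human inspectors.",
    decision_role="advisory",
    human_oversight="Inspectors make the final judgement on every case.",
    contact="feedback form on the register website",
    tags=["risk scoring", "housing"],
)
print(entry.name, "-", entry.decision_role)
```

Even this toy version shows the point of a register: the questions a citizen would want answered (who runs it, what it decides, who can override it) become explicit fields rather than afterthoughts.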

Find out more at Algorithmic systems of Amsterdam.

The White House Office of Science and Technology Blueprint for an AI Bill of Rights

The White House Office of Science and Technology has published the Blueprint for an AI Bill of Rights.

The Blueprint is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.

Safe and Effective Systems

You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use.

Algorithmic Discrimination Protections

You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.

Data Privacy

You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used.

Notice and Explanation

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.

Human Alternatives, Consideration, and Fallback

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law.

Interesting stuff. You can read the full text via the link above.