Responsible Algorithm Use: The Dutch National and Amsterdam City Algorithm Registers

Artificial intelligence systems rely on algorithms to instruct them how to analyze data, perform tasks, predict patterns, evaluate trends, calculate accuracy, optimize processes and make decisions. The Dutch government wants its own departments to use algorithms responsibly. People must be able to trust that algorithms comply with society’s values and norms, and it must be possible to explain how algorithms work.

The government does this by checking algorithms before use, both for how they work and for possible discrimination and arbitrariness, in the belief that when it is open about algorithms and their application, citizens, organizations and the media can follow along and check whether the algorithms (and their use) follow the law and the rules.

According to the government, the following processes, among others, contribute to responsible algorithm use:

  1. The Algorithm Register helps to make algorithms findable, to explain them better and to make their application and results understandable.
  2. The Algorithm Supervisor (the Dutch Data Protection Authority) coordinates the control of algorithms: do the government’s algorithms comply with all the rules that apply to them? Learn more about the regulator.
  3. The Ministry of the Interior and Kingdom Relations is working on the ‘Use of Algorithms’ Implementation Framework. This makes it clear to governments what requirements apply to algorithms and how they can ensure that their algorithms meet those requirements.
  4. Legislation: there will be a legal framework for the transparency of algorithms. This was announced in the letter to parliament dated December 2022.

Find out more at The Algorithm Register of the Dutch government.

The City of Amsterdam also has an AI Algorithm Register 

The Algorithm Register is a window onto the artificial intelligence systems and algorithms used by the City of Amsterdam. Through the register, anyone can browse quick overviews of the city’s algorithmic systems or examine more detailed information about them, depending on their interests. Individuals can also give feedback and thus participate in building human-centered AI in Amsterdam. At the moment the register is still under development and does not yet contain all the algorithms that the City of Amsterdam uses.

Find out more at Algorithmic systems of Amsterdam.
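To make this concrete, here is a minimal sketch of what a register entry might hold and how the split between a quick overview and more detailed information could work. It is a hypothetical illustration only, loosely modeled on the kinds of fields the public registers display; it is not either register’s actual data model, and the example values are made up.

```python
# A minimal, hypothetical sketch of a register entry. This is not either
# register's actual schema; the fields and example values are illustrative.

from dataclasses import dataclass

@dataclass
class RegisterEntry:
    name: str
    organization: str
    purpose: str      # plain-language description of what the algorithm does
    status: str       # e.g. "in use", "in development"
    contact: str      # who is responsible, so people can ask questions or give feedback

    def quick_overview(self) -> str:
        """The short, browsable summary; detailed fields sit behind it."""
        return f"{self.name} ({self.organization}): {self.purpose}"

entry = RegisterEntry(
    name="Automated Parking Control",
    organization="City of Amsterdam",
    purpose="Scan-car images are compared against parking permits",
    status="in use",
    contact="algorithm-register@example.org",  # hypothetical address
)
print(entry.quick_overview())
# Automated Parking Control (City of Amsterdam): Scan-car images are compared against parking permits
```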

The White House Office of Science and Technology Blueprint for an AI Bill of Rights

The White House Office of Science and Technology has published the Blueprint for an AI Bill of Rights.

The Blueprint is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.

Safe and Effective Systems

You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use.
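As a rough illustration of what such a gate could look like in practice (this is my own sketch, not something from the Blueprint), consider a system that is only deployed if it clears an accuracy threshold on held-out test cases, where "do not deploy" is an explicit, expected outcome:

```python
# A minimal sketch (not taken from the Blueprint) of a pre-deployment gate.
# The toy system, the test cases and the 0.9 threshold are all illustrative.

def evaluate(predict, test_cases):
    """Fraction of (input, expected) pairs the system gets right."""
    correct = sum(predict(x) == expected for x, expected in test_cases)
    return correct / len(test_cases)

def predeployment_gate(predict, test_cases, min_accuracy=0.9):
    accuracy = evaluate(predict, test_cases)
    if accuracy < min_accuracy:
        # Not deploying is a legitimate outcome of the protective measure.
        return False, f"Do not deploy: accuracy {accuracy:.2f} is below {min_accuracy}"
    return True, f"Deploy, with ongoing monitoring: accuracy {accuracy:.2f}"

# Hypothetical system: flags applications with income below a cut-off.
flags_application = lambda income: income < 20_000
cases = [(15_000, True), (25_000, False), (19_000, True), (21_000, False)]
print(predeployment_gate(flags_application, cases))
# (True, 'Deploy, with ongoing monitoring: accuracy 1.00')
```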

Algorithmic Discrimination Protections

You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.
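One well-known proactive measure is to audit a system’s outputs for disparate impact using the four-fifths rule: if one group’s selection rate falls below 80% of another’s, the disparity deserves investigation. Here is a hypothetical sketch; the groups and decision data are made up, not taken from the Blueprint, and a real audit would look at the protected classes listed above.

```python
# A minimal sketch of one proactive measure: checking decisions for disparate
# impact with the four-fifths rule. Groups and data here are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += ok
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest; below 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
print(rates)                          # {'A': 0.666..., 'B': 0.333...}
print(disparate_impact_ratio(rates))  # 0.5 -> below 0.8, worth investigating
```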

Data Privacy

You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used.
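As a small illustration of what protection "by default" might look like in code, here is a hypothetical data-minimization sketch in which collection is limited to an explicit allow-list per context, so over-collection requires deliberately changing the design. The contexts and field names are my own assumptions, not from the Blueprint.

```python
# A minimal sketch of privacy by default through data minimization: the
# collection step only keeps fields strictly necessary for the given context.
# Contexts and field names are hypothetical.

ALLOWED_FIELDS = {
    "parking_permit": {"licence_plate", "permit_number"},
    "newsletter": {"email"},
}

def collect(context: str, submitted: dict) -> dict:
    """Keep only the fields allowed for this context; drop everything else."""
    allowed = ALLOWED_FIELDS.get(context, set())
    return {k: v for k, v in submitted.items() if k in allowed}

record = collect("newsletter", {"email": "a@example.com", "birthdate": "1990-01-01"})
print(record)  # {'email': 'a@example.com'} -- the birthdate is never stored
```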

Notice and Explanation

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible.

Human Alternatives, Consideration, and Fallback

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law.

Interesting stuff. You can read the full text via the link above.

Some thoughts on the Film Oppenheimer

The Bomb as a Game Changer

As regular readers will know, the Technology Bloggers platform has a partnership with the Bassetti Foundation. As part of my own collaboration with the Foundation I edited the International Handbook on Responsible Innovation, and in this book Foundation President Piero Bassetti explains that innovation requires a surplus of knowledge alongside a surplus of power.

This argument was not new for him, though; he had already addressed it in his book Le Redini del Potere (The Reins of Power), written with Giacomo Corna Pellegrini back in 1959.

In this book (written in a time of rapid change, when Fidel Castro became Prime Minister of Cuba, the first two primates survived space flight, and nylon tights (pantyhose) went on sale to the public), the authors discuss the decision taken by then President Franklin D. Roosevelt to pursue research into a weapon that, for the first time, could bring humanity itself to an end.

This decision is seen as a turning point in the relationship between science and politics and in the notion of collective responsibility that underpins the Bassetti Foundation’s mission to promote responsibility in innovation.

This surplus of knowledge and power is something that can be clearly seen in the latest Oppenheimer film, as the knowledge surplus is created by gathering the world’s greatest scientific minds together, all carried out under the drive and with the funding of the US government (the surplus of power). The US army offers the infrastructure to put the whole plan together.

Without the political will and capability to carry out the project, the surplus of knowledge remains just that, knowledge. For it to become (an) innovation, it has to change something, to be implemented, which brings in the influence of power, money, and in the old days at least, government.

This raises the type of questions about responsibility that we have been asking at the Bassetti Foundation for the last thirty years, questions related to its approach and interests. If we follow Bassetti’s line of thinking as outlined in the Handbook, knowledge remains mere knowledge in the absence of political will and capacity, so responsibility must lie with the political decision-makers, or in other words, with power.

A single line in the new Oppenheimer film expresses this idea, uttered by Harry Truman, the US President who took the decision to drop the bombs on Japan: ‘those people in Hiroshima and Nagasaki aren’t interested in who built the bomb, they are interested in who dropped it’.

Who is Responsible, The Individual or the Position?

In the case in question the US President claims responsibility for the dropping of the bomb, but if we follow Bassetti, as President he also in some way ‘represents’ responsibility for the discovery of the bomb itself, even though the process was started by his predecessor. From some perspectives (those that see a ‘many hands’ problem), the discovery and production process brings joint responsibility: it requires military personnel and logistical capacity, scientists as well as finance, goodwill from family members, collaboration and political support. But we could also say that the process is fundamentally political and facilitated by power; the same power that decides to facilitate, design and implement the process then decides what to do with the results.

This point about who controls the process (and is therefore responsible for it) comes up once more in the film, as Oppenheimer (having delivered a bomb to the military for use) starts to explain to a soldier the effects of detonating the bomb at different altitudes. The soldier responds by making it clear that the military would be taking all of the decisions from then on, and that they would decide on the logistics. Once the bomb was ready, it was made clear to the scientists that they had no say in how it might be used. It was never their bomb, and their role was complete.

Another interesting element of the film develops as Oppenheimer moves to limit the effects of the invention. He argues for sharing knowledge of the discovery with the allies (the Russians), for a moratorium and international governance of the new weapon, and for halting further developments that would lead to an arms race. To bring this into the present: there has recently been a great deal of debate about how to govern developments in AI, including about a possible moratorium.

Rather than seeing this purely as a problem of care, it can also be seen from the point of view of how perceptions of responsibility change over time. During a war (although there is some discussion about the bomb being unnecessary once the German government had surrendered), the development of such a weapon is justified, even seen as necessary. But once the war is won, or almost won, its existence comes to be problematized.

Starlink

Returning to present-day developments, the press that Elon Musk received back in September and the revelations made in a recent book about his Starlink project bring up several similar questions. Whatever the truth about the denied request to use Starlink to facilitate an attack on the Russian Black Sea fleet, Musk finds himself and his company participating in warfare. Echoing the position Oppenheimer finds himself in (as portrayed in the film), he remarks that the purpose of Starlink was not to facilitate war but to facilitate gaming and internet access for all. But once the technology is available, those who enabled it may find its use difficult to control.

The problem of many hands is not as evident in this situation, however. Starlink resembles a family business: the surplus of knowledge and the surplus of power, will and capability all lie in the hands of one person. I have not heard any talk of a moratorium, or of international governance for that matter, which raises several fundamental questions. What is the role for governance in this situation? Or the role of political will or finance? What are the implications for thinking about democracy? Where should Responsible Innovation practices be focused if there is a lack of external governance mechanisms? And what are the implications of the fact that both sides in this war rely on Starlink to facilitate their actions?

Could we see Elon Musk as playing a multifaceted role: that of innovator and politician, mediator and strategist?