
AI may soon be up for a performance review
Artificial intelligence is changing society. Mirko Schäfer and Iris Muis of the Data School at Utrecht University are researching this change and offering tools and education to help deal with AI better. “When we look back in five years, I hope we will say: pretty stupid that we thought AI was a solution to everything,” Muis says.
“AI is not neutral. Algorithms always contain certain assumptions,” Schäfer explains. “Take, for example, a municipal algorithm that determines who falls below the poverty line and which processes are then initiated. A social democratic municipality would want to warn citizens approaching that line not to make large purchases, while a liberal municipality would focus on personal responsibility. Political colour also affects procurement policies. In a green municipality, large language models and other items with a high environmental impact are unlikely to be used.”
Evidence-based governance
Political values are thus built into algorithms, and they should shift with the composition of municipal councils after elections. AI literacy within governments is therefore essential. “Democracy is changing through the use of data and AI. The large amount of data now available provides new insights. As an alderman, you can now govern in a more evidence-based way. But realise that data is not neutral. We now often see municipal councils asking themselves: do we have the expertise to be critical when the digitalisation dossier is discussed? Administrators often take a technocratic view of AI without understanding the technology itself,” says Schäfer. “That can lead to wrong assumptions about what AI can do and how it should be deployed.”
Tools for responsible AI
Professionals have different needs when it comes to AI literacy. Administrators need a realistic picture of data and AI and their political implications, while civil servants want to know how to use AI responsibly in their daily work. Responsible AI is about fairness, explainability and accountability: it should not disadvantage or favour particular groups, and it should be transparent about how decisions are made. The Data School therefore developed tools such as the Data Ethics Decision Aid (DEDA) for identifying ethical issues and the Impact Assessment for Human Rights and Algorithms (IAMA) for making trade-offs.
AI at a job interview
“You want to prevent algorithms from unintentionally reinforcing biases,” says Schäfer. “There should be mechanisms for citizens to object. As a municipality, you then have to be able to intervene quickly.” The focus is therefore now shifting towards effective control mechanisms. Muis: “We often use the analogy that AI is going to take over tasks from employees. Before deploying AI, you should hold a job interview to find out: is this type of AI actually suitable for these tasks? You do that job interview with the IAMA. If AI passes this test, it can start working. Every year or so, you and I have a performance review. AI should have one too. We are now developing a brief questionnaire for that. Is the AI still doing what it was assigned to do? Have any incidents occurred? How have they been dealt with? Are there opportunities for development? Or is there new technology on the market that could do this better? After all, AI also gets replaced sometimes.”

Package leaflet
In five years, Muis and Schäfer hope it will be natural to approach AI critically. Schäfer: “When you buy something, you will look by default at the accompanying datasheet or model card. There you find information about how the data was collected, in what framework and context, and what the limitations are. Or, in the case of a model: has a fairness check been done? Are there any pitfalls? What can’t you use the model for?” Muis adds: “You can think of it as the leaflet that comes with medicines. It is included as standard. You don’t go through it from A to Z, but you can consult it if you need to. And something else: I would like us to look back in five years’ time and say: pretty stupid that we thought AI was a solution to everything. We now know that it mainly has value for specific purposes.”
Want to know more?
Want to delve into responsible AI use? Read the report Opgave AI by the Scientific Council for Government Policy. And start the demystification with yourself: understand what AI can and cannot do by reading the book AI Snake Oil by Arvind Narayanan and Sayash Kapoor. Want to know more about AI and human rights? Lessons learned can be found in the report IAMA in action.
The Netherlands has a rich AI ecosystem where you can quickly gain knowledge. Some examples: the Overheid 360 congress may be of interest, PIANOo (the procurement expertise centre of the Ministry of Economic Affairs) offers information on AI procurement, and the Association of Dutch Municipalities has a community on AI and algorithms.
Read about the role of statistics and the AI Labs [link to interview with Georg Krempl]. Or follow one of Utrecht University’s courses and training programmes.
The Data School, part of the Faculty of Humanities, investigates how big data and AI affect citizenship and democracy, and how this transforms public administration and the media sector. The research also yields applicable solutions, such as impact assessments and tools for implementing responsible data and AI practices.