Artificial intelligence and public integrity: a promising yet precarious alliance
João Pedro Caleiro, writer-researcher at the Lemann Foundation Programme, shares his insights from a recent workshop that brought together twenty senior decision-makers from two federal agencies in Brazil and Blavatnik School academics to discuss the risks and opportunities of using artificial intelligence in public administration.
While debates on the risks of artificial intelligence (AI) often focus on the future, one need look no further than the recent past for cautionary tales.
In 2019, it was revealed that a self-learning algorithm developed by the Dutch government to allocate childcare benefits used racially biased indicators, such as “foreign-sounding names”, to flag potential fraud. Over the years, thousands of parents were wrongly accused and charged large sums, in some cases contributing to bankruptcy, homelessness, and even children being taken from their parents into state custody. In January 2021, when the full extent of the scandal became clear, the coalition government resigned.
The case is extreme, but the injustices it highlights are far from unique. Yet it is also undeniable that AI holds immense potential for processing large amounts of data and supporting integrity policies in many ways. The challenge for governments is to harness these benefits without harming their citizens. Brazil is a focal point for these discussions, with a significant number of tools being developed by the federal bureaucracy while Congress considers a landmark AI bill.
These issues were discussed at a one-day workshop in Brasília in late August, organised by the Lemann Foundation Programme, a research programme at the Blavatnik School. The event brought together a group of 20 senior decision-makers from the Controladoria-Geral da União (CGU), Brazil’s influential federal controller’s office, and the Conselho Administrativo de Defesa Econômica (CADE), the federal agency responsible for enforcing antitrust law. Participants included developers, auditors and managers, each approaching AI from a different perspective.
The aim of the workshop “was to make the group think about the broader ethical implications of their tools and understand that different teams within the institution must work better together”, explained Fernanda Odilla, a researcher at the University of Bologna in Italy who specialises in the intersection of AI technology, anti-corruption and integrity. Odilla, along with Lia Pessoa, Engagement Manager at the Lemann Foundation Programme, facilitated the group discussions, presenting participants with AI-related dilemmas and scenarios to grapple with.
In his workshop presentation, Pepe Tonin, a former MPP student and until recently director of integrity research at the CGU, highlighted two dimensions essential to AI best practice: transparency (ensuring there are no “black box” algorithms) and explicability (meaning that an AI tool’s results must be explainable in a way that makes sense to a human being).
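To make the second dimension concrete, the sketch below shows in Python what an “explicable” result can look like: a risk score that decomposes into per-feature contributions an auditor can read and question. The features, weights and data are purely illustrative assumptions, not drawn from any actual CGU or CADE tool.

```python
# A hypothetical, minimal illustration of "explicability": a fraud-risk
# score that can be decomposed into per-feature contributions a human
# auditor can inspect. Features and weights are invented for this sketch.

WEIGHTS = {
    "contract_value_zscore": 1.2,  # contract unusually large relative to peers
    "single_bidder": 0.8,          # procurement attracted only one bid
    "supplier_age_years": -0.3,    # long-established suppliers lower the score
}

def risk_score(case):
    """Return the overall score plus a per-feature breakdown explaining it."""
    contributions = {name: weight * case[name] for name, weight in WEIGHTS.items()}
    return sum(contributions.values()), contributions

score, explanation = risk_score(
    {"contract_value_zscore": 2.1, "single_bidder": 1, "supplier_age_years": 4}
)
print(f"risk score: {score:.2f}")
for feature, contribution in sorted(explanation.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```

The design choice is the point: because every score is a transparent sum, a flagged party can be told exactly which factors drove the decision, something an opaque model cannot offer.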
CGU has adopted an understanding of public integrity centred on fulfilling institutional purpose through legitimate means. Building integrity in this sense requires a broader perspective than traditional anti-corruption policies, whose ambitions are limited to reducing wrongdoing. Izabela Corrêa, Federal Secretary for Public Integrity, has spearheaded this shift within the CGU, drawing on a theoretical framework developed by the Blavatnik School’s Building Integrity Programme. Her approach is shaped by her experience as a former Postdoctoral Research Associate for the Chandler Sessions on Integrity and Corruption at the School, a programme that brings together senior international anti-corruption officials to develop and test innovative approaches to integrity.
The potential of AI in integrity policies
As the workshop also highlighted, the use of AI in government is taking off, offering exciting new opportunities. The Lemann Foundation Programme has supported a number of research efforts in the area. Beatriz Kira, a former postdoc at the Programme and now lecturer in Law at the University of Sussex, is part of a research network investigating broader AI regulation, which seeks to build an evidence base for policymakers trying to strike the optimal balance between maintaining oversight and fostering innovation.
On 27 September, the Programme published a working paper by 2024 visiting fellow Pedro Cavalcante, a former special advisor at CGU. The paper highlights emerging initiatives around the world aimed at ensuring algorithmic accountability, including the OECD’s “human-rights approach”. He also draws attention to another critical issue: the digital divide. In Brazil and other countries, a significant proportion of the population remains offline and could thus be excluded from the benefits of govtech.
This built on a Lemann Foundation Programme policy brief by Eduardo Araújo, an advisor to the treasury of Espírito Santo state and a former Programme MPP summer intern, which investigated how automated chatbots could help citizens navigate complex budget information, facilitating accountability. Araújo flew in from his home state to participate in the Brasília workshop, bringing insights into how AI can also assist agencies like CGU in responding to the thousands of messages their “ombudsman” services receive daily, and in reviewing contracts at scale for signs of corruption.
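As a purely illustrative sketch of the chatbot idea, the Python below matches a citizen’s question against a handful of invented budget lines by keyword overlap. A real system would draw on official open-data sources and far more capable language technology; every name and figure here is a placeholder assumption.

```python
# A hypothetical sketch of a budget-navigation chatbot: match a citizen's
# question to the budget line whose words overlap it most. All data below
# is invented for illustration.

BUDGET_LINES = {
    "primary education": 1_250_000_000.00,
    "hospital construction": 840_000_000.00,
    "road maintenance": 310_000_000.00,
}

def answer(question: str) -> str:
    """Return the best-matching budget line, or apologise if nothing matches."""
    words = set(question.lower().split())
    best = max(BUDGET_LINES, key=lambda line: len(words & set(line.split())))
    if not words & set(best.split()):
        return "Sorry, I could not find a matching budget line."
    return f"The budget for {best} is R$ {BUDGET_LINES[best]:,.2f}."

print(answer("How much is being spent on primary education this year?"))
# -> The budget for primary education is R$ 1,250,000,000.00.
```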
Overcoming challenges
The discussions also noted that AI tools not built around the user experience often end up under-utilised and phased out, and that public organisations are understandably cautious about funding expensive AI innovations that carry a significant risk of failure. Procurement poses a particular challenge here, with governments facing a tricky choice between developing tools in-house, purchasing “off-the-shelf” platforms, or co-creating with partners in academia or civil society, each option offering a distinct set of benefits and risks.
A key takeaway from the workshop was that different parts of government often operate in silos, both in finding AI solutions and in grappling with the challenges these new technologies present. In this context, sharing knowledge and experience, and debating and agreeing on common principles, is a good place to start.