Charting the path to global AI governance: potential and ethics

Paola Galvez Callirgos (MPP 2022) sets out her vision for the effective global governance of Artificial Intelligence.


Artificial Intelligence (AI) grabbed the world's spotlight during the recent High-level Week of the 78th session of the UN General Assembly (UNGA).

Nations acknowledged the tremendous potential of incorporating digital advancements, including AI, into their development plans, while at the same time underscoring the crucial need to address the ethical risks of AI at a moment when inequalities are widening and democracy is under threat.

US President Joseph Biden emphasised that “AI technologies need to be safe before they are released to the public”. Ukraine’s President Zelenskyy pointed out that AI could be trained for combat applications long before it is harnessed for good. In a similar tone, Germany's Chancellor Scholz stressed the need for common rules to prevent the use of generative AI as a weapon, while the Presidents of Chile and Italy called for global governance mechanisms that ensure ethical boundaries are upheld. Japan’s Prime Minister Kishida drew attention to the Hiroshima AI Process on generative AI, an initiative aimed at advancing trustworthy AI.

The benefits and threats

The motivation behind calls for regulation is clear. On the one hand, AI can bring huge benefits, for example expediting progress towards the UN Sustainable Development Goals (SDGs) through innovative systems that assess the risk of femicide in cases of gender-based violence (SDG 5, Gender Equality), or by fighting plastic pollution with machine learning (SDG 14, Life Below Water). On the other hand, AI can pose threats to peace and security, be employed by authoritarian regimes, produce dis- and misinformation, and exacerbate pre-existing disparities and exclusion from, for example, healthcare provision.

In his speech to the Assembly, UN Secretary-General António Guterres outlined a proposal to create a global entity on AI and reiterated that the UN is ready to host the global discussions that are needed. Subject to member state decisions, this new agency could resemble international organisations such as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change. Guterres also announced that he will appoint a High-Level Advisory Body on Artificial Intelligence, which will provide recommendations on concrete governance solutions by the end of the year.

The need for unlikely coalitions

At the global governance level, several influential policy guidelines have been produced and adopted by countries around the world, for example the OECD Principles on Artificial Intelligence, the UNESCO Recommendation on the Ethics of Artificial Intelligence, and the Ethics Guidelines for Trustworthy AI presented by the EU’s High-Level Expert Group on AI. What these documents have in common is that they set out principles for the protection and promotion of fairness, inclusion, transparency, accountability, and responsibility, but they remain non-binding guidelines. To help the transition from paper to mandatory action, UNESCO and the European Union have partnered to support the least developed countries in establishing national legislation that implements regulation on the ethics of AI.

I believe that achieving a common agreement on global AI governance will only be possible if governments and other stakeholders start building unlikely coalitions.

This means shifting the international community's conventional way of agreeing regulations to embrace the inclusion of technology firms in the discussion. These companies' power and knowledge could make effective governance possible, since they have exclusive insight into the tools they are developing and into those tools' capabilities. In that sense, a Global Digital Compact could provide a suitable multistakeholder forum, one that includes not only the private sector but also academics, civil society, and scientists, among others. To prevent regulatory capture, where regulators enact rules that favour the regulated industry and lead to detrimental outcomes, transparent decision-making procedures and rigorous conflict-of-interest rules can help reduce the risk.

On top of that, global cooperation must be enhanced. Global governance is hard to enforce through a fragmented approach: regulating AI in some nations while leaving it unregulated elsewhere will have limited effect, given how rapidly AI proliferates.

A blueprint for effective governance

The following should be the non-negotiable characteristics of AI governance:

  • Human-centred: firmly rooted in the principles of human rights, ethical values, and the rule of law
  • Comprehensive: leaves no room for gaps or grey areas of interpretation
  • Agile: gives policymakers the flexibility to adjust and correct course as AI continues to develop
  • Anticipatory: focuses on possible risks before they materialise
  • Inclusive: welcomes the involvement of all stakeholders.

To sum up, the 78th session of the UN General Assembly brought AI to the forefront of global discussions. The recognition of both the potential and pitfalls associated with AI has led nations to call for comprehensive AI governance, with Secretary-General Guterres announcing the formation of a High-Level Advisory Body on AI and proposing the creation of a global AI agency.

These initiatives signal a commitment to shaping the future of AI governance, but to effectively govern AI at the global level, it is crucial to build unlikely coalitions; global cooperation is paramount. My proposal is that AI governance should be comprehensive, agile, anticipatory, inclusive, and human-centred, grounded in the principles of human rights, ethical values, and the rule of law.

Paola Galvez Callirgos is an alum of the Blavatnik School.

Photo by Pietro Jeng on Unsplash.