Optimists, doomers and securo-pragmatists: Reflections on the UK’s AI safety summit

For all the carping, the Prime Minister’s Bletchley Park jamboree moved the world forward on AI safety.

[Image: Ciaran Martin running a group workshop]

It is very easy to criticise Prime Minister Rishi Sunak’s AI Safety Summit held at Bletchley Park on 1 and 2 November.

Such is the brand damage suffered by the British state in recent years, not least in the eyes of its own people, that any announcement from UK Ministers tends to provoke a torrent of cynicism (by way of contrast, think of the fanfare that would have accompanied a global tech event convened by Tony Blair at the height of ‘Cool Britannia’ in, say, 1998). Moreover, the grandly named Bletchley Declaration signed by 28 Governments, and the subsequent agreement between a western subset of those Governments and American Big Tech, were both relatively thin on detail. Legitimate concerns were expressed about the dominance of great powers and very rich companies to the exclusion of everyone else. Others worry that noisy media coverage of overhyped security fears could turn populations against vital new technological innovations. Finally, Mr Sunak’s silly decision to conduct a public interview with maverick entrepreneur Elon Musk – described by one Westminster correspondent as “one of the maddest events I’ve ever covered” – generated material for political sketch-writers and satirists for years to come.

And yet for all that, the Prime Minister’s summit was a very good idea which was, overall, very well executed. It achieved something important and provides the foundations for some positive – and potentially crucial – developments in the future.

Why Bletchley mattered

The answers to four basic questions should confound the sceptics.

First, is it a good thing that world leaders from Governments and the tech industry came together to work out how to manage the risks of AI? The answer is unquestionably yes.

A generation ago, we failed to do this with Internet-based technology. We pay the price daily in cyber attacks and other online harms. The brilliant Internet pioneer Dr Vinton Cerf, who co-wrote one of the key protocols on which the Internet works, famously remarked in a 2019 piece that “we did not know we were laying the tracks for what would become the digital superhighway and by the same token we did not envision that people would intentionally take advantage of the network” to commit serious harms. No Bletchley-type conversation took place. In a phrase beloved of the cyber security industry, the Internet was designed without security in mind. Bletchley should help avoid repeating this mistake of history.

The second question is: was it in the UK’s interests to lead on AI safety by hosting last week’s summit? Hopefully this is a self-evident yes: there are considerable advantages for Britain, and no obvious downsides, in the world accepting British leadership on the issue. In fact it represents something of a coup for Mr Sunak. At the height of the Brexit crisis a few years ago, few would have bet on the world allowing Britain to take the lead on global AI governance in 2023. The general pessimism surrounding the UK at the moment works against the Government; look, for example, at media coverage saying that the United States had “upstaged” the summit by publishing its most detailed ever policy on AI on the eve of the event. In a more optimistic country, this might have been more accurately portrayed as the smaller power’s initiative acting as a forcing function for the world’s superpower. It would be a source of quiet pride.

Third, were the right people there? Part of the answer, of course, is no (even if the gender balance was far better than at most tech events across the world). The required dialogue is far bigger than just over one hundred people, and non-traditional powers will demand a greater say in future, of which more later. But there was a wide range of expert views present from across the globe. Crucially, the Prime Minister made a very big call to invite China – and presumably had to secure American consent for that invitation. For all the (entirely legitimate) concerns about the horrific misuse of surveillance technology in the People’s Republic, no global set of rules or principles would be worthy of the name without Beijing’s signature. And excluding China from global discussions on AI would do nothing to prevent or slow down China’s development of AI.

Finally, and most importantly, did the event do any good? Overall, yes. Bletchley certainly passed the Hippocratic oath test of doing no harm. It was much better than nothing, and lest that be seen as damning with faint praise, nothing – and the untrammelled march of AI without debate and considered reflection – was the alternative. It started a much-needed conversation. The alternative was not a better summit: the UK did not ‘bid’ to host the event as if it were the Olympic Games.

In this context, whilst the importance of semantics can be overstated, the use of the word ‘summit’ may have been a mistake. To Europeans and perhaps to others, this can conjure up images of exhausted leaders arguing about specific bits of text till 4am before producing an agreement with binding legal force.

Bletchley was never going to do that. It was more a conference than a summit. But it did generate the outlines of a framework for the future global governance of AI. Deadlines, and the need to generate activity to meet them, matter in bureaucracies (and for these purposes Big Tech is just as bureaucratic as Government). The fact that a framework and a functional global policy community now exist, have secured broad global support, and will meet again in Korea and then in France in 2024, really matters.

None of this is to say that everything was harmonious at Bletchley. Three significant challenges were obvious, and will feature in future summitry and the wider debate.

Doomers, optimists and securo-pragmatists

The first and most important challenge is the lack of consensus about what the challenges are. A very polite but fundamental disagreement between three different analyses of the problem underpinned much of the Bletchley discussions.

At one end are those seized by what they see as the looming existential risks of AI. In the run-up to the summit, their case was eloquently and powerfully made by Professor Stuart Russell of UC Berkeley in various speeches and media interventions. In summary, Professor Russell argues that for the first time in history we have deliberately set out to create something cleverer than we are, and it looks like we are going to succeed. That creates existential risks up to and including species extinction.

But this is not a consensus view, even if Mr Musk seemed to adopt it in his rambling discussion with the Prime Minister. Experts such as the Oxford Internet Institute’s Professor Sandra Wachter point out, for example, the limits that energy and water supplies place on this sort of risk. Indeed, this way of thinking is now attracting its own mocking monikers in some tech circles: its adherents are now sometimes disparagingly referred to as the AI doomers.

Doomerism dominated some of the Bletchley discussions, but by no means all. At the other end of the spectrum sit the optimists, those with a very positive view of the technology and a relatively benign view of its downsides. Meta, owner of Facebook, turned up at Bletchley in force to press this narrative, at both the scientific end, through the Turing Award winner Yann LeCun, its chief AI scientist, and at the political and regulatory end through Sir Nick Clegg, President of Global Affairs and a former British Deputy Prime Minister.

Somewhere in the middle, but probably in truth leaning more towards the optimists, are what one might call, for want of a better term, the securo-pragmatists (the author would put himself in this position). Securo-pragmatists tend to view AI as a set of broadly positive technologies that give rise to a series of short- and long-term challenges of varying severity. Some of those challenges are with us already, such as the use of AI to generate widespread disinformation or to entrench biases in the provision of public services. Others are coming down the track, such as more advanced and larger-scale cyber attacks and the potential for AI to widen access to dangerous bio-weapons. (The UK Government helpfully published a credible summary analysis of the risks a week ahead of the summit.)

To the securo-pragmatist, these security and safety challenges are manageable if properly thought through. Importantly, they are also largely separate challenges: what society needs to do to manage disruption from AI in the labour market is completely different from what needs to be done to tackle disinformation, which is in turn completely different from ensuring human control of military AI systems. There is therefore, to the securo-pragmatist, no single thing called ‘AI safety’. But a useful principle, now much to the fore in cyber security, is that systems should be secure by design, and that if they are not, those who make and run them should be liable for the consequences.

The challenge here is not so much who wins the intellectual debate – even the wildest optimist would concede that some type of monitoring of existential risk is necessary. It’s about how this debate determines the balance of finite effort. Here, once again, the history of Internet-based technology provides a useful guide. A decade or two ago, the catastrophic risks of a “Cyber 9/11” or a “Cyber Pearl Harbor” were – in good faith but quite wrongly – emphasised by Governments. Effort was therefore diverted to a threat that turned out to be close to non-existent, with less attention paid to the fundamental flaws in software that gave rise to industrial-scale data and intellectual property theft, intimidation, political interference, and the disruption of public service administration on a daily basis. Both optimists and securo-pragmatists worry that over-emphasising – indeed, over-hyping – an existential threat that to many looks implausible will divert attention from the efforts needed to tackle the risks of the here and now. Hype hurts security. We already know that from the earlier phases of the digital revolution.

This was the one area where the British Prime Minister’s messaging faltered slightly. His difficulty in balancing the probably remote existential threat against a more positive focus on the benefits of AI technologies and a practical focus on risks was captured perfectly in one gloriously sardonic British newspaper headline: “Rishi Sunak says people should not be alarmist about AI while admitting it could be as dangerous as nuclear war”. US Vice-President Kamala Harris managed the same tension a little more deftly; whilst not entirely dismissive of the existential risks, she left listeners to her keynote speech in no doubt that Washington’s focus was on practical mitigations of the risks from a potentially wondrous technology. At some point in the future, world leaders will have to make a definitive call about whether they reject or embrace ‘doomerism’ as the central focus of policy. Bletchley was not yet that moment.

The rest of the world will want to get on

This unresolved debate about the nature of harm we need to tackle may be the most important of the three future challenges emerging from Bletchley Park, but the other two also matter.

The second challenge is about global inclusion. AI and wider tech governance will take years, if not decades, to evolve. And it is unlikely that the Governments of the bulk of the world’s population will happily fall into line behind what is at risk of looking like an Anglo-American discussion. Calls from Washington and London to apply the post-1945 rules-based international order to the new age of technology tend to fall flat. Governments such as those of Nigeria and Brazil, with their large, growing and increasingly tech-savvy populations, will want a greater say. India, now the most populous country in the world with a massive IT services sector, is acutely conscious of its market power. The European Union has already shown that American-built tech has to comply with European rules even if its makers don’t like them. And yet the bulk of the tech will continue to be developed and manufactured in the United States and China. This imbalance in global tech will be difficult to navigate.

Linked to this is the final challenge: the need to avoid regulatory capture by the existing tech giants. To some, the calls by the existing West Coast titans for AI regulation are self-serving – the barriers to entry that new regulation will erect will entrench their dominance. Again, at Bletchley the UK Government walked a very fine tightrope reasonably well: if progress is to be made now, it requires partnership with the companies already investing billions in the new technology, while keeping the field sufficiently open to disruption. And, frankly, it is not in the interests of the host nation, or its continental neighbours, or the rest of the world, for the US-China corporate tech duopoly to be perpetuated. But breaking that duopoly is no mean feat.

The Turing test as a guide to the future

For all these challenges, and for all its imperfections, Bletchley moved the world forward on AI safety. And perhaps, looking to the future, the evocative location was useful. It was at Bletchley, of course, that Alan Turing and many others, including Joan Clarke and the engineering genius Tommy Flowers, laid the foundations of modern computing. But Turing is wrongly remembered as primarily a codebreaker. In fact, he spent the majority of his career in Crown Service, securing Britain’s systems from being corrupted by hostile forces. He was a securo-pragmatist who worked to make the UK’s systems secure by design and secure in operation. It is this ‘Turing test’ that might usefully be applied to AI.

Professor Ciaran Martin, CB, is Professor of Practice at the Blavatnik School of Government and a former head of the UK’s National Cyber Security Centre. He attended the first day of the Bletchley Park AI Safety Summit on 1 November.