The Digital Security Equilibrium – does it hold under AI?

Cyberspace has long been a place of harm, but the much-predicted global catastrophes have failed to materialise. Does this uneasy balance hold as AI-based technology rapidly advances?

At the dawn of the digital age, when cybersecurity became a top-level concern, predictions of catastrophic harm were common. In 2010 The Economist featured a mock-up of a Manhattan-style skyline suffering a 9/11-style atrocity under the headline ‘Cyber War: The Threat from the Internet’. As US Defense Secretary, Leon Panetta warned of a ‘cyber Pearl Harbor’, one of many such warnings from world leaders.

Whilst there have been many serious and damaging cyber security events, these catastrophic predictions have not come to pass. Official statistics in most developed countries tend not to attribute any fatalities to cyber attacks. The closest link between cyber attacks and mortal harm is in healthcare: frequent criminal extortion attacks known as ransomware have damaged hospitals’ ability to function, and patient care has suffered as a result. Counting the exact toll is difficult, because it is impossible to say with certainty that a death occurred because of a cyber attack when the victims were already critically ill. Nonetheless, a 2023 University of Minnesota study estimated that between 42 and 67 US Medicare patients died between 2016 and 2021 as a result of ransomware attacks.

There have, for sure, been serious and major disruptive events caused by malicious cyber activity. Over a six-week period in 2017, reckless activity by North Korea (the so-called WannaCry virus of May that year) and Russia (the so-called NotPetya operation in June) caused north of $10 billion in economic harm and disrupted critical services all over the world. But while it is obvious that cyber vulnerabilities remain of great concern – no one in the United States will wish to see a repeat of the Colonial Pipeline fiasco of 2021, let alone several such incidents at once – an uneasy peace has, broadly, held in cyberspace.

Introducing the Digital Security Equilibrium

Why is this? I attribute it to the three different components of what can be called the Digital Security Equilibrium.

1. By and large, we do not subcontract human safety entirely to computers.

The first part of the equilibrium is the connection between security and safety. The English language – unlike, for example, French and Spanish – has two distinct words for these concepts. They are not the same. Take aviation. Aviation security can be poor – there have been multiple hacks, and many more accidental IT failures that have grounded fleets and caused chaos, disruption and economic costs. But aviation safety has a good 21st century record.

That’s because aviation safety amounts to considerably more than cyber security. No one would willingly board an aircraft believing that, were the IT to be hacked or to fail accidentally, there was nothing the pilot – and the ground staff communicating with the pilot – could do. A good example is the comprehensive accidental failure of Britain’s Air Traffic Control system in August 2023. The resulting administrative chaos was hugely socially disruptive and economically damaging, with mass cancellations and diversions. But every plane already in the air landed safely, using backup communications and manual flying. No one suffered so much as a nosebleed, and that was entirely in line with the industry’s safety model. The same is true of railway systems: if signals fail, for whatever reason, trains should stop rather than crash into each other. So hackers can easily cause mayhem in transportation, but not mass casualties. The same holds true of most sectors, healthcare excepted.

2. Only a small number of highly capable actors have access to the most devastating tools

The second part of the equilibrium is about access to capabilities. In earlier decades it was fashionable to compare cyber capabilities with nuclear ones. This was mistaken for many reasons, but a main reason is that while one either has extremely destructive nuclear capabilities or one does not, anyone can carry out basic cyber operations. Carrying out high-impact cyber operations, though, is an extremely complicated endeavour and beyond the capabilities of most actors. Young criminals acting alone can – and have – undertaken data and cash theft, and damaged networks. But highly sophisticated operations – think the Olympic Games/Stuxnet operation against the Iranian nuclear programme in 2010, or Russia’s sabotage of France’s TV5 Monde station in 2015 – take years of preparation. They require skilled people, top-of-the-range covert infrastructure, organisational strategy, and a slice of luck. This is one of two reasons they are comparatively rare – the costs of doing them are significant. Given this, only serious cyber players have, to date, had the capability to do them.

This leads to the second reason these attacks have been rare: the highly capable actors in possession of the most powerful cyber capabilities – even the likes of Russia and China – will make some calculation before launching them. For example, China is assessed to have the capability to launch devastating attacks on US critical infrastructure. But the same US assessment judges that such operations are unlikely outside a serious US/China escalation, most likely over Taiwan. Just because China can hurt America domestically via cyber attacks does not mean it will, any more than it would suddenly take on the US militarily without major consideration of the consequences and of America’s response.

3. The same tools that can be developed for malicious use can be developed to equal or greater good for our own security.

The final part of the equilibrium is a straightforward, continuous, attritional struggle for superiority between the use of capabilities for good and their use for ill. Cyber operations rely on maths and engineering. They have no agency, or moral compass, of their own. Malicious code, or vulnerabilities, once detected can be mitigated, and it is common practice for the cyber security industry to release fixes publicly so that everyone can defend against them. (Indeed, detected capabilities can be ‘reverse engineered’, in the jargon, and fired back at the attacker, or anyone else.)

The implication of this is that there is, in effect, a constant race between using the same capabilities for good and bad. A good example is the practice of what is known as ‘vulnerability scanning’. This is a technique where one can scan swathes of the online world and work out which networks are patched – protected – against known weaknesses, and which aren’t. Both malicious hackers and cyber defenders undertake vulnerability scanning. What matters is who is better at it and, in the case of defenders, whether those warned about unprotected networks take action.
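To make the idea concrete, here is a minimal, illustrative sketch of one common scanning technique: reading a service’s self-reported version ‘banner’ and checking it against a list of known weaknesses. The service name and version list below are entirely hypothetical; real scanners draw on large, curated vulnerability databases and far more sophisticated fingerprinting.

```python
import socket

# Hypothetical versions with a known, unpatched weakness (illustrative only).
KNOWN_VULNERABLE = {"ExampleFTPd/1.0", "ExampleFTPd/1.1"}

def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
    """Connect to a service and read the version banner it announces."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(1024).decode(errors="replace").strip()

def is_unpatched(banner: str) -> bool:
    """Flag a host whose advertised version matches a known weakness."""
    return any(version in banner for version in KNOWN_VULNERABLE)
```

The point of the sketch is its symmetry: exactly the same logic serves an attacker hunting for easy targets and a defender warning network owners to patch.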

Over the course of the digital revolution to date, this aspect of the Digital Security Equilibrium has been uneven but, on balance, broadly neutral. Of course, there are plenty of occasions where defences have ‘lost’ – hence the many cyber attacks and harms we all hear about. But there has never been a comprehensive superiority of offence over defence. In other words, it has been broadly in equilibrium.

AI and the Digital Security Equilibrium

Will this uneasy equilibrium hold in the age of AI? It is, of course, too early to tell. But there are some pointers on each of the three pillars of the equilibrium.

1. By and large, we do not subcontract human safety entirely to computers.

Preserving this aspect of the equilibrium is a straightforward choice. It is up to us. And so far, the signs are encouraging.

Again, transportation provides a good example. A decade ago, predictions abounded that by now there would be no drivers on any public highway. That is transparently not yet the case. The principal reason is that societies have taken their time, undertaking the extensive and detailed technical and communications work needed to build widespread expert and public confidence in the safety model. In time, autonomous vehicles are highly likely to replace driven ones, even if more slowly than once predicted, but in a way that makes them not just safer but felt to be safer, enhancing public confidence in the technology. This is an approach that should be replicated in other areas; it would be crazy to subcontract human safety entirely to computational machines that cannot be overridden.

2. Only a small number of highly capable actors have access to the most devastating tools

By contrast, widening access to powerful capabilities is, at present, the most worrying and fraying part of the equilibrium. AI does not create magical new weapons. But it significantly enhances the quality of some malicious capabilities, and it reduces both the cost of generating attacks and the difficulty of mounting them. For these reasons, alongside the growing market in damaging cyber capabilities for sale (some of it legal, some illegal), AI-facilitated cyber attacks are one of the areas of greatest concern.

The geopolitical calculation that the likes of China, Iran and Russia will make before being overtly and overly aggressive in the use of potent cyber capabilities – which often leads to some form of restraint – is unlikely to extend to newer actors. Specifically, non-state terrorist groups with nihilistic tendencies have long craved powerful cyber capabilities but have never been able to acquire them. Were that to change, our exposure to risk would increase significantly.

The spyware scandal over the activities of the NSO Group is a worrying sign of what might be to come. Pegasus, its flagship tool, was a powerful capability normally associated with the most capable nation states. Instead, it ended up on sale to a wide array of actors and was deployed by them against a wide range of targets. (Though the company denies inappropriate selling, Washington, under both recent presidents, has sanctioned it for its activities.)

This is spyware: a silent intrusion for espionage. Should similar proliferation happen with more destructive capabilities, big trouble could lie ahead. And even without the legal and illegal sale of tools, it will become easier for state and non-state groups to develop powerful capabilities for entirely reckless use. So far, even with the Pall Mall process on spyware launched by Britain and France in 2024, global governance has been able to do little to address these risks.

At the moment, there is a real risk that this pillar of the equilibrium could weaken significantly.

3. The same tools that can be developed for malicious use can be developed to equal or greater good for our own security.

This final aspect of the equilibrium should hold – there is no automatic reason why cyber defenders should lose the capability arms race. But that will only happen if cyber security innovators working in free societies, whether in government or the private sector, keep up with or even outpace those who wish to misuse the new technologies. That is why it’s important for governments to retain highly specialised in-house capabilities in their security agencies, and why it is imperative that the West’s private sector cyber security industry continues to thrive and barriers to its development are tackled.

Conclusion

The Digital Security Equilibrium is a useful concept for understanding why cyberspace has, to date, remained a place of harm and contestation but not catastrophe. It can remain that way, but that requires sustained effort and smart policymaking over many years. And for now, the most worrying part is the growing accessibility of potent cyber capabilities to new actors.