The year is 2029. An artificially intelligent defence network has gained self-awareness and shrugged off the controls of its technology contractor creator, sparking nuclear war. This AI has spawned cyborg soldiers, is in the ascendancy over humankind, and can even time travel. Against such a powerful adversary, one might reasonably ask: “What hope do we have?”

Luckily, the above dystopia is wholly fictional. If you hadn’t recognised it already, it’s taken from James Cameron’s 1984 blockbuster The Terminator, which starred Arnold Schwarzenegger as an AI-powered assassin hellbent on crushing humanity – all while uttering pithy threats. Cameron came to the idea in a fever dream while ill in the early 1980s, which is perhaps unsurprising in an era of looming Cold War tensions. Though 20th-century fears of nuclear apocalypse have receded, worries about what technology could mean for humankind are very much back in vogue. Global geopolitical tensions have ratcheted up, and war now forms the backdrop to worries about the threat of AI.

In 2023, the CEOs of Google DeepMind and OpenAI were among the signatories of an open letter warning about the existential threat posed by the technology. That same year, at REAIM – an international multilateral summit on the responsible use of AI in the military domain – concerns were raised about the deadly applications of AI. These included the use of AI-powered ‘slaughterbots’ – which could kill without human intervention – and the risk of AI-driven military escalation.

Despite this, governments are pushing their militaries to invest in AI – and the technology is already used on the battlefield, notably in Ukraine. But with the Machine Intelligence Research Institute’s research finding that six in ten AI researchers believe the technology poses a grave threat, could Cameron’s hit film have been more premonition than entertainment?

Principles of responsible use

It’s clear why militaries would want to use AI – indeed, its advantages are already being exploited in practice. In the UK, for instance, the Ministry of Defence (MoD) uses machine learning to inform policy creation, create efficiencies – and even pinpoint the location of mines. Keen not to miss out, the US is budgeting about $3bn for AI in 2024. As an MoD spokesperson put it to Defence and Security Systems International: “[This tech is] advancing our military edge.”

You can spot similar enthusiasm at the supranational level, too, with an across-the-board interest in the computing power that AI can bring to military applications. AI can process huge volumes of data at lightning speed, delivering self-control, self-regulation and self-actuation to fighting systems. As such, Nato now uses autonomous systems to aid cyber defence, analyse images and predict threats from climate change. The rhetoric matches the practice: when the alliance released its AI strategy in 2021, it talked up the “unprecedented opportunity” the technology offered to defence.

Yet the perceived opportunity of AI is balanced with concerns, which have only grown as the technology is used for more than mere data analysis. These range from worries that the technology might act outside of predicted and programmed behaviours to fears that human operators will dispense with ethics for the sake of military expediency. As such, both nation states and intergovernmental bodies have created guidelines around AI’s usage. Among these, Nato’s ‘Principles of Responsible Use’ lay out six guidelines that member states’ military AI use must adhere to, from development to deployment.

First up is lawfulness: AI applications will be developed and used in accordance with national and international law. Then there’s the need for AI use to be responsible and accountable: AI applications must be developed and used with appropriate levels of judgement and care, and with human oversight. These are followed by the need for explainability, reliability, governability and bias mitigation. For Simona Soare, senior lecturer at Lancaster University, these principles are critical in ensuring trust is built around how AI is used. As Soare puts it: “These principles have to guide the parameters for this technology, how it will be used, the conditions it can be used in, and the metrics set to drive trustworthiness and predictability.”

$38.8bn
The forecasted value of the AI military market by 2028.
Markets and Markets

Challenges to the principles

Yet ensuring that abstract principles are actually followed is no small task. Among other things, the intergovernmental group designing the rules must reconcile the differing objectives of Nato member states and provide financial and political support to keep militaries operating within them. Though Nikos Loutas, head of data and AI policy at Nato, believes that cooperation between member states during the creation of the principles is an important first step in ensuring compliance, he understands this will need ongoing work. “It was critical to have an agreement among allies about what our goal for AI is and what we mean by responsible use,” he says. “But we also need to ensure these principles don’t just remain nice words on paper.”

$1bn
The value of the Nato Innovation Fund.
Nato

Fair enough. With 31 member states and 3.7 million military personnel – alongside differing AI capabilities and military ecosystems – finding agreement was always going to be hard. That’s especially true when some countries have their own AI principles and others have none. No wonder Loutas admits that even agreeing on the principles was a lengthy process. Enforcement is also an issue. “These are voluntary principles,” Soare says, “and there’s no mechanism to ensure their enforceability.” As geopolitical tension rises, meanwhile, there are worries some states might overstep the boundaries. For Soare, even for states that do follow the rules, there is “no formula” to explain how one might, for instance, operationalise the principle of explainability.

This is an important point. As Soare stresses, there’s a difference between an abstract policy, even if widely agreed upon by member states, and what this means for real-world military deployment of AI. For Loutas, this is a gap that will be bridged by peer work: through ongoing review, dialogue and governance around the principles by all allied members. Nato has an AI governance body, made up of member state representatives, which guides the implementation of the principles, discusses sensitive issues, and ensures access to relevant experts. The alliance has also developed a toolkit to translate policy into operational reality. “The next stage of this,” Loutas says, “is to create applications in major use cases, see what happens when the rubber hits the ground, and then adapt these cases further.”

Even before the toolkit comes into play, Soare adds, the development cycle is critical in ensuring the principles are adhered to. In her view, when private and public sector stakeholders work in tandem through the development cycle – designing and testing software, creating parameters in line with the principles – there is less likelihood that the AI will act outside of pre-agreed guidelines. As she puts it: “Critical in ensuring that the principles of responsible AI use are embedded is in the continuous development stage. And when you venture out onto the battlefield, parties will need to recalibrate again to ensure the software continues to work within guiding principles.”

Indeed, Soare explains that it’s the challenges Nato members come up against in developing AI that are probably taking the edge off the ‘AI arms race’. The subtext: as members discover the limits of the technology through the development process, AI appears less inexplicable. That offers reassurance that hostile nations aren’t racing ahead – which could, in turn, lessen the inclination of Nato countries to dispense with responsible use just to ‘keep up’. For Soare, as AI becomes more explainable it is “demystified” and fears around how others are using it lessen. Nato countries have even reached modest, non-binding agreements on the responsible use of AI with non-member states, including China, at a conference in the Netherlands last year. Among other things, delegates agreed to use AI in accordance with international legal obligations and in a way that doesn’t undermine international security, stability and accountability.

Trust and test

Surely all these initiatives are ultimately just based on trust? Experts agree – but stress that trust has long kept allied groupings aligned on key issues. Importantly, Nato has a recent record of creating group strategies on new military technologies: for emerging hypersonic systems, for instance, as well as next-generation communication and human enhancements. There is confidence this approach could equally work for AI – and not merely at a high level. As Soare explains, if new technology can’t be shown to abide by governing principles, on-the-ground personnel would likely reject its use. As she says: “If the AI software is not trustworthy, reliable, secure, or explainable enough to operatives or commanders, why are they going to use it?”

Loutas adds that without responsible use principles, there is no strategic military benefit to AI. “The dilemma that principled AI use comes at the expense of a strategic advantage is a false dilemma,” he says. “It’s a race to the bottom.” To put it differently, without any safeguards it’s easy to imagine humanity careening towards a Terminator-like tomorrow. That’s especially true when some insiders worry that, without principled human oversight, AI could eventually make lethal decisions of its own accord – whether through unmanned technology or autonomous weapons. For Nato’s AI chief, responsible use principles therefore act as a critical safety mechanism, which in turn attracts private sector innovators to partner with the military, delivering a true strategic edge. “Our AI principles are like seatbelts, not like brakes: they are there to protect [us], not stop development and advantages.”

Yet much like a James Cameron fever dream, the future of military AI isn’t clear. International relations are already strained, while war-like rhetoric is ramping up from the Caucasus to East Asia. But for those experts close to the development of AI – and to the guiding principles around its use – there is a belief that leaders won’t want to steer us towards an AI-driven dystopia. That’s echoed by a broader conviction that Nato’s AI principles will be adhered to. After all, as Soare suggests, security and safety have a high standing in military forces. “For AI,” she says, “when it comes to robustness, security and safety, there is a very high expectation from the military…otherwise they’re not going to use it.”