The torpedo-shaped vessels plummeted from the American planes into a thick canopy of knotted jungle. Unlike the daily bombardments designed to tear through flesh, bone and North Vietnamese morale, these were devices crafted for a different genre of warfare. Disguised to resemble small trees or bushes, this assemblage of microphones, seismic detectors and olfactory sensors was part of a meticulously coordinated reconnaissance mission.

For years, this snaking network of roads and footpaths running from North to South Vietnam had supplied manpower and materials to the Viet Cong and the People’s Army. Cloaked in a dense jungle canopy, the Ho Chi Minh trail was the beating heart of the Vietnamese resistance, and bombing it into submission had been a mostly futile, controversial and costly exercise. Rather than waste artillery on empty pockets of jungle, what US forces really needed was better data. Aided by listening devices and movement sensors, the Americans hoped to track and target personnel and trucks carrying supplies. When it commenced in 1968, Operation Igloo White was seen as an ingenious ploy to wage an audacious form of electronic warfare, but after five years, the Viet Cong and their guerrilla allies remained resilient, elusive and well-stocked. The project had largely failed in its objective of providing information on the flow of supplies and troops along the trail.

A matter of intelligence

Despite being retrospectively viewed as expensive and ill-judged, Igloo White was, in many ways, ahead of its time. While technological advancement has accelerated far beyond dropping clunky sensors into enemy territory, the strategy anticipated today’s armies, which use state-of-the-art devices to aggregate vast volumes of data to inform decisions taken on and off the battlefield. Buoyed by advancements in AI – a catch-all term that encompasses everything from autonomous robots performing rudimentary tasks to complex deep learning networks – militaries are harnessing semi-assisted and autonomous AI systems to streamline logistical operations, improve battlefield awareness and defend their bases from attack.

Aware of the radical potential of ruthlessly efficient algorithms to reshape military operations, the US government has invested heavily in these technologies. In 2021, it was estimated to have $6bn tied up in AI-related research and development projects. For 2024, the US military has requested more than $3bn to advance its AI and networking capabilities.

Unsurprisingly, the UK doesn’t want to get left behind. It too has recognised the advantages of autonomous ‘learning’ systems and AI, and the integral role they are likely to play in the future of defence. In June 2022, the Ministry of Defence published its ‘Defence AI Strategy’, which boldly declared an ambition to make the UK a global leader in the responsible use of AI as part of a once-in-a-generation defence modernisation plan. The mood inside the MoD about AI and its warfighting potential might best be described as cautiously optimistic. Brigadier Stefan Crossfield, principal AI officer at the British Army, says that while many AI systems are still being used experimentally or at the discovery phase, they are “maturing at pace” and “supporting defence from the back office to the front lines”.

$6bn
The total cost of the US government’s AI-related R&D projects in 2021.
Bloomberg Government

Crossfield talks of AI technologies exhibiting “the potential to be incorporated into a wide range of systems to enable various degrees of autonomous or semi-autonomous behaviours”. These include enhancing the speed and efficiency of business processes and support functions; increasing the quality of decision making and the tempo of operations; and improving the security and resilience of interconnected networks. Crossfield also sees AI playing a vital, albeit supportive, role on the battlefield by “enhancing the mass, persistence, reach and effectiveness of our military forces” and protecting soldiers from harm by automating dull, dirty and dangerous tasks. Robots have been used to defuse bombs for over 40 years, but most machines currently deployed in the field cannot perform contextual decision making or operate autonomously. By embracing sophisticated algorithms and image-recognition technology, AI-powered machines could theoretically learn to recognise the type of bomb technicians are dealing with and choose the best option for neutralising it.

A similar tactic is being deployed by the Royal Navy, which uses three vessels – capable of working manually, remotely or autonomously – to collect and analyse data in real time and to detect and classify mines and maritime ordnance. Known as Project Wilton, this £25m initiative has developed sophisticated vessels capable of controlling and communicating with fellow machines.

Calling the shots

Due to the proliferation of more advanced technologies, Crossfield says that computer-aided military decision-making is occurring more and more frequently. “The areas which show the most promise are systems that take vast amounts of data to help inform commander’s decision making under the stress of battle,” he says, arguing that this talent for sifting through public data sources offers senior military personnel unique insights and helps them better understand local and international geopolitical environments. If clandestine Cold War ops like Igloo White were focused on stealing data, today’s military and intelligence communities are swimming in it. For example, in 2011 it was reported that US drones had captured 327,000 hours – 37 years – of footage for counterterrorism purposes. By 2017, it was estimated that the footage US Central Command collected in that year alone could amount to 325,000 feature films – approximately 700,000 hours, or 80 years.
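
Those year equivalents follow from simple arithmetic; the short sketch below is included purely as a back-of-the-envelope check on the figures quoted above.

```python
# Back-of-the-envelope check of the footage figures quoted above.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a (non-leap) year

drone_footage_2011 = 327_000    # hours of drone footage reported in 2011
centcom_footage_2017 = 700_000  # approximate hours collected in 2017

print(drone_footage_2011 / HOURS_PER_YEAR)    # ~37.3 years
print(centcom_footage_2017 / HOURS_PER_YEAR)  # ~79.9 years, i.e. roughly 80
```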

Militaries are turning to AI systems to curate, analyse and deliver novel or valuable insights from these data streams. Powered by convolutional neural network algorithms – a class of artificial neural networks commonly used to analyse images – an AI system named Project SPOTTER is being trained by MoD experts to identify specific objects of interest from classified satellite imagery.
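
The details of Project SPOTTER are classified, but the general shape of a convolutional classifier for imagery is well established. The sketch below is a minimal, illustrative example written in PyTorch; the layer sizes, the 64-pixel patch size and the four object classes are placeholder assumptions, not details of the MoD system.

```python
import torch
import torch.nn as nn

# Illustrative only: a minimal convolutional classifier for small satellite
# image patches. Layer sizes, patch size and the number of object classes
# are placeholder assumptions, not details of Project SPOTTER.
class PatchClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A batch of eight 64x64 RGB patches; in practice these would be chips cut
# from much larger satellite scenes.
model = PatchClassifier()
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 4]) – one score per class, per patch
```

Operational systems add far deeper networks, curated training pipelines and georeferencing, but the principle is the same: convolutional filters learn visual features, and a final layer scores each candidate object class.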

Inevitably, AI systems are largely defined by the content and quality of the information they consume. This places a huge responsibility on those tasked with developing algorithms and deep neural networks, and carries the Frankensteinian risk that these machines grow to inherit unwanted biases. When applied to the military sphere, the gravity of these problems becomes all the more acute.

“It becomes really important to take a long, hard look at the data, so that it is not under-representative of any particular category or misrepresentative of the sample that you have,” explains Shimona Mohan, research assistant at the Centre for Security, Strategy and Technology at the Observer Research Foundation in New Delhi. To illustrate the point, Mohan gives a simple example pertaining to gender bias. “If you just show your system images of male doctors, it’s going to think that every doctor in the world is male.”
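
Mohan’s doctor example amounts to a representation audit carried out before training. A minimal sketch of that sort of check is below; the record fields, values and classes are hypothetical and purely illustrative.

```python
from collections import Counter

# Hypothetical training labels: each record pairs a class with a protected
# attribute such as gender. Field names and values are illustrative only.
records = [
    {"label": "doctor", "gender": "male"},
    {"label": "doctor", "gender": "male"},
    {"label": "doctor", "gender": "male"},
    {"label": "doctor", "gender": "female"},
    {"label": "nurse",  "gender": "female"},
    {"label": "nurse",  "gender": "female"},
]

# Count how each group is represented within each class, so that skews like
# "almost every doctor example is male" surface before any model is trained.
by_label = Counter(r["label"] for r in records)
by_label_and_group = Counter((r["label"], r["gender"]) for r in records)

for (label, group), n in sorted(by_label_and_group.items()):
    share = n / by_label[label]
    print(f"{label:>6} / {group:<6} {share:.0%} of examples")
```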

In a paper titled ‘Managing Expectations: Explainable AI and its Military Implications’, Mohan cites a study tracking available instances of bias in 133 AI systems across industries from 1988 to 2021. It found that 44.2% demonstrated a gender bias, and 25.7% exhibited both gender and racial biases. In a conflict environment, Mohan explains that “deploying AI systems could mean that a woman of a race against which this programme is biased […] could be misidentified by the computer vision or facial recognition software of an autonomous weapon system as a non-human object”.

What’s in the box?

Another potentially horrifying bind for AI military researchers is the ‘black box’ problem: the inability to see how and why complex deep learning systems make their decisions. Deep neural networks are composed of thousands of simulated neurons that can be structured into hundreds of interconnected layers that interact in enigmatic ways. Even when these systems have yielded remarkable outcomes – like combing through medical data to predict diseases – their developers and users can’t explain how they work. For this reason, Mohan advocates for the use of explainable AI systems in a military context. This encompasses ante-hoc models that are less sophisticated and by default more transparent, and post-hoc versions that transfer the knowledge from the black box model to a simpler, smaller one, known as the ‘white-box surrogate model’.
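
The post-hoc route Mohan describes can be sketched in a few lines: fit an opaque model, train a small, readable model to mimic its predictions, and read the explanation off the surrogate. The example below uses scikit-learn with synthetic data as a stand-in for whatever black-box system is actually deployed; it illustrates the surrogate idea, not any specific military tool.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; in practice this would be the operational inputs.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# 1. The opaque "black box": accurate, but hard to interrogate directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. The white-box surrogate: a shallow tree trained to mimic the black box's
#    *predictions*, not the original labels, so its rules approximate the
#    behaviour we are trying to explain.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the surrogate reproduce the black box on this data?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.1%}")

# Human-readable decision rules extracted from the surrogate.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(8)]))
```

The fidelity score indicates how closely the surrogate tracks the black box; only where that figure is high can the extracted rules be treated as a trustworthy explanation of the original system.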

“The best thing that we can look at now is regulating these systems as soon as we can and in as robust a manner as we can,” Mohan says. “We don’t know what kind of processing takes place within them, we don’t know what these deep neural networks are taking in, and why they’re spitting out what they’re spitting out.”

$3bn
The amount the US military has requested in the 2024 budget to advance its AI and networking capabilities.
US Department of Defense

For soldiers to adequately trust and understand AI, Crossfield says that defence personnel must have “an appropriate, context-specific understanding of the AI-enabled systems they operate and work alongside”. He cites a lack of knowledge, skills and experience in using these technologies as a major impediment to AI implementation for military operations in general. Crucially, he talks of personnel being trained until they are competent enough to understand these tools and to verify that everything is working as intended.

“While the ‘black box’ nature of some machine learning systems means that they are difficult to fully explain, we must be able to audit either the systems or their outputs to a level that satisfies those who are duly and formally responsible and accountable,” Crossfield says. With a growing number of countries now building on decades’ worth of research and development in AI to boost their defence capabilities, the spectre of lethal autonomous weapons looms ominously on the horizon. Mohan worries about the lack of rigorous global safeguards currently in place around the use of AI in military contexts. “There’s not enough hesitancy and not enough urgency in policy circles around military-specific AI,” she says. As for ChatGPT and the hyperbole it has generated, Mohan argues that while the technology seems mind-bogglingly proficient, it is making the same mistakes AI systems have been making for the past two decades. This paradox, she argues, is what makes these technologies both so interesting and so terrifying.

“AI systems are so advanced, but they can be so stupid,” Mohan concludes. “There’s really no standard explanation or expectation of where and how AI systems are progressing. It could be as stupid as a dumb computer from the 1990s, but at the same time, it could do things that can spin beyond our wildest imaginations.”