
In our latest post, Francesco reflects on the increasingly active role AI plays in military operations. AI now provides a decisive edge in reconnaissance and surveillance: integrated military platforms assess data in real time and flag potential threats. The latest AI systems can even identify and engage targets autonomously. But can these systems be trusted under pressure?
In the modern theatre of war, the scout is no longer a lone figure in camouflage crawling through the underbrush. Today, it’s a neural network riding aboard a drone, a language model parsing intercepted signals, or an algorithm silently watching hours of satellite video. The battlefield’s ‘eyes and ears’ have gone digital – and they never blink.
Artificial intelligence, once a buzzword in Silicon Valley, has become a decisive edge in military reconnaissance and surveillance. At the heart of this transformation is a convergence of computer vision, pattern recognition, and natural language processing technologies, each tuned not for consumer convenience but for operational superiority.
Take drones, for example. These airborne machines have become synonymous with modern warfare. But what happens when a single drone collects more video than a team of analysts could possibly watch? Enter computer-vision models like YOLO and ViT – cutting-edge AI tools that can instantly distinguish tanks from trucks, civilians from combatants, or even real weapons from clever decoys. These aren’t just passive cameras – they’re thinking machines, trained to make split-second judgments.
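As a rough illustration of the mechanics, here is a minimal detection loop using the open-source ultralytics YOLO package. The weights file, the video path, and the confidence threshold are placeholders for illustration, not details of any fielded system.

```python
# Hedged sketch: frame-by-frame object detection over drone footage with an
# off-the-shelf YOLO model. All names below are illustrative placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small general-purpose weights; a real system would use custom-trained ones

# stream=True yields results one frame at a time instead of buffering the whole video
for frame_idx, result in enumerate(model("drone_footage.mp4", stream=True)):
    for box in result.boxes:
        label = result.names[int(box.cls)]   # class index -> human-readable name
        confidence = float(box.conf)
        if confidence > 0.6:                 # arbitrary flagging threshold
            print(f"frame {frame_idx}: {label} ({confidence:.0%})")
```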
What’s more, these AI systems don’t operate in isolation. Military platforms increasingly integrate multi-sensor fusion, blending inputs from radar, thermal imaging, and visual feeds into a single stream of insight. This enables what military technologists call ‘edge AI’ – onboard computing that processes data in real time, without needing to call home. If latency means the difference between spotting a threat and missing it, edge AI ensures the machine gets the first look.
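A hedged sketch of the decision-level flavour of that fusion: each sensor reports its own confidence for the same track, and a weighted blend decides whether to raise an alert. The sensor names, weights, and threshold below are invented for illustration.

```python
# Toy decision-level sensor fusion: blend per-sensor confidences for one track.
# Weights and threshold are assumptions, not values from any real platform.
SENSOR_WEIGHTS = {"radar": 0.4, "thermal": 0.35, "visual": 0.25}
ALERT_THRESHOLD = 0.7

def fuse(confidences: dict[str, float]) -> float:
    """Weighted average of the sensors that actually reported, ignoring the rest."""
    total_weight = sum(SENSOR_WEIGHTS[s] for s in confidences)
    return sum(SENSOR_WEIGHTS[s] * c for s, c in confidences.items()) / total_weight

# A track seen by all three sensors, and one seen by only two.
print(fuse({"radar": 0.9, "thermal": 0.8, "visual": 0.6}) > ALERT_THRESHOLD)  # True  (~0.79)
print(fuse({"radar": 0.5, "thermal": 0.4}) > ALERT_THRESHOLD)                 # False (~0.45)
```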
But seeing is only half the story. The real strategic value lies in prediction. Modern AI systems are now being used to model patterns of life, identifying not just what is happening, but what might happen next. By analysing satellite imagery over time, algorithms can detect the subtle rhythms of supply chains, troop movements, or the quiet preparations that precede an offensive. When those rhythms shift unexpectedly, the AI flags it. It’s the algorithmic equivalent of a hunch – except it’s drawn from petabytes of data and days of reconnaissance.
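To make the ‘shifting rhythm’ idea concrete, here is a toy version using a rolling z-score: a daily count of vehicles observed at a site (the numbers are invented) is compared against its recent baseline, and a sharp deviation is flagged.

```python
# Minimal pattern-of-life anomaly flagging via a rolling z-score.
# Window size, threshold, and data are illustrative assumptions.
import statistics

def flag_anomalies(daily_counts: list[int], window: int = 7, z_threshold: float = 3.0) -> list[int]:
    alerts = []
    for day in range(window, len(daily_counts)):
        baseline = daily_counts[day - window:day]
        mean, stdev = statistics.mean(baseline), statistics.stdev(baseline)
        if stdev > 0 and abs(daily_counts[day] - mean) / stdev > z_threshold:
            alerts.append(day)  # activity broke sharply from the recent rhythm
    return alerts

# A quiet week of routine traffic, then a sudden build-up on day 9.
counts = [12, 10, 11, 13, 12, 11, 10, 12, 11, 47, 45]
print(flag_anomalies(counts))  # -> [9]
```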
And then there’s the information war happening far from the front lines – in the electromagnetic and digital realms. Signals intelligence (SIGINT) and open-source intelligence (OSINT) have exploded in volume. Social media videos, intercepted radio chatter, public satellite feeds – all of it is data, and all of it is being fed into large language models. These AI systems, cousins of the chatbots now helping with homework or writing emails, are instead helping military analysts summarise battlefield movements, identify likely codewords in intercepted communications, and even assess the psychological tone of enemy broadcasts.
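Mechanically, this is close to ordinary NLP tooling. Below is an illustrative sketch using an open-source summarisation model from Hugging Face’s transformers library on a handful of invented messages; real SIGINT pipelines obviously run bespoke, classified systems.

```python
# Illustration only: compressing a batch of (invented) intercepted messages
# into an analyst-ready digest with a generic open-source summariser.
from transformers import pipeline

summariser = pipeline("summarization", model="facebook/bart-large-cnn")

messages = [
    "Fuel convoy delayed again at the northern checkpoint, third day running.",
    "Engineering unit seen laying bridging equipment near the river crossing.",
    "Local channel reports heavy vehicle movement through town after midnight.",
]

digest = summariser(" ".join(messages), max_length=60, min_length=20)
print(digest[0]["summary_text"])
```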
In Ukraine, we’ve seen these technologies leap from lab to live action. AI-enhanced drones have mapped troop positions; language models have parsed thousands of Telegram messages to geo-tag video footage; military dashboards have quietly begun surfacing AI-generated alerts for decision-makers. Whether in support of conventional forces or asymmetric groups using low-cost drones and open-source software, the digital scout has arrived.
Yet for all the precision and promise, these systems raise pressing questions. Can they be trusted under pressure? What happens when an AI misidentifies a threat – or worse, generates a convincing but false prediction? And who is accountable when decisions are based on machine inferences?
In warfare, the observer shapes the battle. Now the observer is artificial, tireless, and fast evolving. As we continue this exploration of AI’s role in defence, the next frontier awaits: when machines don’t just watch the battlefield, but act on it.
It’s no longer a question of whether machines can pull the trigger. It’s a question of whether they should – and under what circumstances. As artificial intelligence takes on more decision-making power in combat, we find ourselves navigating a spectrum of autonomy where oversight is not always guaranteed.
In, On, or Out of the Loop?
Defence systems today fall into three categories:
1| Human-in-the-loop systems require manual approval before any lethal action. Think drone pilots authorising a missile strike after visual confirmation.
2| Human-on-the-loop systems can act on their own, but a human has the authority to intervene – if they’re paying attention and if time allows.
3| Human-out-of-the-loop systems identify, select, and engage targets autonomously. No human interaction, no override, no delay.
This last category is no longer science fiction. In environments where communications are jammed or latency is too great – say, in remote or contested airspace – systems must act instantly, or not at all. And so, they act.
Despite official policies and international directives calling for ‘meaningful human control’, necessity often pushes the line. Once a system has the capability to act autonomously, the only thing stopping it is software policy – or battlefield urgency. Both can be overridden.
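To see how thin that layer of software policy can be, consider a hypothetical policy gate; the mode names, approval flag, and veto window below are assumptions for illustration, not any fielded system’s logic.

```python
# Hypothetical sketch of the in/on/out-of-the-loop distinction as a policy gate.
from enum import Enum

class AutonomyMode(Enum):
    HUMAN_IN_THE_LOOP = "in"     # lethal action requires explicit approval
    HUMAN_ON_THE_LOOP = "on"     # system acts unless a human vetoes in time
    HUMAN_OUT_OF_LOOP = "out"    # system acts with no human gate at all

def authorise_engagement(mode: AutonomyMode,
                         operator_approved: bool | None,
                         veto_window_expired: bool) -> bool:
    if mode is AutonomyMode.HUMAN_IN_THE_LOOP:
        return operator_approved is True                               # silence means no
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        return operator_approved is not False and veto_window_expired  # silence means yes
    return True                                                        # no gate at all

# The asymmetry is the point: 'in the loop' fails safe,
# 'on the loop' fails deadly when the human is distracted.
print(authorise_engagement(AutonomyMode.HUMAN_ON_THE_LOOP, None, True))  # True
```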
As these capabilities evolve, the ethics shift. It’s no longer a question of whether an AI can make a kill decision. The harder question is: if it does, and it’s wrong, who pays the price?
Target Identified, Fire Authorised
This is not hypothetical. Autonomous or semi-autonomous systems with lethal capabilities are already active and effective.
The Hunter-Killers
Often dubbed ‘kamikaze drones’, loitering munitions combine surveillance, target acquisition, and strike into one package.
The Switchblade 300 and 600, developed in the U.S., are compact and portable. They can loiter for up to 40 minutes, scan the terrain, and use onboard AI to recognise hostile vehicles or personnel.
A loitering munition that targets radar signals goes further: once launched, it needs no human approval to strike. It detects, selects, and destroys radar installations completely autonomously. In these cases, the human role ends at launch.
These systems collapse the traditional kill chain into a single actor. No need for separate reconnaissance, command, and fire teams. The drone sees and shoots in one fluid action, at algorithmic speed.
Selective Engagement AI: Ground-Based Autonomy

It’s not just drones. Ground-based systems are catching up. Enter Selective Ground Response AI (SGRAI), a term covering a class of AI-driven response systems.
Picture this: a mobile ground robot patrols a contested village at night. It’s equipped with Lidar, infrared sensors, and thermal imaging. It’s trained to recognise human behaviour: distinguishing civilians from aggressors by posture, movement, and the presence of weapons. It’s programmed with strict rules of engagement – only fire if approached while armed, do not engage unless fired upon, sound a warning before action.
It can make all these decisions on its own.
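The decision logic itself can be disarmingly simple. Here is a hedged sketch of the rules of engagement described above, with invented sensor fields; in reality the hard part is the perception feeding these flags, not the if-statements consuming them.

```python
# Toy rules-of-engagement logic for the hypothetical patrol robot above.
# The fields and rules are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Contact:
    armed: bool
    approaching: bool
    has_fired: bool
    warned: bool   # has the robot already issued a warning to this contact?

def decide(contact: Contact) -> str:
    if contact.has_fired:
        return "ENGAGE"                                          # return fire is permitted
    if contact.armed and contact.approaching:
        return "ENGAGE" if contact.warned else "ISSUE_WARNING"   # warn before acting
    return "MONITOR"                                             # unarmed or static: do not engage

print(decide(Contact(armed=True, approaching=True, has_fired=False, warned=False)))
# -> ISSUE_WARNING
```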
One of the most talked-about systems is South Korea’s SGR-A1 sentry gun, deployed along the DMZ. It can detect and track targets, assess threat levels, and fire without human approval. Policy currently keeps a human in control, but the capability for full autonomy is built in.
As more militaries experiment with neural nets, confidence thresholds, and embedded rules of engagement, we’re seeing the rise of systems that not only fire but also reason.
This leads to one of the thorniest problems of AI warfare: accountability.
International humanitarian law (IHL) rests on three foundational principles:
1| Distinction – You must differentiate combatants from civilians.
2| Proportionality – Civilian harm must not outweigh military advantage.
3| Accountability – Someone must be responsible for violations.
But with AI in the loop – or worse, out of it – who is responsible when things go wrong?
What if an algorithm misclassifies a teenager with a toy gun as a hostile insurgent? There’s no human operator to double-check, no pilot to hesitate. The decision is embedded in code. Is the commander liable? The developer? The procurement officer?
Critics argue that handing over lethal authority to machines risks creating a system with no culpability. That’s why advocacy groups like the Campaign to Stop Killer Robots are pushing for a pre-emptive global ban on fully autonomous weapons.
Others argue that properly constrained AI could actually reduce civilian casualties – machines don’t panic, don’t get tired, and don’t pull the trigger out of fear or vengeance.
The debate is far from settled. The United Nations has convened multiple discussions on lethal autonomous weapons systems (LAWS). But no binding international treaty currently limits their development or deployment. And the major powers – Russia, China, the United States – are pressing forward.
One compromise gaining traction: explainable autonomy. These are AI systems designed not just to act, but to explain why they acted. If a drone engages a target, it also generates a decision trail – sensor input, classification confidence, and rules matched. This could enable post-action audits and accountability.
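In software terms, a decision trail is little more than structured logging at the moment of engagement. A minimal sketch, with field names assumed for illustration:

```python
# Sketch of the 'decision trail' idea: every engagement emits an auditable record.
import json
from datetime import datetime, timezone

def log_decision(sensor_inputs: dict, classification: str,
                 confidence: float, rules_matched: list[str]) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sensor_inputs": sensor_inputs,      # what the system saw
        "classification": classification,    # what it concluded
        "confidence": confidence,            # how sure it was
        "rules_matched": rules_matched,      # which engagement rules fired
    }
    return json.dumps(record, indent=2)      # in practice, written to a tamper-evident store

print(log_decision({"thermal": "vehicle_signature", "radar": "moving_track"},
                   "hostile_vehicle", 0.87, ["armed", "approaching", "warned"]))
```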
But explainability is still in its infancy. And in the meantime, the battlefield keeps accelerating.
The shift we’re witnessing isn’t just technological – it’s conceptual. Warfare is no longer just fought by soldiers following orders. It’s increasingly conducted by systems executing logic.
Kill chains are becoming compressed. Human decision-making is becoming conditional. The burden of judgment is shifting – from officers and commanders to neural networks trained on data from past wars.
This raises not only moral and legal questions, but strategic ones. What happens when two autonomous systems face off, each locked in an algorithmic arms race? What if escalation is triggered not by a human mistake, but by a flawed line of code?
As militaries around the world push forward, one truth becomes clear: In the next war, software may be the most lethal weapon of all.
The Rise of the Drone Swarm
Imagine a sky blackened not by storm clouds, but by hundreds – perhaps thousands – of autonomous drones moving as a single, intelligent entity. No central controller. No master algorithm dictating their every move. Just a decentralised, self-organising swarm, capable of reconnaissance, electronic warfare, and precision strikes with terrifying efficiency.
This isn’t science fiction. It’s the next evolution of warfare, and it’s already being tested by the world’s most advanced militaries.
Traditional drones rely on human operators or centralised AI. Swarms are different. They operate like flocks of birds or colonies of ants – no leader, just simple rules guiding complex behaviour. Three key methods make this possible:
1| Consensus Algorithms – Drones ‘vote’ on decisions by sharing data with neighbours. If most agree on a flight path or target, the swarm follows. This ensures resilience – lose a few drones, and the rest adapt instantly.
2| Leader-Follower Dynamics – A few drones (leaders) know the mission objective, while the rest mimic their movements. If leaders are destroyed, new ones emerge dynamically.
3| Behaviour-Based Robotics – Each drone follows basic rules: avoid collisions, stay close to the group, match speed with neighbours. Together, these rules create fluid, adaptive formations.
The result? A swarm that doesn’t just follow orders – it thinks as a collective.
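Those behaviour-based rules are essentially Craig Reynolds’ classic ‘boids’ model. Here is a bare two-dimensional sketch with invented gains; a real swarm controller adds obstacle avoidance, mission objectives, and far more robust numerics.

```python
# Boids-style flocking: separation, cohesion, alignment. Gains are illustrative.
import numpy as np

def step(positions: np.ndarray, velocities: np.ndarray,
         sep=0.05, coh=0.01, ali=0.05, min_dist=1.0) -> None:
    for i in range(len(positions)):
        offsets = positions - positions[i]              # vectors from drone i to the others
        dists = np.linalg.norm(offsets, axis=1)
        others = dists > 0                              # everyone but drone i itself
        too_close = others & (dists < min_dist)
        velocities[i] -= sep * offsets[too_close].sum(axis=0)                      # 1: separate
        velocities[i] += coh * offsets[others].mean(axis=0)                        # 2: cohere
        velocities[i] += ali * (velocities[others].mean(axis=0) - velocities[i])   # 3: align
    positions += velocities                             # advance one time step

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, (20, 2))      # 20 drones scattered over a 10x10 area
vel = rng.normal(0, 0.1, (20, 2))      # small random initial velocities
for _ in range(100):
    step(pos, vel)                     # the flock coalesces with no leader at all
```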
Surviving the Electronic Battlefield
Swarms don’t just face bullets and missiles – they operate in an invisible war of signals. Jamming, spoofing, and GPS denial are constant threats. So, how do they stay connected?
1| Ad-Hoc Mesh Networks – Each drone acts as a relay, creating a self-healing web of communication. Lose a node, and the swarm reroutes instantly.
2| Low-Probability-of-Intercept (LPI) Comms – Instead of broadcasting on one frequency, drones ‘hop’ across hundreds per second, making them nearly impossible to jam.
3| Opportunistic Synchronisation – If GPS fails, drones use visual cues (like blinking LEDs or infrared signals) to stay in sync.
The lesson? A well-designed swarm doesn’t just survive electronic warfare – it thrives in chaos.
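The hopping trick in particular is easy to sketch: drones that share a secret seed can each derive the same pseudo-random channel sequence independently, so they meet on every hop without ever transmitting the schedule. The channel count, hop rate, and seed below are illustrative assumptions.

```python
# Toy LPI frequency hopping: a shared seed deterministically generates the hop sequence.
import hashlib
import time

CHANNELS = 200           # usable frequency slots (illustrative)
HOPS_PER_SECOND = 100    # hop rate (illustrative; real LPI radios vary widely)

def channel_at(shared_seed: bytes, hop_index: int) -> int:
    digest = hashlib.sha256(shared_seed + hop_index.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % CHANNELS

def current_channel(shared_seed: bytes, now: float) -> int:
    """Channel in use at wall-clock time `now` (drones need synchronised clocks)."""
    return channel_at(shared_seed, int(now * HOPS_PER_SECOND))

seed = b"swarm-mission-42"   # hypothetical pre-shared key
# Every drone holding the seed computes the same sequence; a jammer without it sees noise.
print([channel_at(seed, i) for i in range(5)])   # the first five hops
print(current_channel(seed, time.time()))        # where the swarm is right now
```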
China vs. the U.S.: Two Paths to Swarm Dominance
Different doctrines shape how nations deploy swarms: one favours brute force, the other precision. Both are redefining modern combat.
Nature’s Blueprint: Flocks, Ants, and AI
The most fascinating aspect of drone swarms? They borrow from biology: flocking behaviour modelled on birds, foraging and trail-laying strategies inspired by ant colonies, and evolutionary algorithms that refine swarm behaviour generation after generation.
These techniques make swarms smarter over time – learning, evolving, and outmanoeuvring adversaries with eerie precision.
The Future: Autonomous Swarms in the Wild
The implications are staggering. Picture swarms running reconnaissance, electronic warfare, and precision strikes at a scale no human force could match.
But with great power comes risk. What happens when a swarm misidentifies a target? Can we trust machines to make life-and-death decisions without human oversight?
One thing is certain: the age of swarm warfare has arrived. And the battlefield will never be the same.