Default to Defence
Why strategic systems converge on visible preparedness — and how AI accelerates the pattern
This is Part 4 of a 5-part series exploring why, in systems that cannot measure intent, defence is not a choice — it is a strategy.
In April 2026, a wounded American airman spent 36 hours hiding in a mountain crevice in southern Iran while search teams closed in. His survival beacon could not pinpoint him precisely enough for rescue. What found him was a classified CIA system called Ghost Murmur — developed by Lockheed Martin’s Skunk Works — which detected the electromagnetic signature of his heartbeat from 40 miles away using quantum magnetometry, then used AI to isolate that signal from background noise. President Trump confirmed the capability publicly. CIA Director John Ratcliffe described the airman as “still invisible to the enemy, but not to the CIA.”
The precise technical parameters remain reported rather than independently verified, but the capability as publicly described is enough for the structural point. Ghost Murmur is not a metaphor. It is infrastructure. And it illustrates, with unusual clarity, what this piece is about. Strategic systems do not respond to what actors intend. They respond to what can be counted, because what cannot be counted cannot reliably enter decision-making at scale.
The constraint set
Intent cannot be verified, audited, or compared across actors. Capability can. It can be measured, displayed, updated, and acted upon. Over time, this distinction becomes decisive.
There is no central authority that can reliably verify intent. The penalty for underestimating a threat is catastrophic. The penalty for over-preparing is delayed and diffuse. Under these conditions, capability becomes the only credible signal.
Defence then becomes not a choice but the only strategy that survives measurement.
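The asymmetry of penalties can be made concrete with a toy expected-cost comparison. This is a minimal sketch, and all numbers in it are illustrative assumptions, not estimates of any real actor's costs or probabilities.

```python
# Toy decision model: an actor must choose whether to prepare,
# without being able to verify the other side's intent.
# All numbers are illustrative assumptions.

P_HOSTILE = 0.1            # assumed probability the other actor is hostile
COST_UNDERESTIMATE = 1000  # catastrophic: caught unprepared by a hostile actor
COST_OVERPREPARE = 10      # delayed and diffuse: preparing against a benign actor

# Expected cost of not preparing: pay the catastrophic penalty if hostile.
expected_cost_no_prep = P_HOSTILE * COST_UNDERESTIMATE  # 100.0

# Expected cost of preparing: pay the preparation cost regardless of intent.
expected_cost_prep = COST_OVERPREPARE  # 10

# Preparation dominates even though hostility is unlikely (10% here).
assert expected_cost_prep < expected_cost_no_prep
```

Under these assumed numbers, preparing stays rational for any hostility probability above COST_OVERPREPARE / COST_UNDERESTIMATE, here 1%. When the catastrophic penalty dwarfs the diffuse one, even a small unverifiable probability of hostility makes preparation the dominant choice.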
How defensive systems escalate
Once capability becomes the dominant signal, escalation does not require aggression. It emerges from locally rational decisions. One actor increases defensive capacity. The other observes the increase. Since intent is not legible within the system, the increase must be interpreted as a potential threat. The rational response is to increase one’s own capability.
Each step is justified. Each move is stabilising from the perspective of the actor taking it. The system accumulates capability even when all participants describe themselves as defensive.
The historical record is consistent. Nuclear arsenals expanded far beyond minimal deterrence requirements during the Cold War, reaching over 60,000 warheads globally at peak levels according to Federation of American Scientists estimates — not because actors intended unlimited accumulation, but because the system rewarded visible preparedness and provided no credible mechanism for mutual de-escalation. Missile defence systems have repeatedly triggered offensive countermeasures, as documented by the RAND Corporation and the Center for Strategic and International Studies. Defensive infrastructure does not neutralise escalation. It redistributes it.
The baseline does not stabilise. It ratchets upward.
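The ratchet described above can be sketched as a toy two-actor simulation. Each actor, unable to read intent, responds to the other's observed capability by adding a defensive margin. The margin and starting levels are illustrative assumptions; the point is the monotone dynamic, not the numbers.

```python
# Toy model of the capability ratchet: two actors, each period,
# each observes the other's capability and, unable to read intent,
# prepares to a defensive margin above it. Parameters are illustrative.

MARGIN = 1.1  # each actor prepares to 110% of the observed threat
ROUNDS = 10

a, b = 10.0, 10.0  # starting capability levels
history = [(a, b)]

for _ in range(ROUNDS):
    # Simultaneous update: each actor responds to the other's *last* level,
    # and capability never decreases (there is no de-escalation mechanism).
    a, b = max(a, MARGIN * b), max(b, MARGIN * a)
    history.append((a, b))

# Every locally "defensive" move raises the shared baseline.
assert all(h2 >= h1 for (h1, _), (h2, _) in zip(history, history[1:]))
```

With a 10% margin and symmetric actors, capability grows geometrically: after ten rounds both sides sit at roughly 2.6 times their starting level, even though no move was anything other than a response to the other's last observed position.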
A system that grades what is visible
Imagine an exam in which only visible working is graded. A correct answer without steps receives no credit because it cannot be verified. Students optimise for what is graded: they produce more visible steps, regardless of whether those steps improve understanding. The exam does not reward understanding, and the strategic system does not reward restraint. Both reward what can be demonstrated.
Strategic systems function the same way. Capability is the visible working. Intent is the unobservable answer. Over time, actors optimise for what can be demonstrated, not because they are cynical, but because the system provides no other path to credibility.
AI and the expansion of measurement
Artificial intelligence enters this system by expanding what can be measured and how quickly systems respond to it.
Ghost Murmur is one illustration. A heartbeat, previously detectable only at contact range, becomes a trackable signal across distance. The underlying physics has not changed. What changed is the measurement infrastructure: quantum magnetometry sensors built around microscopic defects in synthetic diamonds, paired with AI that filters noise and isolates the target signal in near real time.
This is not an isolated capability. Programmes such as Project Maven, developed by the United States Department of Defense, use machine learning to analyse full-motion video and sensor data for object detection and targeting support. Congressional Research Service summaries note that such systems have reduced imagery analysis timelines from hours of human review to near real-time machine-assisted identification, with public defence discussions describing “sensor-to-shooter” timelines compressing from hours to minutes in specific workflows.
The shift is structural. AI increases the density of observable signals across the system. Satellite imagery, drone feeds, communications intercepts, sensor networks, and now biometric traces from living bodies are continuously processed and translated into actionable outputs.
This does not make intent more legible. It makes capability more continuously visible.
Compression: The collapse of decision time
As measurement expands, decision cycles compress. Where detection, analysis, and response once occurred in sequence, AI systems collapse these stages into near-simultaneous processes. Intelligence is generated continuously, prioritised algorithmically, and fed directly into decision pipelines.
The operational implication is measurable. Military AI programmes have explicitly targeted reductions in the time between detection and action, with documented shifts from hours to minutes in certain targeting workflows.
Empirical evidence
Project Shrike: Developed by U.S. Army Futures Command, this AI-driven software has reportedly reduced target identification and fire mission creation time from 15 minutes to 60 seconds.
Project Maven: This Pentagon program uses AI for image recognition in drone feeds to identify objects/vehicles and pinpoint them on a map, speeding up intelligence analysis.
TITAN: These mobile ground stations use AI to process satellite imagery for real-time targeting.
The cost of delay rises. The space for deliberation narrows. Defensive responses become more immediate and more frequent. The system shifts from interpretation toward reaction. This is not because actors become more aggressive. It is because the structural incentives of a faster system reward faster response.
Dual-use opacity: Seeing more, understanding less
The expansion of measurement produces a counterintuitive effect. AI systems increase visibility while reducing interpretability.
Most AI-enabled capabilities are dual-use. The same system that can locate a downed airman can, in principle, locate a high-value target. A surveillance platform can support defensive monitoring or prepare strike coordinates. A data integration system can optimise logistics or enable real-time battlefield coordination.
From the outside, these uses are indistinguishable in real time. And because these systems operate continuously, this ambiguity is not resolved over time. It compounds.
This creates a structural asymmetry. Actors can observe more of what others are doing, yet understand less about why they are doing it. Increased visibility does not resolve uncertainty. It amplifies it. As a result, systems default toward worst-case interpretation. Defensive preparation becomes not only rational, but unavoidable.
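The claim that more observation does not resolve uncertainty about intent can be shown with a toy Bayesian update. If a capability signal is genuinely dual-use, it is equally likely under defensive and offensive intent, so it carries no information about which is true. The probabilities below are illustrative assumptions.

```python
# Toy illustration of "seeing more, understanding less".
# Each observation (a sensor deployment, a data pipeline, a surveillance feed)
# is dual-use: equally likely under defensive and offensive intent.
# Numbers are illustrative assumptions.

prior_hostile = 0.5

p_signal_if_hostile = 0.9
p_signal_if_defensive = 0.9  # identical: the signal says nothing about intent

posterior = prior_hostile
for _ in range(100):  # one hundred observations of dual-use activity
    numerator = p_signal_if_hostile * posterior
    denominator = numerator + p_signal_if_defensive * (1 - posterior)
    posterior = numerator / denominator

# The posterior on intent never moves, no matter how much is observed.
assert abs(posterior - prior_hostile) < 1e-12
```

When the likelihood ratio between the two hypotheses is one, Bayes' rule leaves the prior untouched after any number of observations. The observer accumulates an ever-denser picture of capability while learning nothing about intent, which is exactly the asymmetry the paragraph above describes.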
Rate of change: The private sector integration effect
AI development operates in cycles measured in months. Traditional military procurement operates in cycles measured in years or decades. When these systems integrate, capability no longer scales in steps. It scales continuously. This is not simply an improvement in tools. It is a change in the rate at which the system responds to itself.
Companies such as Palantir Technologies, Anduril Industries, and Shield AI are not peripheral vendors. They are embedded within operational systems, providing real-time data processing, autonomous capabilities, and decision-support infrastructure. Cloud providers host the underlying architecture. AI firms build and iterate the models. Defence agencies deploy the outputs in real-world systems.
This collapses the distance between innovation and deployment. Systems can update, iterate, and expand capability without waiting for traditional procurement cycles.
Recent reporting provides a concrete illustration of this integration. Systems built around Palantir’s Maven platform have incorporated commercial large language models, including those developed by Anthropic, to assist in analysing intelligence streams and prioritising targets in active operations. Reporting across multiple outlets indicates that AI-assisted workflows enabled the rapid identification of large volumes of targets within compressed timeframes, while the underlying models remained constrained to analysis and decision support rather than autonomous execution. At the same time, the companies developing these systems have resisted removing safeguards for unrestricted military use, creating direct friction between commercial development norms and operational demands.
This is the system responding to itself in real time. Capability expands through integration, while the boundaries of its use are negotiated after deployment rather than before it.
The result is an environment in which measurable capacity grows continuously rather than episodically, and in which the gap between what is possible and what is publicly understood widens with each iteration.
The pattern, restated
This is the same structure that has appeared across every domain in this series. Image generation systems amplify what is statistically dominant. Social platforms amplify what generates the most engagement. Self-learning AI systems amplify the distribution they are trained on. Strategic systems amplify what can be measured.
The outcome is not driven by what systems are designed to do, but by what they are able to measure.
The system functioning as designed
Capability is measurable. Intent is not. AI expands the scope and speed of measurement without changing the underlying logic. It intensifies it. Each actor, operating rationally within the system, contributes to an environment of increasing capability and decreasing interpretability. Escalation emerges not from failure, but from adherence to the system’s constraints.
The system is not broken. It is functioning in accordance with its measurement structure. That is precisely why the outcome is so consistent.
The question is no longer whether systems drift toward these outcomes. It is whether any system that cannot measure intent can avoid them.
Next in the series
Part 5 — Default to Intensity
The one pattern behind all of it. Across image generation, social media, self-learning AI, and strategic systems, the same mechanism appears: intensity is more visible than normalcy, friction is more measurable than calm, capability is more legible than intent. Systems amplify what they can measure. The final piece synthesises the series into a single structural argument — not four observations about four domains, but one structural diagnosis about how modern systems amplify what they measure, and what that means for the world they are increasingly shaping.
Follow on X: The Quiet Cartographer
Sources and additional reading
CNA Report: Artificial intelligence: Emerging themes, issues, and narratives
Enhancing Tactical Level Targeting With Artificial Intelligence
Human, Machine, War: How the mind-tech nexus will win future wars
Is the ‘Ghost Murmur’ quantum device possible? Scientists are skeptical
New military software cuts targeting time from 15 minutes to 60 seconds