Summary
What happens when a computer is so smart that even its creators are afraid to let it loose?
That is the situation the world found itself in during 2026 with an artificial intelligence system called Mythos, a system connected to a military targeting program called Project Maven that has already helped direct real bombs in real wars.
Together, these two systems are changing how wars are fought, and the world is deeply divided about what that means.
To understand Mythos, think of it as an extremely powerful computer brain made by a company called Anthropic.
This brain was so good at finding hidden security holes in computer systems that Anthropic itself decided not to release it to the public.
In simple terms, imagine a locksmith so skilled that they can pick any lock in the world — including locks that no one even knew were broken — and they can do it overnight.
That is roughly what Mythos can do in the digital world.
In a single night of processing, it found security flaws in computer systems that engineers had missed for 27 years, along with another bug that had survived 5 million tests over 16 years.
Anthropic shared these findings with a small group of major technology companies so the companies could fix the problems before anyone could exploit them.
This protective effort is called Project Glasswing.
Now imagine that a military organization wants to use this super-locksmith to break into enemy computer systems during a war.
That is essentially what the U.S. Pentagon wanted to do.
And when Anthropic said no — or rather said "only under strict rules" — the Pentagon got angry and started cutting ties with the company.
The conflict boiled down to a fundamental disagreement: the Pentagon wanted the safety guardrails removed so the military could use the AI freely during combat operations.
Anthropic refused, saying those rules exist precisely because the stakes are so high.
Project Maven is a different but related story. It started back in 2017 as a program to help the U.S. military sort through enormous amounts of drone video footage.
Think of it like having a digital assistant who watches thousands of hours of security camera footage and tells you exactly where the suspicious activity is, so you do not have to watch it all yourself.
Over time, this assistant became far more powerful.
By 2024, a company called Palantir had built it into something that could pull together information from more than 150 different intelligence sources — drone cameras, satellites, ground sensors, intercepted communications, and more — and help military commanders decide where to strike.
The system could now do work that once required hundreds of human analysts working around the clock.
The clearest real-world example of these systems working together happened in early 2026, when the U.S. military launched a large-scale operation against Iran called Operation Epic Fury.
In the first 24 hours, the military struck more than 1,000 targets. That is an almost unimaginable number by historical standards.
Before AI, identifying that many valid targets and ordering strikes would have taken weeks of painstaking intelligence work.
With the Maven Smart System and AI-assisted analysis, it happened in a single day.
Some analysts compared it to the difference between hand-delivering letters across a city versus sending emails to the entire country simultaneously.
But then something terrible happened. The AI-assisted targeting system was linked to a strike on a girls' school in a city called Minab in Iran.
Over 170 people, most of them children, were killed. Questions immediately arose: did the machine make a mistake?
Did the humans check carefully enough?
Was it the AI's fault, or the humans who fed it old, incorrect information?
Investigators found that the likely cause was old and inaccurate data that humans had entered into the system — not a failure of the AI's own logic.
But that answer made people even more worried, not less.
If the AI moves so fast that humans do not have time to double-check the information it relies on, then the very speed that makes it useful in war also makes it dangerous.
Think of it as a fast car with faulty brakes: the faster it goes, the worse the crash.
This is the central dilemma of AI warfare in plain terms: the faster the machine, the less time humans have to think.
And when humans do not have time to think, mistakes that kill innocent people become more likely.
The argument between Anthropic and the Pentagon makes this even clearer.
Anthropic built Mythos with safety guardrails: rules written into the AI itself that prevent it from helping with things like spying on citizens or operating weapons entirely without human control.
When the Pentagon pressed to have those rules stripped out so the military could use the AI more freely, Anthropic refused.
Think of it like a car manufacturer who builds vehicles with automatic braking systems, and a customer demanding those brakes be removed so the car can go faster.
The manufacturer says no, because they know what can happen.
As a result, the Trump administration began moving to terminate the government's relationship with Anthropic entirely — even as Anthropic co-founder Jack Clark said in April 2026 that the company still cares deeply about national security and wants to keep talking.
Around the world, different countries are watching these developments with very different emotions.
China, Russia, and other major military powers are speeding up their own AI warfare programs.
At a military parade in Beijing, China showed off drones that can fly in coordinated swarms without human pilots, operating through AI instructions alone — watched in person by both Vladimir Putin of Russia and Kim Jong-un of North Korea.
Pentagon officials acknowledged afterward that the U.S. drone program was falling behind China's.
This dynamic shows how one country's use of AI in war pushes all others to match it — a classic arms race, but with algorithms instead of nuclear warheads.
In the Middle East, the Minab school tragedy has inflamed public sentiment across Arab nations.
Coverage in Al Jazeera and Arab News framed Project Maven not as a precision tool but as a mechanism of indiscriminate killing, noting that the same system celebrated by Pentagon officials as life-saving had contributed to the deaths of over 170 children.
The Financial Times and The Economist both noted the irony that the most expensive AI targeting system in human history produced one of the most politically damaging military errors of the decade.
Civil society organizations and international legal bodies are raising strong alarms.
Human Rights Watch published a report in 2025 arguing that autonomous weapons, machines that choose their own targets with no human making the decision, violate basic human rights, including the right to life.
The United Nations has called for a global ban on fully autonomous weapons, and more than 120 countries have backed a new international treaty on this issue.
The challenge is that the countries most capable of building these weapons — the United States, China, Russia — are also the most reluctant to agree to limits.
The bottom line is this.
Mythos is an AI model so powerful at breaking into computer systems that its own creators are keeping it under lock and key.
Project Maven is an AI targeting system that has already been used in real wars and may have contributed to the deaths of hundreds of innocent people.
Together, they represent both the extraordinary promise and the terrifying risk of AI in modern conflict.
The promise is faster, more precise military decisions that could reduce total casualties in war.
The risk is that machines move faster than human moral judgment, that errors are amplified at machine scale, and that a global arms race in autonomous weapons will make the world less safe for everyone.
The decisions made right now — about safety rules, about international treaties, about who controls these machines and how — will shape warfare and international stability for the next 50 years.
History does not offer second chances in domains where errors kill children.
The world cannot afford to get these decisions wrong.
