Summary
What happens when a computer is so smart that even its creators are afraid to let it loose?
That is the situation the world faced in 2026 with an artificial intelligence system called Mythos, a system connected to a military targeting program called Project Maven that has already helped direct bombs in real wars.
Together, these two systems are changing how wars are fought, and the world is deeply divided about what that means.
To understand Mythos, think of it as an extremely powerful computer brain made by a company called Anthropic.
This brain was so good at finding hidden security holes in computer systems that Anthropic itself decided not to release it to the public.
In simple terms, imagine a locksmith so skilled that they can pick any lock in the world, including locks with flaws that no one even knew existed, and they can do it overnight.
That is roughly what Mythos can do in the digital world.
It found security flaws in computer systems that engineers had missed for 27 years, and one bug that had survived 5 million tests over 16 years, all in a single night of processing.
Through a program called Project Glasswing, Anthropic shared these findings with a small group of major technology companies so they could fix the problems before anyone could exploit them.
Now imagine that a military organization wants to use this super-locksmith to break into enemy computer systems during a war.
That is essentially what the Pentagon wanted. And when Anthropic said no, or rather "only under strict rules," the Pentagon got angry and started cutting ties with the company.
Project Maven is a different but related story. It started back in 2017 as a program to help the U.S. military sort through enormous amounts of drone video footage.
Think of it like having a digital assistant who watches thousands of hours of security camera footage and tells you exactly where the suspicious activity is, so you don't have to watch it all yourself.
Over time, this assistant became much more powerful.
By 2024, a company called Palantir had built it into something that could pull together information from more than 150 different sources — drone cameras, satellites, ground sensors, and more — and help military commanders decide where to strike.
The clearest real-world example of these systems working together happened in early 2026, when the U.S. military launched a large operation against Iran called Operation Epic Fury.
In the first 24 hours, the military struck more than 1,000 targets.
That is an almost unimaginable number by historical standards.
Before AI, identifying that many valid targets and ordering strikes would have taken weeks. With Maven and Claude — the AI model made by Anthropic — it happened in a single day.
But then something terrible happened.
The AI-assisted targeting system was linked to a strike on a girls' school in a city called Minab in Iran.
Over 170 people, most of them children, were killed.
Questions immediately arose: did the machine make a mistake?
Did the humans check carefully enough?
Was it the AI's fault, or the humans who fed it old, incorrect information?
Investigators found that the likely cause was old and incorrect data that humans had entered into the system — not a failure of the AI itself.
But that answer made people even more worried, not less.
If the AI moves so fast that humans don't have time to double-check the information it relies on, then the speed that makes it useful in war also makes it dangerous.
This is the central dilemma of AI warfare in plain terms: the faster the machine, the less time humans have to think.
And when humans don't have time to think, mistakes that kill innocent people become more likely.
The argument between Anthropic and the Pentagon makes this even clearer.
Anthropic built Mythos with what it calls safety guardrails — basically rules built into the AI that prevent it from helping with things like spying on citizens or operating weapons completely without human control.
The Pentagon wanted those rules removed so the military could use the AI more freely. Anthropic refused.
Think of it like a car manufacturer that builds automatic braking into every car for safety, and a customer demanding that the brakes be disabled so nothing can ever slow the car down.
The car company says no, because it knows what can happen.
As a result, the Trump administration began moving to terminate the government's relationship with Anthropic entirely.
At the same time, Anthropic co-founder Jack Clark said in April 2026 that the company still cares deeply about national security and wants to keep talking.
It is a difficult situation with no easy answer.
Around the world, different countries are watching these developments with very different feelings.
China, Russia, and other major military powers are speeding up their own AI warfare programs.
At a military parade in Beijing, watched by Russia's Vladimir Putin and North Korea's Kim Jong-un, China showed off drones that can fly in coordinated swarms without human pilots, operating on AI instructions alone.
Pentagon officials acknowledged afterward that the U.S. drone program is falling behind China's.
This shows how one country's use of AI in war pushes others to match it — a classic arms race, but with computers instead of nuclear bombs.
Civil society organizations and international legal bodies are raising strong alarms.
Human Rights Watch published a report in 2025 arguing that autonomous weapons, machines that select their own targets without a human making the decision, violate basic human rights, including the right to life.
The United Nations has called for a global ban on fully autonomous weapons, and more than 120 countries have backed a new international treaty on this issue.
The challenge is that the countries most capable of building these weapons — the United States, China, Russia — are also the ones most reluctant to agree to limits.
The bottom line is this. Mythos is an AI model so powerful at breaking into computer systems that its own creators are keeping it under lock and key.
Project Maven is an AI targeting system that has already been used in real wars and may have contributed to the deaths of hundreds of innocent people.
Together, they represent both the extraordinary promise and the terrifying risk of AI in modern conflict.
The promise is fewer casualties and faster military decisions.
The risk is that machines move faster than human moral judgment, that errors are amplified at machine scale, and that a global arms race in autonomous weapons will make the world less safe for everyone.
The decisions made right now — about safety rules, about international treaties, about who controls these machines and how — will shape warfare and international stability for the next 50 years.
The world cannot afford to get them wrong.
