Project Maven Explained: How America Taught Its Military to Think with Machines – A Beginner's Guide to AI Warfare
Summary
Imagine you are a soldier responsible for watching thousands of hours of security camera footage every week, trying to find the moment when something dangerous happens.
The footage comes from drones flying over far-off deserts and cities.
There is so much of it — hundreds of thousands of hours every year — that no team of people, no matter how large, could possibly watch all of it in time to act on what they see.
Now imagine a computer program that could do that watching for you, flagging the important moments, identifying vehicles, weapons, and people, and handing you a summary in minutes instead of days.
That, in its simplest form, is how Project Maven began.
Project Maven is a United States military programme that started in April 2017. The Pentagon — the headquarters of the American military — asked a team of people to find a way to use artificial intelligence, or AI, to help soldiers make sense of enormous amounts of information gathered by surveillance drones.
The man in charge of this team was a Marine Corps colonel named Drew Cukor, who believed that America's military advantage over rivals like China was slowly eroding because U.S. troops were drowning in data they could not process fast enough.
At first, the job was simple.
The programme was designed to recognize 38 different types of objects in drone video footage — things like trucks, buildings, or groups of people.
Think of it like teaching a child to recognize shapes, except the shapes are military targets and the child is a very powerful computer.
The technology that made this possible is called machine learning, which means the computer learns to recognize patterns by studying millions of examples until it can do the task on its own.
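To make the idea concrete, here is a minimal sketch in Python, using the PyTorch library, of the kind of supervised image classifier the early programme is described as using: a network pre-trained on ordinary photographs is retrained to sort video frames into the 38 categories. The folder layout, training settings, and everything else below are illustrative assumptions, not Maven's actual code.

```python
# A minimal sketch of supervised image classification, the technique
# described above. All names and settings here are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 38  # the 38 object categories mentioned above

# Standard preprocessing: resize each frame and convert it to a tensor.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder of human-labeled frames, one subfolder per category
# (trucks/, buildings/, ...). The labels are the "examples" the model studies.
train_data = datasets.ImageFolder("frames/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pre-trained on everyday photos, then replace its
# final layer so it outputs a score for each of the 38 categories.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# "Learning from examples": each pass nudges the model's parameters so
# its guesses drift closer to the human-supplied labels.
model.train()
for frames, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
```

The point of the sketch is the division of labor: humans label example frames once, and the model then applies those learned patterns to footage no human has time to watch.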
To build this technology quickly, the Pentagon turned to Google, one of the world's most advanced technology companies.
Google had already built sophisticated AI tools for other purposes — like recognizing faces in photos and translating languages — and adapting those tools for drone imagery seemed logical.
However, when Google employees found out that their work was being used by the military, more than 4,000 of them signed a petition saying: "We believe that Google should not be in the business of war."
Nearly a dozen employees quit their jobs in protest. In 2018, under this pressure, Google announced it would not renew its military contract when it expired.
This was a big moment in the history of technology and warfare.
For a brief period, it seemed as if Silicon Valley — the part of America where most of the big technology companies are based — might refuse to be part of military AI development.
That refusal, however, did not last long. Other technology companies stepped in to fill the gap.
The most important of these was Palantir Technologies, a company founded specifically to help governments and intelligence agencies manage large amounts of data.
Palantir did not share Google's hesitation.
It embraced the military mission and began building what would eventually become the Maven Smart System — a dramatically more powerful version of the original programme.
By 2024 and 2025, Maven had grown from a simple object-recognition tool into something far more complex. Think of the original Maven as a smoke alarm — it could detect one specific kind of danger and alert you.
Think of the Maven Smart System as a full home security network that monitors every door and window simultaneously, checks the weather, tracks the movements of everyone on the street, reads the local news, and gives you a complete risk assessment every few seconds.
The new system pulls in information from more than 150 different intelligence sources at the same time — satellite images, intercepted phone calls, drone footage, and much more — and uses all of it to produce targeting recommendations for military commanders.
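No public documentation spells out how the Maven Smart System actually fuses those feeds, but the underlying idea of multi-source corroboration can be sketched briefly. In the hypothetical Python below, reports from different feeds are grouped by rough location, and candidates seen by more independent sources rank higher; the Report structure, the grid size, and the scoring rule are all invented for illustration, not the real interface.

```python
# A conceptual sketch of multi-source fusion: group reports by location,
# then rank candidates by how many independent feeds corroborate them.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Report:
    source: str        # e.g. "satellite", "drone_video", "sigint"
    lat: float
    lon: float
    confidence: float  # 0.0 to 1.0, how sure this feed is

def fuse(reports: list[Report]) -> list[dict]:
    # Bucket reports into roughly 1 km grid cells by rounding coordinates.
    grid = defaultdict(list)
    for r in reports:
        grid[(round(r.lat, 2), round(r.lon, 2))].append(r)

    candidates = []
    for cell, cell_reports in grid.items():
        sources = {r.source for r in cell_reports}
        # Corroboration score: more independent sources and higher
        # confidence push a candidate up the list.
        score = len(sources) * max(r.confidence for r in cell_reports)
        candidates.append({"cell": cell, "sources": sorted(sources), "score": score})

    # Highest-scoring candidates first; a human commander still decides.
    return sorted(candidates, key=lambda c: c["score"], reverse=True)

if __name__ == "__main__":
    feed = [
        Report("satellite", 34.1312, 47.5523, 0.7),
        Report("drone_video", 34.1309, 47.5519, 0.9),
        Report("sigint", 34.1315, 47.5527, 0.6),
        Report("drone_video", 35.0010, 46.2200, 0.4),
    ]
    for candidate in fuse(feed):
        print(candidate)
```

Real systems face far harder versions of every step here, including conflicting coordinates, time alignment, and deliberate deception, which is why the quality of the fusion matters as much as the quantity of sources.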
In early 2026, during U.S. military operations against Iran, the Maven Smart System was used to generate more than 1,000 potential strike options within the first 24 hours of the conflict.
Reports indicate that around 900 targets were struck within a 12-hour window.
Commanders reportedly became so dependent on Maven's outputs that planning operations without it became almost unthinkable.
The system had done what its creators always intended: it had become indispensable.
The Pentagon's March 2026 decision to make Maven a permanent, official program of the military — known as a "program of record" — confirmed that AI warfare was no longer an experiment. It was now a permanent feature of how America fights.
The contracts involved in Maven's development give a sense of just how important it has become.
Palantir received an initial Pentagon contract in May 2024 with a ceiling of $480 million.
By May 2025, that ceiling had been raised to $1.3 billion.
In July 2025, the U.S. Army signed a framework agreement with Palantir potentially worth $10 billion over a decade.
To put that in perspective: the agreement averages roughly $1 billion a year, and its $10 billion total is roughly equal to the annual defense budgets of many mid-sized countries.
Palantir's market value has climbed toward $360 billion, largely on the strength of its military contracts.
Not everyone is comfortable with how quickly this has happened. Scientists and ethicists worry that AI systems can make mistakes that human analysts would avoid.
A senior Palantir employee told The Times in January 2026 that some situations were so complex they "tested the limits of the software." When software is helping to decide where bombs fall, those limits matter enormously.
Claude, the AI model built by Anthropic and now integrated into Maven's operations, comes from a company whose own chief executive has publicly warned about the dangers of using AI in military decision-making — even as his company accepted a $200 million government contract to do exactly that.
This contradiction between the ethical concerns of the people building these tools and the financial rewards of deploying them lies at the heart of the most difficult questions raised by Project Maven.
Internationally, governments are trying — and largely failing — to regulate AI warfare before it gets further out of control. In November 2025, 156 nations supported a United Nations resolution calling for a legally binding treaty to govern autonomous weapons.
But Russia, which voted against the resolution alongside North Korea and Belarus, is deploying its own AI targeting systems in its war in Ukraine, reportedly conducting around 300 unmanned AI-assisted strikes per day.
China is investing heavily in military AI with a goal of matching the United States by 2035. The race is accelerating faster than the rules can be written.
Journalist Katrina Manson, whose book Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare was published in 2026, has spent years investigating this story and believes that the world is at a pivotal moment.
The technology exists, it is being used, and the decisions being made right now — by the Pentagon, by technology companies, by allied governments, and by international institutions — will determine whether AI in warfare becomes a tool that saves lives or one that makes atrocities faster and harder to prevent.
The answer to that question is not yet written. But the algorithm is already running.