Beginner's 101 Guide: Why the World Can't Agree on Rules for AI

Summary

Countries around the world keep meeting to set rules for artificial intelligence, but they make little progress.

Why? Because nobody agrees on what AI really is or what it will do. Without a shared understanding, making rules together becomes nearly impossible.

Think of it like this: imagine a group trying to write road safety laws, but some members think they are talking about bicycles, others about cars, and still others about rocket ships.

They end up with rules that don’t fit together—and that’s exactly what’s happening with AI today.

What’s the Problem With Defining AI?

When someone says "artificial intelligence," different people have very different images.

A student might think of a chatbot like ChatGPT. A factory manager might imagine a robot that sorts packages.

A scientist could picture a future computer smarter than any human.

All of these can technically be called AI, but they work very differently and require different rules.

Even international organizations cannot agree.

The Organisation for Economic Co-operation and Development (OECD) has its own formal definition of AI, which the European Union largely copied when it created the EU AI Act, the world's first major AI law. But the EU's version and the OECD's version have small but important differences in wording. Other countries, like the United States and China, define AI very differently in their rules and policies.

This means a computer system might be considered AI in one country but not in another—making international agreements very difficult to draft clearly.

The Speed Issue: Will AI Change Things Quickly or Slowly?

There's an even bigger disagreement about the future. Even if everyone agreed on what AI is today, they would still sharply disagree about what it will do.

Some respected economists, like Nobel Prize winner Daron Acemoglu, believe AI will be helpful but won't change the world right away, perhaps adding only about 0.5% to economic output over ten years.

That’s useful, but not revolutionary.
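To see why that figure reads as modest, here is a quick back-of-the-envelope sketch. One assumption is mine, for illustration only: that the 0.5% is a cumulative boost to total output, spread evenly across the decade.

```python
# Back-of-the-envelope check on "about 0.5% over ten years".
# Assumption (illustrative, not Acemoglu's exact framing): the 0.5%
# is a cumulative boost to total output, spread over a decade.

cumulative_boost = 0.005   # 0.5% total over ten years
years = 10

# Annualized contribution of that boost (compound form)
annual_rate = (1 + cumulative_boost) ** (1 / years) - 1
print(f"AI's annual contribution: {annual_rate:.4%}")  # ~0.0499% per year

# For scale: a typical advanced economy grows roughly 2% per year
typical_growth = 0.02
print(f"Share of typical annual growth: {annual_rate / typical_growth:.1%}")  # ~2.5%
```

On this reading, AI would add only a few hundredths of a percentage point of growth per year, a small fraction of normal economic growth, which is why this camp calls it useful but not revolutionary.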

Others, like leaders of major AI companies such as Anthropic's Dario Amodei, believe AI could change almost everything very quickly.

They think AI might reach a level called artificial general intelligence (AGI)—where it can do any intellectual task a human can do—within just a few years, potentially transforming healthcare, science, warfare, and the economy all at once.

These two groups aren’t just disagreeing over percentages. They’re describing two very different worlds. And if governments believe they are in different worlds, they will make very different choices.

A government that believes AGI is coming next year will act very differently than one that thinks AI is just a helpful tool spreading gradually over thirty years.

How Countries Are Responding Differently

These different beliefs lead to very different national strategies.

The United States hosts the world's biggest AI companies—OpenAI, Google DeepMind, Anthropic, and Microsoft.

Many people in the U.S. government believe powerful AI could arrive quickly and want their country to control it when it does.

So instead of creating tough international rules that could slow down American AI companies, the current U.S. administration, under President Donald Trump, has actively removed regulations and protected companies from oversight.

China’s view is somewhat different. It sees AI less as an immediate revolutionary change and more as a powerful general-purpose technology—like electricity or the internet—that will spread through the economy over time.

China's strategy focuses on widely spreading AI via open-source models and government-led programs. China also builds its own AI systems, like DeepSeek, to avoid relying on American technology.

Most other countries are in a tougher spot.

They lack powerful AI companies and depend on American or Chinese technology. Some smaller and wealthier nations, like the United Arab Emirates, have decided to partner closely with the U.S., essentially making deals for guaranteed access to American AI.

Others, like India, are investing slowly in building their own AI capabilities, believing they have enough time to develop independent AI industries without chasing the frontier right now.

India's AI Impact Summit in February 2026 in New Delhi focused on making AI useful for everyday people and developing countries, rather than worrying about superintelligence.

Dr. Antonio Bhardwaj, a global AI expert, summarized this situation simply: "Every country is playing a different game because every country thinks they are on a different board. Until they share a map, they cannot play together."

What Happened at the Major AI Summits?

Several big international meetings on AI have been held recently.

The first was at Bletchley Park in the UK in November 2023, where 28 countries, including the US and China, signed a statement recognizing that AI could pose serious risks and that nations needed to communicate.

This was historic: it was the first time the US and China had signed any joint AI document.

The Seoul AI Summit in May 2024 went further. Sixteen major tech companies—including both American firms and China's Zhipu.ai—signed safety commitments.

Twenty-seven countries agreed to develop shared risk thresholds for dangerous AI.

Ten countries agreed to connect their AI safety research institutes. However, all these commitments were voluntary; no country could be forced to follow them.

By 2025, when France hosted its AI Action Summit, the tone had shifted. France focused more on AI's risks to jobs and culture than on the existential dangers discussed at earlier summits.

This wasn’t because France misunderstood the technology; it genuinely saw the risks differently.

These differing agendas made it very tough to create a common set of priorities because each summit reflected a different mental picture of AI’s nature and future.

Why Major Powers Resist Giving Up Control

Another simple reason why international AI governance keeps failing is that the US and China control about 90% of the world's AI computing power and most top AI models.

These countries have little motivation to hand over control of their most advanced technology to an international body. The US prefers to regulate AI on its own terms—or not at all.

China objects to external oversight that would require foreign inspectors to examine its AI systems.

Meanwhile, private companies like OpenAI, Google, Microsoft, Nvidia, and others spend huge amounts lobbying governments and shaping AI rules.

In Brussels alone, lobbying by digital industry groups grew by over 50% in four years, reaching $175 million in 2025.

Governments often lack the technical expertise and resources to evaluate powerful AI on their own. They rely on the companies to explain what their systems can do.

This means those who know the most about AI are also the ones with the biggest financial incentives to keep it lightly regulated.

This creates a serious challenge for any governing efforts.

What Can Help?

There are no simple answers, but experts believe some steps could make a real difference.

First, the world needs an independent, shared technical body—similar to the Intergovernmental Panel on Climate Change—that assesses what AI systems are capable of and how fast the technology is developing.

This would be like a scientific advisory panel that helps countries speak the same language of facts when negotiating AI rules.

Second, countries could start small by forming alliances with nations that share similar views and values about AI—building habits of cooperation and shared standards that can grow over time.

Third, governments must significantly boost their own technical skills. Most agencies can’t evaluate advanced AI independently and depend on companies’ reports. Fixing this requires money, expertise, and political will.

The Risks Are Too Great to Delay

The disagreements might seem like technical debates among smart people, but they have real-world consequences.

AI already influences jobs, healthcare, education, and military planning.

The International Monetary Fund estimates AI could impact about 40% of all jobs globally.

If powerful AI systems emerge in the next few years without shared safety rules and international oversight, the impact on ordinary people—especially in poorer countries—could be severe.

The world has successfully created rules for nuclear weapons, chemical weapons, and climate change, despite tough negotiations among nations with different interests.

AI is just as important. But the first step—deciding exactly what we’re talking about—still hasn’t been taken.

Until it is, the paradox will continue: every country calls for cooperation on AI, yet very little of it actually happens.
