Summary
The world is building artificial intelligence faster than it is building the power plants and cables needed to run it. This simple fact sits behind many headlines you see about new AI models, giant data centers, and rising electricity prices. To understand where AI will get its power, it helps to look at how much energy it already uses, why this is a problem, and what new ideas are being tested to fix it.
Right now, data centers in the United States use about 4 percent of all the electricity in the country, roughly 183 terawatt-hours in 2024, which is as much as a large country such as Pakistan uses in a year. A big AI-focused data center already uses as much power as 100,000 homes, and the largest ones being planned could use 20 times more.
By 2028, data centers in the US alone could consume between 6.7 and 12 percent of all US electricity. Globally, data centers account for just over 1 percent of electricity use today, a figure expected to more than double by 2030, and the share of data-center electricity used specifically for AI could rise from roughly 5 to 15 percent today to 35 to 50 percent by 2030.
To make this more concrete, imagine a medium-sized city of 100,000 homes. Now imagine a single AI data center that needs the same amount of electricity as that entire city. Now imagine building 20 such centers. That is roughly the scale of what is happening in the AI world today.
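As a rough, illustrative check of that scale, the arithmetic looks like this. The average household figure of about 1.2 kilowatts is an assumption (roughly the US average of around 10,500 kilowatt-hours per year), not a number taken from the forecasts above:

```python
# Rough back-of-envelope: how much power is "100,000 homes"?
# Assumption: an average US home draws about 1.2 kW on average
# (roughly 10,500 kWh per year); this is an illustrative figure,
# not one taken from the forecasts quoted in this article.

AVG_HOME_KW = 1.2            # assumed average household draw, in kW
homes = 100_000

city_mw = AVG_HOME_KW * homes / 1_000        # convert kW to MW
print(f"A city of {homes:,} homes: ~{city_mw:.0f} MW")   # ~120 MW

# The largest planned AI campuses are described as roughly 20x that scale.
campus_gw = 20 * city_mw / 1_000             # convert MW to GW
print(f"A 20x-larger AI campus: ~{campus_gw:.1f} GW")    # ~2.4 GW
```

Under those assumptions, a single campus of that size sits in the same range as the output of a large conventional power plant.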
Electricity demand from data centers is growing so fast that it is now reshaping how power companies plan the grid. One forecast says that by 2030, US data center consumption could grow by 133 % to about 426 terawatt-hours, more than double 2024 levels.
Another forecast says that AI data center power demand in the US could grow more than thirtyfold by 2035, reaching 123 gigawatts of load. Utilities are now updating their plans around this surge, and analysts say that power is becoming the main limit on how fast AI can grow, not just the availability of chips or software.
This growing demand affects ordinary people. In some areas near data centers, electricity prices for residents have risen far faster than the national average over the last five years, partly because local grids must be upgraded or stretched to serve the new load. This does not mean that AI is the only reason prices rise, but it is now one of the drivers.
Faced with this pressure, engineers and companies are asking a simple but hard question: where will AI get its power in the future?
The answers fall into two big groups. One group tries to find new, smarter ways to supply energy to AI. The other group tries to make AI itself much more efficient so that it needs less energy in the first place.
On the supply side, companies are looking at three main directions. The first is to improve what can be done on land: better grid planning, on-site power plants, and batteries. The second is to go into the ocean and use seawater to cool computers and offshore renewables to power them. The third is the most futuristic: sending AI into space, where the sun shines almost all the time and cooling is easier.
Let’s start on Earth. In the past, data centers were mostly passive customers of the grid. They just plugged in and paid their bills. Now, because AI uses so much electricity, major operators are becoming partners with utilities.
They co-invest in new power plants, fund upgrades to local transmission lines, and install on-site generation such as gas turbines, solar panels, or fuel cells. Some large campuses are being designed as “energy plus compute” sites where a data center is physically integrated with its own power station and battery systems, rather than simply drawing power from a distant plant.
For example, analysts predict that US data center grid demand will rise from about 75.8 gigawatts in 2026 to 108 gigawatts in 2028 and 134 gigawatts by 2030.
To cope with this, operators are adding local power sources and offering “demand response,” which means they can turn down some AI workloads when the grid is under stress and turn them back up when power is abundant. This is a big shift. In simple terms, data centers are starting to act like giant flexible factories that can help balance the grid, not just drain it.
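To make the "flexible factory" idea concrete, here is a toy sketch of a demand-response decision. It is a hypothetical illustration, not any operator's real control system; the job names, the grid stress signal, and the 0.8 threshold are all made up:

```python
# Toy demand-response sketch: defer flexible AI workloads when the grid
# is stressed. The grid_stress values and the 0.8 threshold are
# illustrative assumptions, not real utility signals.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    flexible: bool   # can this workload be paused (e.g. batch training)?

def schedule(jobs: list[Job], grid_stress: float, threshold: float = 0.8) -> list[str]:
    """Return the names of jobs allowed to run right now."""
    if grid_stress < threshold:
        return [j.name for j in jobs]                 # plenty of power: run everything
    return [j.name for j in jobs if not j.flexible]   # stressed grid: only inflexible jobs

jobs = [Job("chatbot-serving", flexible=False),
        Job("nightly-model-training", flexible=True)]

print(schedule(jobs, grid_stress=0.3))   # ['chatbot-serving', 'nightly-model-training']
print(schedule(jobs, grid_stress=0.95))  # ['chatbot-serving']
```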
Yet even with these changes, there are physical limits. Many grids were built decades ago for homes, offices, and old-style factories, not for buildings full of AI chips that together can draw hundreds of megawatts. That is one reason companies are experimenting with more unusual ideas like underwater and space-based data centers.
Underwater data centers sound like science fiction, but several real systems already exist. The basic idea is simple: it is cold underwater, and water carries heat away better than air. In a normal data center, 40 to 60 percent of the energy can be used just for cooling the servers. With deep-sea cooling, the mechanical cooling load can fall below 10 percent of total facility energy use, because cold seawater does most of the work. That can cut both electricity use and carbon emissions.
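Engineers often express this cooling overhead as power usage effectiveness (PUE): total facility energy divided by the energy that actually reaches the computing equipment. The short calculation below simply restates the percentages above in those terms, under the simplifying assumption that everything not spent on cooling goes to the servers:

```python
# Restating the cooling shares above as power usage effectiveness (PUE):
# PUE = total facility energy / energy delivered to the IT equipment.
# Simplifying assumption: everything that is not cooling goes to IT.

def pue_from_cooling_share(cooling_share: float) -> float:
    it_share = 1.0 - cooling_share
    return 1.0 / it_share

print(round(pue_from_cooling_share(0.50), 2))  # 2.0  (cooling eats half the energy)
print(round(pue_from_cooling_share(0.40), 2))  # 1.67
print(round(pue_from_cooling_share(0.10), 2))  # 1.11 (deep-sea cooling regime)
```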
Microsoft tested this idea in a project called Natick, placing sealed server containers on the seafloor near the Orkney Islands. The environment was controlled, unstaffed, and insulated from temperature swings, and the company later reported that the underwater servers were eight times more reliable than those in a similar land-based data center. The facility also used 100 % local renewable energy from wind and tidal sources.
China has moved beyond experiments into early commercial deployment.
A Shanghai project known as Hailanyun uses pipes to pump seawater through radiators on the back of server racks, absorbing heat and carrying it away. It uses at least 30 percent less electricity than similar onshore centers because of this natural cooling. Another firm claims that its underwater center, powered by offshore wind and cooled by seawater, can improve overall energy efficiency by up to 60 percent compared with traditional approaches.
An example can help here. Picture a large office building in a hot city using many air conditioners to stay cool. The electric bill is huge. Now imagine moving that building into a cool cave where the rock walls keep the temperature down. Suddenly, you need far fewer air conditioners. Underwater centers are like putting the “computer building” into a cool cave under the sea, except that the cave is water and the air conditioners are pipes and heat exchangers.
Of course, there are concerns. Scientists worry about how warming water from these systems might affect fish and other marine life if deployment grows large. Engineers must also deal with corrosion, pressure, and maintenance challenges. Still, these projects show that moving some compute offshore can cut energy use and tap into renewables like wind, wave, and tidal power without placing more strain on land-based grids.
Space-based data centers push this idea even further. In orbit, there is no night in the usual sense if you choose the right path around Earth. Satellites in certain orbits, such as dawn-to-dusk sun-synchronous paths, can keep their solar panels in almost constant sunlight. There are no clouds, no atmosphere, and the sunlight is stronger and steadier. Studies say that orbital solar arrays could receive unfiltered sunlight 24/7 and reach an effective energy efficiency of 90 to 95 percent, with energy costs projected at about half a cent to one cent per kilowatt-hour at scale.
A single orbital data center with one square kilometer of solar panels could generate around 1.4 gigawatts of power continuously, roughly the output of a large nuclear power plant, but with no fuel and no carbon emissions after launch.
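That figure lines up with a simple physical estimate: the solar constant, about 1,361 watts per square meter above the atmosphere, multiplied by a square kilometer of area. Note that this counts the sunlight hitting the panels; the usable electrical output would be lower once panel efficiency is taken into account, which the quoted figure does not spell out.

```python
# Back-of-envelope: sunlight falling on 1 km^2 of panels in orbit.
# The solar constant (~1361 W/m^2) is a standard physical value;
# converting it to delivered electrical power would also require a
# panel efficiency, which the quoted 1.4 GW figure does not specify.

SOLAR_CONSTANT_W_PER_M2 = 1361
area_m2 = 1_000 * 1_000                      # 1 km^2 in square meters

incident_gw = SOLAR_CONSTANT_W_PER_M2 * area_m2 / 1e9
print(f"Incident sunlight: ~{incident_gw:.2f} GW")   # ~1.36 GW, close to the quoted 1.4 GW
```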
A startup called Starcloud, for instance, has proposed orbital systems where huge solar arrays feed power directly to computing modules in space. Google has also described a project called Suncatcher that explores constellations of solar-powered satellites equipped with AI accelerators and connected with free-space optical links, essentially “fiber optics through the vacuum.”
Cooling is easier in some ways up there too. Space is very cold, and heat can be shed into the vacuum using radiators. Engineers see this as a way to get rid of the massive energy spent on air conditioning on Earth. One scientist compared orbital data centers to “solar-powered, self-ventilating server farms circling the planet.”
There are big hurdles. Space is a harsh environment, with radiation, micrometeoroids, and debris. Launching large amounts of hardware is still expensive and creates its own emissions, although some analyses suggest that, if run for years, orbital centers could “pay back” their launch carbon within two to three years of cleaner operation.
There are also tricky questions about how to get processed data to and from Earth quickly, how to share orbital slots fairly between nations, and what happens if something goes wrong in orbit. Still, major demonstrations are expected around 2027, and some analysts think an orbital data center market of $15–20 billion could emerge by 2030.
Back on the ground, we should not forget simpler but powerful steps: making AI itself use less energy. Right now, about 60 percent of data center electricity goes to servers, about 5 percent to storage, and up to 5 percent to networking gear.
In AI-heavy facilities, most of that server share is going to dense racks of GPUs or other accelerators running large models. Because these models often have hundreds of billions of parameters and process huge amounts of data, they use astonishing amounts of power.
One way to cut this is to avoid sending all tasks to giant models in distant clouds. Instead, smaller, specialized models can run on devices closer to where data is generated. This is called edge computing. For example, a factory robot can run a compact vision model on a local chip instead of sending video to a large data center for analysis. A smartphone can run a small speech model to understand commands without contacting a remote server. This cuts network traffic and can save energy overall.
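A hypothetical sketch of that routing decision is shown below. The rule, the model-size threshold, and the task names are illustrative assumptions, not any vendor's actual policy:

```python
# Toy routing rule for edge computing: run small, latency-sensitive
# tasks on the local device and send only heavy tasks to the cloud.
# The categories and the 3-billion-parameter cutoff are assumptions.

def route(task: str, model_size_b_params: float, needs_low_latency: bool) -> str:
    """Decide where a task should run: on the edge device or in a cloud data center."""
    if needs_low_latency or model_size_b_params <= 3:   # small model fits on-device
        return f"{task}: run on edge device"
    return f"{task}: send to cloud data center"

print(route("voice command", model_size_b_params=0.5, needs_low_latency=True))
print(route("document summarization", model_size_b_params=70, needs_low_latency=False))
```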
Another idea is to change how models are represented inside computers. Many AI models use high-precision numbers to store their weights and activations. Companies like Nvidia have created lower-precision formats, such as special 4-bit formats, that allow models to run using much less memory and computation while keeping almost the same accuracy.
This approach, called quantization, can improve energy efficiency per token or per operation by large factors, sometimes up to dozens of times in specific cases, while making it easier to run models on cheaper hardware.
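As a minimal sketch of what quantization does, assuming simple symmetric 4-bit integer rounding rather than Nvidia's actual low-precision formats:

```python
# Minimal symmetric 4-bit integer quantization sketch with NumPy.
# Real low-precision formats (e.g. 4-bit floating point) are more
# sophisticated; this only illustrates the basic idea of storing
# weights with fewer bits and dequantizing them at compute time.

import numpy as np

def quantize_int4(weights: np.ndarray):
    scale = np.abs(weights).max() / 7.0               # map the largest weight to the int4 range
    q = np.clip(np.round(weights / scale), -7, 7)     # round to integers in [-7, 7]
    return q.astype(np.int8), scale                   # stored as int8 here; NumPy has no 4-bit type

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)    # stand-in for a model weight matrix
q, scale = quantize_int4(weights)
restored = dequantize(q, scale)

print("max absolute error:", np.abs(weights - restored).max())
print("bits per weight: 4 instead of 32")             # 8x smaller storage in principle
```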
There is also interest in neuromorphic computing, which tries to copy the way the human brain works. The brain runs on about 20 watts of power—less than a bright light bulb—yet it can recognize faces, understand speech, and learn from experience.
Neuromorphic chips place memory and processing units close together, reducing the energy wasted moving data back and forth, something that standard computers do constantly. Early prototypes suggest these chips can do certain tasks using much less energy than traditional AI hardware, making them candidates for ultra-low-power edge devices in the future.
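The usual textbook building block for such chips is the leaky integrate-and-fire neuron, which only "spikes," and therefore only triggers downstream work, when its accumulated input crosses a threshold. The tiny simulation below uses made-up constants purely to illustrate that event-driven, mostly silent style of computation:

```python
# Tiny leaky integrate-and-fire (LIF) neuron simulation, the textbook
# building block behind many neuromorphic chips. All constants are
# illustrative; real hardware implements this in circuits, not Python.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x      # integrate the input, with leakage
        if potential >= threshold:            # fire only when the threshold is crossed...
            spikes.append(1)
            potential = 0.0                   # ...then reset
        else:
            spikes.append(0)                  # otherwise stay silent (no downstream work)
    return spikes

print(simulate_lif([0.2, 0.2, 0.2, 0.9, 0.1, 0.0, 1.2]))
# -> [0, 0, 0, 1, 0, 0, 1]  (sparse spikes = sparse, low-energy activity)
```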
These efficiency improvements matter because they change the basic question from “Where can we find more electricity?” to “How can we make each unit of electricity go further?”
Both questions are important. On their own, efficiency gains will likely not be enough to cover all AI growth, especially for very large training runs. But combined with new power sources—on-site plants, underwater cooling, space solar—they can slow the growth in energy demand and make it easier for grids to keep up.
As AI expands, big companies and governments are recognizing that energy strategy and AI strategy are now deeply linked. Analysts say that global AI spending could reach $1.5 trillion in 2025 and surpass $2 trillion in 2026, driving demand for high-performance chips and new data centers.
BloombergNEF forecasts that data-center power demand could reach 106 gigawatts globally by 2035, a 36 % jump from earlier expectations once AI growth is fully factored in. Deloitte warns that by 2035, US power demand from AI data centers alone could grow more than thirtyfold.
This is why power is now described as “where the rubber meets the road” for AI. Behind every flashy AI model announcement lies a more mundane question: is there enough electricity, at the right time, in the right place, at the right price?
The answer will depend on many things: how quickly grids are upgraded, how widely renewables and storage are adopted, whether underwater and orbital data centers prove viable, and how far AI hardware and software can be pushed toward greater efficiency.
A useful way to picture the future is to imagine three layers working together. At the bottom is the energy layer: grids, solar farms, wind turbines, nuclear plants, hydro dams, batteries, and possibly space-based power. In the middle is the infrastructure layer: data centers on land, underwater capsules, satellites in orbit, and the network cables and optical links that connect them. At the top is the AI layer: models of different sizes and types, some huge and centralized, others tiny and embedded in devices at the edge.
Where AI gets its power will depend on how well these three layers are coordinated. A smart system will send each kind of work to the place where it can be done most efficiently. Massive model training might happen in orbital centers with abundant solar energy.
Real-time video analysis for a city’s traffic system might be handled by an underwater hub near the coast, cooled by the ocean and powered by offshore wind. Everyday tasks like voice commands or text suggestions could run on tiny chips inside phones or home devices, using only a sliver of power.
In the end, the “energy crisis” reshaping computing is less about running out of electricity and more about respecting the limits of systems built for a different era. AI did not appear when the grid was designed.
Now the grid must adapt quickly. If it does, AI may be powered in ways that are cleaner and smarter than today. If it does not, AI deployment will be slowed or will strain energy systems in ways that people feel in their monthly bills and in grid reliability.
The question “Where will AI get its power?” is really a question about how seriously societies choose to invest in the foundations of their digital future.



