
Onsite power to play key role in data center growth, says study

Bloom Energy’s report highlighted that onsite power generation was expected to become a defining feature of the next wave of AI-driven infrastructure

In sum – what to know:

Onsite power surging – By 2030, 27% of data centers expect to be fully powered by onsite energy, up from just 1% in 2024, amid grid delays and rising AI energy needs.

Grid delays reshaping decisions – Utilities report energy delivery delays of up to two years longer than developers expect, making electricity access the top factor in data center site selection.

AI fuels energy intensity – Median data center capacity is expected to rise 115% by 2035, driving urgent demand for fast and scalable power generation alternatives.

Access to electricity has overtaken all other considerations in data center site selection, according to a mid-year update from Bloom Energy.

In its 2025 report, the firm highlighted that onsite power generation was expected to become a defining feature of the next wave of AI-driven infrastructure.

The updated findings reveal that nearly 27% of data centers expect to be fully powered by onsite generation by 2030, a dramatic increase compared to just 1% in 2024. An additional 11% of data centers are expected to use it as a major source of power. The report noted that the expected surge is being driven by rising AI workloads and delays in utility grid interconnections.

“Decisions around where data centers get built have shifted dramatically over the last six months, with access to power now playing the most significant role in location scouting,” said Aman Joshi, chief commercial officer at Bloom Energy. “The grid can’t keep pace with AI demands, so the industry is taking control with onsite power generation. When you control your power, you control your timeline, and immediate access to energy is what separates viable projects from stalled ones.”

The report also highlighted a growing gap between expectations and reality. While developers often plan around a 12-to-18-month window to access grid power, utility providers in major U.S. markets report that timelines may extend by as much as two additional years, making it a real challenge to meet the aggressive timelines required for AI infrastructure deployments.

As a result, 84% of data center leaders now rank power availability among their top three site selection criteria, surpassing considerations like land cost or proximity to end users, according to the recent report.

It added that the size of data centers is also scaling rapidly. The report projects the median data center size will more than double, from the current 175 MW to approximately 375 MW over the next decade. These facilities will require more dynamic and reliable energy solutions, particularly for workloads driven by AI, which demand high-density compute.
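The report's growth figures can be cross-checked with simple arithmetic: a minimal sketch, using only the 175 MW baseline and the 115% increase cited in the article (the figures are Bloom Energy's; the calculation is illustrative).

```python
# Back-of-envelope check of the report's capacity projection.
# Both input numbers come from the article itself.
current_median_mw = 175   # current median data center size (MW)
projected_growth = 1.15   # 115% increase projected by 2035

projected_median_mw = current_median_mw * (1 + projected_growth)
print(f"Projected median capacity: {projected_median_mw:.0f} MW")
# 175 MW * 2.15 = 376.25 MW, consistent with the report's "approximately 375 MW"
```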

Bloom Energy also noted that data center operators are turning to low-emission, fast-deployment energy systems that can better manage the unpredictable energy loads of large-scale AI training and inference.

The report also found that 95% of surveyed data center leaders say carbon reduction targets remain in place. However, many acknowledge that the timeline to achieve those goals may shift as the focus temporarily realigns around securing dependable energy sources.

Artificial intelligence (AI) data centers are the backbone of modern machine learning and computational advancements. However, one of the biggest challenges these AI data centers face is their enormous power consumption. Unlike traditional data centers, which primarily handle storage and processing for standard enterprise applications, AI data centers must support intensive workloads such as deep learning, large-scale data analytics and real-time decision-making.

AI workloads, especially deep learning and generative AI models, require massive computational power. Training models such as GPT-4 or Google’s Gemini involves processing trillions of parameters, which requires thousands of high-performance GPUs (Graphics Processing Units) or TPUs (Tensor Processing Units). These specialized processors consume a lot more power than traditional CPUs.
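To make the scale concrete, here is a rough, illustrative estimate of the power draw of a large GPU training cluster. The wattage, cluster size, and PUE values are assumptions based on typical public figures (e.g., an NVIDIA H100 SXM is rated around 700 W), not numbers from the report.

```python
# Illustrative estimate only: all inputs are assumed typical values,
# not figures taken from the Bloom Energy report.
gpu_tdp_w = 700            # assumed per-GPU power draw (e.g., H100 SXM TDP)
gpus_per_cluster = 10_000  # assumed size of a large training cluster

# IT load of the GPUs alone, converted from watts to megawatts
it_load_mw = gpu_tdp_w * gpus_per_cluster / 1_000_000

# Facility-level draw, scaled by an assumed power usage effectiveness (PUE)
pue = 1.3
facility_mw = it_load_mw * pue

print(f"GPU IT load: {it_load_mw:.1f} MW, facility draw: ~{facility_mw:.1f} MW")
```

Even this single hypothetical cluster lands in the multi-megawatt range, which helps explain why power availability now dominates site selection.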

ABOUT AUTHOR

Juan Pedro Tomás
Juan Pedro covers Global Carriers and Global Enterprise IoT. Prior to RCR, Juan Pedro worked for Business News Americas, covering telecoms and IT news in the Latin American markets. He also worked for Telecompaper as their Regional Editor for Latin America and Asia/Pacific. Juan Pedro has also contributed to Latin Trade magazine as the publication's correspondent in Argentina and worked with political risk consultancy firm Exclusive Analysis, writing reports and providing political and economic information from certain Latin American markets. He has a degree in International Relations and a master's in Journalism, and is married with two kids.