
Power, cooling shape the future of AI infra: ABI Research

Novel cooling and power technologies help manage heat from high-density compute

In sum – what you need to know:

Power and cooling now core to AI infra – High-density GPU workloads are pushing legacy data centers to the brink, with approaches like liquid cooling, immersion systems, and heat recapture expected to become standard practice by 2026.

Power, not chips, is the main bottleneck – AI infrastructure growth is increasingly constrained by megawatt availability, not semiconductors.

Hyperscalers and telcos diverge on AI buildout – While hyperscalers focus on centralized mega campuses with custom silicon, telcos prioritize distributed edge AI to serve industrial use cases closer to end users.

As the global AI race accelerates, next-generation data centers are facing mounting challenges in power and cooling—two historically overlooked pillars that have now become central to infrastructure strategy. According to Sebastian Wilke, Principal Analyst at ABI Research, “novel cooling and power technologies help manage heat from high-density compute, reduce power consumption and increase efficiency, enable sustainability at scale, and unlock new geographies and form factors.”

AI workloads, particularly those driven by large language models and high-density GPU clusters, are pushing existing systems beyond their limits. “As model sizes and compute intensity grow, the thermal profile of AI infrastructure is breaking the legacy data center design paradigm,” Wilke told RCR Wireless News. “Traditional air cooling is no longer sufficient for dense GPU clusters, making technologies like liquid cooling, immersion systems, and heat recapture essential considerations. These solutions are shifting from niche deployments to standard practice, a trend expected to accelerate by 2026.”

At the same time, power, not chips, is becoming the core bottleneck. “Power is becoming the most crucial supply chain constraint. It’s no longer just about chip availability: the real bottleneck is megawatts for expansion,” Wilke explained. Grid limitations and sustainability mandates are now in direct tension with AI infrastructure demands. “AI’s power demands are under growing scrutiny, particularly in ESG-sensitive markets. Governments are beginning to respond. For example, Brazil is offering tax incentives for data centers that run on 100% renewable energy, aligning infrastructure growth with national climate goals.”

But while these foundational challenges are universal, hyperscalers and telecom operators are diverging in how they build and deploy AI data centers.

“Hyperscalers are building for scale and attempting to own as much of the AI stack as possible—from infrastructure to models to APIs—which includes developing in-house LLMs, managing custom data center designs, and offering AI-as-a-Service platforms globally,” Wilke said.

This vision translates into centralized mega campuses with custom silicon, built to maximize energy and computational efficiency. “Most hyperscalers have deployed custom silicon developed in-house to optimize for scale and energy on their first-party workloads,” he added. “Think of Amazon being AWS’ best customer, hence AWS running its internal workloads on its own Graviton CPUs or accelerators like Trainium rather than exclusively on Intel/AMD CPUs or NVIDIA GPUs.”

Telcos, on the other hand, are taking a more distributed and customer-specific approach. “Telcos have focused on distributed edge AI inference,” said Wilke. “They have adapted network sites, operate regional hubs, and deploy micro data centers, often retrofitted.”

In this context, telecom operators are becoming crucial enablers of AI for industry verticals—supporting inference workloads closer to end users, especially in manufacturing, logistics, and critical infrastructure.

Despite their different trajectories, both telcos and hyperscalers are forming strategic partnerships to reinforce their AI infrastructure ambitions. “That said, both are essential to the AI compute landscape, but they operate at different layers of the stack and with different timelines, economics, and end users in mind,” Wilke emphasized.

The distinction is especially pronounced in China. “State-owned telecoms are building AI infrastructure aligned with national compute objectives for sensitive verticals compared to U.S. hyperscalers building U.S. government clouds, while private cloud hyperscalers (e.g., Alibaba Cloud, Tencent Cloud) mirror Western hyperscalers in vertically integrating AI stacks,” Wilke noted.

Despite the momentum behind AI infrastructure investment, significant roadblocks remain. “Power capacity availability remains the major expansion bottleneck,” Wilke said, pointing to well-known power-constrained markets like Ireland and Northern Virginia. He also highlighted growing resistance from local communities and environmental advocates, especially around water usage, as another hurdle. “We are increasingly seeing environmental and community resistance apart from government scrutiny (particularly water-related ones),” he added.

“The global infrastructure readiness is quite uneven,” Wilke warned. “Global AI expansion is often limited to a few power-rich, fiber-rich zones.” Additionally, geopolitical tensions and fragmented regulatory landscapes complicate the equation, the analyst added.

ABOUT AUTHOR

Juan Pedro Tomás
Juan Pedro covers Global Carriers and Global Enterprise IoT. Prior to RCR, Juan Pedro worked for Business News Americas, covering telecoms and IT news in the Latin American markets. He also worked for Telecompaper as its Regional Editor for Latin America and Asia/Pacific. Juan Pedro has also contributed to Latin Trade magazine as the publication's correspondent in Argentina and worked with political risk consultancy Exclusive Analysis, writing reports and providing political and economic information on certain Latin American markets. He holds a degree in International Relations and a master's in Journalism, and is married with two kids.