At the edge, solving for thermal and power efficiency is just as critical as managing bandwidth and latency
AI at the edge is helping telecom operators enhance network reliability, optimize performance, and detect threats in real time. These use cases require powerful compute, often in locations where traditional data center infrastructure can’t reach, such as cell towers, rooftops, and remote sites.
Containerized edge systems offer a flexible alternative, delivering local processing in compact, self-contained units. Gartner predicts that by 2025, around 75% of enterprise data will be generated and processed at the edge, up from 10% in 2018, illustrating the rising scale and importance of edge AI deployments.
But with higher compute density comes increased heat and energy use, especially in environments with limited access to cooling or utilities. At the edge, solving for thermal and power efficiency becomes just as critical as managing bandwidth and latency.
Supporting AI at the edge without traditional data centers
Unlike conventional data centers, telecom edge sites are often space-constrained, remotely located or exposed to harsh conditions. Yet the compute demands of edge AI, from real-time analytics to threat detection, require powerful hardware that generates significant heat and draws considerable power.
Traditional air cooling isn’t built for these environments. It depends on ambient airflow, requires regular maintenance and doesn’t scale well with dense hardware. Water-based systems pose another issue. Many edge sites lack reliable water access, and water use is increasingly scrutinized from a sustainability perspective.
To meet performance needs without relying on traditional infrastructure, operators need cooling solutions that are efficient, low-maintenance and self-contained, capable of handling high thermal loads independently. This is driving growing interest in closed-loop liquid cooling systems built for the edge.
Closed-loop cooling is gaining traction at the edge
Closed-loop liquid cooling is emerging as a practical solution for edge environments where traditional systems fall short. Unlike air-based cooling, it doesn’t rely on ambient airflow or large volumes of water. Instead, it uses a sealed system to transfer heat efficiently, making it ideal for locations with fluctuating temperatures or limited ventilation.
These systems are compact, quiet and low maintenance, easily integrated into pole-mounted cabinets, shelters or outdoor enclosures. Because the coolant is recirculated and fully contained, they can maintain stable operating temperatures even in remote or rugged locations, without needing external hookups or HVAC systems.
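To give a sense of scale, the sizing of a sealed loop like this comes down to basic heat transfer: the coolant flow rate must carry away the enclosure's heat load at an acceptable temperature rise. The sketch below is a back-of-envelope calculation, not a vendor specification; the 3 kW load and 10 °C loop temperature rise are illustrative assumptions.

```python
# Back-of-envelope sizing for a sealed (closed-loop) liquid cooling circuit.
# Figures are illustrative assumptions, not specifications for any product.

def coolant_flow_lpm(heat_load_w: float, delta_t_c: float,
                     specific_heat_j_per_kg_c: float = 4186.0,  # water
                     density_kg_per_l: float = 1.0) -> float:
    """Volumetric flow (litres/minute) needed to carry heat_load_w
    at a coolant temperature rise of delta_t_c across the loop.
    From q = m_dot * c_p * dT, so m_dot = q / (c_p * dT)."""
    mass_flow_kg_s = heat_load_w / (specific_heat_j_per_kg_c * delta_t_c)
    return mass_flow_kg_s / density_kg_per_l * 60.0

# Example: a 3 kW edge enclosure with a 10 degree C loop temperature rise
print(round(coolant_flow_lpm(3000, 10), 2))  # ~4.3 L/min
```

The modest flow rates this yields are one reason closed-loop systems fit into pole-mounted cabinets: a small pump and radiator are enough, with no external water hookup.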
Dielectric fluid and chassis-level immersion
While closed-loop liquid cooling addresses many of the challenges of edge deployments, some operators are also looking to dielectric fluid–based, chassis-level immersion systems. In these designs, servers or individual components are partially or fully immersed in a non-conductive liquid that directly absorbs and transfers heat.
Because dielectric fluids are electrically non-conductive, they can come into direct contact with electronics without risk, allowing for extremely efficient heat removal. This direct-to-hardware approach reduces reliance on fans, eliminates the need for complex airflow management, and can further improve power usage effectiveness (PUE) in space-constrained environments.
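PUE itself is a simple ratio: total facility power divided by the power delivered to IT equipment, so a value closer to 1.0 means less energy spent on cooling and other overhead. The sketch below illustrates why cutting fan and airflow overhead improves the figure; the kW values are hypothetical, chosen only to show the comparison.

```python
# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# The kW figures below are illustrative assumptions, not measured data.

def pue(it_power_kw: float, cooling_kw: float, other_overhead_kw: float = 0.0) -> float:
    """Ratio of total facility draw to useful IT draw; 1.0 is the ideal."""
    total = it_power_kw + cooling_kw + other_overhead_kw
    return total / it_power_kw

# Hypothetical 10 kW edge enclosure under two cooling approaches
air_cooled = pue(10.0, cooling_kw=4.0, other_overhead_kw=1.0)  # 1.5
immersion = pue(10.0, cooling_kw=0.5, other_overhead_kw=0.5)   # 1.1
print(air_cooled, immersion)
```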
For containerized edge deployments, chassis-level immersion cooling can offer another layer of flexibility. Systems can be sealed to protect against dust and humidity, making them well-suited for rugged or outdoor locations. And, like closed-loop liquid cooling, dielectric-based immersion avoids continuous water consumption, supporting sustainability goals while maintaining high compute density.
Running high-performance compute at the edge
Deploying AI at the edge introduces a new set of infrastructure demands. Locations like rooftops, poles and remote hubs require compact, ruggedized systems that can withstand weather and space constraints.
Power availability is another challenge. Many edge sites weren’t designed to support high draw, so energy-efficient compute and cooling are essential to avoid overloading existing infrastructure. Data centers are expected to consume about 2% of global electricity in 2025, roughly 536 terawatt-hours — and that figure could double by 2030 as AI workloads continue to scale, according to recent Deloitte research. These demands are no longer confined to core facilities; they’re extending to the edge, where power constraints are often even more acute.
Thermal design plays a central role. In places without HVAC or fresh air circulation, self-contained cooling isn’t optional; it’s necessary. Latency and bandwidth must be optimized to avoid bottlenecks while keeping compute close to users. And with limited on-site access, systems need to be reliable and low maintenance, capable of running autonomously for extended periods.
Sustainability also matters. As operators scale out, minimizing water use and energy waste is key. Above all, edge infrastructure must be built to scale, ready to support more compute and heat without constant redesigns.
Designing for the demands of AI at the edge
As telecom operators roll out AI-driven services, edge infrastructure is becoming critical to network performance, reliability and security. But deploying high-performance compute in distributed, unconventional locations takes more than powerful hardware; it requires smarter design.
Containerized systems paired with closed-loop liquid cooling offer a scalable, resilient path forward. They enable operators to deploy AI wherever it’s needed, without building full data centers. The challenge now is strategic. It requires finding the right balance of performance, efficiency and durability to support both today’s AI applications and tomorrow’s growth. The edge may be distributed, but the approach to infrastructure must be unified, purpose-built to support the future of telecom.