
JLL: AI impacts power, location needs

JLL highlights that AI-native facilities are being built closer to major population centers to meet the critical “sub-10 millisecond” latency target

The data center industry is undergoing significant changes as generative AI accelerates demand for high-density, latency-sensitive infrastructure, according to Sean Farney, vice president of data center strategy at JLL.

In an interview with RCR Wireless News, the JLL executive laid out how AI is fundamentally altering design assumptions, operational standards and geography for hyperscalers and colocation providers alike.

“Artificial Intelligence (AI) in the data center industry can be likened to planning a trip to Disney World a year in advance,” Farney said. “While there’s an understanding of the eventual goal, many of the specifics remain uncertain.”

That uncertainty, he explained, has put the industry in flux: “The industry is currently in a state of flux, with AI largely still in the research phase. This rapid change in trajectory and speed has caused a significant shift in how data centers operate, leading to a complete rewrite of operational run books and design bases within just the last two years.”

At present, AI deployments are largely dominated by hyperscalers building large training clusters. “Currently, AI implementation is primarily within the realm of hyperscalers, who are constructing massive learning facilities, both hybrid and dedicated, with some GPU-as-a-service options,” Farney said.

But this model is starting to evolve as the market moves toward AI inferencing — real-time applications where the value of AI is monetized at the edge. “Every company and enterprise recognizes the need for this capability, and due to technical requirements such as low latency, the geography of where this computing takes place may shift,” he continued.

One of the most visible impacts of AI is the dramatic increase in rack-level power density. “AI cabinets can require 50 to 100 kilowatts of power, with some potentially reaching the mythical 1 megawatt per cabinet,” said Farney. “This increase in power density results in a shrinking overall facility footprint, with the same computing power now possible in a fraction of the space.”
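A rough back-of-envelope sketch illustrates the footprint compression Farney describes. The total load, density steps and per-rack floor space below are illustrative assumptions, not JLL figures:

```python
# Back-of-envelope footprint sketch: the same IT load at rising rack densities.
# All figures are illustrative assumptions, not JLL's numbers.

IT_LOAD_KW = 10_000      # assumed total IT load: 10 MW
SQFT_PER_RACK = 30       # assumed whitespace per rack, including aisle share

for density_kw in (10, 50, 100, 1_000):  # legacy air-cooled up to the "mythical" 1 MW cabinet
    racks = IT_LOAD_KW / density_kw
    print(f"{density_kw:>5} kW/rack -> {racks:>6.0f} racks, "
          f"~{racks * SQFT_PER_RACK:>8,.0f} sq ft of whitespace")
```

Under these assumptions, moving from 10 kW to 100 kW cabinets shrinks the whitespace for a 10 MW deployment from roughly 30,000 square feet to about 3,000, with the hypothetical 1 MW cabinet compressing it by another order of magnitude.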

That kind of power demand is forcing operators to revisit long-held assumptions about mechanical and electrical design. “These high-density facilities require massive mechanical and electrical infrastructure to support the servers,” he said. “The heat produced at these densities overwhelms traditional air cooling methods, necessitating liquid cooling technologies.”

“The industry is still experimenting with various cooling methods, including immersion cooling, direct-to-chip cooling, and rear door heat exchanger technology,” Farney noted. “Different providers are exploring and adopting various approaches, leading to a period of ‘creative destruction’ where innovation is rapidly transforming traditional operational methods and equipment to accommodate new technologies.”
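The physics behind that shift follows from the basic sensible-heat relation Q = ṁ · cp · ΔT. The 100 kW rack load and temperature rises in this sketch are illustrative assumptions:

```python
# Why air cooling runs out of headroom: sensible heat transfer Q = m_dot * cp * dT.
# The 100 kW rack load and delta-T values are illustrative assumptions.

RACK_HEAT_W = 100_000                 # assumed AI cabinet dissipating 100 kW

# Air: cp ~1005 J/(kg*K), density ~1.2 kg/m^3, assumed 15 K temperature rise
air_kg_s = RACK_HEAT_W / (1005 * 15)
air_m3_s = air_kg_s / 1.2
print(f"Air:   {air_m3_s:.1f} m^3/s (~{air_m3_s * 2119:,.0f} CFM) through one rack")

# Water: cp ~4186 J/(kg*K), ~1 kg per litre, assumed 10 K temperature rise
water_l_s = RACK_HEAT_W / (4186 * 10)
print(f"Water: {water_l_s:.1f} L/s (~{water_l_s * 15.85:.0f} GPM) through one rack")
```

Moving roughly 11,700 CFM of air through a single cabinet is impractical, while a couple of litres of water per second is routine plumbing; that gap is what direct-to-chip, immersion and rear-door technologies exploit.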

As inferencing takes center stage, the JLL executive said location strategy becomes a key factor: “Some hyperscalers are beginning to construct AI-dedicated facilities… with future AI inferencing monetization in mind, which requires close proximity to end-users to minimize latency.”

To meet the critical “sub-10 millisecond” latency target, AI-native facilities are being built closer to major population centers. “These facilities are typically being built within a 100-mile radius of major population centers, focusing on top U.S. data center cities such as Atlanta, Chicago, Dallas, Northern Virginia and Phoenix,” he added.
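That radius squares with simple fiber propagation math. The route factor and latency budget in this check are our assumptions, not JLL’s:

```python
# Sanity-checking "sub-10 ms within ~100 miles" against fiber propagation delay.
# The route factor and latency budget are illustrative assumptions.

C_FIBER_KM_S = 200_000   # light in fiber travels at roughly two-thirds of c
MILES_TO_KM = 1.609
ROUTE_FACTOR = 1.5       # assumed: real fiber paths are longer than straight lines

path_km = 100 * MILES_TO_KM * ROUTE_FACTOR
rtt_ms = 2 * path_km / C_FIBER_KM_S * 1_000
print(f"Round-trip fiber delay over ~100 miles: {rtt_ms:.1f} ms")
print(f"Budget left under 10 ms for inference and queuing: {10 - rtt_ms:.1f} ms")
```

Even with a generous detour factor, propagation consumes only about 2.4 ms of the round trip, leaving most of the 10 ms budget for the model itself, which is why a 100-mile radius around major metros works.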

“This approach ensures that millions of potential users are within reach, making these established markets ideal for AI-native facilities,” said Farney. “While large language model (LLM) training facilities could potentially be located in more remote areas, the future of AI inferencing monetization is steering providers towards building in existing, well-connected markets.”

Even in these traditional hubs, however, future AI data centers will look very different. “They typically have a smaller physical footprint but much higher power density, reflecting the evolving needs of AI computing infrastructure,” the executive added.

ABOUT AUTHOR

Juan Pedro Tomás
Juan Pedro covers Global Carriers and Global Enterprise IoT. Prior to RCR, Juan Pedro worked for Business News Americas, covering telecoms and IT news in the Latin American markets. He also worked for Telecompaper as their Regional Editor for Latin America and Asia/Pacific. Juan Pedro has also contributed to Latin Trade magazine as the publication's correspondent in Argentina, and to political risk consultancy Exclusive Analysis, writing reports and providing political and economic information on certain Latin American markets. He has a degree in International Relations and a master's in Journalism, and is married with two kids.