As AI workloads move from centralised training to distributed inference, the industry’s infrastructure challenge is changing shape. Power and land matter, but Nvidia, SUBCO, and Zayo – on a panel at PTC’26 – argue that connectivity, latency, and scale in terrestrial and subsea fibre networks are fast becoming the next hard constraints.
In sum – what to know:
Massive builds – the twin AI disciplines of training and inference will require hundreds of millions of miles of new long-haul, metro, and subsea fibre over the next few years as the AI industry seeks to scale capacity
Traffic patterns – personalised, latency-sensitive inference workloads will push compute and networking closer to the edge, reshaping data centre strategy and increasing demand for capable criss-cross infrastructure
Risk strategies – the AI debate has shifted, necessarily, from whether the bubble will burst to where value accrues – and whether network infrastructure can be built fast enough to keep pace
One hundred and seventy million miles of new fibre in long-haul backbone infrastructure, plus 40-60 million miles of new fibre in metropolitan areas – that, the global telecoms industry reckons, is what it will take to serve AI over the next four years. It is quite the workload, so to speak; a “big pickup in our build-out”, says Steve Smith, chief executive at Zayo Group, speaking during a headline panel session at PTC’26 in Hawaii last week (January 20).
His firm started its rollout in earnest last year, he says, deploying “thousands of new fibre miles” in the US alone. The company’s backhaul and metro “requirements” have moved from a typical 12-24 fibres to 48-72, to support data traffic between data centres – mostly to train a handful of large language models that deliver generative AI on a few consumer-oriented platforms, using static data in centralised compute clusters. But times are changing.
New AI inference applications will bring the action closer to the edge, and into the enterprise space, and likely change network deployment strategies along the way. This was the subject of the PTC panel (‘AI at the edge – fueling inference-driven growth’), attended also by US chip giant Nvidia and Australia-based subsea cable operator SUBCO. “We’re studying both [trends] closely,” said Smith. “We are building nearly 20 new routes right now.”
He went on: “We are seeing requirements for both; there’s still a lot of training requirements that we’re all trying to help with, and inference is starting to show up because companies are working on use cases and POCs. [New fibre routes] take two to four years; these are big capital decisions, in growth corridors – where the hyperscalers need capacity. The amount of fibre is massive. That has changed dramatically over the last 12-24 months.”
(On a side note, Smith said his firm – separate from Zayo Europe, which split from the US group in 2024 – is focused only on its home markets, where all the hyperscalers are headquartered. “We carved out our European business, which we’re still deciding what to do with. We are totally focused on North America,” he said.) Smith’s note about expanding capacity is echoed by Lynn Smullen, a member of the PTC board of governors and host of the session.

Capacity requirements
As part of her intro spiel, she suggests the mission to train AI foundation models will drive total global data centre capacity to 120-150 GW, “even 180 GW”, and that, as AI workloads shift towards inference tasks over the course of the next couple of years, “as much as 90 GW of capacity will be at the edge”. Smullen said: “And what’s critical is not just power, but the network, and its efficiency, resiliency, reach, and protection (security).”
Bevan Slattery, founder and chief executive at SUBCO, has a sensible-sounding analogy for the pattern of growth, and also appears to dismiss the existential question, posed in Sunday’s session on data centre trends (see coverage here), about whether the edge-ward migration of AI workloads will, at some point, render the 2025/26 data-centre build-out a mass white-elephant project. “Training is going to keep going,” he said.
“Because of new models, fracturing of models, and versioning of those different models [for] certain applications and specific areas, whether for healthcare or whatever it might be. Training will continue to grow. The closest parallel is with the growth of cloud – where it started pretty simple, everyone uploading photos, and then applications started going there all of a sudden. Cloud growth from 2010/12 through to 2022, post-COVID, was just extraordinary.
“Inference might [mean] less traffic – maybe, not sure – but you add reasoning to it, [and] you need more fibre in the metro and better software-defined interconnection to take the inference tokens to the enterprise… And on the subsea side… a gigawatt of AI is like $10 billion in data centres, $30 billion in chips – but it’s 25 to 50 terabits per second of data. Wherever you put these gigawatts of chips, you’re going to need a hell of a lot of bandwidth.”
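For scale, Slattery’s ratio invites a quick sanity check. Below is a minimal back-of-envelope sketch in Python, using his per-gigawatt figures; the per-fibre-pair capacity and utilisation values are illustrative assumptions, not numbers from the panel:

```python
import math

# Back-of-envelope check on Slattery's gigawatt-to-bandwidth ratio.
# The per-GW capex and demand figures are his; the per-pair capacity
# and utilisation below are illustrative assumptions, not panel numbers.
GW_CAPEX_USD = 10e9 + 30e9     # "$10bn in data centres, $30bn in chips" per GW
GW_DEMAND_TBPS = (25.0, 50.0)  # "25 to 50 terabits per second" per GW

TBPS_PER_FIBRE_PAIR = 18.0     # assumed capacity of a modern subsea fibre pair
UTILISATION = 0.6              # assumed headroom for protection and growth

def fibre_pairs_needed(demand_tbps: float) -> int:
    """Fibre pairs needed to carry the demand at the assumed utilisation."""
    return math.ceil(demand_tbps / (TBPS_PER_FIBRE_PAIR * UTILISATION))

for demand in GW_DEMAND_TBPS:
    print(f"1 GW (~${GW_CAPEX_USD / 1e9:.0f}bn capex) at {demand:.0f} Tbps "
          f"-> ~{fibre_pairs_needed(demand)} dedicated fibre pairs")
```

On those assumptions, a single gigawatt campus ties up a handful of dedicated pairs on a modern subsea system – which is the thrust of Slattery’s “big, big cables” point.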
It takes four or five years to build a new subsea cable route, he says – versus two-to-four for a terrestrial project, per Smith’s comment. “We’ve got to start building that backbone – big, big cables and direct connections to major markets, to where we can drop power and energy, and training and inference, as fast as we can. Where this will grow massively is with interconnections between data centres and enterprises, and with the backbone between countries.”
The build work is pressing, the argument goes; there’s no time to debate AI bubbles and such – and sheer demand suggests it is a silly discussion anyway, says Smith, who chimes in: “The risk of over-investing versus under-investing is debatable. You listen to the stuff we are doing – our companies, plus everyone else – and it is interesting to think about. There’s this debate about [another] dotcom [crash]. [But] we’re in a whole different discussion.”
Industrial applications
There is lots in the PTC discussion – also about ecosystems and platforms, and scaling up/down between the cloud and edge (Nvidia), plus GPU-as-a-service to commercialise compute, and content delivery networks (CDNs) to move it closer to the action (Slattery, SUBCO), as well as talk about data sovereignty as the “perfect example of edge inference” (Nvidia, all) – and some jokes and jumping about. But the other highlight is Nvidia’s talk of applications.
Raj Mirpuri is vice president of enterprise and cloud sales at Nvidia, in charge of its neocloud business; it is his first PTC, he says – where his fellow panellists have been coming for years. Which explains how PTC has gone from 1,000 delegates to over 10,000 – more than that, if you count the unticketed hyperscalers holed up in big hotels, waiting on visitors – in just a few years on the back of the AI boom, and also how (fixed) telecoms is almost sexy again.
He says: “Training is going to continue to grow, and continue to be very important, but inference is growing many-fold as well – Jensen [Huang, Nvidia boss] has mentioned tenfold or more every year. And we’re starting to see these thinking/reasoning models growing at tremendous pace. [So] inference tokens are going up, [and] require a tremendous amount of compute – and we’re working with everyone to figure out where to land the compute…
“We’re waiting to see all the applications, whether it’s industrial AI or [other] edge [AI] – a robot building a car, autonomous driving; [applications in] retail, financial services, healthcare. We’re seeing these edge applications grow – where it’s way beyond a science project. We’re seeing multiple applications of true AI in production environments.” Which is the sweet spot for this hack, at least – as you’ll know if you’ve been following these pages.
Otherwise, the session’s focus on networking is good, all-round. High-performing criss-cross networks are the only way to feed the global AI machine, says Slattery – who also founded Australia-based network-as-a-service provider Megaport, which offers private software-defined on-demand interconnection into big data centres and service providers, and which gets a plug as part of a general point about the urgent criticality of the AI network build project.
“It’s in 1,000 data centres, in 27 countries, in every major market. It allows enterprises to directly connect with anyone on a private, secure basis with SLAs. That’s going to be another level of growth. But the neoclouds will become so big, or big enough, that they’ll want their own direct connections as close to customers as they can. And whether you’ve got 20 chips or a hundred or a thousand… you’re going to need fibre pairs, across countries and oceans.
“So you’re going to see this consumer-level internet, enterprise-level fabric, and neo-level spectrum fibre-pairs – or just truckloads of capacity direct to the countries where these tokens are being digested. Maybe there is a hyper-scale mega fabric, so to speak, somewhere in there. But that is where we are headed.”
Critical telecoms
And this is PTC, about telecoms, so the discussion makes a point of positioning “critical” connectivity as a bottleneck for inference traffic.
Smith at Zayo says: “Today, the constraints are power, land, and electricity. That will change as we go to inference. Connectivity issues will become a constraint – as we go from static data on the internet to feeding everybody personalized data in the inference world, especially when we all have an agent on our devices. Because we’re not all getting the same data anymore – like we do now with a Google search, where everyone gets the same result.
“The demands on connectivity will be enormous, and the network architecture will have to evolve. That is going to be pretty significant… Latency will be critical for inference. Blink, and you’re at 100ms, and you can send data halfway around the world on a fibre network in 100ms. So the pressure to get to 50ms, or less for certain applications, is going to be part of our world… It’s going to be a really interesting challenge; we’re at an interesting juncture.
“I don’t know what the end state will be – maybe it will be 80-90% inference and 10-20% training. But we are in the early stages, and we’re hundreds of billions of dollars into a multi-trillion dollar industry. Inference is the next wave, and there will be a lot of winners and some losers as well.”
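Smith’s 100ms figure squares with simple propagation arithmetic: light travels through fibre at roughly 200,000 km/s, so 20,000km – half the planet’s circumference – takes about 100ms one way. A minimal sketch of that arithmetic, ignoring routing detours, regeneration, and queuing delay (the route distances beyond Smith’s own example are illustrative):

```python
# Rough one-way fibre propagation delays against Smith's latency budgets.
# Assumes ~200,000 km/s in glass (refractive index ~1.47) and straight-line
# paths; real routes add detours, regeneration, switching, and queuing delay.
SPEED_IN_FIBRE_KM_S = 200_000

def one_way_delay_ms(distance_km: float) -> float:
    """One-way propagation delay over an idealised fibre path, in ms."""
    return distance_km / SPEED_IN_FIBRE_KM_S * 1000

for label, km in [
    ("halfway around the world", 20_000),    # Smith's ~100ms case
    ("transatlantic (illustrative)", 5_600),
    ("in-region metro (illustrative)", 500),
]:
    print(f"{label}: ~{one_way_delay_ms(km):.1f} ms one way")

# If Smith's 50ms budget is read as a round trip, the one-way path tops out
# near 5,000 km - the physics behind pushing inference compute to the edge.
```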
