In sum – what to know:
A new traffic wave – Unlike video or SaaS, AI generates compounding demand through user interactions, API calls, agent-to-agent workflows, and constant data replication across clouds and regions.
Inference and agentic AI – Training is centralized and planned, but inference is real-time, latency-sensitive, and increasingly distributed, making performance and time-to-first-token critical network metrics.
A new forecast from Nokia suggests that wide area network (WAN) traffic could grow by as much as 700% by 2034, driven largely by the rapid expansion of artificial intelligence workloads — particularly inference and agentic systems that are reshaping how data moves across networks. Nokia also estimates that AI could represent 30% of all global traffic.
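For a rough sense of what that headline figure implies, the sketch below converts it into an annual growth rate. It assumes a 2024 baseline, a ten-year horizon to 2034, and that "700% growth" means traffic reaching eight times its current level; none of those assumptions are spelled out in the forecast itself.

```python
# Back-of-the-envelope unpacking of "700% WAN traffic growth by 2034".
# Assumes a 2024 baseline and a ten-year horizon; neither is stated in the forecast.

HORIZON_YEARS = 10

# 700% growth means the end state is 8x the starting point.
total_multiple = 1 + 700 / 100

# Implied compound annual growth rate (CAGR) needed to get there.
implied_cagr = total_multiple ** (1 / HORIZON_YEARS) - 1

print(f"700% growth ~ {total_multiple:.0f}x overall, or roughly {implied_cagr:.0%} per year")
```

Read that way, the forecast describes WAN traffic compounding at a low-twenties annual rate for a decade, sustained growth rather than a one-off spike.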
Speaking to the findings, Cameron Daniel, CTO of network-as-a-service provider Megaport, said that while previous growth waves such as video streaming, SaaS adoption, and cloud migration added significant traffic at the edge, AI introduces traffic that multiplies across every layer of digital infrastructure.
“AI adds traffic everywhere,” he told RCR Wireless News. He pointed to the compounding effect of user-to-AI interactions, application-level model API calls, autonomous AI agents communicating with other systems, and massive replication of data across regions and clouds.
He also noted that, unlike earlier internet booms, this growth will not be evenly distributed. Some routes, especially those linking major AI data center hubs, could see traffic increases of 10x or more, while other areas may experience only modest change. Several AI corridors, he added, have already recorded several-hundred-percent growth over the past 24 months.
Nokia’s analysis points to inference, not training, as the defining AI driver for WAN evolution. Daniel explained that training workloads tend to be centralized batch jobs, with enormous bandwidth requirements, but they can often be planned in advance, and latency outside the training cluster is “generally tolerable.” Inference is different. It represents the real-time “use” of AI systems — from chat interfaces to autonomous enterprise agents — and it is far more latency-sensitive.
Agentic workflows, in particular, raise the stakes. Nokia projects agentic AI traffic will grow at a 26% CAGR through 2034, driving a significant rise in inference traffic carried over the WAN. As AI agents begin communicating with other agents, performance becomes tightly linked to both network latency and inference latency. “Which is why most model providers track metrics like time-to-first-token,” added Daniel.
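For scale, a 26% CAGR compounds to roughly a tenfold increase over ten years (1.26 raised to the tenth power is about 10). As for time-to-first-token (TTFT), the snippet below is a minimal, self-contained illustration of the metric: it times the gap between issuing a request and the first streamed token arriving. The token stream here is simulated, a stand-in rather than any particular provider's API.

```python
import time

def fake_token_stream():
    """Simulated streaming response; a stand-in for a real model API."""
    time.sleep(0.35)            # network + model latency before the first token
    yield "Hello"
    for token in [",", " world", "!"]:
        time.sleep(0.02)        # steady-state inter-token latency
        yield token

def measure_ttft(stream):
    """Time-to-first-token: delay between sending the request and receiving the first token."""
    start = time.perf_counter()
    first_token = next(stream)
    return first_token, time.perf_counter() - start

stream = fake_token_stream()
token, ttft = measure_ttft(stream)
print(f"first token {token!r} arrived after {ttft * 1000:.0f} ms")

# The rest of the response streams in afterwards; total latency = TTFT + generation time.
remaining = "".join(stream)
print("full response:", token + remaining)
```

In agent-to-agent chains the effect compounds: each hop adds its own network round trip plus its own TTFT, which is why both latencies end up mattering together.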
Additionally, as inference shifts closer to users, factories, campuses, and industrial environments, Nokia argues that networks must evolve beyond best-effort connectivity.
Edge inference requires low and consistent latency, fast failover, and infrastructure that can support service-level intent — especially as AI becomes embedded in operational decision-making. Daniel said: “Edge inference shifts the network requirements from best effort to something closer to infrastructure with service-level intent. Networks increasingly need to deliver low and consistent latency with fast failover, and as AI moves closer to users and industrial sites, these requirements are just going to amplify.”
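What "service-level intent" can mean in practice is a network, or the application sitting on top of it, actively watching latency and switching paths when an objective is breached. The sketch below is a simplified application-level illustration of that idea, not a description of any vendor's implementation; the endpoint names, the 50 ms objective, and the probe logic are all hypothetical.

```python
import random

# Hypothetical inference endpoints: a nearby edge site and a regional fallback.
ENDPOINTS = ["edge-site.example", "region-fallback.example"]
LATENCY_SLO_MS = 50           # illustrative service-level objective
PROBES_PER_ENDPOINT = 3       # consecutive SLO breaches tolerated before failing over

def probe_latency_ms(endpoint: str) -> float:
    """Stand-in for a real health probe (ping or a lightweight inference call)."""
    base = 20 if endpoint.startswith("edge") else 35
    return base + random.uniform(0, 60)   # simulated jitter

def pick_endpoint() -> str:
    """Prefer the edge endpoint; fail over when it keeps breaching the SLO."""
    for endpoint in ENDPOINTS:
        breaches = sum(
            probe_latency_ms(endpoint) > LATENCY_SLO_MS
            for _ in range(PROBES_PER_ENDPOINT)
        )
        if breaches < PROBES_PER_ENDPOINT:
            return endpoint               # endpoint is (mostly) meeting the SLO
    return ENDPOINTS[-1]                  # everything is breaching; use the last resort

print("routing inference traffic to", pick_endpoint())
```

In a real deployment this logic would typically live in the network layer, for example in SD-WAN or routing policy, rather than in application code, but the principle is the same: measure, compare against an objective, and fail over fast.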
Daniel warned that if AI traffic scales as projected, the first pressure points will likely emerge in interconnect and data center interconnect (DCI) links, where east-west traffic growth will outpace traditional internet demand. “Network operators and AI providers need to stay ahead of this,” he cautioned.
Security is also becoming a critical concern, he added: “Shadow IT is a problem, and a rogue agent could do meaningful damage. DLP and other tools are critical to ensure data stays protected.”
Ultimately, Daniel argued the most important shift may not be traffic volume alone, but the rise of agent-to-agent communication — a new class of network behavior that could redefine WAN design over the next decade.
