AI is already a scaled, mobile-native workload, which is forcing networks to evolve from SLA-driven capacity models to deterministic, programmable architectures that connect data centres, transport, and the edge. So says Nokia chief Justin Hotard at MWC in Barcelona.
In sum – what to know
New workload – Nokia says AI traffic will go from human-to-machine tokens to machine-to-machine engagements, and networks have to change to survive and thrive.
Operator partners – Nvidia deal has been misread, Nokia says: AI-RAN is about managing AI traffic, not spare GPU capacity; new partnerships have been announced with BT, Elisa, NTT DOCOMO, Vodafone.
Traffic control – The shift is from peak-capacity planning and five-nines SLAs to deterministic, programmable networks embedding AI at every layer, says Nokia.
Here is a summary of Nokia’s opening address at MWC in Barcelona yesterday (March 1), and of Nokia chief executive Justin Hotard’s MWC debut. The question is: do you believe? Hmm; maybe. It went like this…
Note: links to all of Nokia’s announcements at MWC (at writing) are included below.
Networks evolve with workloads – voice, data, video, and now, suddenly, with AI: 100 trillion tokens per day, 77 exabytes of data per month, 1.3 trillion sessions (“discrete AI engagements”) per year. “AI is already a scaled workload,” said Hotard; and mobile is the default access, he suggested – for 53.5 percent of traffic. “[AI] is mobile-native,” he declared. Equally, the tech industry is just at the start of this AI “supercycle”, so far dominated by human-to-machine tokens – the “early stages” of agentic AI, the first “incubation of the potential” of physical AI. But machine-to-machine engagements are coming. “That’s when it takes off,” said Hotard. “We’re on the cusp.”
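Taken at face value, the keynote's own figures imply some striking per-session averages – the arithmetic below is this article's back-of-envelope illustration, not Nokia's:

```python
# Back-of-envelope on the keynote's stated figures (taken at face value;
# the per-session averages are this article's arithmetic, not Nokia's).
TOKENS_PER_DAY = 100e12        # "100 trillion tokens per day"
BYTES_PER_MONTH = 77e18        # "77 exabytes of data per month"
SESSIONS_PER_YEAR = 1.3e12     # "1.3 trillion sessions per year"

tokens_per_session = TOKENS_PER_DAY * 365 / SESSIONS_PER_YEAR
bytes_per_session = BYTES_PER_MONTH * 12 / SESSIONS_PER_YEAR

print(f"~{tokens_per_session:,.0f} tokens per AI session")
print(f"~{bytes_per_session / 1e6:,.0f} MB per AI session")
```

Roughly 28,000 tokens and 700-odd MB per "discrete AI engagement", if the three headline numbers are mutually consistent – which gives some sense of why Nokia frames this as a workload shift rather than a capacity bump.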
Networks follow compute, he said – from scale-up (servers to racks) to scale-out (to data centres) to scale-across (between AI factories; where “the unit of compute in AI is bigger than just one data centre [and requires] multiple interconnected data centres”). Hotard stated: “Compute and networking live in a trade-off, and [in] balance.” This is essentially what Lumen talked about last week at its Investor Day (February 25) – about growth for smart networks, east-west, between AI factories, and not just north-south, cloud to user. It was broadly the same presentation, just spun for MWC. You wonder whether Lumen or T-Mobile, say, is more important for Nokia. Why choose, right?
Mobile 5G/6G is (often) for access, and crucial; scale-across is mostly regional fibre (per Lumen’s build, Nokia’s other supply). Hotard said: “At every level, from the server to the AI factory, we’ve had innovation in connectivity – in terms of the scale-up network, how we access memory in the server or rack, the scale-out network, how we build data centers out of racks, and the scale-across network with AI factories. It’s already happening… We see this in our optical systems and our IP switching and routing platforms… AI [is] reshaping the architecture of the network… [and] driving tremendous increases in investment in transport networks… The network is changing.”
But at MWC, the instruction from Nokia is about how to make 5G ‘AI-ready’ and 6G ‘AI-native’. Fatter pipes won’t cut it this time, said Hotard. “AI brings a fundamentally different traffic path… we’re not just talking about more capacity… It’s easy to say it is uplink constrained or downlink constrained, [but] it’s not about that.” Latency and reliability (URLLC, per the early 5G promise) are not enough for AI, he said. What’s the point of a 5G slice if bursty traffic messes with the mechanics, busts the SLA? It would mean over-compensation, and no spectrum left. It is not about five/six nines, said Hotard, but determinism to meet real-time demands (as discussed forever in private 5G).
AI requires a “structural shift” from policy-driven SLAs to deterministic connectivity. It is not about layering AI on top, but embedding it everywhere. Hotard explained: “I can’t afford to just leave a huge slice open in anticipation of peak demand. And if I underscope it, I won’t serve or deliver the SLA that’s needed. This is why we need a different network. The networks of the past were built on a different set of principles. They were built on SLAs… Where we’re headed is not just about five-nines or six-nines (99.999 percent or 99.9999 percent reliability). It’s no longer about peak capacity and traffic planning… It’s about understanding the devices that are coming and delivering deterministic connectivity.”
He added: “This is a structural shift in what’s coming for access networks.”
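The slicing trade-off Hotard describes can be sketched in a few lines – a toy model (this article's illustration, not Nokia's method): size a slice for peak and most of the reserve sits idle; size it for the average and bursts bust the SLA.

```python
# Toy model (illustrative, not Nokia's method): static slice sizing
# versus bursty AI demand. Units are arbitrary capacity per interval.
demand = [10, 12, 95, 11, 9, 88, 13, 10, 92, 11]  # bursty trace

peak_reserve = max(demand)                 # slice sized for peak demand
mean_reserve = sum(demand) / len(demand)   # slice sized for the average

idle = sum(peak_reserve - d for d in demand)        # reserved but unused
violations = sum(d > mean_reserve for d in demand)  # intervals busting the SLA

print(f"peak slice ({peak_reserve} units): {idle} unit-intervals idle")
print(f"mean slice ({mean_reserve:.0f} units): {violations}/{len(demand)} intervals violated")
```

With this trace, peak provisioning leaves most of the reserve idle, while mean provisioning fails every burst – which is the dilemma a deterministic, demand-aware scheduler is meant to dissolve by admitting each request against live capacity rather than a static reservation.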
Which was the way into Nokia’s (and MWC’s, effectively) discussion about AI in the core network, and also in the RAN – retrofitted into the architecture in the 5G era, embedded in it for the 6G one. Nvidia is everywhere in this story, of course (and MWC is already Nvidia’s show, arguably). Hotard said Nokia’s deal with Nvidia has been misread – that it’s not about GPUs for “spare capacity” but for managing traffic. “We’ve been working hard since we made the announcement with Nvidia in October, and I want to make a few points clear because a lot of things get covered and discussed. Let me just land the facts: this is not about putting in a GPU to leave excess capacity for intelligence.
“This is about building a dynamic programmable radio architecture… It’s about optimizing token delivery, optimizing performance, managing ROI, and managing energy. And at its foundation, it recognizes that at a minimum… it has to perform consistently with… existing 5G networks. That’s the innovation principle.” And so its AI-RAN story – with new partners in tow (BT, Elisa, NTT DOCOMO, Vodafone) – is about AI at the edge, and dynamism to run it north and south, in and out of the scale-across network. Hotard spoke about physical AI at the edge as a tipping point, and of embedding AI in “every layer” of the network – and not just on top – to be able to “constantly realign”.
Which is how AI will work in 5G and 6G. He said: “A robot [at the edge] needs a token, information, intelligence. It is going to request that of the access network, and the network needs to optimize how it delivers the intelligence back… All of a sudden, our central office may be an execution engine for [edge] compute… In some cases, in dense areas, compute could sit in the RAN and provide low latency access – for a first responder, say, trying to address an emergency with real-time AR… The architecture [cannot] be hierarchical; it has to be flattened and dynamic to… integrate compute, control, and connectivity.” There was reference to its new edge work with Telefónica.
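Hotard's placement argument – compute in the RAN versus the central office versus a distant data centre – is ultimately a latency budget. A rough sketch with illustrative distances (the kilometre figures are this article's assumptions, not Nokia's): signals in fibre propagate at roughly 200,000 km/s, about 5 µs per kilometre each way.

```python
# Rough latency-budget sketch; distances are illustrative assumptions,
# not from the keynote. Fibre propagation is ~5 microseconds per km.
US_PER_KM = 5  # signal speed in fibre is roughly 200,000 km/s

def fibre_rtt_ms(km: float) -> float:
    """Round-trip propagation delay over `km` of fibre, in milliseconds."""
    return 2 * km * US_PER_KM / 1000

for site, km in [("RAN site", 2), ("central office", 50), ("regional DC", 500)]:
    print(f"{site:>14}: {fibre_rtt_ms(km):.2f} ms RTT (propagation only)")
```

Propagation is only the floor – real budgets add radio scheduling, queuing and inference time – but it shows why real-time use cases like the first-responder AR example pull compute down into the RAN rather than back to a regional data centre.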
Hotard also looped in most of Nokia’s other announcements – including work with Orange, Du, and AWS around agentic network AI, and, most interestingly, its collaboration with Ericsson on autonomous networks; plus the rest, as listed below.
He rounded off: “The traffic profile is changing. This isn’t new. We’ve seen this through every evolution. We see this already with AI in the data center. It’s starting to affect our transport networks, and that innovation will persist all the way down to the fixed and ultimately the mobile edge. The workload shift is what drives this transformation, because ultimately, when the workload changes, the network has to change.”
Nokia expands with TIM Brasil to deliver next-generation AI-ready 5G network with Nvidia
Nokia and Deutsche Telekom collaborate to advance AI-native and open RAN innovation
Nokia and Ericsson strengthen cooperation to accelerate towards autonomous networks
Nokia accelerates AI-RAN momentum with new partnerships driving path to AI-native 6G
Nokia expands network portfolio for premium performance in the AI-RAN era
Nokia to deploy AI-ready network solutions in Telefónica’s edge data centers in Spain
Nokia and AWS show industry-first agentic AI-powered network slicing with Du and Orange
