While most of the big talk at MWC is about 5G and 6G, the most urgent AI infrastructure work is in fibre-heavy data centre interconnects. Cisco, among others, is capitalising on this east-west traffic surge, with mobile and edge networks positioned as a critical mid-term component in the AI networking stack.
In sum – what to know:
Compass points – Most investment in AI network infrastructure is east-west for training workloads in hyperscale data centres; north-south metro and mobile systems are a downstream concern.
Telecom vendors – Cisco, among others, is seeing growth in optical transport systems for interconnects; it also has enterprises covered for when mid-term metro and mobile systems are ready.
Enterprise inference – As distributed AI agents proliferate, inferencing workloads will dominate, driving investment in metro fibre, mobile access, and the enterprise edge for lower-latency AI services.
Here’s a short (getting longer) retelling of some of the points raised in an excellent roundtable event with Cisco at MWC on Monday (March 2) – which captured a subplot of the show, and of the whole telecoms industry: that the earliest work and quickest chance to support AI infrastructure does not have much to do with mobile access networks at all. It is about fibre, instead, and mostly about new fibre builds between new data centres. Yes, this is MWC, and much of the agenda on the stages and stands is about new mobile networks for new AI workloads with new traffic patterns.
Nokia talked on Sunday night (March 1) about how AI traffic is spiralling upwards in haphazard bursts on mobile networks, where AI is engaged in more than half of cases (mostly for consumer LLM enquiries to the cloud). It requires an infrastructure reset, it said, mixing in talk about AI RAN – to be retrofitted into the architecture of the 5G era and embedded in that of the 6G one – and pursued open-fisted with Nvidia, and mob-handed with big operator groups. But telcos have been burned by 5G, set down in 3GPP pre-AI, and Nokia is making money fastest from fibre.
This is because the real money, right now, is flowing east and west, between AI factories, rather than north to south, cloud to user, via long-haul and metro transport networks and, finally, fixed and mobile access networks. This was the discussion at PTC in January – which, like everything, has swelled into a broad AI infrastructure show, but retains its original focus on fixed telecoms. It was the story from Lumen’s investor day last week: a fibre provider, practically back from the dead. It is why Ciena has just (yesterday, March 5) posted a 33 percent jump in its first-quarter revenues.
Ciena, which like Nokia sells fibre optic systems, has raised its guidance once again – already 28 percent higher for the second quarter. Incidentally, Ciena was at MWC as well, talking about the same (coverage to follow), and also about the mid-term discipline required to flow hyperscale-grade cutting-edge gear into traditional telco infrastructure. But again, this is for fibre systems (this is Ciena), just below the big-money scale-across build-out. Making mobile work for AI is a mid-term job, and maybe not such an urgent case – which might be a helpful way to frame all the MWC noise.
Which is not to say MWC has jumped the gun; the work on GPU-accelerated radio systems, agentic core software, and whatever-level autonomy is urgent, too. It is just that, amid the bluster, the elephant in the Fira this week is that cellular-based AI is a dicier business, and hardly the whole story – with neither the momentum nor the money going into data-centre fibre interconnects right now, and a tactical job to convince its protagonists to invest again, heavily and urgently – somewhere between the 5G NSA tragicomedy, the 5G SA and 5G-A live-show, and the 6G sci-fi stuff.
The agentic rush
Which is a long preamble to the Cisco roundtable. But the thing is, in the backrooms at MWC, all of this was very plain, and Cisco is serving both sides, of course. Jeetu Patel, president and chief product officer at Cisco, started by talking about the AI effect on all networks, effectively, and how the narrative is spinning its way. “If you think about it, orders of magnitude: for every human you might have 10 to 100 to 1,000 agents – which means close to a trillion agents out there, working 24/7, which is equivalent in [new network] capacity to three trillion humans.”

He added: “That will require infrastructure.” Cisco is the one to support its delivery, he said. Not just in mobile access networks, whenever operators get themselves together, but from the hyperscale scale-across build-out today, and right on down the network infrastructure stack.
“You might have noticed, Cisco almost feels like a different company over the past 18 to 24 months. There’s a lot of momentum… Our ability to work with hyperscale clouds, neoclouds, sovereign clouds, service providers, and enterprises means we have one of the most comprehensive views.”
True that; even if lots of network vendors might claim similar. But the Cisco view seems like a good one. Its message to MWC was very simple: pull your socks up. Jeetu said: “The pace, this exponential curve, is almost like a vertical line. So it is very hard to think about it in a predictable way – because it’s moving so fast. But you can’t stay on the sidelines and wait for it to mature. For telcos, there is an entirely different opportunity – not just around the value they bring [to operations], but the modernization they can [also] provide to companies through a global connectivity fabric.”
Which is the other theme at MWC, ultimately: that, even if the urgent scale-across is mostly a hyperscaler business (where the rich non-telcos commission and operate AI networks to train distributed models between data centres), the rest of the telco ecosystem, across transport and access, land and sea, on fibre and cellular, is required to bring AI to work in cities and enterprises on newly performant and controllable networks. (Per above: mobile is the last-mile touchpoint for AI, and totally crucial, but also fragmented and fraught, and subservient to fibre right now.)
“These agents… require different infrastructure,” said Jeetu, with an anecdote as well about how Cisco is using Claude Code from Anthropic and Codex from OpenAI. “We have our first product that is 100 percent written with AI. By the end of the year, we will have at least a half a dozen products 100-percent written with AI. By the end of next year, 60 to 70 percent of our products will be 100-percent in AI,” he said. “AI is building AI at this point in time. There is no lag. If the capabilities aren’t there, they will be in three months. It takes longer to plan than to build.”
It requires a “very different mental model”, he said, before repeating the message for telcos, and really for the whole global economy: “If companies cannot handle the pace, they will be left behind.”
Edge-ways migration
But we should get back to the premise of the piece, about how MWC presents a skewed view of AI networks – just on the grounds that, even if the opportunity is a layered one, the most immediate value and investment is in fibre-based transport systems, as driven by hyperscalers. Cisco confirms this; in response to RCR at MWC, Jeetu commented: “The bulk of the spend [is] in all these AI clusters; 95 percent of GPUs are going to massive hyperscalers to train [models] or build infrastructure for the model builders.”
Most of the networking spend is on fibre-heavy work to scale-up, scale-out, and scale-across GPUs in data centres – within racks, within data centres, between data centres. Hence the good business for the likes of Cisco, Ciena, and Nokia – notably in coherent pluggable optics, plus other optical transport systems, and, variously, packet and aggregation routers, switches, and edge platforms. “Until we see that shift – either to service providers deploying inference services at the edge or to consumption from enterprises directly – the bulk is for training infrastructure.”
That shift from training to inference, with distributed agents attached, is when the metro fibre network, the mobile access network, and the enterprise edge platform are engaged. “Scaling laws persist. Billions are spent to train the next model because there is a step function improvement in the model… But at some point, as agents get deployed, the training volume will dwarf compared to the inferencing volume. As the inferencing volume aligns with [distributed] agentic workflows, companies will seek efficiency with every basis point of margin they have,” said Jeetu.
He added: “At some point, they might want to bring (build or co-locate) data centre [capacity] just for the margin, if nothing else.” And so, again: mobile and edge networks matter, but the fibre build-out is urgent. Mid-term, however, they will be the critical touchpoint for enterprises, and, upgraded and optimised, monetisable as well. The rise of agentic AI should make telco minds race, the message goes. “Inferencing growth will be much more than training growth in three to five years.” Cisco’s portfolio is primed for data centres, but it has enterprises covered, it said.
At the roundtable, Cisco mentioned a full stack and an on-prem platform (Unified Edge) for enterprises. Jeetu said: “They will build-out their own, of course, [but] virtually every company will be hybrid eventually with capacity from public and private cloud data centers – for sovereignty, margins, any of that. We will provide hyperscale-class reliability and performance to them. The network ASIC in our 8,000 series switches and routers, largely for hyperscale, is now available with our Nexus operating system and devices – so enterprises get to benefit.”
In networking, the east-west hyperscale technologies will cascade – in the networks, for the data, down to enterprises. Early on in the roundtable, Jeetu set out Cisco’s stall, and it works as the endpoint here. “There is a tremendous opportunity for tiered services not just around connectivity, but also around security and observability – for the entire stack, from how GPUs perform to how models perform, to how applications perform, and all the way to how the agents perform… Cisco is a critical infrastructure company for the AI era.
“We are going to provide low-latency, high-performance, energy-efficient networking. We are going to provide optics and optical systems to make sure these things are connected even through data centres that might be hundreds of kilometres apart. We will have our own silicon chips. We have our own operating system. We are going to invest very heavily in safety and security. We have one of the world’s largest security businesses, and we will double-down – observability not just for applications and networks, but also for AI. Which, in my mind, covers the entire stack.”
It was a good session at MWC, and there’s lots of discussion from it that hasn’t made the cut here – which RCR may revisit over the coming weeks.
