Telcos face both a challenge and an opportunity in handling the unique traffic patterns and demands of AI-generated content, especially at the network edge.
In sum – what to know:
Traffic challenge – uncached, on-the-fly AI content strains networks and bypasses CDN efficiencies, requiring new edge strategies.
Microservices foundation – cloud service-based architecture enables modular, agent-to-agent orchestration of AI tasks at the edge.
Data imperative – effective AI automation requires robust, real-time, and clean data pipelines to coordinate microservices at scale.
Note: This article is continued from a previous entry, available here, and is taken from a longer editorial report, which is free to download – and available here, or by clicking on the image at the bottom. An attendant webinar on the same topic is available to watch on-demand here.
5G is increasingly integrated with the fiber core, and will support advanced features like dynamic bandwidth allocation and network slicing to enable prioritized services over the wireless network. Which flows into a flipside discussion, briefly, about transport challenges associated with the nature of AI traffic, and not just the volume of it – and how telcos might overcome this, too, with network computing and programmability in edge locations.
Stephen Douglas, head of market strategy at Spirent, points out that AI-generated content – particularly video, but all sorts of it – presents a unique challenge for networks because it mostly goes uncached: it is created “on the fly”, based on user input or live data, so there is no pre-existing copy that can be stored for faster retrieval – and Content Delivery Network (CDN) mechanisms in wired and wireless networks rely on caching to deliver content efficiently across servers.
“It means operators might find themselves cut out of the value chain,” says Douglas. “Most [AI content] generation happens in a central data center, and, without caches and CDNs, traffic just flows across the network. The question is how that content gets onto the device. Some handset vendors want more processing on the device. Which opens a debate about how much is needed at edge locations versus central data centers.”
This discussion grows more urgent with delivery of higher-quality critical-grade professional content – versus TikTok clips. The trick for telcos is to drive AI to the edge, he says, and to make it controllable in programmable networks. “If they can host the AI at the edge, where the video is generated, then they can optimize latency and traffic, and offer Service Level Agreements (SLAs) against those services – and position themselves directly in that chain.”
As another sideways tangle, deep in the weeds, this whole edge AI game only plays out for telcos if their systems are flexible and scalable. Lucky for them, 3GPP introduced a service-based architecture (SBA) in Release 15 (2018) of the 5G standard that defined a cloud-native microservices-driven system. “AI cannot be a monolith,” says Fatih Nar, chief technologist and architect at Red Hat. “It must operate as microservices – where each model delivers distinct value and interacts seamlessly with others.”
Back to the discussion, from earlier, about distributed agents in distributed infrastructure, moving to the edge for reasons of performance and efficiency: “One agent fixes a latency problem, and talks with a subscription agent to be sure the customer is on the right plan. So we have this agent-to-agent dialogue, leveraging external data points and capabilities via APIs, DB queries, SQL queries. Which is only possible with a microservices architecture,” says Nar.
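To make Nar’s example concrete, here is a toy sketch of that agent-to-agent exchange: a latency agent detects a problem, then consults a subscription agent (standing in for an external API or database query) before deciding what to do. All class names, field names, and thresholds are illustrative assumptions, not any vendor’s actual framework.

```python
from dataclasses import dataclass, field


@dataclass
class SubscriptionAgent:
    # Toy "subscriber database" keyed by customer id; in a real deployment
    # this lookup would be an API call, DB query, or SQL query.
    plans: dict = field(default_factory=lambda: {"cust-1": "basic", "cust-2": "premium"})

    def plan_allows_priority(self, customer_id: str) -> bool:
        return self.plans.get(customer_id) == "premium"


@dataclass
class LatencyAgent:
    subscriptions: SubscriptionAgent
    threshold_ms: float = 20.0  # assumed SLA threshold for the sketch

    def handle(self, customer_id: str, measured_ms: float) -> str:
        if measured_ms <= self.threshold_ms:
            return "no-action"
        # Agent-to-agent call: confirm the customer is on the right plan
        # before reconfiguring anything on the network side.
        if self.subscriptions.plan_allows_priority(customer_id):
            return "apply-priority-slice"
        return "suggest-plan-upgrade"


agent = LatencyAgent(SubscriptionAgent())
print(agent.handle("cust-2", 35.0))  # premium customer, latency breach
print(agent.handle("cust-1", 35.0))  # basic customer, latency breach
```

The point of the sketch is the shape of the interaction, not the logic: each agent owns one narrow job, and the decision emerges from their dialogue over well-defined interfaces – exactly what a microservices architecture makes possible.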
“The same SBA elasticity is central to agentic frameworks, where agents discover one another, and capacity shrinks and grows on demand, and embraces security, privacy, governance.” He cites Anthropic’s Model Context Protocol (MCP), introduced last November, as an open standard for secure two-way comms between AI models and external sources, and IBM’s similar Agent Communication Protocol (ACP) framework, announced in March.
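For a flavour of what those protocols look like on the wire: MCP messages follow the JSON-RPC 2.0 format. The snippet below builds a request in that shape – the `tools/list` method name appears in the published MCP specification, but treat the payload as a simplified illustration, not a reference client.

```python
import json


def mcp_request(req_id, method, params=None):
    # Assemble a JSON-RPC 2.0 request envelope of the kind MCP uses.
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)


# Ask an MCP server which tools it exposes (simplified illustration).
wire = mcp_request(1, "tools/list")
decoded = json.loads(wire)
print(decoded["method"])
```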
“All of this is going at a really fast pace,” he says. “It is a fast train, developing open source technologies – so AI can be implemented in a scalable way.” But just to pause, and think: in such a modular architecture, where the network is a set of loosely coupled API-connected microservices – variously handling session management, authentication, policy control, and now AI agents as well – the volume, variety, and complexity of operational data ramps up quickly.
For these microservices to work effectively – especially in a distributed setup across fiber, edge, and cloud – they rely on real-time, clean, and accurate data to make decisions, automate functions, and coordinate between services. Dirty data makes it hard for AI-driven orchestration, automation, and optimization to function. AI models, used to improve microservice behaviour (whether optimizing bandwidth, predicting faults, adjusting slices) need clean data.
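What “clean before you train” means in practice can be shown with a minimal pass over the kind of telemetry records a microservice might emit. The field names and validity rules here are assumptions for the sketch – real telco pipelines are far more involved.

```python
def clean_telemetry(records):
    """Drop or normalize dirty telemetry records before they reach a model."""
    cleaned = []
    for r in records:
        latency = r.get("latency_ms")
        # Discard records with missing or physically impossible measurements.
        if latency is None or latency < 0:
            continue
        # Normalize: coerce types, fill a missing cell id with a sentinel.
        cleaned.append({"cell": r.get("cell", "unknown"), "latency_ms": float(latency)})
    return cleaned


raw = [
    {"cell": "A", "latency_ms": 12.5},
    {"cell": "B", "latency_ms": -1},   # sensor glitch: negative latency
    {"latency_ms": 30},                # missing cell id
    {"cell": "C"},                     # missing measurement entirely
]
print(clean_telemetry(raw))
```

Only two of the four records survive – which is the analysts’ point: an AI model optimizing bandwidth or predicting faults from the raw feed would be learning from glitches.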
Nelson Englert-Yang, industry analyst at ABI Research, rejoins: “That is one of the most central components of this entire discussion – about where telcos get their data and how they organize it. Because telco data is very messy and it is useless if it’s messy. So they need very robust processes for gathering data, cleaning it, and then training models if they’re training models themselves in order for it to be useful. And there are many kinds of discussions surrounding that as well.”
Which, as hinted, is a(nother) discussion for another day. Here, the conversation flips back again to ecosystem roles, drawing on both Douglas’ point about control of AI traffic at the edge (and the opportunity for telcos to prioritize, optimize, and monetize guarantees about performance and security) and Nar’s point about orchestration of AI agents at the edge (and the opportunity to host smaller AI models).
Such a distributed microservices architecture also enables telcos to broker or federate AI models and services, reckons Douglas – because they have the edge presence and the architectural tools to deliver AI as a scalable, managed service. All of a sudden, by accident and design, their old multi-access edge computing (MEC) shtick becomes something else, potentially – where they can bundle AI services, running on their edge assets, with their broader enterprise propositions.
“A number of telcos are positioning themselves as brokers or federators of AI to host big foundation models or partner on industry-specific ones, and to abstract all of that complexity for enterprises as part of an existing service offer. So the pitch is: ‘You don’t need to worry about what’s behind the scenes; this gives you the right outcome, and comes with your connectivity package.’ Which is a unique role, especially because big enterprises aren’t the target.”
To be continued…