From call centres to back-office systems, telcos are navigating the shift from predictive machine learning to generative AI, balancing innovation with caution in critical environments.
Model adoption – there are three phases of model adoption: from third-party tools, to on-prem solutions, to hybrid models with domain data.
Slowly, slowly – generative AI opens new frontiers in the front- and back-office, but critical network infrastructure remains mostly off-limits.
Hybrid approach – modular, hybrid architectures are enhancing internal operations and laying groundwork to offer AI services externally.
Note: This article is continued from a previous entry, available here, and is taken from a longer editorial report, which is free to download – and available here, or by clicking on the image at the bottom. An attendant webinar on the same topic is available to watch on-demand here.
The references to model training (see previous entry) are important, of course. Fatih Nar, a chief technologist and architect at Red Hat, describes the journey with model-making in three phases, as travelled with previous employers: from reliance on external third-party models, to more tightly controlled on-premise solutions, to a customised hybrid integrating proprietary data. We have the map-view, again.
He cites TM Forum’s network automation framework, which runs from level-zero caveman mode to some kind of level-five zero-touch autonomy: “At Verizon, we used Google’s Contact Center AI suite to improve call centre operations. That was a level-one achievement. At Google, all the discussion was about how to move some parts on prem – which is level two. The third phase, which started with early ChatGPT models, was about plugging in your domain databases, your own wisdom, into an external pre-trained model. That is level three.”
Robert Curran at Appledore Research chimes in: “We’re seeing a better mix of large-generic and small-specific models – where the first provides natural language interaction and a general-purpose toolkit, and the second roots it in the enterprise or industry, with its own processes and policies. The industry is experimenting. There is discussion about the utility and cost of an industry model. Does that have meaning and value? Who would own such a thing? There’s no obvious answer yet.”
The automation framework, referenced above, is useful for framing the conversation – levels one through three – as telcos seek more varied and expansive AI solutions. The reference to OpenAI’s generative pre-trained transformer (GPT) models – and by extension to other generative AI models (from the likes of Google, Microsoft, Meta, Anthropic, Mistral, DeepSeek, and so on) – is important, too, because they are advancing rapidly and cleaving open a whole new branch of AI.
Curran says: “That’s the new stuff, about creating something from existing source materials. ML says what happens next, and this opens up a new generative dimension. The crossover into back-office functions and field work is to [sort and summon] all of that accumulated data in how-to guides, manuals, tariff plans, rules and regulations – so the front office can answer customer queries more easily. The answers are there, but finding them is difficult.”
Indeed, generative AI is all the rage. As referenced at the top of the piece, ABI Research says telcos will spend $47 billion on it by 2030 – up from virtually nothing today (2025). Nelson Englert-Yang, analyst at the firm, says: “Early adopters have been prioritizing areas of clear returns with lower risk. So far this has largely been business deployments such as customer care. But we’re also seeing… a broader set of use cases and adoption of MLops frameworks.”
They are looking outwards, too – in ways that will be discussed later in this report. A new study by Nvidia says 84 percent of telcos plan to offer generative AI solutions externally to customers; 52 percent will offer it as a software-as-a-service solution, and 35 percent will offer it as a developer platform, including for compute services. But most of their efforts are focused on internal functions, like customer care, a step removed from the network itself.
Englert-Yang says: “It is slow; gradual. But that is because of its critical nature – in terms of security, reliability. There is a great risk to deploy generative AI in the core network, say. We’re not going to see that for some time. It is mostly hands-off. It needs a shift in mindset, and even organisational structure, and more experience and confidence. Which is why it’s mostly concentrated around OSS/BSS and some higher-level applications.”
Curran says the same: “It is super early. But it is caution, not resistance – because of the network, ultimately, which is critical infrastructure. Automation – let alone autonomy; two separate things – is only being progressively introduced. There’s unhelpful language about telcos being super conservative. We’re dealing with something very serious here. So it’s right to be careful about it, just as it’s also right to want it to happen in the most efficient way possible.”
Away from the network, progress is decent. A human service agent in a call centre can now interrogate an NLP query engine to extract relevant information from scattered digital libraries in adjacent OSS/BSS systems – as Verizon tells it. Previously, even with its ML “match-making”, staff had to know and find information in the back-end system (“on five or six screens”), and liaise with domain experts. “With AI, we can level-up the rep.”
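The query-engine pattern Verizon describes – retrieve the right passage from scattered documentation, then let a model summarise it for the agent – can be sketched in miniature. Everything below (document names, text, function names) is an illustrative toy, not Verizon’s actual system; a production version would use embedding-based semantic search rather than keyword overlap.

```python
# Toy sketch of retrieval over scattered documentation: rank passages by
# keyword overlap with the agent's query, so the best match can be handed
# to an LLM to summarise. All content here is illustrative, not Verizon's.

def tokenize(text):
    """Lowercase bag-of-words; real systems would use embeddings instead."""
    return set(text.lower().split())

def retrieve(query, documents, top_k=1):
    """Return the top_k (name, passage) pairs ranked by keyword overlap."""
    q = tokenize(query)
    scored = sorted(
        documents.items(),
        key=lambda item: len(q & tokenize(item[1])),
        reverse=True,
    )
    return scored[:top_k]

# Stand-ins for the how-to guides, tariff plans and manuals in OSS/BSS.
library = {
    "tariff-guide": "international roaming tariff rates apply per day of travel",
    "router-manual": "restart the router by holding the reset button ten seconds",
    "returns-policy": "devices may be returned within thirty days of purchase",
}

best = retrieve("how do I restart my router", library)
print(best[0][0])  # the passage a model would then summarise for the rep
```

The design point is the division of labour: the retrieval step grounds the answer in the operator’s own documents, which is what keeps the generative step from inventing one.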
Steve Szabo from Verizon Business comments: “The AI can siphon through tens of thousands of pages of information very quickly. We are seeing a high rate, in the 90-plus percentile, in terms of the accuracy of the responses.” How should we grade the AI in these enhancements? “It is between early AI and generative AI,” he responds. The earlier reference to raising confidence in AI is clearly vital. Generative AI, as we know, is prone to lie – to make up answers if it does not know them.
It is like a puppy dog, as someone else put it – eager to please; retrieving balls, and socks, and slippers – when you’d just like it to fetch the paper (or make a cup of tea). Generative AI is known to ‘hallucinate’ (fabricate and lie) and is prone to ‘drift’ – of data, concepts, models. Like a puppy (which, properly managed, will become a whip-smart police hound), it needs to be fed and watered, and trained over and again. That takes human ‘oversight’ – a subject that (like lots here) could fill a separate report, but deserves brief coverage here.
To be continued…