In sum – what to know:
Clean data drives AI intent – by breaking telcos’ silos, creating global visibility, and enabling contextual models; and by providing a unified foundation for AI models to understand their desired outcomes.
AI intent drives automation – by shifting from manual telco scripts to declarative outcomes across the network; and by translating their high-level goals into self-directed actions across network layers.
AI automation drives growth – by raising telcos’ service delivery, customer experience, and order volumes; and by driving their innovation, service offerings, and measurable business impact.
It is shameful, perhaps, but a new RCR study about telco AI (Using AI and Supporting AI – out next week) makes a footnote, effectively, of the crucial point that clean data is everything. It is made at the death; an obvious statement as a final reminder, but one that also gets forgotten in the excitement and confusion. Data (and people) have to be organised first – the message goes. Because in the end, after all the strategic knowhow and guesswork – about applications and architectures, supply and demand, sure-bets and wild things – the killer for AI is the data.
If it’s right, the AI works, and business transforms; if it’s wrong, it doesn’t, and a whole lot of money is wasted. And here endeth the lesson – type of thing. A quote follows in the report from Nelson Englert-Yang, an analyst at ABI Research. “That is one of the most central components of this entire discussion – about where telcos get their data and how they organize it,” he says. “Because telco data is messy and it is useless if it’s messy. So telcos need robust processes for gathering it, cleaning it, and then training it for it to be useful.”
And that’s about the sum of it in the report – regrettably or not, because the narrative flow takes its own course. But yesterday (May 7) at FutureNet World in London, Blue Planet, the digital-change division of US fibre outfit Ciena, took to the stage to say the same, and expand on the point in forceful fashion. If telcos want to get to level-four (‘Level 4’) autonomy in their networks (meaning: pervasive AI, minimal human intervention), as defined by TM Forum and adopted by analyst firms, then they need to sort out their data, it said – first and fast.
Everything else follows from there: breaking data silos, orchestrating data flows, building digital twins, and unleashing intent-based data networks – where complexity is reduced, teams are unburdened, and some kind of autonomy is enabled. Kailem Anderson, vice president for portfolio and engineering at the firm, said: “The key… is to bring intent and declarative models with clean and structured data and [clean and structured] data models. [Because] then you have a foundation to apply these cool agentic and generative AI use cases.”
In other words, if your data is ‘clean’ – meaning accurate, consistent, and properly formatted – then your data can be trusted, structured, and shared, and your data models can be logical, contextual, and stateful. And then telcos can do clever things – and maybe make mad-cap business dreams come true. “When you have a data model that is context aware, topology aware, stateful, and relationship-aware, then you can untap the power of generative [and agentic] AI,” said Anderson. Again, it is 101 stuff, but it was neatly told on stage at FutureNet World.
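As a rough illustration of what ‘clean’ might mean in practice, the sketch below checks an inventory record against a simple schema – present fields, right types, consistent units, and the state and topology context that make a record stateful and context-aware. The schema and field names are assumptions for the sketch, not any real telco data model.

```python
# Hypothetical sketch: a 'clean' inventory record has every required
# field, with consistent types and units, plus state and topology
# context. Schema and field names are illustrative assumptions.

RECORD_SCHEMA = {
    "node_id": str,        # consistent identifier
    "site": str,           # topology context
    "state": str,          # stateful: e.g. "planned" or "active"
    "capacity_gbps": int,  # consistent units (gigabits per second)
}

def is_clean(record):
    """A record is 'clean' if every schema field is present with the right type."""
    return all(
        field in record and isinstance(record[field], ftype)
        for field, ftype in RECORD_SCHEMA.items()
    )

good = {"node_id": "n1", "site": "LON", "state": "active", "capacity_gbps": 100}
bad = {"node_id": "n1", "site": "LON", "capacity_gbps": "100G"}  # no state; string units
```

A record like `bad` – missing its state, with units baked into a string – is exactly the kind of messy data that cannot be trusted, shared, or fed to a model.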
He zoomed out to explain the role of declarative intent-based networking in level-four autonomous networks – where you declare the outcome and set the intent, rather than specifying the technical steps to achieve it. As it stands, most networks rely on imperative automation – manual scripts, integrations, and oversight. “Let’s be honest,” he said. “Intent-based models are foundational. Modern DevOps is built on it. Cloud infrastructure management has inherent intent-based and declarative models as a part of it. We just need to apply it to the network.”
He went on: “The long descriptive scripts our industry is built on for automation, which may have been successful in the past, will be unscalable as we move to level-four autonomous operations. Intent must be captured from the users and actuated across all layers in the network business. It needs to be reconciled to ensure the desired state matches the operational state. From a process standpoint, it means the industry has to make profound change – which is driven through intent, and not manually in the network. This is going to be very important to achieve our goals.”
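The contrast Anderson draws – imperative scripts versus declared intent reconciled against the network – can be sketched roughly as follows. The data structures and function names are illustrative assumptions, not Blue Planet’s actual interfaces.

```python
# Illustrative sketch only: imperative automation scripts every step,
# while a declarative model states the outcome and a reconciliation
# loop keeps the operational state matching the desired state.
# All names and structures are hypothetical.

# Imperative style: the operator encodes each technical step by hand.
def imperative_provision(steps):
    steps.append("create-vlan 200")
    steps.append("assign-port eth0/1")
    steps.append("set-qos gold")
    return steps

# Declarative style: only the desired outcome per service is stated.
desired = {
    "svc-1": {"bandwidth_mbps": 500},
    "svc-2": {"bandwidth_mbps": 100},
}

def reconcile(desired, operational):
    """Compare the desired state with the operational state and
    return the corrective actions needed to close the gap."""
    actions = []
    for service, want in desired.items():
        have = operational.get(service)
        if have is None:
            actions.append(("create", service))
        elif have != want:
            actions.append(("update", service))
    for service in operational:
        if service not in desired:
            actions.append(("delete", service))
    return actions

# The network currently runs svc-1 at the wrong rate, plus a stale svc-3.
operational = {
    "svc-1": {"bandwidth_mbps": 100},
    "svc-3": {"bandwidth_mbps": 50},
}
```

Run continuously, `reconcile()` here yields an update for svc-1, a create for svc-2, and a delete for svc-3 – the check that “the desired state matches the operational state” which Anderson describes, with no hand-written script per change.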
And from there, Anderson presented a case study of sorts with an unnamed cable operator in North America, which has followed the Blue Planet script – to clean and sort data, break silos and cross sources, build a data fabric, create contextual models, feed automation models, and so on. The upshot is that its client has reduced its “order-to-cash cycle” (service design and provisioning) from around 45 days to a couple of days, and also seen its order volumes for optical and ethernet services jump by 300 percent and 500 percent, respectively. Or so the story goes.
But Anderson can tell it; the full transcript of this back-end part of his presentation is copied below.
“Let me give an example of the benefits of bringing data-intent and AI together: a North American cable operator, which, like most cable operators in North America, has grown over 20 years by acquiring various assets in different markets – to give it footprints in strategic business areas. It has a patchwork of OSS and BSS systems supported in each market. It has data silos in each market. It has data spanning planning, orchestration, and assurance that are generally not talking – and which it finds very difficult to stitch together to have a global view of things.
“What’s the implication? Very simply, this cable operator had design times of roughly 30 days and provision cycles of roughly 15 days – so order-to-cash cycles in excess of 45 days. It understood if it was going to be competitive, given the market pressure and dynamics we are seeing, that it needed to change all of this. It needed to break down these silos to get a global view and to weaponize its data. It adopted a mantra: automate or die. It understood the profound importance of automation to its business to drive out costs and deliver a better experience.
“So what did it do? It broke down those silos. It implemented a service inventory layer to pull information up from these data silos to have a global view. Once it had that global view, it had a foundation to reconcile what was in the network. So it discovered what was in the network across all its markets, and then reconciled between the planned state and the operational state, and then fed the active operational data back to the systems that needed it – the planning systems, automation systems, services assurance systems.
“What did this do? Because it had stateful data about what was going on in each of its markets, it was able to do very simple things like visibility checks. It was able to start doing pre-reservation on what was in the network – which was a foundation to offer bandwidth-on-demand type services. It then used this data to drive value-added use cases where it started linking operational data to its services assurance systems so, when it had alarms, it could understand the impact on the service path. Because it knew what was in the network, it was able to start doing active testing in the network.
“It started to shift its business from being reactive to proactive. It then kicked off a secondary transformation. Because it understood what was in its network, which moved the needle in terms of the services it offered, it introduced a services orchestration layer to start to feed intent from customers and move from an imperative-based model to a declarative-based model. It did a rip-and-replace of its services assurance system… to leverage the [live, total, consistent, clean] data to do predictive analytics on fault-based use cases to identify failures in the network before they happened. And then it started to apply that to its performance data so it could predict performance trends and apply policy back into the network to do closed-loop actions – basically getting it to level-three autonomy.
“All of this had a profound impact across its business. Design times went from 30 days to 20 hours for its optical services, and order volumes went up 300 percent. Provisioning times went from 12 days to 19 hours, and order volumes for its ethernet services skyrocketed 500 percent – because it was able to package its services differently.
“What is the next step? It wants to take what it has done for the network, automating layer zero through to layer three, and apply that to business services. It wants to stitch its business services layer with its underlay so it can do multilayer automation – AKA, truly getting to level-four autonomous operations.
“Level-four autonomous networks are achievable one step at a time. There are a few key considerations. The first is the role of a consistent data fabric layer to break down data silos. Consistent data across all domains and functions is key. It provides a foundation to stitch together planning, fulfillment, and assurance functions. Once you have that, you have the basics to deliver an AI-enabled OSS and introduce intent and agentic AI into your network… [And thereby] save time, get to market quickly, introduce new services, make customers happier, and increase the number of services you can deliver each month – all while reducing opex.”