
A foundational AI strategy is key for unlocking AI’s full potential in mobile networks (Analyst Angle)

At the Mobile World Congress (MWC) this year, virtually all industry players exhibiting at the show centered their messaging around Artificial Intelligence (AI), presenting modular Proofs of Concept (PoCs) and demonstrating how new AI applications can address specific pain points in isolation, including the following:

  • Infrastructure power efficiency
  • Enhanced network performance through resource optimization
  • Better customer experience management
  • Revenue assurance
  • The creation of new revenue streams

However, there was little discussion devoted to the benefits of implementing a holistic foundational AI framework applied across all layers of the telco network, from radio to the service layer, to fully realize AI’s potential.

This article discusses the advantages of implementing a holistic foundational AI strategy and compares it to the modular approach the industry is currently exploring. It elaborates on the challenges associated with such an implementation and examines the different options available to harmonize AI implementations across the entire telco network stack. Toward the end of this article, we will introduce the Telecom Foundation Model (TFM) proposed by Huawei at MWC last month and use it as a case study for how a holistic approach to AI can enable operators to deconstruct, analyze and address complex processes spanning diverse operational scenarios, infrastructure layers and use cases.

Beyond AI silos: Transitioning from modular to holistic foundational implementation

While already prevalent in current telecoms infrastructure, AI is evolving to tackle myriad use cases. Most often, the industry is implementing the technology as an add-on modular framework, whereby each AI model used is fine-tuned to enhance a particular use case in isolation. As the scope of AI expands to support a growing number of use cases, the efficacy of modular implementation faces mounting limitations:

  1. This approach limits the potential for comprehensive optimizations across the entire network.
  2. It risks introducing conflicts, redundancies, or suboptimal decision-making due to lack of end-to-end visibility.
  3. It makes it hard for mobile operators to manage and orchestrate the increasingly complex and fragmented AI implementations within their networks. Interoperability between the various models used poses yet another hurdle, exacerbating operational challenges.
  4. Operating the network’s AI through a modular lens hinders cross-functional collaboration and obstructs the seamless exchange of insights across the telco organization.
  5. Finally, because of the suboptimalities that arise when AI applications fail to integrate with one another, cost inefficiencies loom large over a modular AI implementation strategy, especially in the long term as telco AI applications accumulate.

Hence, two fundamental questions linger. Do the potential cost and energy savings facilitated by the modular AI approach effectively counterbalance the cost of the dispersed computing resources it relies on, typically made up of costly and energy-intensive equipment? Is a modular approach focused, or simply short-sighted?

In contrast, a holistic foundational AI strategy presents an alternative solution to address the questions above. It offers a panoramic view of AI implementation across the organization, alongside greater awareness of future AI demands on infrastructure. This holistic approach requires operators to improve transparency and flexibility in sharing data and compute resources; more fundamentally, it requires them to invest in future-proof infrastructure that is sufficiently advanced to meet the surging demands of AI at the organizational level, not merely the application level. If operators can find enough support to overcome these early challenges, a holistic approach promises harmonized AI implementation, fostering comprehensive optimizations. This strategy streamlines the scaling of AI capabilities, aligning seamlessly with network advancements toward 6G and the integration of emerging technologies like edge computing and sensor networks. It simplifies management, maintenance and upgrades, reducing both the operational complexity and the Capital Expenditure (CAPEX) associated with AI deployment. This approach serves as a catalyst for intelligent automation, optimization and innovative service enablement throughout the network and business.

Evaluating various options for implementing telco AI

There are several ways mobile operators can implement a comprehensive, end-to-end AI solution across their entire network. One approach is to develop in-house solutions, which requires robust internal AI expertise and substantial investment in AI talent, infrastructure and training data.

An alternative avenue is to collaborate with infrastructure suppliers, which enables mobile operators to deploy end-to-end, integrated AI platforms specifically customized for them. At MWC this year, Huawei, Ericsson, Nokia, Samsung and others unveiled their strategies to incorporate holistic AI into their infrastructure solutions while preparing for 6G. The vision behind their strategies is to create a unified AI hub formed of a library of multiple models targeting various use cases across the entire network. The hub is topped by an abstraction layer responsible for distributing AI prompts and workloads across multiple models, depending on the targeted use case. It is also supported by a single multimodal data library, so the AI solutions gain a comprehensive understanding of the bigger network picture to provide end-to-end optimization, intelligent automation and innovative service delivery in a harmonized way.
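To make the abstraction-layer concept concrete, below is a minimal Python sketch of how prompts and workloads might be routed across a model library backed by a shared data store. The class names, the registered use cases and the routing logic are illustrative assumptions for this article, not any vendor's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

# Hypothetical shared multimodal data library: in practice this would front
# network telemetry, logs, configuration and customer-experience data.
@dataclass
class DataLibrary:
    sources: Dict[str, Any] = field(default_factory=dict)

    def query(self, key: str) -> Any:
        return self.sources.get(key)

# Hub of specialized models, each registered against the use case it serves.
@dataclass
class ModelHub:
    models: Dict[str, Callable[[str, DataLibrary], str]] = field(default_factory=dict)

    def register(self, use_case: str, model: Callable[[str, DataLibrary], str]) -> None:
        self.models[use_case] = model

# Abstraction layer: routes each prompt/workload to the model that owns the
# targeted use case, so callers never address individual models directly.
class AbstractionLayer:
    def __init__(self, hub: ModelHub, data: DataLibrary) -> None:
        self.hub, self.data = hub, data

    def dispatch(self, use_case: str, prompt: str) -> str:
        model = self.hub.models.get(use_case)
        if model is None:
            raise KeyError(f"No model registered for use case: {use_case}")
        return model(prompt, self.data)

# Usage: two toy "models" standing in for energy-saving and fault-diagnosis engines.
hub = ModelHub()
hub.register("energy_saving", lambda p, d: f"sleep-mode plan for {d.query('site')}")
hub.register("fault_diagnosis", lambda p, d: f"root-cause hypothesis for: {p}")

layer = AbstractionLayer(hub, DataLibrary(sources={"site": "cell-0042"}))
print(layer.dispatch("energy_saving", "reduce idle power overnight"))
```

The design choice worth noting is that the shared data library and the routing layer sit above the individual models, which is what lets the platform optimize end to end rather than per use case.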

To implement this holistic AI approach efficiently, infrastructure suppliers must devise an innovative way of orchestrating the multitude of AI models within their ecosystem and avoid using those models in silos. Central to this holistic framework is an orchestration layer, indispensable for managing a multitude of decision points in real time. Furthermore, the orchestration layer must integrate seamlessly with legacy infrastructure and Operations Support System (OSS)/Business Support System (BSS) solutions to ensure smooth operation. However, orchestrating and managing a hub of AI models is an exceedingly complex task, one that surpasses the capabilities of current Service Management and Orchestration (SMO) tools.
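As an illustration of what such an orchestration layer must do continuously, the sketch below runs a toy observe-decide-act loop. The read_kpis, propose_action and apply_action functions are stand-ins invented for the example; a real deployment would integrate with OSS/BSS and SMO interfaces that are not modeled here.

```python
import time
from typing import Dict

def read_kpis() -> Dict[str, float]:
    """Pull current network KPIs from the observability stack (stubbed)."""
    return {"prb_utilization": 0.92, "energy_kwh": 410.0}

def propose_action(kpis: Dict[str, float]) -> str:
    """Ask the relevant AI model for a recommended action (stubbed policy)."""
    return "add_capacity" if kpis["prb_utilization"] > 0.85 else "enable_sleep_mode"

def apply_action(action: str) -> None:
    """Push the decision toward the network via the orchestration layer (stubbed)."""
    print(f"applying: {action}")

def closed_loop(iterations: int = 3, interval_s: float = 1.0) -> None:
    """Observe -> decide -> act -> re-observe: the cycle an orchestration layer
    must run continuously, in near real time, across many decision points."""
    for _ in range(iterations):
        kpis = read_kpis()
        action = propose_action(kpis)
        apply_action(action)
        time.sleep(interval_s)

closed_loop()
```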

The introduction of the foundational AI model by Huawei

Huawei’s TFM, announced at MWC 2024, stands out as a viable case study of the holistic AI strategy due to its broad scope, technical specificity and supporting investments. Huawei’s proposal entails establishing an intelligent central engine consisting of proprietary and third-party AI models. These models encompass a spectrum of functionalities, spanning generative AI, computer vision, natural language processing, recommendation systems and telco-specific models, all meticulously orchestrated by a foundational AI model.

Huawei’s TFM is based on a three-layer architecture that aims to deliver an optimized user experience, enhance network operational efficiency and productivity, and accelerate the deployment of innovative services.

The first layer (L0) is a hub of AI models containing a mix of open-source models, third-party models and Huawei proprietary models, ranging from simple regression or recommendation engines to more sophisticated large generative AI models.

The second layer (L1) contains telco-specific models. For Huawei, this second layer is founded on three main pillars:

  1. A high-quality corpus to fine-tune the accuracy of the large models used, whereby Huawei leverages the comprehensive telco expertise it has gained over 30 years of serving the mobile telecommunications market to enrich generic models with telco specificity.
  2. A comprehensive toolchain for automatic testing, evaluation and improvement of L0 in a closed loop, ensuring highly accurate training and inference of the Large Language Models (LLMs) used.
  3. A comprehensive agent that uses a cross-domain orchestration framework to harmonize collaboration among the various models used for specific use cases in a closed-loop fashion.

Finally, the third layer (L2) is formed of two main sub-layers: the “role-based copilots” agent, a tool designed to help employees and enhance internal efficiency, and the “scenario-based” agent, which aims to deconstruct complex scenarios into simpler, manageable frameworks to maximize the outcomes of collaboration between the AI models. This approach is extremely useful when troubleshooting the network for fault detection or prediction.
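The layered description above can be condensed into a structural sketch. The classes below are one interpretation of the publicly described L0/L1/L2 split, with toy models standing in for real ones; they are an assumption for illustration, not Huawei's actual software design.

```python
from typing import Callable, Dict, List

# L0: general-purpose model hub (open-source, third-party and proprietary models).
class ModelHub:
    def __init__(self) -> None:
        self.models: Dict[str, Callable[[str], str]] = {}

    def add(self, name: str, model: Callable[[str], str]) -> None:
        self.models[name] = model

# L1: telco-specific layer -- corpus-based fine-tuning, closed-loop evaluation
# and a cross-domain agent that coordinates the L0 models for a given task.
class TelcoLayer:
    def __init__(self, hub: ModelHub) -> None:
        self.hub = hub

    def orchestrate(self, task: str, model_names: List[str]) -> List[str]:
        return [self.hub.models[name](task) for name in model_names]

# L2: scenario-based agent that breaks a complex scenario into smaller tasks
# handed to L1 (role-based copilots would sit alongside it, employee-facing).
class ScenarioAgent:
    def __init__(self, telco: TelcoLayer) -> None:
        self.telco = telco

    def run(self, scenario: str) -> List[str]:
        subtasks = [f"{scenario}: detect", f"{scenario}: localize", f"{scenario}: remediate"]
        names = list(self.telco.hub.models)
        return [r for task in subtasks for r in self.telco.orchestrate(task, names)]

# Usage with toy models standing in for real ones.
hub = ModelHub()
hub.add("llm", lambda t: f"LLM answer for '{t}'")
hub.add("anomaly_detector", lambda t: f"anomaly score for '{t}'")
agent = ScenarioAgent(TelcoLayer(hub))
for line in agent.run("fiber-cut troubleshooting"):
    print(line)
```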

In summary, the layered approach of Huawei’s TFM and comparable vendor products enables mobile operators to deconstruct and analyze complex processes spanning diverse operational scenarios, infrastructure layers and use cases. These holistic AI platforms then orchestrate those processes within a unified, automated, closed-loop domain, enabling collaborative scheduling among the myriad models hosted within the AI library/engine and fostering holistic management and orchestration of the entire AI framework.
