In the early heyday of Open RAN, experts were predicting rapid growth and adoption worldwide. As the overall telecommunications market contracted, however, these predictions failed to fully materialize and initial 5G Open RAN deployments fell short of industry expectations. But now a second wave of Open RAN deployments is underway, with industry analysts forecasting that Open RAN revenues will account for five to ten percent of total RAN revenues in 2025.
This resurgence comes at an opportune time when today’s mobile network operators (MNOs) are seeking to cut costs, increase revenues and reduce network complexity by leveraging greater use of artificial intelligence (AI) in the network. That’s because the cloud-native architecture, open interfaces and greater standardization of Open RAN enable AI-RAN applications to be implemented more easily.
Increased adoption of AI-RAN will not only improve RAN performance and speed delivery of new revenue-generating services, but will also help MNOs monetize excess compute resources in the network. Yet this evolution won't necessarily be quick and easy, so monetizing AI investments as early as possible will be key to success. Let's examine how Open RAN will facilitate this transition, and look at some of the AI use cases that will allow MNOs to boost performance and profitability.
Open for business
As AI technology advances, some radio unit (RU) vendors inevitably will be better or faster at integrating AI into their RAN baseband offerings. With Open RAN, standardized communication protocols and open application programming interfaces (APIs) allow MNOs to choose best-of-breed network components, giving them the flexibility to adopt the baseband solutions where AI becomes available first. This not only speeds adoption of AI-RAN, but also helps MNOs protect significant investments in their installed base of radios.
Likewise, standardized communications interfaces in Open RAN facilitate the integration of third-party AI applications, making it easier to implement AI-RAN solutions regardless of which vendor(s) provided the Open RAN solution. For example, third-party xApps or rApps in an Open RAN-compliant RAN intelligent controller (RIC) can make use of the AI capabilities of a GPU located in the Distributed Unit (DU).
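To make the idea concrete, here is a minimal sketch of the decision logic a third-party rApp might apply to RAN performance data before handing cells off to an AI model for a control decision. The class, field, and function names (`CellKpi`, `prb_utilization`, `select_overloaded_cells`) are illustrative assumptions, not part of any real O-RAN Alliance SDK or RIC API.

```python
# Hypothetical rApp decision step: pick out cells that warrant AI-driven
# optimization. Names and the 0..1 utilization encoding are assumptions
# for illustration, not an O-RAN-defined schema.
from dataclasses import dataclass
from typing import List

@dataclass
class CellKpi:
    cell_id: str
    prb_utilization: float  # physical resource block utilization, 0..1

def select_overloaded_cells(kpis: List[CellKpi], threshold: float = 0.8) -> List[str]:
    """Return the cells whose PRB utilization exceeds the threshold; a real
    rApp would pass these to an AI model (e.g. hosted on a DU GPU) for a
    traffic-steering or energy-saving decision."""
    return [k.cell_id for k in kpis if k.prb_utilization > threshold]

kpis = [CellKpi("cell-1", 0.91), CellKpi("cell-2", 0.42), CellKpi("cell-3", 0.85)]
print(select_overloaded_cells(kpis))  # ['cell-1', 'cell-3']
```

The point of the open interfaces is that this logic can come from any vendor: the rApp only needs the standardized KPI feed, not knowledge of whose baseband or GPU sits underneath.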
Why AI-RAN now?
As mobile networks evolve to an AI-native RAN, the capability to harness AI in the network helps improve overall RAN performance and service delivery in a number of ways. With the opportunity to use AI for network enhancements, MNOs can improve RF optimization for better spectral efficiency, as well as optimize energy consumption, helping to reduce total cost of ownership (TCO).
Moreover, by performing AI processing on the same hardware as the RAN, AI-RAN enables real-time processing and immediate feedback to significantly reduce latency, compared to processing in the cloud. This is essential for applications requiring extremely low latency, such as augmented reality (AR), where delays exceeding 20 milliseconds are noticeable.
In addition, by applying intelligent algorithms to signal processing, AI can perform superior channel error estimation to improve uplink throughput by 20 to 30 percent or more in areas with poor coverage. This faster uplink throughput is ideal for gaming and user-generated content, such as video conferencing and multimedia uploads. The resulting improvements in quality of experience (QoE) are particularly important for enterprise customers that need reliable connectivity.
How to build AI use cases
With the enhanced intelligence and processing power of AI-RAN, MNOs can offer innovative and profitable new services by combining low-latency RAN compute with AI inferencing. These use cases might range from gaming, AR and interactive video to robots and drones with video processing and decision-making analytics. When these AI-powered applications run on the same hardware as the RAN, the extremely low latency enables accurate tracking and real-time decision-making. If instead they run in the cloud, the higher latency and extended processing time prevent some applications from performing properly.
This processing capacity can also serve applications that are not latency-sensitive, allowing MNOs to offer profitable new services to enterprise and manufacturing customers with use cases such as autonomous systems, network-assisted smart devices and large language model (LLM) retrieval-augmented generation (RAG), whereby the LLMs receive relevant, up-to-date information retrieved from external knowledge bases.
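The RAG pattern mentioned above can be sketched in a few lines. This toy version scores documents by keyword overlap and builds an augmented prompt; production systems would use vector embeddings and a real LLM call, and the knowledge-base entries here are invented for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Retrieval is a toy
# keyword-overlap score; real deployments use embedding similarity search.
def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many query words they share, highest first."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved context to the question before it reaches the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

kb = [
    "Cell site A serves the downtown manufacturing campus.",
    "Firmware 2.1 fixes an uplink scheduling bug.",
]
prompt = build_prompt("Which firmware fixes the uplink bug?", kb)
print(prompt)
```

Because the knowledge base supplies the facts at query time, the LLM itself can stay small enough to run on spare RAN compute, which is what makes this a natural fit for the edge.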
The intelligent flexibility of AI-RAN allows AI apps to run on the most cost-effective portion of the network that meets their latency, location and reliability requirements. This flexibility also enables MNOs to run network management applications, such as AIOps, on the same compute capability as just another AI workload, improving network planning and operational efficiency.
Maximize network value
The use of high-performance GPUs for AI-RAN provides the necessary computational power to perform lightning-fast AI processing for considerable improvements in mobile network performance and a better overall customer experience. However, having enough AI services for a profitable business case to get the most out of AI-RAN investments can take time. Fortunately, by sharing common infrastructure between the RAN and AI workloads, MNOs have the option of monetizing excess capacity while waiting for new AI services to develop.
AI-RAN radio baseband units have excess compute capacity due to redundancy and fluctuations in traffic, which varies throughout the day depending on network demand. For example, demand is typically higher in the city center during the work day and in the suburbs at night. If this capacity is pooled, it can be monetized by selling GPU-as-a-Service (GPUaaS) on demand via open markets, improving platform utilization and providing immediate return on investment (ROI). In this way, MNOs can temporarily lease excess GPU capacity on existing markets, adjusting prices and quantities based on availability, demand and chip specifications.
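A simple way to picture the on-demand pricing described above is a spot rate that scales with how busy the pooled GPUs are. The base rate and elasticity factor below are invented for illustration; real GPUaaS pricing would also account for chip specifications and contract terms.

```python
# Toy demand-based pricing for leasing spare GPU capacity. The $1.50/hr
# base rate and the elasticity factor are assumptions for illustration.
def spot_price(base_rate: float, utilization: float, elasticity: float = 2.0) -> float:
    """Scale the hourly GPU rate up as pooled utilization (0..1) rises."""
    return round(base_rate * (1.0 + elasticity * utilization), 2)

print(spot_price(1.50, 0.25))  # 2.25 -- quiet hours, cheap to lease
print(spot_price(1.50, 0.80))  # 3.9  -- busy hours, capacity priced up
```

The daily traffic pattern the article describes (city center busy by day, suburbs by night) is exactly what makes such dynamic pricing attractive: capacity that would otherwise idle off-peak can be leased cheaply without undercutting peak-hour RAN workloads.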
The capability to monetize this excess capacity is enabled by open APIs. With the centralized service management and orchestration (SMO) framework defined by the O-RAN Alliance, network managers can automatically monitor each GPU's workload to see its utilization and available capacity. As a result, MNOs can take full advantage of the lucrative AI services market, which is growing at 30 to 40 percent annually.
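As a sketch of that monitoring step, the function below aggregates per-GPU utilization reports into a figure of leasable spare capacity. The report format is a hypothetical assumption, not an O-RAN-defined schema; an SMO would collect these figures over its open APIs rather than from a hard-coded dict.

```python
# Sketch of aggregating per-GPU utilization into spare capacity, as an
# SMO-style monitor might. Report format is assumed, not an O-RAN schema.
def spare_capacity(reports: dict[str, float], capacity_per_gpu: float = 1.0) -> float:
    """Sum the idle fraction across GPUs (utilization reported as 0..1)."""
    return round(sum(capacity_per_gpu * (1.0 - u) for u in reports.values()), 2)

reports = {"gpu-0": 0.70, "gpu-1": 0.30, "gpu-2": 0.10}
print(spare_capacity(reports))  # 1.9 GPU-equivalents available to lease
```

Feeding this number into a marketplace listing is what closes the loop between the SMO's visibility and the GPUaaS revenue stream.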
Usher in the intelligent RAN
The intelligence, compute power and flexibility built into AI-RAN infrastructure will empower MNOs to fully leverage a new generation of self-aware AI applications to improve network performance and reliability, as well as deliver valuable new services. The journey from here to there, however, will be much faster and smoother if we travel the open road.