
The biggest 5G revenue opportunity depends on edge ML — are telcos ready? (Reader Forum)

One of the most promising new revenue opportunities in 5G is network slicing, which lets spectrum owners create private wireless networks and then rent or lease them to enterprise users for short or long durations. By some estimates, the private 5G network market could reach $30 billion in annual revenue.

For example, in rural areas spectrum owners can allocate underutilized spectrum to autonomous farm equipment. Or in financial districts that are crowded during the day with workers (and their devices) but then empty out in the evenings and at night, spectrum owners can rent portions of the spectrum to commercial IoT equipment to backhaul the day’s sensor data or fan out new algorithms wirelessly to that same equipment.

But communication service providers (CSPs) must ensure that, even as they explore new 5G business models, they continue to provide minimum levels of service quality, or else face increased regulatory scrutiny. To offer a flexible network that automatically allocates spectrum bandwidth based on demand while also guaranteeing service levels to consumers, CSPs will need to deploy low-latency network analytics to thousands of edge locations to understand and predict real-time network quality.

The two major oversights in running ML at the network edge

CSPs are not new to data science. According to Analytics Insight, the telecommunications sector is the largest investor in data science, already comprising about a third of the big data market. That spend is expected to double from 2019 to 2023, reaching over $105 billion.

That said, taking complex machine learning models trained in cloud or on-prem environments and deploying them at the edge is still relatively new. The development environment can be so different from the live production environment that it can take months of reengineering before a single model is successfully deployed at the edge. And once a model is live, the data scientists who developed it often have no visibility into its ongoing performance until something goes wrong.

Overall, we have seen two major hurdles to creating a scalable and profitable edge ML program for communication service providers:

1) Monitoring the ongoing accuracy of live models: Data science teams can focus so much on just getting models deployed and running at the edge that they forget to think about the day after a model goes live. The network environment is continually changing (e.g., a new class of devices can come online), so past network quality and demand models quickly degrade. Do your edge ML operations have the ability to monitor performance, push updated models, run A/B tests across portions of your fleet, or do shadow testing?

2) Edge environment compute constraints: Most modern 5G network architectures rely on tens of thousands of small cells, with the goal of centralizing as much of the network management function in the cloud as possible while limiting edge applications to those that absolutely require low latency (e.g., optimizing local bandwidth allocation). For edge ML, this means deploying to a device that is highly constrained in compute and power, making it difficult to run complex ML models. There are several methods for reducing the compute load of your machine learning models, such as quantization, pruning and knowledge distillation, but perhaps the simplest is to use a specialized engine for running ML at the edge.
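Of those methods, quantization is the easiest to illustrate. The sketch below is a hypothetical, minimal post-training quantization routine in plain Python (the function names and the symmetric int8 scheme are illustrative assumptions, not a production implementation): it maps floating-point weights to 8-bit integers sharing one scale factor, cutting per-weight storage by 4x at the cost of some precision.

```python
def quantize(weights):
    """Quantize floats to int8 using a single symmetric per-tensor scale."""
    # Largest magnitude maps to 127; fall back to 1.0 if all weights are zero.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximately reconstruct the original floats."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 1.0, 0.0]
q, scale = quantize(weights)     # small ints plus one float scale
restored = dequantize(q, scale)  # close to the original weights
```

In a real deployment this trade-off is handled by the inference engine, but the principle is the same: fewer bits per weight means less memory traffic and cheaper arithmetic on a constrained small-cell device.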

Ultimately, CSPs need to reconsider their approach to machine learning: not as a one-time deployment of a model, but as a cycle of deploying, managing and monitoring models based on ongoing performance. There is enormous potential for 5G to generate new revenue streams, but use cases like private networks rely on low-latency ML models deployed at the edge. Given their investment in data science, CSPs will often default to building in-house production ML solutions, only to find these solutions are more costly to maintain, perform worse and fail to scale. Luckily, there are off-the-shelf solutions that let CSPs offload non-core machine learning operations (MLOps) functions like deployment and monitoring, so their data teams can focus on the MLOps functions that drive business results, like building more precise network demand forecasting models.
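The monitoring half of that cycle can be sketched in a few lines. The class below is a hypothetical illustration (the name `ModelMonitor`, the window size and the drift threshold are all assumptions for the example): it keeps a rolling window of prediction errors and flags the model for retraining once the mean error drifts past a threshold.

```python
from collections import deque

class ModelMonitor:
    """Track a rolling window of prediction errors for a deployed model."""

    def __init__(self, window=100, threshold=0.2):
        self.errors = deque(maxlen=window)  # oldest errors fall off automatically
        self.threshold = threshold          # illustrative drift threshold

    def record(self, predicted, actual):
        """Log one prediction's absolute error as ground truth arrives."""
        self.errors.append(abs(predicted - actual))

    def needs_retraining(self):
        """True once the rolling mean error exceeds the threshold."""
        if not self.errors:
            return False
        return sum(self.errors) / len(self.errors) > self.threshold

monitor = ModelMonitor(window=50, threshold=0.1)
for predicted, actual in [(0.9, 1.0), (0.5, 1.0)]:
    monitor.record(predicted, actual)
drifted = monitor.needs_retraining()  # mean error 0.3 exceeds the 0.1 threshold
```

The same signal that flags drift can also feed the A/B and shadow-testing workflows mentioned above, closing the loop between monitoring and redeployment.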
