Could AI dramatically change how DSS works?
Radio spectrum is pricey. Operators drop billions at auction to lock down licensed frequency bands, and every hertz counts. Dynamic Spectrum Sharing (DSS) was built to address exactly this, letting newer radio technologies launch on the same frequency bands used by older ones. But carving up that shared space with static rules only gets you so far. That's where AI-based approaches could help.
How DSS works
DSS lets 4G LTE and 5G NR run simultaneously within the same frequency band. It does this by dynamically distributing Resource Blocks (RBs), the fundamental units of spectrum assignment, between the two technologies in real time. The reason coexistence even works is that both 4G and 5G rely on orthogonal frequency-division multiplexing (OFDM), giving them a shared modulation structure and scheduling framework. That underlying compatibility is what keeps interference from becoming a dealbreaker.
Two main strategies govern how the sharing actually happens. Frequency-domain multiplexing (FDM) divides the available frequencies within a band and hands them out to LTE and NR at the same time, essentially splitting lanes on a highway. Time-domain multiplexing (TDM) takes a different approach — LTE and NR alternate their transmissions within the same band, each taking turns using the full width. Which one makes more sense depends on the deployment scenario, traffic characteristics, and network architecture involved.
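To make the contrast between the two strategies concrete, here's a toy Python sketch. The RB count, split ratio, and slot pattern are made-up illustrative values, not figures from any 3GPP specification.

```python
# Toy illustration of the two DSS sharing strategies. All numbers here
# are hypothetical, chosen only to show the shape of each approach.

TOTAL_RBS = 100  # resource blocks available in the shared band

def fdm_split(lte_share: float) -> tuple[int, int]:
    """FDM: carve the band's RBs between LTE and NR simultaneously."""
    lte_rbs = round(TOTAL_RBS * lte_share)
    return lte_rbs, TOTAL_RBS - lte_rbs

def tdm_schedule(slot_index: int, lte_slots: int, nr_slots: int) -> str:
    """TDM: alternate full-band use between LTE and NR over time slots."""
    position = slot_index % (lte_slots + nr_slots)
    return "LTE" if position < lte_slots else "NR"

# FDM: a 60/40 split gives both technologies spectrum at the same time.
print(fdm_split(0.6))  # (60, 40)

# TDM: with a 3-LTE / 2-NR pattern, each takes turns using the full band.
print([tdm_schedule(i, 3, 2) for i in range(5)])
```

In FDM both technologies transmit at once on narrower slices; in TDM each gets the whole band but only for its slots, which is why the right choice depends on traffic shape and deployment.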
It’s worth noting that DSS isn’t some theoretical concept floating around in research papers. It was standardized by 3GPP in Release 15, finalized back in 2018, and major equipment vendors have shipped it in commercial networks. The standard gives everyone a common framework to work from. But that doesn’t necessarily mean the methods for how spectrum gets allocated moment-to-moment are the same across the industry.
Predictive and adaptive optimization
There’s a core problem with DSS: traffic doesn’t behave on neat, predictable schedules. Sure, there are broad strokes, like heavier usage during business hours and quieter stretches late at night. But you’ll find constant spikes and dips at granularities measured in milliseconds. A static rule that says “give LTE 60% of RBs during the workday” is going to waste spectrum during momentary 4G lulls and starve 5G users when unexpected demand surges hit.
This is exactly where AI-driven traffic prediction changes the equation. Machine learning models trained on historical network data can parse traffic patterns across multiple time scales — from seasonal shifts down to sub-second fluctuations — and forecast demand accurately enough to pre-emptively reallocate spectrum before congestion materializes. The practical objective is spotting microsecond-to-millisecond windows of unused 4G capacity and sliding 5G packets into those temporal gaps, essentially playing Tetris at machine speed with the spaces between 4G transmissions.
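As a minimal sketch of the idea, the snippet below stands in for a real ML traffic model with an exponentially weighted moving average: forecast the next interval's LTE load, then hand the predicted headroom to NR before congestion appears. The smoothing factor and load values are invented for illustration.

```python
# Minimal sketch of prediction-driven reallocation. An EWMA stands in
# for a trained ML model; real systems would forecast at far finer
# time scales and with far richer features.

def ewma_forecast(samples: list[float], alpha: float = 0.5) -> float:
    """Smooth recent LTE load samples (fractions 0..1) into a forecast."""
    forecast = samples[0]
    for s in samples[1:]:
        forecast = alpha * s + (1 - alpha) * forecast
    return forecast

def rbs_for_nr(lte_load_history: list[float], total_rbs: int = 100) -> int:
    """Pre-emptively hand predicted-idle LTE capacity to 5G NR."""
    predicted_lte_load = ewma_forecast(lte_load_history)
    headroom = max(0.0, 1.0 - predicted_lte_load)
    return round(total_rbs * headroom)

# A momentary 4G lull (load trending down toward 0.2) frees RBs for 5G.
print(rbs_for_nr([0.8, 0.6, 0.3, 0.2]))
```

The point isn't the smoothing math; it's the control loop: forecast first, reallocate before the gap opens, rather than reacting after utilization has already cratered or spiked.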
Smart scheduling algorithms then translate those predictions into action, dynamically tuning resource allocation to balance load and give priority to critical traffic types. On top of scheduling, AI handles adaptive modulation and coding too — adjusting Modulation and Coding Schemes (MCS) on the fly based on real-time channel conditions to wring maximum throughput out of whatever spectrum windows happen to be available at any given instant.
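The adaptive modulation piece can be sketched as a threshold lookup: measure channel quality, pick the most aggressive scheme the channel will support. The SNR cutoffs and efficiency figures below are illustrative placeholders, not entries from an actual 3GPP MCS table.

```python
# Hedged sketch of link adaptation. Thresholds and bits-per-symbol
# values are illustrative only; real MCS tables are defined by 3GPP
# and selection also weighs block error rate targets, not just SNR.

# (min_snr_db, scheme, bits_per_symbol), from aggressive to robust
MCS_TABLE = [
    (22.0, "256QAM", 8),
    (16.0, "64QAM", 6),
    (10.0, "16QAM", 4),
    (0.0, "QPSK", 2),
]

def select_mcs(snr_db: float) -> tuple[str, int]:
    """Pick the most aggressive scheme the current channel supports."""
    for min_snr, scheme, bits in MCS_TABLE:
        if snr_db >= min_snr:
            return scheme, bits
    return "QPSK", 2  # floor: most robust scheme

print(select_mcs(18.3))  # good channel: dense constellation
print(select_mcs(4.0))   # poor channel: fall back to QPSK
```

Run every scheduling interval against whatever spectrum window DSS has just opened, this is how the system wrings maximum throughput out of short-lived allocations.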
The upshot, at least in theory, is a system that gets ahead of traffic shifts instead of reacting to them, proactively reallocating spectrum rather than scrambling to catch up after things have already gone sideways.
Real-world implementation examples
Real-world DSS deployments offer a window into how these AI-driven approaches actually perform across different environments.
In dense urban settings using FDM, AI algorithms have been deployed to balance the split between LTE and NR while prioritizing distinct traffic classes — think Ultra-Reliable Low-Latency Communication (URLLC) for 5G and Voice over LTE (VoLTE) for 4G. The AI layer’s core job here is making sure neither technology’s critical services degrade, even as the overall spectrum gets carved up continuously.
Rural deployments are a little different. TDM-based scenarios have leaned on historical traffic data to predict usage patterns, enabling pre-emptive time-slot adjustments. Rural networks typically feature much more pronounced traffic valleys, meaning there’s potentially a lot more “free” spectrum available for 5G during off-peak windows — but only if the system can nail the timing of when those valleys show up and how long they’ll persist.
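A toy version of that valley-finding step might look like the following; the hourly load numbers and the idle threshold are hypothetical, and a real deployment would work from much finer-grained data.

```python
# Toy valley detection for TDM scheduling: find contiguous windows
# where historical LTE load stays below a (made-up) idle threshold.

def find_valleys(hourly_load: list[float],
                 threshold: float = 0.3) -> list[tuple[int, int]]:
    """Return (start_hour, end_hour) windows with load below threshold."""
    valleys, start = [], None
    for hour, load in enumerate(hourly_load):
        if load < threshold and start is None:
            start = hour                      # valley opens
        elif load >= threshold and start is not None:
            valleys.append((start, hour))     # valley closes
            start = None
    if start is not None:
        valleys.append((start, len(hourly_load)))
    return valleys

# A pronounced overnight lull (hours 0-5) is a prime 5G time-slot window.
load = [0.1, 0.1, 0.1, 0.2, 0.2, 0.2, 0.5, 0.8, 0.9, 0.7, 0.6, 0.4]
print(find_valleys(load))  # [(0, 6)]
```

Both the start and the duration of each valley matter: handing a time slot to 5G is only safe if the model is right about how long 4G traffic stays low.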
The takeaway from these examples is that DSS is far from a one-size-fits-all proposition. The AI models and sharing strategies need calibration to the specific quirks of each network environment, which adds both flexibility and a layer of complexity.
Business benefits
The economic argument for AI-driven DSS is pretty obvious — operators squeeze more value out of spectrum they’ve already paid for. Instead of chasing entirely new spectrum purchases or embarking on full refarming exercises, DSS makes an incremental transition possible using existing antenna and RF hardware. That’s a direct hit to the bottom line, since operators dodge the capital expense of dedicated spectrum acquisition and the operational nightmare of ripping and replacing infrastructure.
Operators also don’t have to sit around waiting for the next spectrum auction or finish a full network overhaul before they can offer 5G. They can flip 5G on across existing bands almost immediately, then scale coverage and capacity as demand dictates.
And maybe most critically, DSS enables seamless coexistence between generations, both current and upcoming. Legacy 4G subscribers keep their service quality intact while 5G users get access to current-gen capabilities.
Limitations
For all its upside, AI-driven DSS comes with real practical challenges that deserve honest treatment.
Complexity is a big one. Running sophisticated ML infrastructure for real-time spectrum management demands robust data collection pipelines, training and inference systems, and serious technical talent. Smaller operators or those in less mature telecom markets may simply not have the resources to stand these systems up and keep them running. In some cases, the overhead of deploying, tuning, and monitoring AI-driven scheduling could outweigh the efficiency gains — especially in regions where spectrum is still relatively plentiful. For those operators, a well-configured static allocation might be perfectly fine.
Interference management is another persistent headache. DSS is engineered to minimize interference between 4G and 5G, but dynamically shuffling resource allocations within the same band creates coordination challenges that compound as the network scales. Consistent real-world performance depends on advanced beamforming, precise power control, and sophisticated interference mitigation — none of which scale uniformly across every deployment scenario. Seamless coexistence is achievable, but pulling it off reliably across diverse network conditions is harder than it looks on paper.
Then there’s prediction accuracy. ML models trained on historical data may do well under normal circumstances, but they can stumble during anomalous events, like network outages, major sporting events, or natural disasters — or in freshly deployed areas with limited training data. The whole system works through predictions, and when those predictions miss, you could actually end up with worse spectrum utilization than a competently tuned static scheme would have delivered.
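One plausible guardrail against exactly this failure mode is a confidence check: trust the forecast only while it has been tracking reality, and otherwise revert to a conservative static split. The error tolerance and RB figures below are invented for illustration.

```python
# Sketch of a fallback guardrail for prediction misses. The static
# allocation and error tolerance are hypothetical tuning values.

STATIC_NR_RBS = 40  # conservative fixed allocation for 5G NR

def allocate(predicted_free_rbs: int, recent_errors: list[float],
             max_mean_error: float = 0.15) -> int:
    """Use the ML forecast only while its recent error stays small."""
    mean_error = sum(recent_errors) / len(recent_errors)
    if mean_error > max_mean_error:
        return STATIC_NR_RBS  # anomalous conditions: play it safe
    return predicted_free_rbs

print(allocate(65, [0.05, 0.08, 0.04]))  # model tracking well
print(allocate(65, [0.30, 0.45, 0.50]))  # e.g. stadium event, model off
```

The fallback caps the downside: when the model misses badly, performance degrades to that of a static scheme instead of falling below it.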
Regulatory and standardization hurdles add another wrinkle. DSS itself is standardized under 3GPP, but the broader regulatory frameworks governing spectrum sharing differ country to country. Regulatory bodies have to sign off on sharing arrangements, and that approval process can be slow and unpredictable. A DoD study concluded that sharing 350 MHz of 3 GHz spectrum would not be feasible without DSS proven at scale, which positions it as a critical enabler but also underscores that proving it at scale with high confidence is still a work in progress.
And it’s worth flagging that 3GPP-defined DSS represents just one flavor of dynamic spectrum sharing. The broader landscape includes cognitive radio, opportunistic spectrum access, and other advanced techniques that aren’t all equally standardized or ready for real-world deployment. Not every approach to dynamic sharing is ready for prime time.
Emerging tech for AI-driven DSS
A handful of adjacent technologies are coming together to make AI-driven DSS both more practical and more powerful.
Open RAN (O-RAN) architectures stand out here. O-RAN standards deliver open, vendor-agnostic interfaces that let spectrum sensing and management applications work across different equipment platforms. That matters enormously for AI-driven DSS because it means spectrum optimization algorithms aren’t trapped inside a single vendor’s proprietary stack — they can ingest data from and push decisions to a heterogeneous network. O-RAN’s distributed design also enables spectrum sensing at scale, feeding the data pipelines that AI models need to function.
Cognitive radio technology fits naturally alongside this. Cognitive radios sense the spectrum environment in real time and let lower-priority users dynamically tap into licensed spectrum when primary users aren’t fully utilizing it. That dovetails directly with AI-driven DSS — enabling intelligent, protocol-aware spectrum access that goes well beyond simple time or frequency multiplexing.
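The classic cognitive-radio primitive is energy detection: measure power in a sensing window and declare the channel idle if it sits near the noise floor. The noise floor and threshold margin in this sketch are illustrative, and real sensing must contend with fading, hidden nodes, and regulatory protection margins.

```python
# Simplified energy-detection spectrum sensing. Noise floor and margin
# are made-up values; real detectors are calibrated per deployment.

def channel_energy(samples: list[float]) -> float:
    """Average signal power over a sensing window."""
    return sum(s * s for s in samples) / len(samples)

def channel_is_free(samples: list[float], noise_floor: float = 0.01,
                    margin: float = 3.0) -> bool:
    """Declare the channel idle if energy stays near the noise floor."""
    return channel_energy(samples) < noise_floor * margin

# Quiet window: a secondary user may transmit opportunistically.
print(channel_is_free([0.05, -0.03, 0.02, -0.04]))  # True
# Primary user active: the secondary user must back off.
print(channel_is_free([0.5, -0.6, 0.4, -0.5]))      # False
```

Feed these sense-then-decide loops into the AI layer and you get the protocol-aware access the paragraph describes: the radio observes, the model decides, and spectrum use goes beyond fixed time or frequency splits.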
