AI-native air interfaces could play a major role in 6G
When people talk about AI in telecommunications, the conversation usually gravitates toward network management, whether it’s AI handling traffic flows, making routing decisions, or allocating resources more intelligently. Those are all cases of AI bolted onto existing infrastructure, though. AI-native air interfaces are different: they use machine learning to design the radio signal itself at the physical layer.
With AI-native air interfaces, AI-based systems change how signals get encoded, modulated, and transmitted from the ground up. As 6G research picks up speed, the approach could replace decades of handcrafted waveform design with neural networks that learn optimal signal patterns for specific hardware and environments. Here’s how it works.
What is an AI-native air interface?
The key to how AI-native air interfaces work is the “AI-native” part. That sets them apart from “AI-augmented” networks, where machine learning handles things like routing optimization or resource allocation but leaves the underlying signal design alone. AI-native air interfaces operate at a deeper level: machine learning designs the signal itself at the physical layer, swapping out traditional mathematical models for learned representations.
Wireless communications have depended on waveforms like OFDM (Orthogonal Frequency-Division Multiplexing) for decades. These are signals developed through rigorous mathematical theory and standardized industry-wide. Engineers handcraft them based on theoretical models of radio wave propagation, interference behavior, and idealized hardware performance. AI-native air interfaces flip that. Neural networks learn optimal signal designs by training on how real hardware actually behaves under real environmental conditions.
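To make the contrast concrete, here is a minimal sketch of the conventional, specification-driven pipeline in Python. The parameters (64 subcarriers, a 16-sample cyclic prefix, QPSK mapping) are illustrative, but the shape of the process is the point: every step follows a fixed, handcrafted recipe.

```python
import numpy as np

# Conventional, handcrafted OFDM transmit chain (illustrative parameters).
# Every step below follows a published specification rather than anything
# learned from the actual hardware or environment.

N_SUBCARRIERS = 64   # FFT size
CP_LEN = 16          # cyclic prefix length, sized for expected delay spread

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N_SUBCARRIERS)

# Fixed QPSK mapping: two bits per subcarrier, unit average power.
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# The IFFT turns frequency-domain symbols into a time-domain waveform.
time_signal = np.fft.ifft(symbols, n=N_SUBCARRIERS)

# Cyclic prefix: copy the tail to the front so multipath echoes don't
# smear one OFDM symbol into the next.
ofdm_symbol = np.concatenate([time_signal[-CP_LEN:], time_signal])

print(ofdm_symbol.shape)  # (80,): 64 samples plus the 16-sample prefix
```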
This amounts to a fundamental reimagining of encoding, modulation, and transmission. Rather than applying predetermined signal structures, the system learns what works best for specific deployment scenarios. That is particularly useful in niche environments, where network behavior can differ noticeably from an average urban setting. Those are exactly the characteristics AI can discover and adjust for, rather than engineers needing to specify them upfront.
Deep learning at the PHY layer
The technical progression toward AI-native air interfaces is happening in stages. Early efforts focused on replacing individual processing blocks within the traditional digital signal processing chain, using machine learning for encoding, symbol mapping, equalization, or decoding. More recent work replaces multiple connected blocks at once. The most advanced implementations replace the entire physical layer.
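At the block-replacement stage, a single component can be swapped for a small neural network while the rest of the chain stays conventional. Below is a sketch of that idea in PyTorch, using a neural demapper as the example; the layer sizes and training setup are assumptions for illustration, not a production design.

```python
import torch
import torch.nn as nn

# Block-level replacement: a small network maps a received (real, imag)
# sample to one logit per transmitted bit, standing in for the
# hand-derived log-likelihood computation in a conventional demapper.
class NeuralDemapper(nn.Module):
    def __init__(self, bits_per_symbol: int = 2, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, bits_per_symbol),  # one logit per bit
        )

    def forward(self, rx_iq: torch.Tensor) -> torch.Tensor:
        return self.net(rx_iq)  # apply a sigmoid to get bit probabilities

# Trained with binary cross-entropy against known transmitted bits, the
# demapper learns its decision boundaries from received samples directly,
# rather than from an assumed analytical channel model.
demapper = NeuralDemapper()
rx_samples = torch.randn(128, 2)   # a batch of received I/Q pairs
bit_logits = demapper(rx_samples)  # shape: (128, 2)
```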
At this stage, both transmitter and receiver are implemented as deep neural network modules, functioning together as an autoencoder. The transmitter learns to encode information into signals, while the receiver learns to decode those signals back into data. The most important piece is that the system trains end-to-end, optimizing both sides jointly rather than independently. Traditional systems optimize each component in isolation, which can produce less-than-ideal overall performance when those components don’t perfectly complement each other.
The goal shifts from minimizing bit errors under idealized channel models to minimizing “semantic loss” under real channel constraints. Instead of designing systems that assume theoretical hardware performance, AI-native approaches learn the actual imperfections of equipment.
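Here is a minimal end-to-end sketch in the spirit of published autoencoder-based PHY research, again in PyTorch. Everything in it is a toy assumption: 16 possible messages, two complex channel uses, and a tanh nonlinearity standing in for amplifier distortion. What matters is the structure, in that gradients flow from the receiver’s loss back through the channel model into the transmitter.

```python
import torch
import torch.nn as nn

M = 16         # distinct messages per block (4 bits of information)
N_CHANNEL = 2  # complex channel uses, represented as 2 real dimensions each

# Transmitter and receiver as the two halves of one autoencoder.
encoder = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, 2 * N_CHANNEL))
decoder = nn.Sequential(nn.Linear(2 * N_CHANNEL, 64), nn.ReLU(), nn.Linear(64, M))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def channel(x: torch.Tensor, snr_db: float = 10.0) -> torch.Tensor:
    # Toy hardware imperfection: tanh compresses signal peaks the way a
    # saturating amplifier does, then Gaussian noise models the channel.
    x = torch.tanh(x)
    noise_std = 10 ** (-snr_db / 20.0)
    return x + noise_std * torch.randn_like(x)

for step in range(2000):
    msgs = torch.randint(0, M, (256,))
    onehot = nn.functional.one_hot(msgs, M).float()
    tx = encoder(onehot)
    # Normalize average power so the encoder can't improve simply by
    # transmitting louder.
    tx = tx / tx.pow(2).mean().sqrt()
    logits = decoder(channel(tx))
    loss = loss_fn(logits, msgs)  # one objective for both ends
    opt.zero_grad(); loss.backward(); opt.step()
```

Because transmitter and receiver share a single objective, the learned signal constellation and the receiver’s decision regions co-evolve around whatever the channel block actually does, which is the whole point of end-to-end optimization.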
Performance gains
Research and early field trials point to meaningful improvements across several dimensions, though these results remain preliminary and come largely from controlled environments or pilot deployments.
Spectrum efficiency gains come from AI-designed waveforms creating bespoke constellations and pilot signals that adapt to available spectrum conditions. Rather than fixed modulation schemes, the system learns representations optimized for current channel characteristics. Some research suggests potential compression gains up to three times greater than conventional approaches, though such figures need validation across diverse conditions.
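For intuition on what a “bespoke constellation” means, the snippet below reads the constellation out of an encoder built in the same style as the sketch above, here with a single complex channel use so each message maps to one I/Q point. This encoder is untrained, so the points are arbitrary; in a trained system they would sit wherever the learned geometry performs best rather than on a rigid grid.

```python
import torch
import torch.nn as nn

M = 16  # one constellation point per possible message
encoder = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, 2))

with torch.no_grad():
    msgs = torch.eye(M)                            # all 16 one-hot messages
    points = encoder(msgs)                         # (16, 2): I/Q positions
    points = points / points.pow(2).mean().sqrt()  # normalize average power

# With a fixed modulation scheme these 16 points would form the standard
# 16-QAM grid; a trained encoder places them wherever the channel rewards.
print(points)
```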
Energy reduction represents another claimed advantage. Studies indicate potential reductions of up to 50% in transmit power compared to 5G for equivalent bandwidth and data rates. Field trials involving AI-optimized scheduling have demonstrated 34% network energy reduction in practical deployments. These savings matter for both operational costs and environmental impact, though the computational overhead of training AI models may partially offset transmission energy savings.
Latency improvements have been demonstrated in large-scale operator trials spanning more than 5,000 gNodeBs. These deployments showed 25–34% reduction in air interface latency in urban and vehicular environments. One specific example showed short-video streaming latency dropped from 43.0 milliseconds to 32.0 milliseconds. That said, these results come from specific operators with incentives to publicize successful pilots, and generalization across global networks hasn’t yet been established.
Real-world uses
Private networks in factories and warehouses look like the most promising near-term application. These environments prioritize flexibility over standardization, and the closed nature of private deployments sidesteps the interoperability concerns that complicate public networks. A learning network could reconfigure from supporting low-bandwidth industrial sensors to high-throughput video surveillance to latency-critical robotic control, without manual retuning of radio parameters.
High-interference environments, particularly dense urban areas, have conditions where traditional signals degrade and AI-native approaches might discover new solutions. Learning from actual interference patterns rather than theoretical models could yield better performance where conventional waveforms struggle.
Latency-critical services could also benefit from an AI-native approach. Autonomous vehicles relying on vehicle-to-everything (V2X) communication, for example, need air interfaces tuned for low, predictable delay. Dynamic spectrum scenarios, where conditions shift rapidly due to weather or varying usage patterns, similarly favor systems that can adapt on the fly.
General consumer mobile broadband isn’t as clear-cut though. The global mobile ecosystem depends on standardization across vendors and interoperability across borders. Whether AI-native approaches can work within that framework, or whether they’ll remain confined to specialized deployments, is still an open question.
Interoperability and standardization
Of course, none of this matters much without standardization. 5G works globally because manufacturers and operators agree on 3GPP specifications: everyone uses the same waveforms, the same modulation schemes, the same protocol structures. When each AI system learns its own optimal waveform, communication between different vendors’ equipment becomes a genuine problem. A user device from one manufacturer communicating with base stations from multiple vendors, or roaming across networks operated by different companies, depends on shared signal standards.
Some researchers propose “dynamically generated control interfaces,” potentially enabled by large language models, that could negotiate signal parameters between incompatible systems. This remains highly speculative. Others suggest the 3GPP standards process itself must fundamentally change, moving from fixed specifications to frameworks that accommodate learned behaviors. Neither approach has achieved consensus.
Validation and testing get harder, too. Traditional networks can be verified against mathematical specifications, but AI-native air interfaces will require new approaches, including hardware-in-the-loop testing, black-box evaluation, and sophisticated simulation environments. Regulators, operators, and vendors haven’t settled on standardized protocols, either.
The energy and computational costs of AI training should also be considered. While optimized networks may reduce transmission power, training models at scale or implementing federated learning at the edge requires substantial computation. Whether net energy savings emerge across the full system lifecycle, including training, deployment, operation, and updates, remains to be seen.
