Challenges in Open RAN testing include inconsistent output from OTICs, according to industry experts
It has been nearly a decade since the telecom industry began its concerted push toward Open Radio Access Networks (Open RAN). Open RAN appeared to be hitting its stride as revenues accelerated from 2019 to 2022, but a RAN market slowdown that began in 2023 interrupted its momentum.
During 2024, the overall RAN market recorded what Stefan Pongratz, VP of RAN market research at Dell’Oro Group, called the “steepest full-year decline in more than 20 years,” and Open RAN revenues fell 30% year-over-year during the first three quarters of the year.
However, Pongratz has indicated cautious optimism for the RAN market this year, and forecasts that Open RAN revenues will grow and account for 5-10% of the total RAN market. The “when” of Open RAN may be pushed out, but it’s still a “when” rather than an “if.”
Boost CTO touts Open RAN performance and flexibility
While revenues and deployments may not have moved as quickly as hoped, progress in Open RAN testing offers other signs of technology maturity. O-RAN Alliance’s spring plugfest for 2025 ran through mid-May, and its six themes have shifted away from basic conformance testing and toward use cases and applications of O-RAN to achieve things like energy savings, system testing with Layer 1 acceleration and open fronthaul transport testing with multiple open radio units.
During the recent CTIA 5G Summit in Washington, D.C., Eben Albertyn, EVP and CTO of Open RAN network operator Boost Mobile, touted not only Open RAN’s flexibility, but its performance—and, amid a backdrop of increasing geopolitical tensions, emphasized that its network was built using only technology from American vendors and close allies.
In January, Accenture and its testing unit, Umlaut, released a report from drive- and walk-testing conducted in New York City which ranked Boost Mobile over all three of the other national mobile network operators on data performance scores and data reliability (Verizon edged Boost out on voice-related metrics).
“Open RAN is not a science experiment,” Albertyn declared at the 5G Summit. “It really works.” He went on to add: “Open RAN not only works, but it also provides absolutely fantastic quality. … You can provide the best possible quality in the most competitive and most difficult environment using these technologies.”
According to Albertyn, Open RAN is also living up to promises of its flexibility. “This year we have changed two very significant portions of our architecture, without our customers being aware of it at all. And we did so in days, not months or years, like we would have if we used a different architecture,” he said. “We were able to do that because we live in the cloud, and therefore we can innovate at the speed of the cloud.”
Integration, configuration possibilities and inconsistent OTIC results are among the current Open RAN testing challenges
However, establishing interoperability and consistent, repeatable ways to test and measure the behavior and performance of O-RAN elements and systems has been one of the major challenges for the technology. Discussions at the recent Test and Measurement Forum event reflected the real-world complexity that is still playing out with Open RAN testing.
“Keep in mind that this is not an easy task,” cautioned Venkatesh “Venki” Ramaswamy, distinguished chief technologist for NextG at Mitre Labs. “That is something that we already knew from the beginning, but this is an enormously complex task putting these things together. So, we made a lot of progress. We are not there yet.” He sees a current trend toward Open-RAN-compliant solutions that come from the same vendor—which doesn’t quite live up to the thesis of Open RAN as supporting a diverse set of suppliers.
Nirlay Kundu, head of technology standards at research institute IMDEA Networks and a longtime veteran of Verizon, said that he thinks success in Open RAN will come when service providers are actually able to easily mix and match radio units and distributed units from different vendors, without additional Open RAN testing being a heavy lift. “In my opinion, it’s getting there, but it’s not there yet,” Kundu said. And, he added, the additional integration and testing costs make it unclear whether the total cost of ownership (TCO) for Open RAN is actually better at this point.

One of the double-edged swords for Open RAN testing lies in the variety of configuration possibilities.
“Let’s say that you are getting [a radio unit] from vendor A, and they have a given configuration for testing—and that configuration is going to be quite different from a different vendor’s configuration,” Ramaswamy said. That not only makes it difficult for the person conducting the tests, but the configuration “is not going to provide the same set of behavior across different RU vendors.” Throw in a third component, say, for scheduling, and the parameters suddenly skyrocket.
“Even when you are able to match one-on-one with those parameters, it’s impossible to get the same behavior across these different components,” Ramaswamy said. “So that makes [testing] very, very complicated; very, very time consuming; very, very inconsistent and not reproducible.”
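To illustrate the combinatorics Ramaswamy describes, a rough back-of-the-envelope sketch (using purely hypothetical parameter counts, not figures from any vendor or test spec) shows why adding a third component makes the test matrix multiply rather than merely grow:

```python
# Hypothetical, illustrative numbers only: each vendor component exposes
# its own set of configuration options relevant to interoperability testing.
ru_configs = 12        # radio unit configurations (vendor A vs. vendor B differ)
du_configs = 10        # distributed unit configurations
scheduler_configs = 8  # scheduler parameter sets

# Pairwise RU x DU testing already yields a sizeable matrix...
pairwise = ru_configs * du_configs                       # 120 combinations

# ...but a third component multiplies, rather than adds to, the workload.
three_way = ru_configs * du_configs * scheduler_configs  # 960 combinations

print(f"RU x DU combinations:  {pairwise}")
print(f"RU x DU x scheduler:   {three_way}")
```

Even this toy model understates the problem, since real components also interact in ways that per-parameter matching cannot capture, which is Ramaswamy’s point about behavior not being reproducible across vendors.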
Even among Open Test and Integration Centers (OTICs), which are qualified by O-RAN Alliance, testing results can be inconsistent. Kundu said that OTICs are not all equal in terms of capabilities or testing resources for various layers. “If you test certain interfaces or certain layers, it has to be very, very consistent. And we are seeing that consistency is not there,” he added.
Ramaswamy suggested an accreditation program for OTICs that would not only certify the labs themselves, but also support consistent testing across labs. “Right now, there is nothing like that. So you cannot reproduce the results that you get from one of them to the other,” he said.
He suggested that it would be helpful to be able to create “pre-tested versions” of Open RAN network components for specific verticals, which would support private O-RAN networks and also serve as a starting point, to which specific testing would be added for a given use case or implementation. “Having a pre-integrated solution where you have some confidence and some trust for a given equipment ecosystem for a given vertical, I think that is going to help as well. But the first and foremost, in my opinion, is to have the accreditation of the testing lab itself,” he added.
Kundu said that he also would like to see more test-result cross-checking from O-RAN Alliance, of which he is a participating member. But he noted that another gap is that service providers haven’t yet jumped in to require specific parameters from the labs. Why? “Because the service provider is still not confident of the repeatability of the results,” he said. “So what the Tier One service providers do is, even if it’s tested by the OTICs, they will do most of the testing in their own labs themselves. That step should not be there.”
O-RAN will be successful, he continued, at the point when service providers have confidence in the results coming out of OTICs and can put a list of requirements in an RFP, rather than conducting extensive additional system testing and integration themselves.

“We all know that just plugging different vendors together and standing back and turning on the on switch and hoping it’s going to work is not the way that it works,” said Simon Fletcher, chief strategy officer at Small Cell Forum. He added that product development teams should be working with test and measurement companies to create standardized profiles for testing, and that this is a gradual process.
He added: “To some extent I’m hearing some frustrations that perhaps we are not making so much progress, but I was always of the thought that this was not going to be a short-term gain. You’ve really got to stick at this. … But hopefully we can work through that over the next couple of years.”
Adam Smith, director of product marketing for testing company LitePoint, made the point that while O-RAN interfaces can be “wildly complicated,” in other ways the technology has made great strides towards a standardized software test automation framework, despite disparities across vendors. “I think what we need to move towards is testing towards a collection of experience,” he offered. “We’re finding things that fail in the real world, that don’t interoperate in the real world. And being able to feed that back into the test, I think is important, moving forward.”
For more insights, watch all of the content from Test and Measurement Forum on-demand, and check out the RCR Wireless News Test and Measurement Market Pulse Report.