
Repeatability vs reproducibility in O-RAN testing

The terms “repeatability” and “reproducibility” are often used interchangeably. But in O-RAN testing, they are not one and the same.

The business case for Open Radio Access Network (O-RAN) adoption begins with breaking down vendor monopolies — a strategy that effectively lowers costs and introduces more choices for carriers. 

Open interfaces and disaggregated components in the O-RAN architecture offer benefits like greater flexibility, enhanced scalability, and improved network intelligence through integration of AI. This gives operators a high-performing, energy-efficient network, but more importantly, one that serves as a solid foundation for Industry 4.0 digital transformation.

The Open Testing and Integration Centers (OTIC), an O-RAN Alliance initiative, provide open, vendor-agnostic labs for conformance, interoperability, and end-to-end testing of multi-vendor O-RAN products and solutions. The labs reduce upfront investments for operators while opening access to carrier-grade testing for small businesses and academic institutes with limited resources.

“One of the tenets is to actually have this network of labs that can offload some of the baseline testing from operators, to make the business case for O-RAN definitely a lot better,” Ian Wong, director of RF and Wireless Architecture at Viavi, said during a session at the i14y Lab Summit 2025.

However, the upsides that make O-RAN an appealing technology also present a slew of testing challenges. Testing is meant to verify that multi-vendor components are compatible, that they conform to specifications, and that tools are secure and performant. The process is inherently complex, made especially challenging by vendor diversity, variability in test processes and workflows, and infrastructural inconsistencies between labs.

The RAN Intelligent Controller (RIC) application is a good example. The RIC comes with high variability and sensitivity to different settings: test results from small topologies read differently from those from large topologies, making the data unreliable.

Repeatable testing workflows that are also reproducible across labs enable operators to trust open labs like OTIC to do the baseline testing on their behalf, and to have confidence in the results.

Repeatability vs reproducibility

So what’s the difference between repeatable and reproducible tests? 

“Repeatability means you do a test and you keep on doing the test and you get a consistent result. So consistency is a characteristic of repeatable tests,” explained Wong.

In other words, readings from multiple tests conducted under identical conditions show consistently matching values. Such high repeatability indicates that the test tools and methods used are consistent and reliable.

Reproducibility, on the other hand, is when different groups run the same test in different labs with different sets of tools and still obtain consistent results. For example, if a device under test (DUT) shows similar readings in two or more lab settings, it demonstrates reproducibility.

The results are “consistent across different test lines and across different labs,” Wong emphasized. Reproducibility is key to establishing the accuracy of the metrics.
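To make the distinction concrete, here is a minimal sketch of how the two properties could be quantified from measured data. The throughput figures, lab names, and the use of a relative-spread metric are illustrative assumptions, not an O-RAN Alliance test methodology: the spread of repeated runs within one lab indicates repeatability, while the spread of mean results across labs indicates reproducibility.

```python
# Minimal sketch: quantifying repeatability vs reproducibility of a DUT metric.
# The throughput figures below are hypothetical placeholders, not real O-RAN results.
from statistics import mean, stdev

# Repeated runs of the same test on the same test line in one lab (Mbps)
lab_a_runs = [912.4, 913.1, 911.8, 912.9, 912.2]

# Mean results for the same test plan executed independently in other labs (Mbps)
lab_means = {
    "Lab A": mean(lab_a_runs),
    "Lab B": 909.7,
    "Lab C": 915.3,
}

def relative_spread(values):
    """Standard deviation as a percentage of the mean (coefficient of variation)."""
    return 100.0 * stdev(values) / mean(values)

# Repeatability: how tightly repeated runs cluster under identical conditions
repeatability_cv = relative_spread(lab_a_runs)

# Reproducibility: how closely different labs agree on the same metric
reproducibility_cv = relative_spread(list(lab_means.values()))

print(f"Repeatability (within Lab A): {repeatability_cv:.2f}% spread")
print(f"Reproducibility (across labs): {reproducibility_cv:.2f}% spread")
```

In practice the within-lab spread is usually the smaller of the two numbers, which mirrors Wong's point that reproducibility is the harder property to achieve.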

But while reproducibility is a key measure of reliability in O-RAN testing, in reality it’s a lot harder to achieve.

“Every lab has their own infrastructure, they have their own test systems, they have their own processes. It’s much harder to get to reproducible. But reproducible is what the industry needs,” he said.

Variables such as core network performance, unclear test plans, gaps in information sharing between teams, and the test equipment in use can all influence the results.

Wong has a solution. “To really have a test be reproducible, we can’t enforce every lab in the world to use the same core or core emulator. So you need to make sure the components in the test environment do not affect the test results.”
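One practical way to act on that advice, sketched below with assumed field names (this is not an O-RAN Alliance or OTIC schema), is to record an environment fingerprint alongside every result, so that when readings differ between labs it is possible to tell whether the DUT or the surrounding test infrastructure changed.

```python
# Minimal sketch: attaching an environment fingerprint to every test result so that
# cross-lab differences can be traced to infrastructure rather than the DUT.
# All field names and values are illustrative, not a standardized schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class TestEnvironment:
    core_emulator: str   # which core or core emulator the lab uses
    core_version: str
    test_equipment: str
    topology: str        # e.g. "small" or "large" cell topology

@dataclass
class TestResult:
    test_case: str
    metric: str
    value: float
    unit: str
    environment: TestEnvironment

result = TestResult(
    test_case="E2E-throughput-001",
    metric="downlink_throughput",
    value=912.4,
    unit="Mbps",
    environment=TestEnvironment(
        core_emulator="vendor-core-sim",
        core_version="3.2.1",
        test_equipment="lab-test-line-1",
        topology="small",
    ),
)

# Publishing the environment with the result lets another lab check whether a
# mismatch in readings comes from the DUT or from a differing test setup.
print(json.dumps(asdict(result), indent=2))
```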

Viavi’s NTIA-backed VALOR test suite is only one of many options available for testing Open RAN projects. Rohde & Schwarz, Keysight, and Spirent are other key providers whose solutions and services are helping labs achieve repeatability and reproducibility in testing.

Conclusion

As different as their meanings are, both repeatable and reproducible tests are critical for the commercial success of O-RAN. They tell an operator whether the results reflect genuine performance or chance. Repeatable tests establish the consistency and reliability of the data; reproducible tests make the readings applicable to broader contexts.
