The terms are often used interchangeably, but in O-RAN testing, they are not one and the same
The business case for Open Radio Access Network (O-RAN) adoption is to break down vendor monopolies — a strategy that effectively lowers costs while introducing more choices for carriers. Open interfaces and disaggregated components in the O-RAN architecture offer benefits like greater flexibility, enhanced scalability, and improved network intelligence through integration of AI. This gives operators a high-performing, energy-efficient network, but more importantly, one that can serve as the foundation for Industry 4.0 digital transformation.
The Open Testing and Integration Centers (OTIC) — an O-RAN Alliance initiative — provide open, vendor-agnostic labs for testing these multi-vendor O-RAN products and solutions. The labs seek to reduce investments in testing infrastructure for operators, while opening access to carrier-grade testing for small businesses and academic institutes with limited resources.
“One of the tenets is to actually have this network of labs that can offload some of the baseline testing from operators to make the business case for O-RAN definitely a lot better,” Ian Wong, director of RF and Wireless Architecture, Viavi, said during a session at the i14y Lab Summit 2025.
However, the same characteristics that make O-RAN an appealing technology also make testing challenging. Testing is meant to verify that multi-vendor components are compatible, that their specifications are conformant, and that tools are secure and perform as expected. This process is inherently complex — made especially challenging by vendor diversity, variability of test processes and workflows, and infrastructural differences between labs.
The RAN Intelligent Controller (RIC) application is a good example. The RIC exhibits high variability and sensitivity to different settings: test results collected from small topologies read differently from those collected from large topologies, making the data hard to compare.
Repeatability vs Reproducibility
Having repeatable testing workflows in place that are also reproducible across disparate labs enables operators to trust open labs like OTIC to perform baseline testing on their behalf, and have confidence in their results.
Here’s how test providers like Viavi view repeatable and reproducible tests: “Repeatability means you do a test and you keep on doing the test and you get consistent results. So consistency is a characteristic of repeatable tests,” Wong explained.
In other words, readings from multiple tests conducted under identical conditions show the exact same values when they are repeatable. This high repeatability indicates that the test tools and methods are consistent and reliable.
Reproducibility, on the other hand, is when different groups perform the same test in different lab settings using different sets of tools and still obtain the same results. For example, if a device under test (DUT) shows similar readings in two or more lab settings, it demonstrates reproducibility.
The results are “consistent across different test lines and across different labs,” Wong underlined. This is key to establishing the accuracy and reliability of the metrics.
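The distinction can be illustrated with a small sketch. The readings, function names, and tolerance thresholds below are all hypothetical, chosen purely for illustration — repeatability checks whether repeated runs in one lab stay consistent, while reproducibility checks whether independent labs agree:

```python
from statistics import mean

# Hypothetical downlink throughput readings (Mbps) for the same DUT.
# Repeated runs under identical conditions in one lab:
lab_a_runs = [98.2, 98.4, 98.3, 98.1, 98.3]

# The same test executed independently in a second lab:
lab_b_runs = [97.9, 98.0, 98.1, 97.8, 98.0]

def is_repeatable(runs, max_spread=0.5):
    """Repeatability: repeated runs in ONE lab stay within a tight spread."""
    return (max(runs) - min(runs)) <= max_spread

def is_reproducible(runs_1, runs_2, max_mean_gap=1.0):
    """Reproducibility: DIFFERENT labs' averages agree within a tolerance."""
    return abs(mean(runs_1) - mean(runs_2)) <= max_mean_gap

print(is_repeatable(lab_a_runs))                # True: consistent within Lab A
print(is_repeatable(lab_b_runs))                # True: consistent within Lab B
print(is_reproducible(lab_a_runs, lab_b_runs))  # True: the two labs agree
```

A DUT can pass the repeatability check in each lab individually yet fail the cross-lab comparison — which is exactly the gap between repeatable and reproducible testing that the industry is trying to close.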
But while reproducibility is a key measure of reliability in O-RAN testing, in reality, it’s a lot harder to achieve.
“Every lab has their own infrastructure, they have their own test systems, they have their own processes. It’s much harder to get to reproducible. But reproducible is what the industry needs,” he said.
Variables like the performance of the core network, differences in test plans, gaps in information sharing between teams, and the test equipment in use can influence and contaminate the results.
Wong suggested a solution: “To really have a test be reproducible, we can’t force every lab in the world to use the same core or core emulator. So you need to make sure the components in the test environment do not affect the test results.”
Viavi’s NTIA-backed VALOR test suite is only one of many options available today for testing Open RAN projects reliably across laboratories. Rohde & Schwarz, Keysight, and Spirent are among the other key providers bringing solutions and services that help labs achieve repeatability and reproducibility in testing.
Conclusion
As different as their meanings are, both repeatability and reproducibility of tests are critical for the commercial success of O-RAN. They tell an operator whether the results are true or due to chance. Repeatable tests establish the accuracy and reliability of the data, and with reproducibility, the readings become applicable to broader contexts.
