Reality Check: The role of ‘smart silicon’ in mobile networks

Editor’s Note: Welcome to our weekly Reality Check column, where we let C-level executives and advisory firms from across the mobile industry provide their unique insights into the marketplace.

What does “smart silicon” (specialized integrated circuits with both general-purpose and function-specific processors) have to do with next-generation mobile services? Plenty, especially as the number of bandwidth-hungry devices and applications continues to grow unabated. To accommodate the accompanying data deluge, base station throughput will need to increase by more than an order of magnitude from 300 megabits per second in 3G networks to 5 gigabits per second in LTE networks. LTE-Advanced technology will require base station throughput to double again to 10 Gbps.

Several related changes are also having an impact on base stations. Next-generation access networks are using more and smaller cells to deliver the higher data rates reliably. Multiple radios are being employed in cloud-like distributed antenna systems. Network topologies are flattening. Content is being cached at the edge to conserve backhaul bandwidth. Operators are offering advanced quality of service and location-based services, and are moving to application-aware billing.

These changes are motivating mobile network operators to seek more intelligent and more cost-effective ways to keep pace with the data deluge, and this is where smart silicon can help. General-purpose processors are simply too slow for base station functions that must operate deep inside every packet in real time, such as packet classification, digital signal processing, transcoding, encryption/decryption and traffic management.
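To make that per-packet workload concrete, here is a minimal C sketch of the kind of flow classification a general-purpose core would otherwise perform in software for every single packet; the five-tuple structure, toy hash and bucket count are illustrative assumptions, not any vendor's implementation.

```c
/* Illustrative sketch only: the per-packet work a software data path must
 * repeat for every packet, and that a hardware classifier can offload.
 * The types and the toy hash are hypothetical, not any vendor's API. */
#include <stdint.h>
#include <stdio.h>

struct five_tuple {               /* fields normally parsed from IP/UDP headers */
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

/* Toy hash used to pick a flow bucket; a real classifier would also
 * match ACL rules, QoS class, billing context and so on. */
static uint32_t classify(const struct five_tuple *t, uint32_t buckets)
{
    uint32_t h = t->src_ip ^ t->dst_ip;
    h ^= ((uint32_t)t->src_port << 16) | t->dst_port;
    h ^= t->protocol;
    return h % buckets;
}

int main(void)
{
    struct five_tuple pkt = { 0x0A000001, 0x0A000002, 40000, 443, 6 };
    /* In a base station this runs for every packet, at line rate, which
     * is why fixed-function engines take the work off the cores. */
    printf("flow bucket = %u\n", classify(&pkt, 1024));
    return 0;
}
```

Even this simplified version touches every header field of every packet, which is why such functions are candidates for fixed-function hardware.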

For this reason, packet-level processing functions are increasingly being performed in hardware to improve performance, and these hardware accelerators are now being integrated with multicore processors in specialized system-on-chip communications processors. The number of function-specific acceleration engines available also continues to grow, and more engines (along with more processor cores) can now be placed on a single SoC. With current technology, it is even possible to integrate an equipment vendor’s unique intellectual property into a custom SoC for use in a proprietary system. In many cases, these advances now make it possible to replace multiple SoCs with a single SoC in base stations.

In addition to delivering higher throughput, an SoC reduces the total cost of the system, significantly improving its price/performance, while the inclusion of multiple acceleration engines makes it easier to satisfy end-to-end QoS and service-level agreement (SLA) requirements. Power consumption is an equally important consideration in mobile network infrastructure, and here too the SoC has a distinct advantage: it replaces multiple discrete components with a single, energy-efficient integrated circuit.

Another challenge involves the way hardware acceleration is implemented in some SoCs. The problem lies in the workflow within the SoC when packets must pass through several hardware acceleration engines, as they do for many services and applications. If traffic flows must return to a general-purpose processor core each time they move from one acceleration engine to the next, latency and jitter (variability in latency) will both increase, potentially significantly.
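One way to picture the issue is the hypothetical control loop below, in which a general-purpose core re-dispatches the packet after every engine completes; the engine names and stub functions are assumptions made for illustration, not an actual SoC interface.

```c
/* Hypothetical sketch of a core-mediated flow: after every hardware engine
 * completes, the packet bounces back to a general-purpose core, which picks
 * the next engine. Each bounce adds latency and jitter. The engine names
 * and stub calls are illustrative, not a vendor API. */
#include <stdio.h>

enum engine { CLASSIFY, DECRYPT, TRANSCODE, TRAFFIC_MGMT, DONE };

/* Stand-ins for handing a packet descriptor to an engine and waiting for
 * its completion; real hardware would use queues and interrupts. */
static void submit_to_engine(enum engine e, void *pkt)
{
    printf("engine %d processing packet %p\n", (int)e, pkt);
}

static void wait_for_completion(void *pkt) { (void)pkt; }

static enum engine next_engine(enum engine current)  /* runs on a CPU core */
{
    switch (current) {
    case CLASSIFY:  return DECRYPT;
    case DECRYPT:   return TRANSCODE;
    case TRANSCODE: return TRAFFIC_MGMT;
    default:        return DONE;
    }
}

int main(void)
{
    int dummy_packet = 0;
    enum engine e = CLASSIFY;
    while (e != DONE) {
        submit_to_engine(e, &dummy_packet);
        wait_for_completion(&dummy_packet);  /* bounce back to the core */
        e = next_engine(e);                  /* core-mediated dispatch */
    }
    return 0;
}
```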

Some next-generation SoCs address this issue by supporting configurable pipelines that process packets deterministically. Each service-oriented pipeline establishes a message-passing control path that lets system designers specify different packet-processing flows using different combinations of acceleration engines. Such granular traffic management allows any service to send any traffic flow directly through whatever engines are required, in whatever sequence is desired, without intervention from a general-purpose processor, minimizing latency and ensuring that even the strictest QoS and SLA guarantees can be met.
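The sketch below illustrates the general idea of a service-oriented pipeline: the engine sequence is declared once per service, and packets can then move engine-to-engine without returning to a core. The structure, field names and engine list are hypothetical assumptions for illustration, not LSI's actual programming interface.

```c
/* Hypothetical sketch of a service-oriented pipeline descriptor: the engine
 * sequence is configured once per service, and packets then pass from engine
 * to engine without returning to a general-purpose core. Names and layout
 * are illustrative only, not any real SoC's API. */
#include <stddef.h>
#include <stdio.h>

enum engine { CLASSIFY, DECRYPT, TRANSCODE, TRAFFIC_MGMT };

#define MAX_STAGES 8

struct pipeline {
    const char *service;            /* e.g. a QoS class or SLA tier   */
    enum engine stages[MAX_STAGES]; /* ordered list of engines to use */
    size_t      num_stages;
};

/* "Configuring" a pipeline here just means recording the engine order; on
 * real hardware this would program message-passing paths between engines so
 * each completion triggers the next stage directly. */
static void print_pipeline(const struct pipeline *p)
{
    printf("service %s:", p->service);
    for (size_t i = 0; i < p->num_stages; i++)
        printf(" -> engine %d", (int)p->stages[i]);
    printf("\n");
}

int main(void)
{
    struct pipeline video_sla = {
        .service    = "premium-video",
        .stages     = { CLASSIFY, DECRYPT, TRANSCODE, TRAFFIC_MGMT },
        .num_stages = 4,
    };
    struct pipeline plain_data = {
        .service    = "best-effort-data",
        .stages     = { CLASSIFY, TRAFFIC_MGMT },
        .num_stages = 2,
    };
    print_pipeline(&video_sla);   /* different flows, different engine chains */
    print_pipeline(&plain_data);
    return 0;
}
```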

Without these advances in integrated circuits, it would be virtually impossible for mobile operators to keep pace with the data deluge. So what does “smart silicon” have to do with next-generation mobile services, especially when it comes to reducing cost while improving overall system performance? Everything.

Greg Huff is SVP and CTO for LSI. In this capacity, he is responsible for shaping the future growth strategy of LSI products within the storage and networking markets. Huff joined the company in May 2011 from Hewlett-Packard, where he was VP and CTO of the company’s Industry Standard Server business. In that position, he was responsible for the technical strategy of HP’s ProLiant servers, BladeSystem family products and its infrastructure software business. Prior to that, he served as research and development director for the HP Superdome product family. Huff earned a bachelor’s degree in Electrical Engineering from Texas A&M University and an MBA from the Cox School of Business at Southern Methodist University.
