
IEEE 1588v2: A Look at Time and Frequency Synchronization

Time is an essential element of networking that is often ignored or given low priority. Its uses in the network range from time-stamping information and logging services (to detect network anomalies and to monitor user access and network-management events), to access control (disabling wireless APs or login privileges after hours), to billing, of course, and many more. For these types of applications, a time resolution of hours, minutes and seconds would suffice.

There is, however, another category of time usage that is heavily relied on by the networking industry, and this one requires milli-, micro- and in some cases picosecond accuracy. This type of time usage is found in everyday leased lines and optical trunks, where endpoints agree on how fast to transmit information; in cell sites, where there is a need to coordinate who can use a voice or data channel and for how long; and in data centers, where there is a need to send as much information as possible between endpoints with minimal delay. It is also found in media (voice and video) and broadcasting applications where content is time-sensitive, and in applications such as bidding on your favorite item at the last minute or buying a stock option before the market closes.
Time provides the basic ability to determine when a data frame starts and another ends, when media is in use or can be shared with someone else, or when an event starts and stops.  Along with time, there is another metric of importance: frequency.
In a perfect world, an internal clock would take the same number of cycles to represent a second, but the clocks inside every computer, cell site or network element are not perfect. Their accuracy varies with manufacturing tolerances and even silicon temperature, and as a result they simply wander over time. This wander is a change in frequency. So time synchronization does not stop at providing a timestamp to its clients; it also implies that these clocks will be adjusted constantly, the same way a heart is paced by a pacemaker. A time synchronization protocol must therefore work to maintain both time and frequency over long periods.
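To put that wander in rough numbers (the figures below are hypothetical, not drawn from any standard), a free-running oscillator with a fixed frequency offset accumulates time error linearly; the short sketch shows how quickly a modest offset turns into visible drift if it is never corrected.

```python
# Hypothetical illustration of clock wander: a fixed frequency offset in a
# free-running oscillator accumulates time error linearly until corrected.

PPM = 1e-6  # one part per million

def accumulated_error(freq_offset_ppm: float, elapsed_seconds: float) -> float:
    """Time error, in seconds, after running uncorrected for elapsed_seconds."""
    return freq_offset_ppm * PPM * elapsed_seconds

# A 10 ppm oscillator (a typical inexpensive crystal) left alone for one day:
print(accumulated_error(10, 86_400))  # 0.864 s of drift per day
# The same oscillator between sync corrections issued once per second:
print(accumulated_error(10, 1))       # 1e-05 s, i.e. 10 microseconds
```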
So, how has the networking industry dealt with time in the past?

  • NTP (Network Time Protocol) servers have provided a way to convey time over an IP network from a centralized location. A server is placed somewhere in the network, receives time from an accurate source such as a stratum clock or a satellite feed, and then responds to time requests from other servers or network elements. The accuracy of this protocol depends on how far the server is from the client, as well as on the network topology impairments (traffic, routing, queuing) affecting NTP updates. NTP has the major advantage that any system can talk to the server without requiring special hardware or a change in design (a minimal query sketch follows this list).
  • Another option is to derive time/frequency from the transport media. Synchronous or time-based transport technologies in both the copper and fiber worlds, such as T1, DS and OC variants, require endpoint synchronization; the faster you transmit, the more accurate you must be. In this scenario, the time is less important than the frequency.
  • There is always the possibility of attaching an accurate clock source to every network element or cell site to achieve perfect time harmony; however, this is not cost effective if you have to do so for many nodes. This is the case for the cellular industry, which keeps adding micro and pico cells to support the expanding wireless user population and its usage patterns; you simply can't add a GPS receiver to every one of those cell sites.

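As a quick illustration of the first option, the sketch below queries a public NTP server from ordinary Python with no special hardware. It assumes the third-party ntplib package and the public pool.ntp.org server pool, neither of which is implied by the article itself.

```python
# Minimal NTP query sketch. Assumes the third-party "ntplib" package
# (pip install ntplib) and the public pool.ntp.org servers; any ordinary
# host can do this without special hardware.
import ntplib
from time import ctime

client = ntplib.NTPClient()
response = client.request('pool.ntp.org', version=3)

print('server time :', ctime(response.tx_time))   # timestamp sent by the server
print('local offset:', response.offset, 's')      # estimated offset of our clock
print('round trip  :', response.delay, 's')       # network delay seen by NTP
```
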
So IEEE 1588-2008, also called Precision Time Protocol (PTP) version 2 or IEEE 1588v2, is another protocol that can convey such time and frequency information, and it does so in parallel with technologies such as NTP and SyncE. Most of these protocols operate the same way: they start by exchanging messages between client and server; each message is time-stamped at transmission and at reception, and then sent back to its source. After a couple of these handshakes, each endpoint can estimate the round-trip delay, and the endpoints can also exchange metrics about local clock behavior, network impairments and, in some cases, frequency parameters. If a single server is offering time synchronization, it tells the client how to correct its clock; if more than one server is available, the client obeys the most accurate source of timing. The message exchange rate can be adjusted over time, and the more often you exchange, the more frequency stability you maintain.
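To make that handshake concrete, here is a minimal sketch of the textbook four-timestamp calculation that NTP and the PTP family build on (the timestamp values are made up): the client notes when it sends and receives, the server notes when it receives and replies, and from those four values the client estimates the round-trip delay and its own offset, assuming the two directions of the path are equally fast.

```python
# The classic four-timestamp exchange used, under different message names,
# by both NTP and PTP. Timestamp values here are illustrative.
#   t1: client sends its request        (read from the client clock)
#   t2: server receives the request     (read from the server clock)
#   t3: server sends its reply          (read from the server clock)
#   t4: client receives the reply       (read from the client clock)

def round_trip_delay(t1, t2, t3, t4):
    """Time spent on the network, excluding server processing time."""
    return (t4 - t1) - (t3 - t2)

def clock_offset(t1, t2, t3, t4):
    """How far the client clock lags the server, assuming symmetric paths."""
    return ((t2 - t1) + (t3 - t4)) / 2

# Example: client runs 150 us behind the server, 1 ms one-way delay each way,
# 50 us of server processing time.
t1, t2, t3, t4 = 10.000000, 10.001150, 10.001200, 10.002050
print(round_trip_delay(t1, t2, t3, t4))  # ~0.002   -> 2 ms on the wire
print(clock_offset(t1, t2, t3, t4))      # ~0.00015 -> client must add 150 us
```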
It is also important to highlight that while these protocols share a similar goal, they differ in their resolution, the topologies they can serve, the complexity of their messaging and information exchange, and their hardware dependencies.
Why is time and frequency synchronization gaining momentum and requiring new protocols?

  • In the case of the wireless industry, there has been a shift in the backhaul technologies used to get data out of cell sites and terminate it in the IP world. Providers historically relied on dedicated (and very expensive) synchronous transport technologies, but they are shifting to shared (less expensive) IP, Ethernet and/or MPLS-based backhaul solutions. These latter protocols were created for asynchronous communications: burst-prone traffic that doesn't require constant service over time (it is no secret that internet web-browsing traffic is a better source of revenue than phone-call traffic). This also means that cell sites are now detached from a synchronous network and have a harder time maintaining accuracy, especially in environments where they coexist with other carriers or with other cell sites from the same provider. The immediate solution is to fall back to GPS per site; another alternative is to use NTP and deal with its accuracy variance due to network traffic conditions.
  • Data centers are increasingly pushing the limits of data transfer rates, with 40G and 100G Ethernet speeds to serve or store information. These cannot be achieved with the original communication standards, which were created to serve in a best-effort fashion. By converting an asynchronous protocol such as Ethernet into a synchronous one (SyncE), we can derive the benefits of the time-based protocols listed before. This solution is great for enterprise environments, but it is not yet an end-to-end solution: changing the flavor of Ethernet across the board will take some time.
  • Other markets for synchronization include power utilities and their control systems (SmartGrid), and radio spectral coexistence, among others.

Why was the protocol created? I can't phrase this better than the following quote:

  • “IEEE 1588 is designed to fill a niche not well served by either of the two dominant protocols, NTP and GPS. IEEE 1588 is designed for local systems requiring accuracies beyond those attainable using NTP. It is also designed for applications that cannot bear the cost of a GPS receiver at each node, or for which GPS signals are inaccessible.” (Eidson, April 2006)

What IEEE 1588v2 is trying to do is take the best of both worlds:

  • In multicast form, it can operate comparably to SyncE but requires support from all network elements; however, it doesn't require changing the transport protocols. In unicast form, it tries to increase accuracy without requiring every network element to speak the protocol.

IEEE 1588v2 will achieve this goal under the following conditions:

  • Best results are obtained when all nodes understand the protocol (whether as a master, client (slave), transparent clock or boundary clock).
  • If the transport network does not support the protocol, symmetric (equivalent) routing in both directions is a must, because of the way the protocol calculates the time difference between master and slave (a short worked example follows this list).
  • Its accuracy is gradually reduced (as in the case of NTP) as more network impairments are added to its communication path.
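The symmetric-routing requirement above exists because the offset calculation splits the measured round trip evenly between the two directions, so any asymmetry appears directly as a time error equal to half the difference. A hypothetical back-of-the-envelope sketch:

```python
# Hypothetical example of why asymmetric forwarding hurts accuracy: the
# protocol assumes each direction takes half of the measured round trip,
# so any difference between the one-way delays becomes a fixed time error.

def asymmetry_error(master_to_slave_s: float, slave_to_master_s: float) -> float:
    """Time error introduced when the two directions are not equally fast."""
    return (master_to_slave_s - slave_to_master_s) / 2

# 600 us toward the slave, 400 us back toward the master:
print(asymmetry_error(600e-6, 400e-6))  # ~1e-4 s -> a constant 100 us error
```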

At the end of the day, the selection of protocols will be determined by the markets they serve. SyncE will initially flourish in data center environments, and potentially within service provider cores (controlled environments with end-to-end support). IEEE 1588v2 might have an edge in environments with hidden nodes but predictable forwarding (such as clients running above an MPLS cloud), and NTP will remain the top choice for the average Joe. Their architecture, placement and accuracy tolerance will ultimately be determined by the customer's budget and the technology they are trying to time-synchronize.
 Author Jose Ramon Santos is lab director of the Interdisciplinary Telecommunications Program at the University of Colorado, Boulder. 
 
