As services move into the cloud, more vendors are publishing the latency requirements for their platform or application. These requirements could specify a maximum latency among computing nodes in a cluster, between a computing node and a storage array, or for an application-specific data flow like vMotion. Being able to estimate network latency between locations will help you identify which connectivity options are viable solutions. The following are some items to consider when estimating latency:
1. One-way versus round-trip
Many vendor requirements specify a maximum supported latency in milliseconds but do not say whether it is round-trip time (RTT) or one-way latency. You can usually assume the vendor means round-trip time, but given the large cost of WAN circuits, it's worth asking for clarification.
2. Equipment latency
Modern network switches and routers have a forwarding latency of a fraction of a millisecond, depending mostly on frame/packet size. It is usually safe to assume that all of the switches, routers, and telecommunications equipment across a typical WAN circuit will add one millisecond in each direction (2 msec round-trip).
If you’re testing latency using the “ping” command, note that “ping” also measures how quickly the remote device answers pings. In many cases, that adds a millisecond to the ping results, above and beyond the latency of the network itself.
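One way to avoid the extra delay some devices add when generating ICMP echo replies in software is to time a TCP handshake instead, since the SYN/ACK is answered by the remote kernel. A minimal sketch in Python (the target host and port are placeholders, not from the original text):

```python
import socket
import time

def tcp_rtt_ms(host, port, timeout=2.0):
    """Time a TCP three-way handshake to host:port, in milliseconds.
    The SYN/ACK is answered by the remote kernel, so this sidesteps
    the software delay some devices add to ICMP echo replies."""
    start = time.perf_counter()
    # create_connection() returns once the handshake completes,
    # so the elapsed time approximates one network round trip.
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000.0

# Example (hypothetical target): tcp_rtt_ms("example.com", 443)
```

Like ping, this still includes forwarding latency of every device in the path; run it several times and take the minimum to filter out transient queuing delay.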
3. Fiber distance
The biggest factor in typical WAN latency is the speed of light through fiber-optic cable, which is about 124 miles per millisecond. A typical wavelength circuit within a metro area may travel from the data center to the serving telephone central office (e.g. 5 miles), then across the metro area on a metro fiber ring to another central office (e.g. 50 miles along a “beltway loop”), then on to the destination data center (5 miles). The path will vary depending on the layout of your carrier’s fiber network, but a fiber path within a single metro area will usually be fewer than 100 fiber-miles long. This should keep in-metro round-trip latency below 2 msec (fiber) plus 2 msec (electronics), or 4 msec total, well within most clustering/application requirements.
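The in-metro arithmetic can be sketched as a small Python helper (the 60-mile path and 2 msec electronics allowance are the assumptions from the example above):

```python
FIBER_MILES_PER_MS = 124  # approximate speed of light in fiber

def rtt_estimate_ms(one_way_fiber_miles, electronics_ms=2.0):
    """Estimate round-trip latency: fiber propagation in both
    directions, plus a fixed allowance for switching/telecom
    electronics along the path."""
    fiber_ms = (one_way_fiber_miles * 2) / FIBER_MILES_PER_MS
    return fiber_ms + electronics_ms

# The in-metro example above: 5 + 50 + 5 = 60 fiber-miles one way
print(round(rtt_estimate_ms(60), 1))  # → 3.0 msec RTT
```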
To estimate latency of a wavelength between metro areas, take a look at your carrier’s proposed fiber route. Their fiber map is probably available on their website. An online mapping website can provide a rough estimate of highway distance between two cities, along the path used by your carrier. I’d recommend padding that distance by 10% to account for a few twists and turns at river crossings, freeway junctions, etc. To that estimate, add 100 miles for the local metro fiber path at each end, double the result (to get the round-trip distance), divide by 124 miles/msec, and add 2 milliseconds (for the electronics). For example, a Chicago/Dallas wavelength latency estimate might be:
- 925 miles x 110% = 1017.5 miles
- + 100 miles (Chicago local fiber) = 1117.5 miles
- + 100 miles (Dallas fiber) = 1217.5 miles
- x 2 = 2435 miles (round-trip)
- / 124 miles/msec = 19.6 msec
- + 2 msec (electronics) = 22 msec (estimated round-trip)
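The recipe above can be written as a short Python function (the 10% padding, 100 local fiber-miles per end, and 2 msec electronics allowance are the assumptions stated in the text):

```python
FIBER_MILES_PER_MS = 124  # approximate speed of light in fiber

def intercity_rtt_ms(highway_miles, local_miles_each_end=100,
                     padding=0.10, electronics_ms=2.0):
    """Estimate wavelength RTT between metro areas: pad the highway
    distance by 10%, add local metro fiber at each end, double for
    the round trip, convert to msec, and add the electronics."""
    one_way = highway_miles * (1 + padding) + 2 * local_miles_each_end
    return (one_way * 2) / FIBER_MILES_PER_MS + electronics_ms

# Chicago/Dallas example: 925 highway miles
print(round(intercity_rtt_ms(925)))  # → 22 msec RTT
```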
Note that your carrier may only guarantee a generic latency figure (e.g. 50 msec for any wavelength within North America). It’s useful to name the fiber path to be used on your contract or service order with the carrier, to ensure your circuit is not routed over a fiber path that is unnecessarily long yet still within the carrier’s SLA.
4. Switched networks
The latency example above applies primarily to “wavelength” services or dark fiber. For circuits which travel over a carrier’s switched network (e.g. MPLS), congestion at any point in the network path may increase latency by tens of milliseconds. Additionally, the carrier may reroute traffic during maintenance, or may route around a congested POP for days or weeks, using a longer network path.
5. Encryption and fragmentation
For VPNs such as IPsec tunnels, fragmentation/reassembly and encryption/decryption may significantly increase latency.