Wide area network performance has a huge impact on any multi-branch organization. A network performance audit takes into account the response of key applications and websites and ties that response to hard data on the network's provisioned capacity, latency, loss, and jitter. Armed with an independent analysis, organizations can work with their telecommunications vendors to get the best performance for their applications.
Key applications may include email and intranet or internet web sites, which are generally very tolerant of network issues. They could also include Voice over IP (VoIP), Enterprise Resource Planning (ERP) systems, Remote Desktop Protocol (RDP) or Citrix-based applications, or access to file servers across a wide area network. None of these applications is tolerant of network issues, and some, like RDP, tend to fail quickly when faced with low capacity, or with latency, loss, or jitter beyond what would be expected on a local area network.
Provisioned capacity is the amount of bandwidth your organization is paying for. It may be broken down into Peak Information Rate (PIR) and Committed Information Rate (CIR) figures, which can range from kilobits to tens or hundreds of megabits per second.
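To make the PIR/CIR distinction concrete, the sketch below computes the ideal time to move a file at a committed versus a peak rate. The file size and rate figures are illustrative assumptions, not values from any particular circuit, and the calculation ignores protocol overhead and loss.

```python
# Hypothetical example: transfer time for one file at CIR vs PIR.
# All figures below are illustrative assumptions.

def transfer_seconds(size_bytes: int, rate_bps: int) -> float:
    """Ideal transfer time, ignoring protocol overhead and loss."""
    return (size_bytes * 8) / rate_bps

FILE_SIZE = 25 * 1_000_000   # a 25 MB file
CIR = 2_000_000              # 2 Mbit/s committed rate (assumed)
PIR = 10_000_000             # 10 Mbit/s peak rate (assumed)

print(f"At CIR: {transfer_seconds(FILE_SIZE, CIR):.0f} s")  # 100 s
print(f"At PIR: {transfer_seconds(FILE_SIZE, PIR):.0f} s")  # 20 s
```

The gap between the two figures is why an audit should establish how often a circuit actually runs at its committed rate rather than its peak rate.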
Latency is the amount of time it takes to send a packet to a remote location and have it come back again. It is typically measured as round-trip time (RTT), and can range from less than a millisecond on local networks to over a second on satellite-based networks. It is most often measured on an ad-hoc basis with a tool called "ping", a standard application on Windows, Macintosh, and Unix systems. Consistent measurement over time is important to determine whether a circuit is experiencing latency problems.
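As a minimal sketch of "consistent measurement over time", the function below summarizes a series of RTT samples into the same min/avg/max figures ping prints. The sample values are invented for illustration; in practice they would come from repeated ping runs against the remote site.

```python
# Summarize repeated latency samples (RTTs in milliseconds), assuming
# the samples were collected with a tool such as ping. Sample values
# below are illustrative, not real measurements.
from statistics import mean

def latency_summary(rtts_ms):
    """Min/avg/max summary of round-trip times, as ping reports."""
    return {
        "min": min(rtts_ms),
        "avg": mean(rtts_ms),
        "max": max(rtts_ms),
    }

samples = [31.2, 30.8, 95.4, 31.0, 30.9]  # one spike among steady RTTs
print(latency_summary(samples))
```

A single ad-hoc ping might miss the 95 ms spike entirely; a summary built from samples gathered over hours or days is what reveals an intermittent latency problem.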
Loss, in this case packet loss, is the percentage of packets sent that never arrive at their destination. It is typically measured across a round-trip path using the ping tool or a variant. Packet loss has a huge impact on most business applications because of the way TCP, one of the main Internet protocols, behaves. Simplistically, when TCP experiences loss, it assumes the network is congested and slows down its transmission of data. As a result, very low levels of packet loss can have a very high impact on the performance of some applications.
Jitter, or inter-packet delay variation, is a measure of the consistency of latency. A network with little jitter will have very constant latency measurements; a network with high jitter will have latency measurements that are constantly changing. For many business applications, only capacity, latency, and loss matter. For real-time applications like VoIP, RDP, or Citrix, which depend on streaming data from one computer to another, high jitter can result in noisy or garbled phone calls, jumpy response on remote computers, or applications that quit suddenly for no apparent reason. On an ad-hoc basis, jitter can also be measured using the ping tool.
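One simple way to turn ping output into a jitter figure is the mean absolute difference between consecutive RTT samples (similar in spirit to, though simpler than, the RFC 3550 interarrival jitter estimate). The sample series below are invented to contrast a steady circuit with a jumpy one:

```python
# Estimate jitter as the mean absolute change between consecutive
# latency samples. Sample values are illustrative assumptions.

def jitter_ms(rtts_ms):
    """Mean absolute difference between consecutive RTT samples, in ms."""
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return sum(diffs) / len(diffs)

steady = [30.0, 30.2, 29.9, 30.1]   # low jitter: fine for VoIP
jumpy  = [30.0, 80.0, 25.0, 90.0]   # same average ballpark, high jitter

print(f"steady: {jitter_ms(steady):.2f} ms")
print(f"jumpy:  {jitter_ms(jumpy):.2f} ms")
```

Note that the two series could show similar average latency while behaving very differently for a phone call, which is why jitter has to be tracked as its own metric rather than inferred from latency alone.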
