Five simple tools that help analyze network latency in Linux

ping

Ping is one of the most basic commands in network management. It verifies network connectivity by sending ICMP ECHO_REQUEST packets to a target host and measuring the round-trip time of the replies.

ping - send ICMP ECHO_REQUEST to network hosts

  • -c count

Stop after sending count ECHO_REQUEST packets. With deadline option, ping waits for count ECHO_REPLY packets, until the timeout expires.

  • -i interval

Wait interval seconds between sending each packet. The default is to wait for one second between each packet normally, or not to wait in flood mode. Only super-user may set interval to values less than 0.2 seconds.

$ ping 10.10.1.17 -c 1000 -i 0.010
PING 10.10.1.17 (10.10.1.17) 56(84) bytes of data.
64 bytes from 10.10.1.17: icmp_seq=1 ttl=64 time=0.176 ms
64 bytes from 10.10.1.17: icmp_seq=2 ttl=64 time=0.173 ms
<omitted...>
64 bytes from 10.10.1.17: icmp_seq=999 ttl=64 time=0.197 ms
64 bytes from 10.10.1.17: icmp_seq=1000 ttl=64 time=0.195 ms

--- 10.10.1.17 ping statistics ---
1000 packets transmitted, 1000 received, 0% packet loss, time 10992ms
rtt min/avg/max/mdev = 0.096/0.173/0.210/0.025 ms

Round-trip time (RTT) is the duration, measured in milliseconds, from when the source host sends a request to when it receives a response from the target host. It is a key performance metric for measuring network latency.

Actual round trip time can be influenced by:

  • Distance – The length a signal has to travel correlates with the time taken for a request to reach a server.
  • Transmission medium – The medium used to route a signal (e.g., copper wire, fiber optic cables) can impact how quickly a request is received by a server and routed back to a user.
  • Number of network hops – Intermediate routers or servers take time to process a signal, increasing RTT. The more hops a signal has to travel through, the higher the RTT.
  • Traffic levels – RTT typically increases when a network is congested with high levels of traffic. Conversely, low traffic times can result in decreased RTT.
  • Server response time – The time taken for a target server to respond to a request depends on its processing capacity, the number of requests being handled and the nature of the request (i.e., how much server-side work is required). A longer server response time increases RTT.
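Several of these factors, traffic levels in particular, vary over time, so a single ping run can be misleading. A minimal sketch that samples RTT once a minute and appends ping's quiet-mode summary to a log, assuming the same 10.10.1.17 target as above:

$ while true; do echo "$(date '+%F %T') $(ping -q -c 10 -i 0.2 10.10.1.17 | tail -1)"; sleep 60; done >> rtt.log

Each logged line pairs a timestamp with ping's min/avg/max/mdev summary, making latency spikes easy to correlate with busy periods.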

traceroute

traceroute displays the path packets take across the network to a destination host, along with the response time measured at each stop along the route. If there is a connection problem or extra latency on the way to a host, it will show up in these times, letting you identify which of the stops (also called 'hops') along the route is the culprit.

$ for i in `seq 1 5`; do traceroute 10.10.1.17; sleep 3; done
traceroute to 10.10.1.17 (10.10.1.17), 30 hops max, 60 byte packets
 1  10.10.1.17 (10.10.1.17)  0.181 ms  0.086 ms  0.084 ms
traceroute to 10.10.1.17 (10.10.1.17), 30 hops max, 60 byte packets
 1  10.10.1.17 (10.10.1.17)  0.179 ms  0.087 ms  0.081 ms
traceroute to 10.10.1.17 (10.10.1.17), 30 hops max, 60 byte packets
 1  10.10.1.17 (10.10.1.17)  0.175 ms  0.087 ms  0.081 ms
traceroute to 10.10.1.17 (10.10.1.17), 30 hops max, 60 byte packets
 1  10.10.1.17 (10.10.1.17)  0.183 ms  0.073 ms  0.081 ms
traceroute to 10.10.1.17 (10.10.1.17), 30 hops max, 60 byte packets
 1  10.10.1.17 (10.10.1.17)  0.177 ms  0.080 ms  0.081 ms

  • Hop Number – The first column is simply the number of the hop along the route.
  • RTT Columns – The last three columns display the round-trip time (RTT), in milliseconds, for a packet to reach that hop and return. There are three columns because traceroute sends three separate probe packets, which shows how consistent the route is.
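Reverse-DNS lookups can slow the trace and clutter its output. A quick variation on the command above, using standard Linux traceroute flags:

$ traceroute -n -q 5 -w 2 10.10.1.17

Here -n skips reverse-DNS lookups, -q 5 sends five probes per hop instead of the default three, and -w 2 waits at most two seconds for each reply.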

netperf

Netperf is a benchmark that can be used to measure the performance of many different types of networking. It provides tests for both unidirectional throughput and end-to-end latency. The environments currently measurable by netperf include:

  • TCP and UDP via BSD Sockets for both IPv4 and IPv6

  • DLPI

  • Unix Domain Sockets

  • SCTP for both IPv4 and IPv6

    netperf -h

    Usage: netperf [global options] -- [test options]

    Global options:
    -a send,recv Set the local send,recv buffer alignment
    -A send,recv Set the remote send,recv buffer alignment
    -B brandstr Specify a string to be emitted with brief output
    -c [cpu_rate] Report local CPU usage
    -C [cpu_rate] Report remote CPU usage
    -d Increase debugging output
    -D [secs,units] * Display interim results at least every secs seconds
    using units as the initial guess for units per second
    -f G|M|K|g|m|k Set the output units
    -F fill_file Pre-fill buffers with data from fill_file
    -h Display this text
    -H name|ip,fam * Specify the target machine and/or local ip and family
    -i max,min Specify the max and min number of iterations (15,1)
    -I lvl[,intvl] Specify confidence level (95 or 99) (99)
    and confidence interval in percentage (10)
    -j Keep additional timing statistics
    -l testlen Specify test duration (>0 secs) (<0 bytes|trans)
    -L name|ip,fam * Specify the local ip|name and address family
    -o send,recv Set the local send,recv buffer offsets
    -O send,recv Set the remote send,recv buffer offset
    -n numcpu Set the number of processors for CPU util
    -N Establish no control connection, do 'send' side only
    -p port,lport* Specify netserver port number and/or local port
    -P 0|1 Don't/Do display test headers
    -r Allow confidence to be hit on result only
    -s seconds Wait seconds between test setup and test start
    -S Set SO_KEEPALIVE on the data connection
    -t testname Specify test to perform
    -T lcpu,rcpu Request netperf/netserver be bound to local/remote cpu
    -v verbosity Specify the verbosity level
    -W send,recv Set the number of send,recv buffers
    -v level Set the verbosity level (default 1, min 0)
    -V Display the netperf version and exit
    For those options taking two parms, at least one must be specified;
    specifying one value without a comma will set both parms to that
    value, specifying a value with a leading comma will set just the second
    parm, a value with a trailing comma will set just the first. To set
    each parm to unique values, specify both and separate them with a
    comma.

    * For these options taking two parms, specifying one value with no comma
      will only set the first parms and will leave the second at the default
      value. To set the second value it must be preceded with a comma or be a
      comma-separated pair. This is to retain previous netperf behaviour.

    $ wget -O netperf-2.5.0.tar.gz -c https://codeload.github.com/HewlettPackard/netperf/tar.gz/netperf-2.5.0
    $ tar xf netperf-2.5.0.tar.gz && cd netperf-netperf-2.5.0
    $ ./configure && make && make install

Start netserver on the target host, then run a request/response (TCP_RR) test against it from another host. A negative -l value sets the test length in transactions rather than seconds, and -O selects which output columns to display:

    [root@10.10.1.17]$ netserver -D
    [root@10.10.1.16]$ netperf -H 10.10.1.17 -l -1000000 -t TCP_RR -w 10ms -b 1 -v 2 -- -O min_latency,mean_latency,max_latency,stddev_latency,transaction_rate
    Packet rate control is not compiled in.
    Packet burst size is not compiled in.
    MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.10.1.17 (10.10.1.17) port 0 AF_INET : first burst 0
    Minimum      Mean         Maximum      Stddev       Transaction
    Latency      Latency      Latency      Latency      Rate
    Microseconds Microseconds Microseconds Microseconds Tran/s

    63           84.92        2980         7.86         11740.092
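UDP_RR is a useful companion test: it measures request/response round trips over UDP, without TCP's connection handling, which helps isolate raw network latency. A sketch reusing the same netserver and output selectors as above:

    $ netperf -H 10.10.1.17 -t UDP_RR -l 30 -- -O min_latency,mean_latency,max_latency,stddev_latency,transaction_rate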

iperf

iPerf3 is a tool for active measurements of the maximum achievable bandwidth on IP networks. It supports tuning of various parameters related to timing, buffers and protocols (TCP, UDP, SCTP with IPv4 and IPv6). For each test it reports the bandwidth, loss, and other parameters.
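A minimal sketch, assuming iperf3 is installed on both hosts and reusing the 10.10.1.17 target from the examples above: start the server on one host, then run the client from the other.

$ iperf3 -s                        # on the server (10.10.1.17)
$ iperf3 -c 10.10.1.17 -t 10       # on the client: 10-second TCP test
$ iperf3 -c 10.10.1.17 -u -b 100M  # UDP at 100 Mbit/s

The TCP run reports the achievable bandwidth per interval; the UDP run additionally reports packet loss and jitter, which are often the more useful latency-related signals.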

lldp

LLDP (Link Layer Discovery Protocol) can be essential in complex network/server infrastructure configurations. It is extremely helpful when there is no direct access to the setup but we need to determine which switch ports our servers' NIC cards are connected to.

The example below shows how to install and enable the LLDP daemon on CentOS and check which neighbor switch ports the server's network cards are connected to.

$ yum install lldpd
$ systemctl --now enable lldpd
  
$ lldpcli show neighbors
-------------------------------------------------------------------------------
LLDP neighbors:
-------------------------------------------------------------------------------
Interface:    enp6s0f1, via: LLDP, RID: 1, Time: 0 day, 00:01:29
  Chassis:
    ChassisID:    mac 00:1c:73:82:07:ee
    SysName:      xx-ay-01.06.09
    SysDescr:     Arista Networks EOS version 4.16.6M running on an Arista Networks Lab-71x-28
    MgmtIP:       10.0.254.9
    Capability:   Bridge, on
    Capability:   Router, on
  Port:
    PortID:       ifname Ethernet17
    TTL:          120
-------------------------------------------------------------------------------
