Linux performance analysis in 60 seconds

Performance tools checklist

The following tools provide a first look at almost any performance issue. Running these commands on a poorly performing Linux system will often identify the problem directly, or at least narrow down where to look in the next step of the analysis.

  • uptime
  • dmesg | tail
  • lscpu
  • free -m
  • lsblk
  • vmstat 1
  • mpstat -P ALL 1
  • pidstat -u 1
  • pidstat -dt 1
  • iostat -xz 1
  • sar -n DEV 1
  • sar -n TCP,ETCP 1
  • top

uptime

This is a quick way to view the load averages on the system. The load counts tasks that are either waiting to run on the CPUs or blocked in uninterruptible I/O (usually disk I/O). The three numbers are moving load averages for the last 1, 5, and 15 minutes. In this example, the rising values (23.50 vs. 5.07) show a significant recent load increase.

$ cat uptime.out
 20:22:25 up 97 days, 46 min,  1 user,  load average: 23.50, 11.97, 5.07
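A quick rule of thumb is to normalize the 1-minute load by the CPU count; a sketch of that check (the function name is illustrative, not from any tool):

```python
# Rough saturation check: a 1-minute load average well above the number
# of CPUs suggests tasks are queuing for CPU (or blocked in
# uninterruptible I/O, which also counts toward load on Linux).
def load_per_cpu(load1, ncpus):
    """Return the 1-minute load average normalized per CPU."""
    return load1 / ncpus

# With the sample above: load 23.50 on a 96-CPU box is well under 1.0
# per CPU, so the system is busier than before but not CPU-saturated.
ratio = load_per_cpu(23.50, 96)
print(round(ratio, 3))  # 0.245
```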

lscpu

This command summarizes the CPU configuration: architecture, socket/core/thread topology, clock speeds, cache sizes, and NUMA layout.

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                96
On-line CPU(s) list:   0-95
Thread(s) per core:    2
Core(s) per socket:    24
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz
Stepping:              7
CPU MHz:               2315.744
CPU max MHz:           4000.0000
CPU min MHz:           1000.0000
BogoMIPS:              4800.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              36608K
NUMA node0 CPU(s):     0-23,48-71
NUMA node1 CPU(s):     24-47,72-95
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
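The logical CPU count in the output is just the product of the topology fields, which is a quick sanity check when reading lscpu:

```python
# lscpu's "CPU(s)" equals Socket(s) x Core(s) per socket x
# Thread(s) per core.
def logical_cpus(sockets, cores_per_socket, threads_per_core):
    return sockets * cores_per_socket * threads_per_core

# With the sample above: 2 sockets x 24 cores x 2 threads = 96,
# matching "CPU(s): 96".
print(logical_cpus(2, 24, 2))  # 96
```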

free -m

This shows the total, used, and available memory in MB. The available column is the kernel's estimate of how much memory new workloads can use without swapping; it is roughly the sum of free and buff/cache, since most of the buffer/page cache can be reclaimed (written back to disk) as needed.

$ free -m
              total        used        free      shared  buff/cache   available
Mem:          61939        5954        1561         270       54424       52116
Swap:          4095         198        3897
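A naive sum of free and buff/cache gives an upper bound on available; the kernel's estimate is somewhat lower because some cache is not reclaimable. A sketch with the sample numbers (function name illustrative):

```python
# Upper-bound estimate of available memory: free + buff/cache.
# The kernel's "available" column subtracts caches it considers
# non-reclaimable, so it is usually below this sum.
def naive_available(free_mb, buff_cache_mb):
    return free_mb + buff_cache_mb

# With the sample above: 1561 + 54424 = 55985 MB, an upper bound on
# the reported "available" of 52116 MB.
print(naive_available(1561, 54424))  # 55985
```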

dmesg | tail

This shows the last 10 kernel messages. Error messages shown here can sometimes indicate the cause of a performance issue.

$ dmesg | tail
[6832692.919986] bash (27857): drop_caches: 3
[7550537.761563] Device 1009069759715322128 added ffff888248373000 with mode 0x48002 fastpath 0 npath 0
[7550538.267764] removing device 1009069759715322128
[7550544.483191] Device 1009069759715322128 added ffff888801cd6000 with mode 0x48002 fastpath 0 npath 0
[7551342.509806] trace_kprobe: Could not probe notrace function vfs_stat_set_lookup_flags
[7551342.512439] trace_kprobe: Could not probe notrace function vfs_stat_set_lookup_flags
[7551359.071096] trace_kprobe: Could not probe notrace function vfs_stat_set_lookup_flags
[7551359.073699] trace_kprobe: Could not probe notrace function vfs_stat_set_lookup_flags
[8276360.681817] RPC: fragment too large: 612067950
[8281013.520451] nfsd: peername failed (err 107)!

vmstat 1

This is a simple but very useful command that prints a rolling performance summary for CPU, memory, swap, and I/O. In many cases it reveals the cause of, or a clue to, the performance issue. In this example, it shows ~25 runnable processes (or threads) and ~22% of CPU time spent in user space. Note that the first report gives averages since the last reboot; normally you start reading from the second sampling interval.

$ vmstat 1 3
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
31  0 202752 1599384  45844 55684312    0    0    62   785    0    0  2  1 97  0  0
24  0 202752 1774464  45468 55505764    0   88 328816   116 145768 111575 22  2 75  1  0
27  0 202752 1577296  45476 55693220    0    0 185692    44 148113 108146 23  2 75  0  0
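A vmstat data line can be mapped to named fields for scripting; a minimal sketch against the second sample line above (field names follow the vmstat header):

```python
def parse_vmstat_line(line):
    """Map one vmstat data line to a dict of named fields."""
    fields = ("r b swpd free buff cache si so bi bo in cs "
              "us sy id wa st").split()
    return dict(zip(fields, (int(x) for x in line.split())))

sample = ("24  0 202752 1774464  45468 55505764    0   88 328816   116 "
          "145768 111575 22  2 75  1  0")
row = parse_vmstat_line(sample)
# 24 runnable tasks and 22% user CPU, as discussed above.
print(row["r"], row["us"])  # 24 22
```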

iostat -xz 1

This is another useful tool for analyzing disk I/O performance. As with vmstat, the first report provides averages since the last system boot.

The metrics include:

  • merges/s - rrqm/s, wrqm/s (read/write requests merged per second)

  • iops - r/s, w/s

  • throughput - rkB/s, wkB/s

  • io size - avgrq-sz (average request size, in 512-byte sectors)

  • queue size - avgqu-sz (average request queue length)

  • wait time in ms - await, r_await, w_await

  • service time in ms - svctm (deprecated; not reliable)

  • disk bandwidth utilization - %util

    $ iostat -xz 1 3
    Linux 5.7.12-1.el7.elrepo.x86_64 (localhost) 03/20/2023 x86_64 (96 CPU)

    avg-cpu: %user %nice %system %iowait %steal %idle
    2.31 0.00 0.52 0.05 0.00 97.11
    Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
    nvme4n1 0.09 0.77 13.66 13.87 555.49 681.99 89.89 0.00 0.14 0.22 0.06 0.29 0.81
    nvme3n1 1.41 25.85 12.89 269.43 567.87 14516.73 106.86 0.02 0.09 0.23 0.08 0.24 6.75
    nvme5n1 1.40 25.13 15.25 261.88 576.08 14394.15 108.04 0.02 0.09 0.20 0.08 0.24 6.72
    nvme0n1 0.17 0.47 0.71 0.75 15.99 22.02 52.06 0.00 0.42 0.14 0.69 0.39 0.06
    nvme2n1 0.18 0.84 14.35 14.01 559.02 687.26 87.88 0.00 0.14 0.22 0.07 0.30 0.84
    nvme1n1 0.00 0.00 0.00 0.00 0.01 0.00 52.12 0.00 0.10 0.10 0.00 0.13 0.00
    nvme6n1 1.44 25.28 11.29 259.40 562.39 14368.24 110.32 0.02 0.09 0.25 0.08 0.24 6.60
    nvme8n1 0.22 0.80 13.58 13.64 554.13 676.59 90.43 0.00 0.15 0.23 0.07 0.29 0.80
    nvme7n1 1.44 25.69 11.51 258.52 556.81 14250.59 109.67 0.02 0.08 0.25 0.08 0.24 6.56
    nvme9n1 0.10 0.91 13.86 13.54 550.89 668.71 89.04 0.00 0.14 0.22 0.07 0.29 0.81
    md0 0.00 0.00 12.31 15.97 655.15 414.27 75.63 0.00 0.00 0.00 0.00 0.00 0.00

    avg-cpu: %user %nice %system %iowait %steal %idle
    21.63 0.00 2.10 1.23 0.00 75.04
    Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
    nvme4n1 8.00 0.00 2264.00 0.00 83248.00 0.00 73.54 0.49 0.22 0.22 0.00 0.29 65.30
    nvme0n1 0.00 16.00 0.00 7.00 0.00 92.00 26.29 0.14 19.29 0.00 19.29 15.71 11.00
    nvme2n1 3.00 0.00 2264.00 0.00 82724.00 0.00 73.08 0.50 0.22 0.22 0.00 0.30 67.50
    nvme8n1 8.00 0.00 2180.00 4.00 80624.00 20.00 73.85 0.47 0.22 0.22 0.00 0.30 64.60
    nvme9n1 6.00 0.00 2254.00 0.00 82180.00 0.00 72.92 0.49 0.22 0.22 0.00 0.28 64.10
    md0 0.00 0.00 8986.00 8.00 328704.00 20.00 73.10 0.00 0.00 0.00 0.00 0.00 0.00

    avg-cpu: %user %nice %system %iowait %steal %idle
    22.84 0.00 1.78 0.44 0.00 74.94
    Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
    nvme4n1 3.00 0.00 1153.00 0.00 47180.00 0.00 81.84 0.25 0.22 0.22 0.00 0.44 50.80
    nvme0n1 0.00 0.00 5.00 0.00 40.00 0.00 16.00 0.00 0.20 0.20 0.00 0.40 0.20
    nvme2n1 2.00 0.00 1140.00 0.00 47260.00 0.00 82.91 0.26 0.23 0.23 0.00 0.42 48.20
    nvme8n1 2.00 0.00 1101.00 8.00 46696.00 32.00 84.27 0.25 0.22 0.22 0.00 0.45 50.10
    nvme9n1 5.00 1.00 1112.00 2.00 44516.00 12.00 79.94 0.25 0.22 0.22 0.00 0.43 48.00
    md0 0.00 0.00 4516.00 19.00 185620.00 44.00 81.88 0.00 0.00 0.00 0.00 0.00 0.00
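Since avgrq-sz is reported in 512-byte sectors, it can be cross-checked against rkB/s divided by r/s; a sketch using nvme4n1 from the second report (helper name illustrative):

```python
# avgrq-sz is in 512-byte sectors; converting to KB should roughly
# match rkB/s / r/s for a read-dominated workload.
def avg_request_kb(avgrq_sz_sectors):
    return avgrq_sz_sectors * 512 / 1024

# nvme4n1 in the second report: 73.54 sectors ~= 36.8 KB per request,
# agreeing with 83248 rkB/s / 2264 r/s ~= 36.8 KB.
print(round(avg_request_kb(73.54), 1))  # 36.8
```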

sar -n DEV 1

The sar tool has many options for different metrics. Here we use an option to look at the network device metrics. This is very useful to check the current network throughput on each interface.

$ sar -n DEV 1 3
Linux 5.7.12-1.el7.elrepo.x86_64 (localhost) 03/20/2023 _x86_64_ (96 CPU)

08:22:25 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
08:22:26 PM        lo      9.00      9.00      0.73      0.73      0.00      0.00      0.00
08:22:26 PM      eno1     57.00      8.00      3.47      0.47      0.00      0.00      0.00
08:22:26 PM      eno2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:22:26 PM enp175s0f0  22353.00  22359.00  13986.21  13454.67     0.00      0.00      0.00
08:22:26 PM enp175s0f1     0.00      0.00      0.00      0.00     0.00      0.00      0.00
08:22:26 PM   docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00

08:22:26 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
08:22:27 PM        lo      8.00      8.00      0.47      0.47      0.00      0.00      0.00
08:22:27 PM      eno1     60.00      6.00      3.66      0.43      0.00      0.00      0.00
08:22:27 PM      eno2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:22:27 PM enp175s0f0  23500.00  23497.00  14913.20  13944.36     0.00      0.00      0.00
08:22:27 PM enp175s0f1     0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:22:27 PM   docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00

08:22:27 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
08:22:28 PM        lo     12.00     12.00      1.19      1.19      0.00      0.00      0.00
08:22:28 PM      eno1     66.00      8.00      4.11      0.63      0.00      0.00      0.00
08:22:28 PM      eno2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:22:28 PM enp175s0f0  23584.00  23590.00  14889.09  14073.68     0.00      0.00      0.00
08:22:28 PM enp175s0f1     0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:22:28 PM   docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00

Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
Average:           lo      9.67      9.67      0.80      0.80      0.00      0.00      0.00
Average:         eno1     61.00      7.33      3.75      0.51      0.00      0.00      0.00
Average:         eno2      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:    enp175s0f0  23145.67  23148.67  14596.16  13824.24     0.00      0.00      0.00
Average:    enp175s0f1     0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00
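Dividing rxkB/s by rxpck/s gives the average packet size, which helps distinguish small-packet chatter from bulk transfers; a sketch with the enp175s0f0 averages above:

```python
def avg_packet_bytes(rx_kb_per_s, rx_pkts_per_s):
    """Average received packet size in bytes."""
    return rx_kb_per_s * 1024 / rx_pkts_per_s

# enp175s0f0 averages: ~646-byte packets at ~23k packets/s, i.e.
# roughly 14.6 MB/s of receive traffic.
print(round(avg_packet_bytes(14596.16, 23145.67)))  # 646
```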

sar -n TCP,ETCP 1

This command shows the following three useful metrics:

  • active/s - The number of locally initiated TCP connections per second

  • passive/s - The number of remotely initiated TCP connections per second

  • retrans/s - The number of TCP retransmits per second

    $ sar -n TCP,ETCP 1 3
    Linux 5.7.12-1.el7.elrepo.x86_64 (init500-c4) 03/20/2023 x86_64 (96 CPU)

    08:48:02 PM active/s passive/s iseg/s oseg/s
    08:48:03 PM 1.00 0.00 23194.00 23200.00

    08:48:02 PM atmptf/s estres/s retrans/s isegerr/s orsts/s
    08:48:03 PM 0.00 0.00 0.00 0.00 4.00

    08:48:03 PM active/s passive/s iseg/s oseg/s
    08:48:04 PM 0.00 0.00 23410.00 23406.00

    08:48:03 PM atmptf/s estres/s retrans/s isegerr/s orsts/s
    08:48:04 PM 0.00 0.00 0.00 0.00 8.00

    08:48:04 PM active/s passive/s iseg/s oseg/s
    08:48:05 PM 1.00 0.00 23698.00 23706.00

    08:48:04 PM atmptf/s estres/s retrans/s isegerr/s orsts/s
    08:48:05 PM 0.00 0.00 0.00 0.00 4.00

    Average: active/s passive/s iseg/s oseg/s
    Average: 0.67 0.00 23434.00 23437.33

    Average: atmptf/s estres/s retrans/s isegerr/s orsts/s
    Average: 0.00 0.00 0.00 0.00 5.33
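Retransmits are most meaningful as a fraction of output segments; a sketch computing that ratio from the averages above (function name illustrative):

```python
def retrans_ratio(retrans_per_s, oseg_per_s):
    """Fraction of output segments that are retransmissions."""
    return retrans_per_s / oseg_per_s if oseg_per_s else 0.0

# In the sample, retrans/s is 0 at ~23k oseg/s, so retransmission is
# not a factor here; a sustained ratio above roughly 1% would point to
# network loss or an overloaded receiver.
print(retrans_ratio(0.0, 23437.33))  # 0.0
```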

mpstat -P ALL 1

This command helps identify the per-CPU core usage. Check whether any single CPU core has utilization (%usr + %sys) close to 100%, which indicates a single-threaded bottleneck even when the overall CPU usage looks low.

$ mpstat -P ALL 1 3
Linux 5.7.12-1.el7.elrepo.x86_64 (init500-c4)   03/20/2023  _x86_64_    (96 CPU)

08:22:25 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
08:22:26 PM  all   21.11    0.00    1.79    1.23    0.00    0.20    0.00    0.00    0.00   75.67
08:22:26 PM    0   27.17    0.00    2.17    2.17    0.00    0.00    0.00    0.00    0.00   68.48
08:22:26 PM    1    4.12    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   95.88
08:22:26 PM    2    7.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00   92.00
08:22:26 PM    3   12.00    0.00    1.00    1.00    0.00    0.00    0.00    0.00    0.00   86.00
08:22:26 PM    4   15.00    0.00    1.00    1.00    0.00    0.00    0.00    0.00    0.00   83.00
08:22:26 PM    5   16.83    0.00    1.98    0.00    0.00    0.00    0.00    0.00    0.00   81.19
08:22:26 PM    6   24.00    0.00    1.00    2.00    0.00    0.00    0.00    0.00    0.00   73.00
[...]

08:22:26 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
08:22:27 PM  all   22.84    0.00    1.53    0.44    0.00    0.28    0.00    0.00    0.00   74.91
08:22:27 PM    0   10.64    0.00    1.06    0.00    0.00    3.19    0.00    0.00    0.00   85.11
08:22:27 PM    1   35.35    0.00    3.03    1.01    0.00    0.00    0.00    0.00    0.00   60.61
08:22:27 PM    2   17.17    0.00    0.00    1.01    0.00    0.00    0.00    0.00    0.00   81.82
08:22:27 PM    3   40.00    0.00    3.00    1.00    0.00    0.00    0.00    0.00    0.00   56.00
08:22:27 PM    4    8.08    0.00    1.01    0.00    0.00    0.00    0.00    0.00    0.00   90.91
08:22:27 PM    5   27.00    0.00    1.00    1.00    0.00    0.00    0.00    0.00    0.00   71.00
08:22:27 PM    6   14.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00   85.00
[...]

08:22:27 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
08:22:28 PM  all   22.42    0.00    1.57    0.42    0.00    0.20    0.00    0.00    0.00   75.39
08:22:28 PM    0   29.35    0.00    2.17    0.00    0.00    0.00    0.00    0.00    0.00   68.48
08:22:28 PM    1   20.41    0.00    1.02    0.00    0.00    0.00    0.00    0.00    0.00   78.57
08:22:28 PM    2   18.18    0.00    1.01    0.00    0.00    0.00    0.00    0.00    0.00   80.81
08:22:28 PM    3   10.89    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   89.11
08:22:28 PM    4   18.00    0.00    1.00    0.00    0.00    0.00    0.00    0.00    0.00   81.00
08:22:28 PM    5    5.05    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   94.95
08:22:28 PM    6   21.21    0.00    1.01    0.00    0.00    0.00    0.00    0.00    0.00   77.78
[...]

Average:     CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest  %gnice   %idle
Average:     all   22.12    0.00    1.63    0.69    0.00    0.23    0.00    0.00    0.00   75.32
Average:       0   22.30    0.00    1.80    0.72    0.00    1.08    0.00    0.00    0.00   74.10
Average:       1   20.07    0.00    1.36    0.34    0.00    0.00    0.00    0.00    0.00   78.23
Average:       2   14.09    0.00    0.67    0.34    0.00    0.00    0.00    0.00    0.00   84.90
Average:       3   20.93    0.00    1.33    0.66    0.00    0.00    0.00    0.00    0.00   77.08
Average:       4   13.71    0.00    1.00    0.33    0.00    0.00    0.00    0.00    0.00   84.95
Average:       5   16.33    0.00    1.00    0.33    0.00    0.00    0.00    0.00    0.00   82.33
Average:       6   19.73    0.00    1.00    0.67    0.00    0.00    0.00    0.00    0.00   78.60
[...]
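The "any single core pegged?" check can be scripted; a minimal sketch over a few (%usr, %sys) pairs taken from the second report above (function name and threshold are illustrative):

```python
def hot_cores(per_cpu, threshold=90.0):
    """Return CPU ids whose %usr + %sys exceeds the threshold."""
    return [cpu for cpu, (usr, sys) in per_cpu.items()
            if usr + sys > threshold]

# Per-core (%usr, %sys) samples from the second mpstat report above:
sample = {0: (10.64, 1.06), 1: (35.35, 3.03), 3: (40.00, 3.00)}
# The load is spread across cores; no single core is pegged.
print(hot_cores(sample))  # []
```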

pidstat 1

This command shows the CPU usage per process. In this example, the cockroach process consumes 100% CPU, i.e. the equivalent of one full core; note that the system has 9,600% (96 cores) of CPU bandwidth available in total. The -t option shows per-thread CPU usage, and the -d option shows disk I/O per process (or per thread when combined with -t).

$ pidstat 1 3
Linux 5.7.12-1.el7.elrepo.x86_64 (localhost) 03/20/2023 _x86_64_ (96 CPU)

08:22:25 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
08:22:26 PM     0       568    0.00   12.62    0.00   12.62     9  kswapd0
08:22:26 PM     0     27131  100.00  100.00    0.00  100.00    85  cockroach
08:22:26 PM     0     42056    0.97    1.94    0.00    2.91    40  pidstat
08:22:26 PM     0     42060    0.97    1.94    0.00    2.91    89  top
08:22:26 PM     0     53123    0.00    0.97    0.00    0.97    74  kworker/74:0-events
08:22:26 PM     0     94796    0.00    1.94    0.00    1.94    41  kworker/u192:1-mlx5_cmd_0000:af:00.0

08:22:26 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
08:22:27 PM     0       568    0.00    3.00    0.00    3.00    58  kswapd0
08:22:27 PM     0      3175    0.00    1.00    0.00    1.00    19  containerd
08:22:27 PM     0     17494    0.00    1.00    0.00    1.00    33  kworker/33:2-events
08:22:27 PM     0     27131  100.00  100.00    0.00  100.00    79  cockroach
08:22:27 PM     0     38660    0.00    1.00    0.00    1.00    32  kworker/32:1-mm_percpu_wq
08:22:27 PM     0     39110    0.00    1.00    0.00    1.00    46  kworker/46:0-events
08:22:27 PM     0     41667    0.00    1.00    0.00    1.00    80  kworker/80:2-events
08:22:27 PM     0     42056    1.00    4.00    0.00    5.00    40  pidstat
08:22:27 PM     0     42060    2.00    2.00    0.00    4.00    89  top
08:22:27 PM     0     94796    0.00    1.00    0.00    1.00    81  kworker/u192:1-mlx5_cmd_0000:af:00.0

08:22:27 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
08:22:28 PM     0        11    0.00    1.00    0.00    1.00    57  rcu_sched
08:22:28 PM     0       568    0.00    7.00    0.00    7.00    58  kswapd0
08:22:28 PM     0     27131  100.00  100.00    0.00  100.00    40  cockroach
08:22:28 PM     0     41616    0.00    1.00    0.00    1.00    23  iostat
08:22:28 PM     0     42056    1.00    2.00    0.00    3.00    41  pidstat
08:22:28 PM     0     53853    0.00    1.00    0.00    1.00    29  kworker/29:1-events
08:22:28 PM     0     87419    0.00    1.00    0.00    1.00     0  kworker/0:0-events

Average:      UID       PID    %usr %system  %guest    %CPU   CPU  Command
Average:        0        11    0.00    0.33    0.00    0.33     -  rcu_sched
Average:        0       568    0.00    7.59    0.00    7.59     -  kswapd0
Average:        0      3175    0.00    0.33    0.00    0.33     -  containerd
Average:        0     17494    0.00    0.33    0.00    0.33     -  kworker/33:2-mm_percpu_wq
Average:        0     27131  100.00  100.00    0.00  100.00     -  cockroach
Average:        0     38660    0.00    0.33    0.00    0.33     -  kworker/32:1-mm_percpu_wq
Average:        0     39110    0.00    0.33    0.00    0.33     -  kworker/46:0-events
Average:        0     41616    0.00    0.33    0.00    0.33     -  iostat
Average:        0     41667    0.00    0.33    0.00    0.33     -  kworker/80:2-events
Average:        0     42056    0.99    2.64    0.00    3.63     -  pidstat
Average:        0     53123    0.00    0.33    0.00    0.33     -  kworker/74:0-events
Average:        0     53853    0.00    0.33    0.00    0.33     -  kworker/29:1-events
Average:        0     87419    0.00    0.33    0.00    0.33     -  kworker/0:0-events
Average:        0     94796    0.00    0.99    0.00    0.99     -  kworker/u192:1-dm-thin
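In pidstat, 100% means one full CPU, so on a many-core box it is worth converting to a share of total capacity; a sketch (function name illustrative):

```python
def system_share(pct_cpu, ncpus):
    """Convert pidstat's per-process %CPU (100% == one CPU) into a
    fraction of total system CPU capacity."""
    return pct_cpu / (ncpus * 100)

# cockroach at 100% on this 96-CPU box is only ~1% of total capacity.
print(round(system_share(100.0, 96) * 100, 2))  # 1.04
```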

top

This command shows the CPU and memory usage per process. It’s very useful when you need a per-process CPU or memory profile. Notice that it also shows the load averages. To collect the metrics to a file, run it in batch mode, e.g. top -b -d1 -n3.

$ top
top - 20:22:25 up 97 days, 46 min,  1 user,  load average: 23.50, 11.97, 5.07
Tasks: 979 total,   1 running, 405 sleeping,   0 stopped,   0 zombie
%Cpu(s): 23.2 us,  3.0 sy,  0.0 ni, 71.2 id,  2.3 wa,  0.0 hi,  0.3 si,  0.0 st
KiB Mem : 63426556 total,  1962168 free,  6098708 used, 55365680 buff/cache
KiB Swap:  4194300 total,  3991548 free,   202752 used. 53364460 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
27131 root      20   0   19.0g   1.3g  46196 S  2244  2.2   7412:53 cockroach
[...]

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
27131 root      20   0   19.0g   1.3g  46196 R  2177  2.2   7413:15 cockroach
[...]

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
27131 root      20   0   19.0g   1.3g  46208 S  2323  2.2   7413:39 cockroach
[...]
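Unlike pidstat above, top sums %CPU across a process's threads, so dividing by 100 gives the approximate number of busy cores; a sketch with the sample value (function name illustrative):

```python
def busy_cores(top_pct_cpu):
    """In top, %CPU is summed across threads, so 100% == one full core."""
    return top_pct_cpu / 100

# cockroach at 2244 %CPU is keeping roughly 22 cores busy, consistent
# with the ~22% all-CPU user time seen in vmstat and mpstat on this
# 96-CPU system.
print(round(busy_cores(2244)))  # 22
```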