For measurement purposes, we have implemented a tool called ifpps, which
periodically provides top-like networking and system statistics from the
kernel. ifpps gathers its data directly from procfs files and does not rely
on any user space monitoring libraries such as libpcap [120], which is used,
for instance, in tools like iptraf [121].
The main idea behind ifpps is to apply the principles from section 2.2.1 in
order to obtain more accurate networking statistics under high packet load.
For instance, consider the following scenario: two directly connected Linux
machines with Intel Core 2 Quad Q6600 2.40GHz CPUs, 4 GB RAM, and an Intel
82566DC-2 Gigabit Ethernet NIC are used for performance evaluation. One
machine generates 64 byte network packets by using the kernel space packet
generator pktgen at the maximum possible packet rate. The other machine
displays statistics about incoming network packets by using i) iptraf and
ii) ifpps.
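For illustration, the following is a minimal sketch, not part of ifpps, of
how such a pktgen run can be configured from C by writing commands to
pktgen's procfs interface. The destination IP and MAC address are
placeholders, and the pktgen kernel module must be loaded beforehand:

/* Sketch: drive pktgen through its procfs command files. */
#include <stdio.h>
#include <stdlib.h>

static void pgset(const char *path, const char *cmd)
{
        FILE *f = fopen(path, "w");
        if (!f) {
                perror(path);
                exit(EXIT_FAILURE);
        }
        fprintf(f, "%s\n", cmd);
        fclose(f);
}

int main(void)
{
        /* Bind eth0 to the pktgen thread of CPU 0 */
        pgset("/proc/net/pktgen/kpktgend_0", "rem_device_all");
        pgset("/proc/net/pktgen/kpktgend_0", "add_device eth0");
        /* 64 byte packets, no inter-packet delay, unlimited count */
        pgset("/proc/net/pktgen/eth0", "pkt_size 64");
        pgset("/proc/net/pktgen/eth0", "delay 0");
        pgset("/proc/net/pktgen/eth0", "count 0");
        pgset("/proc/net/pktgen/eth0", "dst 10.0.0.2");              /* placeholder */
        pgset("/proc/net/pktgen/eth0", "dst_mac 00:11:22:33:44:55"); /* placeholder */
        /* Start transmission; blocks until pktgen is stopped */
        pgset("/proc/net/pktgen/pgctrl", "start");
        return 0;
}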
iptraf, which incorporates libpcap, shows an average packet rate of
246,000 pps, while ifpps shows an average packet rate of 1,378,000 pps.
Hence, due to copying packets and deferring statistics creation into user
space, a measurement error of approx. 460 per cent occurs:
(1,378,000 - 246,000) / 246,000 ≈ 4.6. Tools like iptraf, for instance,
display much more information, such as TCP per-flow statistics (hence the
use of libpcap), which we have not implemented in ifpps because our focus is
on overall networking statistics. Principle P1 is applied in our case by
avoiding the collection of network packets in user space for statistics
creation. Further, principle P2 means that we let the kernel calculate
packet statistics, for instance within the network device drivers. With both
principles applied, we fetch network driver receive and transmit statistics
from procfs. Hence, the following files are of interest for ifpps; a short
parsing sketch follows the list:
* /proc/net/dev: network device receive and transmit statistics
* /proc/softirqs: per-CPU statistics about scheduled NET_RX and NET_TX
  software interrupts (section 3.1)
* /proc/interrupts: per-CPU network device hardware interrupts
* /proc/stat: per-CPU statistics about time (in USER_HZ) spent in user,
  system, idle, and I/O wait mode
* /proc/meminfo: total and free memory statistics
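As a sketch of the first of these, the per-device counters can be parsed
from /proc/net/dev as follows. This is an illustration based on the field
layout documented in proc(5), not the actual ifpps source, and the function
name fetch_dev_stats is ours:

#include <stdio.h>
#include <string.h>

struct dev_stats {
        unsigned long long rx_bytes, rx_packets, rx_errs, rx_drop;
        unsigned long long tx_bytes, tx_packets, tx_errs, tx_drop;
};

static int fetch_dev_stats(const char *dev, struct dev_stats *st)
{
        char line[512];
        FILE *f = fopen("/proc/net/dev", "r");
        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f)) {
                char *p = strchr(line, ':');
                if (!p)
                        continue;       /* skip the two header lines */
                *p = '\0';
                /* device names are left-padded with spaces */
                if (strcmp(dev, line + strspn(line, " ")))
                        continue;
                /* 8 RX fields, then 8 TX fields; fifo/frame/... skipped */
                sscanf(p + 1,
                       "%llu %llu %llu %llu %*u %*u %*u %*u "
                       "%llu %llu %llu %llu",
                       &st->rx_bytes, &st->rx_packets, &st->rx_errs,
                       &st->rx_drop, &st->tx_bytes, &st->tx_packets,
                       &st->tx_errs, &st->tx_drop);
                fclose(f);
                return 0;
        }
        fclose(f);
        return -1;
}

The per-interval values displayed by ifpps then correspond to the difference
between two such snapshots taken t seconds apart.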
Every given time interval (t, default: t = 1 s), statistics are parsed from
procfs and displayed on an ncurses-based [122] screen. An example output of
ifpps eth0 looks like the following:
Kernel net / sys statistics for eth0

RX:   0.003 MiB/t     20 pkts/t   0 drops/t   0 errors/t
TX:   0.000 MiB/t      2 pkts/t   0 drops/t   0 errors/t

RX: 226.372 MiB   657800 pkts     0 drops     0 errors
TX:  12.104 MiB   101317 pkts     0 drops     0 errors

SYS:  2160 cs/t   43.9% mem   1 running   0 iowait

CPU0:  0.0% usr/t   0.0% sys/t  100.0% idl/t   0.0% iow/t
CPU1:  0.0% usr/t   0.5% sys/t   99.5% idl/t   0.0% iow/t
CPU2:  0.5% usr/t   0.0% sys/t   99.5% idl/t   0.0% iow/t
CPU3:  4.9% usr/t   0.5% sys/t   94.6% idl/t   0.0% iow/t

CPU0:  7 irqs/t   7 soirq RX/t   0 soirq TX/t
CPU1:  8 irqs/t   8 soirq RX/t   0 soirq TX/t
CPU2:  3 irqs/t   3 soirq RX/t   0 soirq TX/t
CPU3:  3 irqs/t   4 soirq RX/t   0 soirq TX/t
The first two lines display received and transmitted MiB, packets, dropped
packets, and errors for the network device eth0 within the given time
interval t. The next two lines show the aggregated values since boot time.
Moreover, the line starting with SYS shows context switches per t seconds,
current memory usage, the number of currently running processes, and the
number of processes waiting for I/O to complete. Furthermore, ifpps displays
per-CPU information about the percentage of time spent executing in user
mode, kernel mode, idle mode, and I/O wait mode. Next to this, per-CPU
hardware and software interrupts are shown. Displayed hardware interrupts
are only those caused by the network device eth0. Software interrupts are
not further distinguished by device, thus only overall per-CPU receive and
transmit statistics are provided. However, the last lines show aggregated
eth0 hardware interrupts since boot time.
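The per-CPU percentages are plain tick arithmetic over /proc/stat: two
snapshots are taken t seconds apart and each tick delta (in USER_HZ) is
related to the total delta. A minimal sketch of this computation, ours
rather than the ifpps source, using the aggregate cpu line for brevity:

#include <stdio.h>
#include <unistd.h>

struct cpu_ticks {
        unsigned long long user, nice, system, idle, iowait;
};

static void snapshot(struct cpu_ticks *c)
{
        FILE *f = fopen("/proc/stat", "r");
        /* first line aggregates all CPUs; ifpps reads the cpuN lines */
        fscanf(f, "cpu %llu %llu %llu %llu %llu",
               &c->user, &c->nice, &c->system, &c->idle, &c->iowait);
        fclose(f);
}

int main(void)
{
        struct cpu_ticks a, b;
        snapshot(&a);
        sleep(1);               /* interval t */
        snapshot(&b);
        double total = (b.user - a.user) + (b.nice - a.nice) +
                       (b.system - a.system) + (b.idle - a.idle) +
                       (b.iowait - a.iowait);
        printf("%.1f%% usr/t  %.1f%% sys/t  %.1f%% idl/t  %.1f%% iow/t\n",
               100.0 * (b.user - a.user) / total,
               100.0 * (b.system - a.system) / total,
               100.0 * (b.idle - a.idle) / total,
               100.0 * (b.iowait - a.iowait) / total);
        return 0;
}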
Furthermore, ifpps supports setting the network adapter into promiscuous
mode via the option --promisc, i.e.

ifpps --dev eth0 --promisc.
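On Linux, promiscuous mode is toggled through the IFF_PROMISC interface
flag. The following is a minimal sketch of this mechanism using the
SIOCGIFFLAGS/SIOCSIFFLAGS ioctls; whether ifpps performs exactly these calls
is not shown here, so treat the helper set_promisc as illustrative:

#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

static int set_promisc(const char *dev, int on)
{
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
                return -1;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, dev, IFNAMSIZ - 1);
        /* read current flags, flip IFF_PROMISC, write them back */
        if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0)
                goto err;
        if (on)
                ifr.ifr_flags |= IFF_PROMISC;
        else
                ifr.ifr_flags &= ~IFF_PROMISC;
        if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0)
                goto err;
        close(fd);
        return 0;
err:
        close(fd);
        return -1;
}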