.TH interbench "8" "March 2006" "Interbench 0.30" "System Commands"
.SH NAME
interbench \- application designed to benchmark interactivity in Linux
.SH SYNOPSIS
.B interbench
\fR[\-l <int>] [\-L <int>] [\-t <int>] [\-B <int>] [\-N <int>]
[\-b] [\-c] [\-r] [\-C <int> \-I <int>] [\-m <comment>]
[\-w <load type>] [\-x <load type>] [\-W <bench>] [\-X <bench>]
.SH OPTIONS
.TP
\fB\-l\fR
Use <int> loops per sec (default: use saved benchmark)
.TP
\fB\-L\fR
Use cpu load of <int> with burn load (default: 4)
.TP
\fB\-t\fR
Seconds to run each benchmark (default: 30)
.TP
\fB\-B\fR
Nice the benchmarked thread to <int> (default: 0)
.TP
\fB\-N\fR
Nice the load thread to <int> (default: 0)
.TP
\fB\-b\fR
Benchmark loops_per_ms even if it is already known
.TP
\fB\-c\fR
Output to console only (default: use console and logfile)
.TP
\fB\-r\fR
Perform real time scheduling benchmarks (default: non-rt)
.TP
\fB\-C\fR
Use <int> percentage cpu as a custom load (default: no custom load)
.TP
\fB\-I\fR
Use <int> microsecond intervals for custom load (needs \-C as well)
.TP
\fB\-m\fR
Add <comment> to the log file as a separate line
.TP
\fB\-w\fR
Add <load type> to the list of loads to be tested against
.TP
\fB\-x\fR
Exclude <load type> from the list of loads to be tested against
.TP
\fB\-W\fR
Add <bench> to the list of benchmarks to be tested
.TP
\fB\-X\fR
Exclude <bench> from the list of benchmarks to be tested
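As an illustration of how these flags combine, a few hypothetical invocations (the flag meanings are exactly those listed above; the comment text is invented):

```shell
# Run the standard benchmark, logging to console and logfile:
interbench

# Run each benchmark for 60 seconds, console output only:
interbench -c -t 60

# Also run the real time scheduling benchmarks, tagging the log
# with a comment describing this run:
interbench -r -m "testing new cpu scheduler"
```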
.SH DESCRIPTION
If run without parameters, \fBinterbench\fR will run a standard benchmark.
\fBinterbench\fR is designed to measure the effect of changes in Linux kernel
design or system configuration, such as cpu, I/O scheduler and filesystem
changes and options. With careful benchmarking, different hardware can be compared.
It is designed to emulate the cpu scheduling behaviour of interactive tasks and
measure their scheduling latency and jitter. It does this with the tasks on
their own and then in the presence of various background loads, both with
configurable nice levels, and the benchmarked tasks can optionally be run as
real time tasks.
.SH How does it work?
First it benchmarks how best to reproduce a fixed percentage of cpu usage on the
machine currently being used for the benchmark. It saves this to a file and then
uses this for all subsequent runs to keep the emulation of cpu usage constant.
It runs a real time high priority timing thread that wakes up the thread or
threads of the simulated interactive tasks and then measures the latency in the
time taken to schedule. As there is no accurate timer driven scheduling in
Linux, the timing thread sleeps as accurately as the Linux kernel supports, and
latency is measured as the time from this sleep until the simulated task gets
scheduled.
Each benchmarked simulation runs as a separate process with its own threads,
and the background load (if any) also runs as a separate process.
.SH What interactive tasks are simulated and how?
.TP
.B X:
X is simulated as a thread that uses a variable amount of cpu ranging from 0 to
100%. This simulates an idle gui where a window is grabbed and then dragged
across the screen.
.TP
.B Audio:
Audio is simulated as a thread that tries to run at 50ms intervals and then
requires 5% cpu. This behaviour ignores any caching that would normally be done
by well designed audio applications, but 50ms has been seen as the interval used
to write to audio cards by a popular Linux audio player. It also ignores any of
the effects of different audio drivers and audio cards. Audio is also
benchmarked running SCHED_FIFO if the real time benchmarking option is used.
.TP
.B Video:
Video is simulated as a thread that tries to receive cpu 60 times per second
and uses 40% cpu. This would be quite a demanding video playback at 60fps. Like
the audio simulator it ignores caching, drivers and video cards. As with audio,
video is also benchmarked with the real time option.
.TP
.B Gaming:
The cpu usage behind gaming is not at all interactive, yet games clearly are
intended for interactive usage. This load simply uses as much cpu as it can
get. It does not return deadlines met, as there are no deadlines with an
unlocked frame rate in a game. This does not accurately emulate a 3d game,
which is gpu bound (limited purely by the graphics card), only a cpu bound
one.
.TP
.B Custom:
This load will allow you to specify your own combination of cpu percentage and
intervals if you have a specific workload you are interested in and know the
cpu usage and frame rate of it on the hardware you are testing.
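As a sketch of how the \-C percentage and \-I interval combine (the numbers here are hypothetical): a workload using 80% cpu at 60 wakeups per second needs an interval of 1000000/60 microseconds.

```shell
# Hypothetical custom load: 80% cpu at 60 "frames" per second.
# 60 wakeups per second = 1000000 / 60 microseconds between wakeups.
fps=60
interval_us=$((1000000 / fps))
echo "$interval_us"   # -> 16666 (integer division)

# The corresponding invocation would then be (not executed here):
#   interbench -C 80 -I "$interval_us"
```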
.SH What loads are simulated?
.TP
.B None:
Otherwise idle system.
.TP
.B Video:
The video simulation thread is also used as a background load.
.TP
.B X:
The X simulation thread is used as a load.
.TP
.B Burn:
A configurable number of threads fully cpu bound (4 by default).
.TP
.B Write:
A streaming write to disk repeatedly of a file the size of physical ram.
.TP
.B Read:
Repeatedly reading a file from disk the size of physical ram (to avoid any
caching effects).
.TP
.B Compile:
Simulating a heavy 'make \-j4' compilation by running Burn, Write and Read
simultaneously.
.TP
.B Memload:
Simulating heavy memory and swap pressure by repeatedly accessing 110% of
available ram and moving it around and freeing it. You need to have some
swap enabled due to the nature of this load, and if it detects no swap this
load is not run.
.TP
.B Hackbench:
This repeatedly runs the benchmarking program "hackbench" as 'hackbench 50'.
This is suggested as a real time load only, but because of how extreme this
load is, it is not unusual for an out-of-memory kill to occur, which will
invalidate any data you get. For this reason it is disabled by default.
.TP
.B Custom:
The custom simulation is used as a load.
.SH What is measured and what does it mean?
.IP 1.
The average scheduling latency (time from requesting cpu till actually getting it) of deadlines met during the test period.
.IP 2.
The scheduling jitter, represented by the standard deviation of the latency.
.IP 3.
The maximum latency seen during the test period.
.IP 4.
The percentage of desired cpu achieved.
.IP 5.
The percentage of deadlines met.
This data is output to console and saved to a file which is stamped with the
kernel name and date. See sample.log.
.nf
--- Benchmarking simulated cpu of X in the presence of simulated ---
Load     Latency +/- SD (ms)  Max Latency  % Desired CPU  % Deadlines Met
None       0.495 +/- 0.495         45           100             96
Video       11.7 +/- 11.7        1815          89.6           62.7
Burn        27.9 +/- 28.1        3335          78.5             44
Write       4.02 +/- 4.03         372            97           78.7
Read        1.09 +/- 1.09         158          99.7             88
Compile     28.8 +/- 28.8        3351          78.2           43.7
Memload     2.81 +/- 2.81         187          98.7             85
.fi
What can be seen here is that never during this test run were all the so called
deadlines met by the X simulator, although all the desired cpu was achieved
under no load. In X terms this means that every bit of window movement was
drawn while moving the window, but some redraws were delayed and there was
enough time to catch up before the next deadline. Under the 'Burn' load we can
see that only 44% of the deadlines were met, and only 78.5% of the desired cpu
was achieved. This means that some deadlines were so late (% deadlines met was
low) that some redraws were dropped entirely to catch up. In X terms this would
translate into jerky movement, in audio it would be a skip, and in video it
would be a dropped frame. Note that despite the massive maximum latency of more
than 3 seconds, the average latency is still less than 30ms. This is because
these sorts of applications usually drop redraws in order to catch up.
.SH What is relevant in the data?
The results are more pessimistic than real world behaviour, because they
ignore the reality of buffering, but this allows subtle differences to be
picked up more readily. In terms of what would be noticed by the end user,
dropping deadlines would make noticeable clicks in audio, subtle visible frame
time delays in video, and loss of "smooth" movement in X. Dropping desired cpu
would be much more noticeable, with audio skips, missed video frames or jerks
in window movement under X. The magnitude of these would be best represented by
the maximum latency. When the deadlines are actually met, the average latency
represents how "smooth" the result would look. The average human's limit of
perception for jitter is in the order of 7ms. Trained audio observers might
notice much less.
.SH AUTHOR
Written by Con Kolivas.
.PP
This manual page was written for the Debian system by
Julien Valroff <julien@kirya.net>.
.PP
Report bugs to <kernel@kolivas.org>.
.SH COPYRIGHT
Copyright 2006 Con Kolivas <kernel@kolivas.org>
.PP
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
.SH SEE ALSO
http://interbench.kolivas.org
.br
/usr/share/doc/interbench/readme.gz
.br
/usr/share/doc/interbench/readme.interactivity