A GUIDE TO BENCHMARKING NBDKIT


General comments
================

* The plugin matters!  Different plugins have completely different
  uses, implementations and threading models.  There is little point
  in talking generically about “the performance of nbdkit” without
  mentioning what plugin you are testing.

* The client matters!  Does the client support multi-conn?  Does the
  client use the oldstyle or newstyle protocol?  Has the client been
  written with performance in mind?  The clients with the best
  performance are currently (a) the Linux kernel (nbd.ko), (b) qemu,
  and (c) fio.  Make sure you are using recent versions and have
  multi-conn enabled (one way to check is shown after this list).

* Filters impair performance!  When benchmarking you should never use
  filters unless filters are what you are trying to benchmark.
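
A quick way to check whether a server advertises multi-conn is the
nbdinfo(1) tool from libnbd.  This is a sketch: it assumes nbdinfo is
installed and that your nbdkit is recent enough to define $uri for
--run commands:

  nbdkit -U - memory 1G --run 'nbdinfo "$uri"'

Look for can_multi_conn in the output.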


Testing using fio
=================

FIO is a Flexible I/O tester written by Jens Axboe, and it is the
primary tool used for generating the load to test filesystems and
block devices.

(1) Install libnbd.

(2) Clone and compile fio:

      https://github.com/axboe/fio

    using:

      ./configure --enable-libnbd

(3) Edit the test file in examples/nbd.fio, if required.
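
    For reference, a job of the kind found in examples/nbd.fio looks
    roughly like the sketch below.  This is not the shipped file; it
    assumes fio was built with --enable-libnbd and that nbdkit
    exports $unixsocket into the environment of the --run command
    (recent versions do):

      [nbd-test]
      ioengine=nbd
      uri=nbd+unix:///?socket=${unixsocket}
      rw=randrw
      size=64m
      iodepth=32
      time_based
      runtime=60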

(4) Run nbdkit and fio together.  From the fio source directory:

      nbdkit -f -U /tmp/socket null 1G --run './fio examples/nbd.fio'

    If you want to use nbdkit from the source directory too, change
    ‘nbdkit’ to the path of the wrapper, eg:

      ../nbdkit/nbdkit -f -U /tmp/socket null 1G --run './fio examples/nbd.fio'

Variations:

* Try adjusting the number of fio jobs (threads).

* Try adjusting the number of nbdkit threads (nbdkit -t option).

* Use other plugins.  Both nbdkit-memory-plugin and nbdkit-file-plugin
  are important ones to test.

* Run nbdkit under perf (see below for how to inspect the recording):

    perf record -a -g --call-graph=dwarf -- \
        server/nbdkit -f -U /tmp/socket \
            ./plugins/null/.libs/nbdkit-null-plugin.so 1G
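
  After the benchmark finishes, inspect the profile with perf report,
  eg:

    perf report -g --no-children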


Testing using the Linux kernel client
=====================================

(1) Obtain or compile fio, as described in the previous section.

(2) Create the fio configuration file.

    Create /var/tmp/test.fio containing:

----------------------------------------------------------------------
[test]
# example job; adjust rw and size to suit your test
rw=randrw
size=64m
directory=/var/tmp/nbd
----------------------------------------------------------------------

(3) Run nbdkit.

    From the nbdkit source directory:

      ./nbdkit -f -U /tmp/socket memory 1G

(4) Loop mount the NBD server:

      modprobe nbd
      nbd-client -C 8 -unix /tmp/socket /dev/nbd0
      mkfs.xfs -f /dev/nbd0
      mkdir -p /var/tmp/nbd
      mount /dev/nbd0 /var/tmp/nbd

(5) Run the fio test:

      fio /var/tmp/test.fio
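
    When you have finished, unmount and disconnect the device (a
    sketch, assuming the device and mount point used above):

      umount /var/tmp/nbd
      nbd-client -d /dev/nbd0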


Testing using qemu
==================

Qemu contains an NBD client with excellent performance.  However it
is not very useful for general benchmarking; two tests you can
perform are described below.

Test linear copying performance
-------------------------------

In some situations, linear copying is important, particularly when
copying large disk images or virtual machines around.  Both nbdkit and
the qemu client support sparseness detection and efficient zeroing.

To test copying speed you can use ‘qemu-img convert’, to or from
nbdkit:

  nbdkit -U - memory 1G --run 'qemu-img convert file.qcow2 -O raw $nbd'

  nbdkit -U - memory 1G --run 'qemu-img convert $nbd -O qcow2 file.qcow2'

Notes:

* In the second case, because the memory plugin is entirely sparse
  and zero, the convert command should do almost no work.  A more
  realistic test might use the file, data or pattern plugins (see the
  example after this list).

* Try copying to and from remote sources like nbdkit-curl-plugin and
  nbdkit-ssh-plugin.

* nbdkit-readahead-filter can optimize copying when reading from
  nbdkit.  This filter can particularly affect performance when the
  nbdkit plugin source is remote (eg. nbdkit-curl-plugin).

* qemu-img has options for tuning the number of threads and whether
  out-of-order writes are permitted (see the example after this
  list).
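
For example, a non-sparse linear copy using nbdkit-pattern-plugin
with qemu-img tuned for more parallelism.  This is a sketch: -m sets
the number of parallel coroutines and -W permits out-of-order writes
(see qemu-img(1)), and the output path is just an illustration:

  nbdkit -U - pattern 1G --run 'qemu-img convert -m 16 -W -O raw $nbd /var/tmp/out.img'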


Test end-to-end VM block device performance
-------------------------------------------

Set up a virtual machine using an NBD block device, connected to
nbdkit.  On the qemu command line you would use:

  qemu ... -drive file=nbd:unix:/tmp/sock,if=virtio,format=raw ...

In libvirt you would use:

  <disk type='network' device='disk'>
    <driver name='qemu'/>
    <source protocol='nbd'>
      <host transport='unix' socket='/tmp/sock'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>

Set up nbdkit to serve on the Unix domain socket:

  nbdkit -U /tmp/sock memory 1G

Inside the guest you will see a block device like /dev/vdX which is
backed by the nbdkit instance, and you can use fio or other filesystem
testing tools to evaluate performance.
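
For example, a read-only random-access test run directly against the
device (a sketch; it assumes the NBD-backed disk appears in the guest
as /dev/vdb):

  fio --name=nbdtest --filename=/dev/vdb --rw=randread --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 \
      --time_based --runtime=60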

This is very much a real-world, end-to-end test which tests many
different things together, including the client, guest kernel, qemu,
virtio transport, host kernel and nbdkit.  So it is more useful as a
way to detect that there is a problem than as a way to identify which
component is at fault.

If you have a sufficiently recent kernel and qemu you can try using
virtio-vsock as the transport (instead of a Unix domain socket); see
AF_VSOCK in nbdkit-service(1).
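
As a sketch, the server side might look like this (it assumes nbdkit
was built with AF_VSOCK support; guest-side setup is described in
nbdkit-service(1)):

  nbdkit --vsock memory 1G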