<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <h1>Virtual machine lock manager, virtlockd plugin</h1>

    <ul id="toc"></ul>
    <p>
      This page describes use of the <code>virtlockd</code>
      service as a <a href="locking.html">lock driver</a>
      plugin for virtual machine disk mutual exclusion.
    </p>
    <h2><a id="background">virtlockd background</a></h2>

    <p>
      The virtlockd daemon is a single-purpose binary which
      focuses exclusively on the task of acquiring and holding
      locks on behalf of running virtual machines. It is
      designed to offer a low-overhead, portable locking
      scheme that can be used out of the box on virtualization
      hosts with minimal configuration overhead. It makes
      use of the POSIX fcntl advisory locking capability
      to hold locks, which is supported by the majority of
      commonly used filesystems.
    </p>
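    <p>
      Advisory locking of this kind can be observed with standard tools.
      The sketch below uses the flock(1) utility, which relies on the
      flock() call rather than fcntl(), but it illustrates the same
      advisory semantics: while one process holds an exclusive lock on a
      file, a second non-blocking attempt on it fails. (The lock file
      path is an example.)
    </p>

    <pre>
lockfile=$(mktemp)
# hold an exclusive advisory lock for a couple of seconds in the background
flock "$lockfile" sleep 2 &amp;
sleep 0.5
# a second, non-blocking attempt fails while the lock is held
flock -n "$lockfile" true || echo "lock is held"
wait
    </pre>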
    <h2><a id="sanlock">virtlockd daemon setup</a></h2>

    <p>
      On most operating systems, the virtlockd daemon itself will not
      require any upfront configuration work. It is installed by default
      when libvirtd is present, and a systemd socket unit is
      registered such that the daemon will be automatically
      started when first required. On operating systems that predate
      systemd, however, it will be necessary to start it at boot time,
      prior to libvirtd being started. On RHEL/Fedora distros,
      this can be achieved as follows:
    </p>
    <pre>
# chkconfig virtlockd on
# service virtlockd start
    </pre>
    <p>
      The above instructions apply to the privileged instance of
      virtlockd, which is used by the libvirtd daemon when that also
      runs privileged. If libvirtd is run as an unprivileged user, it
      will automatically spawn an unprivileged instance of the
      virtlockd daemon too. This requires no setup at all.
    </p>
    <h2><a id="lockdplugin">libvirt lockd plugin configuration</a></h2>

    <p>
      Once the virtlockd daemon is running, or set up to autostart,
      the next step is to configure the libvirt lockd plugin.
      There is a separate configuration file for each libvirt
      driver that is using virtlockd. For QEMU, we will edit
      <code>/etc/libvirt/qemu-lockd.conf</code>.
    </p>
    <p>
      The default behaviour of the lockd plugin is to acquire locks
      directly on the virtual disk images associated with the guest
      &lt;disk&gt; elements. This ensures it can run out of the box
      with no configuration, providing locking for disk images on
      shared filesystems such as NFS. It does not provide any cross-host
      protection for storage that is backed by block devices,
      since locks acquired on device nodes in /dev only apply within
      the host. It may also be the case that the filesystem holding
      the disk images is not capable of supporting fcntl locks.
    </p>
    <p>
      To address these problems it is possible to tell lockd to
      acquire locks on an indirect file. Essentially lockd will
      calculate the SHA256 checksum of the fully qualified path,
      and create a zero length file in a given directory whose
      filename is the checksum. It will then acquire a lock on
      that file. Assuming the block devices assigned to the guest
      are using stable paths (eg /dev/disk/by-path/XXXXXXX) then
      this will allow locks to apply across hosts. This
      feature can be enabled by setting a configuration parameter
      that specifies the directory in which to create the lock
      files. The directory referred to should of course be
      placed on a shared filesystem (eg NFS) that is accessible
      to all hosts which can see the shared block devices.
    </p>
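    <p>
      The naming scheme can be sketched with standard shell tools: hash
      the fully qualified device path with SHA256 and create a zero
      length file of that name in the lockspace directory. (The paths
      below are illustrative, not defaults.)
    </p>

    <pre>
lockspace=$(mktemp -d)
disk="/dev/disk/by-path/ip-192.168.0.1:3260-iscsi-example-lun-0"
# the lock file name is the SHA256 checksum of the fully qualified path
name=$(printf '%s' "$disk" | sha256sum | awk '{print $1}')
touch "$lockspace/$name"
ls "$lockspace"
    </pre>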
    <pre>
$ su - root
# augtool -s set \
    /files/etc/libvirt/qemu-lockd.conf/file_lockspace_dir \
    "/var/lib/libvirt/lockd/files"
    </pre>
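    <p>
      The augtool command simply writes the corresponding setting into
      the configuration file; afterwards
      <code>/etc/libvirt/qemu-lockd.conf</code> should contain a line
      like the following:
    </p>

    <pre>
file_lockspace_dir = "/var/lib/libvirt/lockd/files"
    </pre>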
    <p>
      If the guests are using either LVM or SCSI block devices
      for their virtual disks, there is a unique identifier
      associated with each device. It is possible to tell lockd
      to use this UUID as the basis for acquiring locks, rather
      than the SHA256 sum of the filename. The benefit of this
      is that the locking protection will work even if the file
      paths to the given block device are different on each
      host.
    </p>
    <pre>
$ su - root
# augtool -s set \
    /files/etc/libvirt/qemu-lockd.conf/scsi_lockspace_dir \
    "/var/lib/libvirt/lockd/scsi"
# augtool -s set \
    /files/etc/libvirt/qemu-lockd.conf/lvm_lockspace_dir \
    "/var/lib/libvirt/lockd/lvm"
    </pre>
    <p>
      It is important to remember that the changes made to the
      <code>/etc/libvirt/qemu-lockd.conf</code> file must be
      propagated to all hosts before any virtual machines are
      launched on them. This ensures that all hosts are using
      the same locking mechanism.
    </p>
    <h2><a id="qemuconfig">QEMU/KVM driver configuration</a></h2>

    <p>
      The QEMU driver is capable of using the virtlockd plugin
      since release <span>1.0.2</span>.
      The out of the box configuration, however, currently
      uses the <strong>nop</strong> lock manager plugin.
      To get protection for disks, it is thus necessary
      to reconfigure QEMU to activate the <strong>lockd</strong>
      driver. This is achieved by editing the QEMU driver
      configuration file (<code>/etc/libvirt/qemu.conf</code>)
      and changing the <code>lock_manager</code> configuration
      tunable.
    </p>
    <pre>
$ su - root
# augtool -s set /files/etc/libvirt/qemu.conf/lock_manager lockd
# service libvirtd restart
    </pre>
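    <p>
      The net effect is a single setting in
      <code>/etc/libvirt/qemu.conf</code>; editing the file by hand so
      that it contains the following line is equivalent:
    </p>

    <pre>
lock_manager = "lockd"
    </pre>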
    <p>
      Every time you start a guest, the virtlockd daemon will acquire
      locks on the disk files directly, or in one of the configured
      lookaside directories based on SHA256 sum. To check that locks
      are being acquired as expected, the <code>lslocks</code> tool
      can be run.
    </p>
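    <p>
      For example, the following invocation narrows the output to the
      fields of interest (column names as implemented by the util-linux
      version of <code>lslocks</code>; availability may vary by distro):
    </p>

    <pre>
$ lslocks -o COMMAND,PID,TYPE,MODE,PATH
    </pre>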
  </body>
</html>