<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<body>
<h1>Guest migration</h1>

<ul id="toc"></ul>
<p>
Migration of guests between hosts is a complicated problem with many possible
solutions, each with their own positive and negative points. For maximum
flexibility of both hypervisor integration and administrator deployment,
libvirt implements several options for migration.
</p>
<h2><a id="transport">Network data transports</a></h2>

<p>
There are two options for the data transport used during migration, either
the hypervisor's own <strong>native</strong> transport, or <strong>tunnelled</strong>
over a libvirtd connection.
</p>

<h3><a id="transportnative">Hypervisor native transport</a></h3>
<p>
<em>Native</em> data transports may or may not support encryption, depending
on the hypervisor in question, but will typically have the lowest computational costs
by minimising the number of data copies involved. The native data transports will also
require extra hypervisor-specific network configuration steps by the administrator when
deploying a host. For some hypervisors, it might be necessary to open up a large range
of ports on the firewall to allow multiple concurrent migration operations.
</p>

<p>
<img class="diagram" src="migration-native.png" alt="Migration native path"/>
</p>
<h3><a id="transporttunnel">libvirt tunnelled transport</a></h3>
<p>
<em>Tunnelled</em> data transports will always be capable of strong encryption
since they are able to leverage the capabilities built into the libvirt RPC protocol.
The downside of a tunnelled transport, however, is that there will be extra data copies
involved on both the source and destination hosts as the data is moved between libvirtd
and the hypervisor. This is likely to be a more significant problem for guests with
very large RAM sizes, which dirty memory pages quickly. On the deployment side, tunnelled
transports do not require any extra network configuration over and above what's already
required for general libvirtd <a href="remote.html">remote access</a>, and only a single
port needs to be open on the firewall to support multiple concurrent
migration operations.
</p>

<p>
<img class="diagram" src="migration-tunnel.png" alt="Migration tunnel path"/>
</p>
<h2><a id="flow">Communication control paths/flows</a></h2>

<p>
Migration of virtual machines requires close co-ordination of the two
hosts involved, as well as the application invoking the migration,
which may be on the source, the destination, or a third host.
</p>
<h3><a id="flowmanageddirect">Managed direct migration</a></h3>

<p>
With <em>managed direct</em> migration, the libvirt client process
controls the various phases of migration. The client application must
be able to connect and authenticate with the libvirtd daemons on both
the source and destination hosts. There is no need for the two libvirtd
daemons to communicate with each other. If the client application
crashes, or otherwise loses its connection to libvirtd during the
migration process, an attempt will be made to abort the migration and
restart the guest CPUs on the source host. There may be scenarios
where this cannot be safely done, in which case the guest will be
left paused on one or both of the hosts.
</p>

<p>
<img class="diagram" src="migration-managed-direct.png" alt="Migration direct, managed"/>
</p>
<h3><a id="flowpeer2peer">Managed peer to peer migration</a></h3>

<p>
With <em>peer to peer</em> migration, the libvirt client process only
talks to the libvirtd daemon on the source host. The source libvirtd
daemon controls the entire migration process itself, by directly
connecting to the destination host's libvirtd. If the client application crashes,
or otherwise loses its connection to libvirtd, the migration process
will continue uninterrupted until completion. Note that the
source libvirtd uses its own credentials (typically root) to
connect to the destination, rather than the credentials used
by the client to connect to the source; if these differ, it is
common to run into a situation where a client can connect to the
destination directly but the source cannot make the connection to
set up the peer-to-peer migration.
</p>

<p>
<img class="diagram" src="migration-managed-p2p.png" alt="Migration peer-to-peer"/>
</p>
<h3><a id="flowunmanageddirect">Unmanaged direct migration</a></h3>

<p>
With <em>unmanaged direct</em> migration, neither the libvirt client
nor the libvirtd daemon controls the migration process. Control is instead
delegated to the hypervisor's own management services (if any). The
libvirt client merely initiates the migration via the hypervisor's
management layer. If the libvirt client or libvirtd crash, the
migration process will continue uninterrupted until completion.
</p>

<p>
<img class="diagram" src="migration-unmanaged-direct.png" alt="Migration direct, unmanaged"/>
</p>
<h2><a id="security">Data security</a></h2>

<p>
Since the migration data stream includes a complete copy of the guest
OS RAM, snooping of the migration data stream may allow compromise
of sensitive guest information. If the virtualization hosts have
multiple network interfaces, or if the network switches support
tagged VLANs, then it is very desirable to separate guest network
traffic from migration or management traffic.
</p>

<p>
In some scenarios, even a separate network for migration data may
not offer sufficient security. In this case it is possible to apply
encryption to the migration data stream. If the hypervisor does not
itself offer encryption, then the libvirt tunnelled migration
facility should be used.
</p>
<h2><a id="offline">Offline migration</a></h2>

<p>
Offline migration transfers the inactive definition of a domain
(which may or may not be active). After successful completion, the
domain remains in its current state on the source host and is defined
but inactive on the destination host. It's a bit more clever than
<code>virsh dumpxml</code> on the source host followed by
<code>virsh define</code> on the destination host, as offline migration
will run the pre-migration hook to update the domain XML on the
destination host. Currently, copying non-shared storage or other file
based storage (e.g. UEFI variable storage) is not supported during
offline migration.
</p>
<h2><a id="uris">Migration URIs</a></h2>

<p>
Initiating a guest migration requires the client application to
specify up to three URIs, depending on the choice of control
flow and/or APIs used. The first URI is that of the libvirt
connection to the source host, where the virtual guest is
currently running. The second URI is that of the libvirt
connection to the destination host, where the virtual guest
will be moved to (and in peer-to-peer migrations, this is from
the perspective of the source, not the client). The third URI is
a hypervisor specific URI used to control how the guest will be
migrated. With any managed migration flow, the first and second
URIs are compulsory, while the third URI is optional. With the
unmanaged direct migration mode, the first and third URIs are
compulsory and the second URI is not used.
</p>

<p>
Ordinarily management applications only need to care about the
first and second URIs, which are both in the normal libvirt
connection URI format. Libvirt will then automatically determine
the hypervisor specific URI by looking up the target host's
configured hostname. There are a few scenarios where the management
application may wish to have direct control over the third URI.
</p>
<ol>
<li>The configured hostname is incorrect, or DNS is broken. If a
host has a hostname which will not resolve to match one of its
public IP addresses, then libvirt will generate an incorrect
URI. In this case the management application should specify the
hypervisor specific URI explicitly, using an IP address, or a
correct hostname.</li>
<li>The host has multiple network interfaces. If a host has multiple
network interfaces, it might be desirable for the migration data
stream to be sent over a specific interface for either security
or performance reasons. In this case the management application
should specify the hypervisor specific URI, using an IP address
associated with the network to be used.</li>
<li>The firewall restricts what ports are available. When libvirt
generates a migration URI it will pick a port number using hypervisor
specific rules. Some hypervisors only require a single port to be
open in the firewalls, while others require a whole range of port
numbers. In the latter case the management application may wish
to choose a specific port number outside the default range in order
to comply with local firewall policies.</li>
</ol>
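
<p>
To make the role of the third URI concrete, the following C sketch (host
names, guest name and addresses are purely illustrative) passes an explicit
hypervisor specific URI to <code>virDomainMigrate</code> in order to force
the migration data stream onto a particular network:
</p>

<pre>
#include &lt;libvirt/libvirt.h&gt;

/* Sketch only: "dom" was looked up on the source connection and "dst" is an
 * open connection to the destination libvirtd.  Passing NULL instead of
 * "tcp://10.0.0.1/" would let libvirt derive the hypervisor URI from the
 * destination host's configured hostname. */
static virDomainPtr
migrate_via_interface(virDomainPtr dom, virConnectPtr dst)
{
    return virDomainMigrate(dom, dst, VIR_MIGRATE_LIVE,
                            NULL, "tcp://10.0.0.1/", 0);
}
</pre>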
<h2><a id="config">Configuration file handling</a></h2>

<p>
There are two types of virtual machines known to libvirt. A <em>transient</em>
guest only exists while it is running, and has no configuration file stored
on disk. A <em>persistent</em> guest maintains a configuration file on disk
even when it is not running.
</p>

<p>
By default, a migration operation will not attempt to modify any configuration
files that may be stored on either the source or destination host. It is the
responsibility of the administrator, or management application, to manage
distribution of configuration files (if desired). It is important to note that
the <code>/etc/libvirt</code> directory <strong>MUST NEVER BE SHARED BETWEEN
HOSTS</strong>. There are some typical scenarios that might be applicable:
</p>
<ul>
<li>Centralized configuration files outside libvirt, in shared storage. A cluster
aware management application may maintain all the master guest configuration
files in a cluster filesystem. When attempting to start a guest, the config
will be read from the cluster FS and used to deploy a persistent guest.
For migration the configuration will need to be copied to the destination
host and removed on the original.
</li>
<li>Centralized configuration files outside libvirt, in a database. A data center
management application may not store configuration files at all. Instead it
may generate libvirt XML on the fly when a guest is booted. It will typically
use transient guests, and thus not have to consider configuration files during
migration.
</li>
<li>Distributed configuration inside libvirt. The configuration file for each
guest is copied to every host where the guest is able to run. Upon migration
the existing config merely needs to be updated with any changes.
</li>
<li>Ad-hoc configuration management inside libvirt. Each guest is tied to a
specific host and rarely migrated. When migration is required, the config
is moved from one host to the other.
</li>
</ul>
<p>
As mentioned above, libvirt will not modify configuration files during
migration by default. The <code>virsh</code> command has two flags to
influence this behaviour. The <code>--undefine-source</code> flag
will cause the configuration file to be removed on the source host
after a successful migration. The <code>--persist</code> flag will
cause a configuration file to be created on the destination host
after a successful migration. The following table summarizes the
configuration file handling in all possible state and flag
combinations.
</p>
<table class="data">
<thead>
<tr class="head">
<th colspan="3">Before migration</th>
<th colspan="2">Flags</th>
<th colspan="3">After migration</th>
</tr>
<tr class="subhead">
<th>Source type</th>
<th>Source config</th>
<th>Dest config</th>
<th>--undefine-source</th>
<th>--persist</th>
<th>Dest type</th>
<th>Source config</th>
<th>Dest config</th>
</tr>
</thead>
<tbody>
<!-- src:N, dst:N -->
<tr>
<td>Transient</td>
<td class="n">N</td>
<td class="n">N</td>
<td class="n">N</td>
<td class="n">N</td>
<td>Transient</td>
<td class="n">N</td>
<td class="n">N</td>
</tr>
<tr>
<td>Transient</td>
<td class="n">N</td>
<td class="n">N</td>
<td class="y">Y</td>
<td class="n">N</td>
<td>Transient</td>
<td class="n">N</td>
<td class="n">N</td>
</tr>
<tr>
<td>Transient</td>
<td class="n">N</td>
<td class="n">N</td>
<td class="n">N</td>
<td class="y">Y</td>
<td>Persistent</td>
<td class="n">N</td>
<td class="y">Y</td>
</tr>
<tr>
<td>Transient</td>
<td class="n">N</td>
<td class="n">N</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td>Persistent</td>
<td class="n">N</td>
<td class="y">Y</td>
</tr>

<!-- src:N, dst:Y -->
<tr>
<td>Transient</td>
<td class="n">N</td>
<td class="y">Y</td>
<td class="n">N</td>
<td class="n">N</td>
<td>Persistent</td>
<td class="n">N</td>
<td class="y">Y<br/>(unchanged dest config)</td>
</tr>
<tr>
<td>Transient</td>
<td class="n">N</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td class="n">N</td>
<td>Persistent</td>
<td class="n">N</td>
<td class="y">Y<br/>(unchanged dest config)</td>
</tr>
<tr>
<td>Transient</td>
<td class="n">N</td>
<td class="y">Y</td>
<td class="n">N</td>
<td class="y">Y</td>
<td>Persistent</td>
<td class="n">N</td>
<td class="y">Y<br/>(replaced with source)</td>
</tr>
<tr>
<td>Transient</td>
<td class="n">N</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td>Persistent</td>
<td class="n">N</td>
<td class="y">Y<br/>(replaced with source)</td>
</tr>

<!-- src:Y dst:N -->
<tr>
<td>Persistent</td>
<td class="y">Y</td>
<td class="n">N</td>
<td class="n">N</td>
<td class="n">N</td>
<td>Transient</td>
<td class="y">Y</td>
<td class="n">N</td>
</tr>
<tr>
<td>Persistent</td>
<td class="y">Y</td>
<td class="n">N</td>
<td class="y">Y</td>
<td class="n">N</td>
<td>Transient</td>
<td class="n">N</td>
<td class="n">N</td>
</tr>
<tr>
<td>Persistent</td>
<td class="y">Y</td>
<td class="n">N</td>
<td class="n">N</td>
<td class="y">Y</td>
<td>Persistent</td>
<td class="y">Y</td>
<td class="y">Y</td>
</tr>
<tr>
<td>Persistent</td>
<td class="y">Y</td>
<td class="n">N</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td>Persistent</td>
<td class="n">N</td>
<td class="y">Y</td>
</tr>

<!-- src:Y dst:Y -->
<tr>
<td>Persistent</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td class="n">N</td>
<td class="n">N</td>
<td>Persistent</td>
<td class="y">Y</td>
<td class="y">Y<br/>(unchanged dest config)</td>
</tr>
<tr>
<td>Persistent</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td class="n">N</td>
<td>Persistent</td>
<td class="n">N</td>
<td class="y">Y<br/>(unchanged dest config)</td>
</tr>
<tr>
<td>Persistent</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td class="n">N</td>
<td class="y">Y</td>
<td>Persistent</td>
<td class="y">Y</td>
<td class="y">Y<br/>(replaced with source)</td>
</tr>
<tr>
<td>Persistent</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td class="y">Y</td>
<td>Persistent</td>
<td class="n">N</td>
<td class="y">Y<br/>(replaced with source)</td>
</tr>
</tbody>
</table>
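
<p>
At the API level the <code>--undefine-source</code> and <code>--persist</code>
flags correspond to <code>VIR_MIGRATE_UNDEFINE_SOURCE</code> and
<code>VIR_MIGRATE_PERSIST_DEST</code> respectively. A minimal C sketch,
assuming the domain and destination connection handles are already open:
</p>

<pre>
#include &lt;libvirt/libvirt.h&gt;

/* Sketch: migrate, make the guest persistent on the destination, and remove
 * its configuration file from the source host. */
static virDomainPtr
migrate_and_move_config(virDomainPtr dom, virConnectPtr dst)
{
    unsigned long flags = VIR_MIGRATE_LIVE |
                          VIR_MIGRATE_PERSIST_DEST |
                          VIR_MIGRATE_UNDEFINE_SOURCE;

    return virDomainMigrate(dom, dst, flags, NULL, NULL, 0);
}
</pre>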
<h2><a id="scenarios">Migration scenarios</a></h2>

<h3><a id="scenarionativedirect">Native migration, client to two libvirtd servers</a></h3>

<p>
At an API level this requires use of virDomainMigrate, without the
VIR_MIGRATE_PEER2PEER flag set. The destination libvirtd server
will automatically determine the native hypervisor URI for migration
based on the primary hostname. To force migration over an alternate
network interface the optional hypervisor specific URI must be provided.
</p>

<pre>
syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [HV-URI]


eg using default network interface

virsh migrate web1 qemu+ssh://desthost/system
virsh migrate web1 xen+tls://desthost/system


eg using secondary network interface

virsh migrate web1 qemu://desthost/system tcp://10.0.0.1/
virsh migrate web1 xen+tcp://desthost/system xenmigr:10.0.0.1/
</pre>

<p>
Supported by the Xen, QEMU, VMware and VirtualBox drivers.
</p>
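
<p>
A minimal C equivalent of the virsh usage above (host and guest names are
illustrative, and error handling is trimmed for brevity):
</p>

<pre>
#include &lt;libvirt/libvirt.h&gt;

/* Sketch: managed direct migration.  The client opens connections to both
 * libvirtd daemons; VIR_MIGRATE_PEER2PEER is *not* set. */
static int migrate_direct(void)
{
    virConnectPtr src = virConnectOpen("qemu+ssh://srchost/system");
    virConnectPtr dst = virConnectOpen("qemu+ssh://desthost/system");
    virDomainPtr dom = NULL;
    virDomainPtr res = NULL;
    int ret = -1;

    if (src &amp;&amp; dst)
        dom = virDomainLookupByName(src, "web1");

    if (dom) {
        /* NULL hypervisor URI: libvirt derives it from the destination
         * hostname; a URI such as "tcp://10.0.0.1/" may be given instead. */
        res = virDomainMigrate(dom, dst, VIR_MIGRATE_LIVE, NULL, NULL, 0);
    }

    if (res) {
        virDomainFree(res);
        ret = 0;
    }
    if (dom)
        virDomainFree(dom);
    if (dst)
        virConnectClose(dst);
    if (src)
        virConnectClose(src);
    return ret;
}
</pre>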
<h3><a id="scenarionativepeer2peer">Native migration, client to, and peer2peer between, two libvirtd servers</a></h3>

<p>
virDomainMigrate, with the VIR_MIGRATE_PEER2PEER flag set,
using the libvirt URI format for the 'uri' parameter. The
destination libvirtd server will automatically determine
the native hypervisor URI for migration, based on the
primary hostname. The optional uri parameter controls how
the source libvirtd connects to the destination libvirtd,
in case it is not accessible using the same address that
the client uses to connect to the destination, or a different
encryption/auth scheme is required. There is no
scope for forcing an alternative network interface for the
native migration data with this method.
</p>

<p>
This mode cannot be invoked from virsh.
</p>

<p>
Supported by the QEMU driver.
</p>
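
<p>
A brief C sketch of this mode (the destination address and scheme are
illustrative); note that the uri parameter describes how the
<em>source</em> libvirtd reaches the destination libvirtd:
</p>

<pre>
#include &lt;libvirt/libvirt.h&gt;

/* Sketch: peer-to-peer native migration.  "dom" and "dst" are obtained as
 * in the earlier example; the source libvirtd drives the migration. */
static virDomainPtr
migrate_p2p(virDomainPtr dom, virConnectPtr dst)
{
    return virDomainMigrate(dom, dst,
                            VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER,
                            NULL, "qemu+tls://10.0.0.1/system", 0);
}
</pre>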
<h3><a id="scenariotunnelpeer2peer1">Tunnelled migration, client and peer2peer between two libvirtd servers</a></h3>

<p>
virDomainMigrate, with the VIR_MIGRATE_PEER2PEER &amp; VIR_MIGRATE_TUNNELLED
flags set, using the libvirt URI format for the 'uri' parameter. The
destination libvirtd server will automatically determine
the native hypervisor URI for migration, based on the
primary hostname. The optional uri parameter controls how
the source libvirtd connects to the destination libvirtd,
in case it is not accessible using the same address that
the client uses to connect to the destination, or a different
encryption/auth scheme is required. The native hypervisor URI
format is not used at all.
</p>

<p>
This mode cannot be invoked from virsh.
</p>

<p>
Supported by the QEMU driver.
</p>
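
<p>
The only difference from the previous sketch is the addition of the
<code>VIR_MIGRATE_TUNNELLED</code> flag (names again illustrative):
</p>

<pre>
#include &lt;libvirt/libvirt.h&gt;

/* Sketch: peer-to-peer migration tunnelled over the libvirtd connection. */
static virDomainPtr
migrate_p2p_tunnelled(virDomainPtr dom, virConnectPtr dst)
{
    unsigned long flags = VIR_MIGRATE_LIVE |
                          VIR_MIGRATE_PEER2PEER |
                          VIR_MIGRATE_TUNNELLED;

    return virDomainMigrate(dom, dst, flags, NULL,
                            "qemu+tls://10.0.0.1/system", 0);
}
</pre>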
<h3><a id="nativedirectunmanaged">Native migration, client to one libvirtd server</a></h3>

<p>
virDomainMigrateToURI, without the VIR_MIGRATE_PEER2PEER flag set,
using a hypervisor specific URI format for the 'uri' parameter.
There is no use or requirement for a destination libvirtd instance
at all. This is typically used when the hypervisor has its own
native management daemon available to handle incoming migration
attempts on the destination.
</p>

<pre>
syntax: virsh migrate GUESTNAME HV-URI


eg using same libvirt URI for all connections

virsh migrate --direct web1 xenmigr://desthost/
</pre>

<p>
Supported by the Xen driver.
</p>
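
<p>
The equivalent call from C uses <code>virDomainMigrateToURI</code> with a
hypervisor specific URI and no peer-to-peer flag (URIs illustrative):
</p>

<pre>
#include &lt;libvirt/libvirt.h&gt;

/* Sketch: unmanaged direct migration.  Only a connection to the source is
 * needed; the hypervisor's own management layer handles the destination. */
static int migrate_unmanaged(virDomainPtr dom)
{
    return virDomainMigrateToURI(dom, "xenmigr://desthost/",
                                 VIR_MIGRATE_LIVE, NULL, 0);
}
</pre>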
<h3><a id="nativepeer2peer">Native migration, peer2peer between two libvirtd servers</a></h3>

<p>
virDomainMigrateToURI, with the VIR_MIGRATE_PEER2PEER flag set,
using the libvirt URI format for the 'uri' parameter. The
destination libvirtd server will automatically determine
the native hypervisor URI for migration, based on the
primary hostname. There is no scope for forcing an alternative
network interface for the native migration data with this
method. The destination URI must be reachable using the source
libvirtd credentials (which are not necessarily the same as the
credentials of the client in connecting to the source).
</p>

<pre>
syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [ALT-DEST-LIBVIRT-URI]


eg using same libvirt URI for all connections

virsh migrate --p2p web1 qemu+ssh://desthost/system


eg using different libvirt URI auth scheme for peer2peer connections

virsh migrate --p2p web1 qemu+ssh://desthost/system qemu+tls://desthost/system


eg using different libvirt URI hostname for peer2peer connections

virsh migrate --p2p web1 qemu+ssh://desthost/system qemu+ssh://10.0.0.1/system
</pre>

<p>
Supported by the QEMU driver.
</p>
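
<p>
In C this is <code>virDomainMigrateToURI</code> with the
<code>VIR_MIGRATE_PEER2PEER</code> flag and a libvirt URI for the
destination (address and scheme illustrative):
</p>

<pre>
#include &lt;libvirt/libvirt.h&gt;

/* Sketch: peer-to-peer migration without a client connection to the
 * destination; the URI must be reachable with the source libvirtd's
 * credentials. */
static int migrate_p2p_touri(virDomainPtr dom)
{
    return virDomainMigrateToURI(dom, "qemu+ssh://10.0.0.1/system",
                                 VIR_MIGRATE_LIVE | VIR_MIGRATE_PEER2PEER,
                                 NULL, 0);
}
</pre>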
<h3><a id="scenariotunnelpeer2peer2">Tunnelled migration, peer2peer between two libvirtd servers</a></h3>

<p>
virDomainMigrateToURI, with the VIR_MIGRATE_PEER2PEER &amp; VIR_MIGRATE_TUNNELLED
flags set, using the libvirt URI format for the 'uri' parameter. The
destination libvirtd server will automatically determine
the native hypervisor URI for migration, based on the
primary hostname. The optional uri parameter controls how
the source libvirtd connects to the destination libvirtd,
in case it is not accessible using the same address that
the client uses to connect to the destination, or a different
encryption/auth scheme is required. The native hypervisor URI
format is not used at all. The destination URI must be
reachable using the source libvirtd credentials (which are not
necessarily the same as the credentials of the client in
connecting to the source).
</p>

<pre>
syntax: virsh migrate GUESTNAME DEST-LIBVIRT-URI [ALT-DEST-LIBVIRT-URI]


eg using same libvirt URI for all connections

virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system


eg using different libvirt URI auth scheme for peer2peer connections

virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+tls://desthost/system


eg using different libvirt URI hostname for peer2peer connections

virsh migrate --p2p --tunnelled web1 qemu+ssh://desthost/system qemu+ssh://10.0.0.1/system
</pre>

<p>
Supported by the QEMU driver.
</p>
</body>
</html>