2 Copyright (C) 2017 Red Hat Inc.
4 This work is licensed under the terms of the GNU GPL, version 2 or
5 later. See the COPYING file in the top-level directory.
7 ============================
8 Live Block Device Operations
9 ============================
11 QEMU Block Layer currently (as of QEMU 2.9) supports four major kinds of
12 live block device jobs -- stream, commit, mirror, and backup. These can
13 be used to manipulate disk image chains to accomplish certain tasks,
14 namely: live copy data from backing files into overlays; shorten long
15 disk image chains by merging data from overlays into backing files; live
16 synchronize data from a disk image chain (including current active disk)
17 to another target image; and point-in-time (and incremental) backups of
a block device. Below is a description of the said block (QMP)
primitives, and some (non-exhaustive) examples to illustrate their use.
23 The file ``qapi/block-core.json`` in the QEMU source tree has the
24 canonical QEMU API (QAPI) schema documentation for the QMP
25 primitives discussed here.
32 Disk image backing chain notation
33 ---------------------------------
35 A simple disk image chain. (This can be created live using QMP
36 ``blockdev-snapshot-sync``, or offline via ``qemu-img``)::
    [A] <-- [B]
    (backing file)   (overlay)
47 The arrow can be read as: Image [A] is the backing file of disk image
[B]. And live QEMU is currently writing to image [B]; consequently, it
is also referred to as the "active layer".
51 There are two kinds of terminology that are common when referring to
52 files in a disk image backing chain:
(1) Directional: 'base' and 'top'. Given the simple disk image chain
    above, image [A] can be referred to as 'base', and image [B] as
    'top'. (This terminology can be seen in the QAPI schema file,
    block-core.json.)
(2) Relational: 'backing file' and 'overlay'. Again, taking the same
    simple disk image chain above, disk image [A] is referred to as the
    backing file, and image [B] as the overlay.
63 Throughout this document, we will use the relational terminology.
66 The overlay files can generally be any format that supports a
67 backing file, although QCOW2 is the preferred format and the one
68 used in this document.
71 Brief overview of live block QMP primitives
72 -------------------------------------------
The following are the four different kinds of live block operations that
the QEMU block layer supports.
(1) ``block-stream``: Live copy of data from backing files into overlay
    files.
.. note:: Once the 'stream' operation has finished, three things to
   note:

   (a) QEMU rewrites the backing chain to remove reference to the
       now-streamed and redundant backing file;
87 (b) the streamed file *itself* won't be removed by QEMU,
88 and must be explicitly discarded by the user;
   (c) the streamed file remains valid -- i.e. further
       overlays can be created based on it. Refer to the
       ``block-stream`` section further below for more
       details.
95 (2) ``block-commit``: Live merge of data from overlay files into backing
96 files (with the optional goal of removing the overlay file from the
97 chain). Since QEMU 2.0, this includes "active ``block-commit``"
98 (i.e. merge the current active layer into the base image).
100 .. note:: Once the 'commit' operation has finished, there are three
101 things to note here as well:
103 (a) QEMU rewrites the backing chain to remove reference
104 to now-redundant overlay images that have been
105 committed into a backing file;
107 (b) the committed file *itself* won't be removed by QEMU
108 -- it ought to be manually removed;
   (c) however, unlike in the case of ``block-stream``, the
       intermediate images will be rendered invalid -- i.e.
       no further overlays can be created based on them.
       Refer to the ``block-commit`` section further below
       for more details.
116 (3) ``drive-mirror`` (and ``blockdev-mirror``): Synchronize a running
117 disk to another image.
119 (4) ``drive-backup`` (and ``blockdev-backup``): Point-in-time (live) copy
120 of a block device to a destination.
123 .. _`Interacting with a QEMU instance`:
125 Interacting with a QEMU instance
126 --------------------------------
To show some example command-line invocations, we will use the
following invocation of QEMU, with a QMP server running over a UNIX
socket:

.. parsed-literal::

  $ |qemu_system| -display none -no-user-config -nodefaults \\
    -blockdev \\
    node-name=node-A,driver=qcow2,file.driver=file,file.node-name=file,file.filename=./a.qcow2 \\
137 -device virtio-blk,drive=node-A,id=virtio0 \\
138 -monitor stdio -qmp unix:/tmp/qmp-sock,server=on,wait=off
140 The ``-blockdev`` command-line option, used above, is available from
141 QEMU 2.9 onwards. In the above invocation, notice the ``node-name``
142 parameter that is used to refer to the disk image a.qcow2 ('node-A') --
143 this is a cleaner way to refer to a disk image (as opposed to referring
144 to it by spelling out file paths). So, we will continue to designate a
145 ``node-name`` to each further disk image created (either via
146 ``blockdev-snapshot-sync``, or ``blockdev-add``) as part of the disk
147 image chain, and continue to refer to the disks using their
``node-name`` (where possible, because ``block-commit`` does not yet, as
of QEMU 2.9, accept a ``node-name`` parameter) when performing various
block operations.
152 To interact with the QEMU instance launched above, we will use the
153 ``qmp-shell`` utility (located at: ``qemu/scripts/qmp``, as part of the
154 QEMU source directory), which takes key-value pairs for QMP commands.
155 Invoke it as below (which will also print out the complete raw JSON
156 syntax for reference -- examples in the following sections)::
158 $ ./qmp-shell -v -p /tmp/qmp-sock
162 In the event we have to repeat a certain QMP command, we will: for
163 the first occurrence of it, show the ``qmp-shell`` invocation, *and*
164 the corresponding raw JSON QMP syntax; but for subsequent
165 invocations, present just the ``qmp-shell`` syntax, and omit the
166 equivalent JSON output.
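If you prefer not to use ``qmp-shell``, the same commands can be issued
as raw JSON directly over the QMP socket (for example with ``socat``).
Note that a raw QMP session must first negotiate capabilities before any
other command is accepted. A minimal sketch (the server's greeting is
abbreviated here)::

    $ socat - UNIX-CONNECT:/tmp/qmp-sock
    {"QMP": {"version": {...}, "capabilities": [...]}}
    {"execute": "qmp_capabilities"}
    {"return": {}}
    {"execute": "query-block-jobs"}
    {"return": []}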
169 Example disk image chain
170 ------------------------
We will use the following disk image chain (occasionally spelling it
out where appropriate) when discussing various primitives::
175 [A] <-- [B] <-- [C] <-- [D]
177 Where [A] is the original base image; [B] and [C] are intermediate
178 overlay images; image [D] is the active layer -- i.e. live QEMU is
179 writing to it. (The rule of thumb is: live QEMU will always be pointing
180 to the rightmost image in a disk image chain.)
The above image chain can be created by invoking
``blockdev-snapshot-sync`` commands as follows (the example shows the
creation of overlay image [B]) using ``qmp-shell`` (our invocation
also prints the raw JSON equivalent)::
187 (QEMU) blockdev-snapshot-sync node-name=node-A snapshot-file=b.qcow2 snapshot-node-name=node-B format=qcow2
    {
        "execute": "blockdev-snapshot-sync",
        "arguments": {
            "node-name": "node-A",
            "snapshot-file": "b.qcow2",
            "format": "qcow2",
            "snapshot-node-name": "node-B"
        }
    }
Here, "node-A" is the name QEMU internally uses to refer to the base
image [A] -- it is the backing file, based on which the overlay image,
[B], is created.
202 To create the rest of the overlay images, [C], and [D] (omitting the raw
203 JSON output for brevity)::
205 (QEMU) blockdev-snapshot-sync node-name=node-B snapshot-file=c.qcow2 snapshot-node-name=node-C format=qcow2
206 (QEMU) blockdev-snapshot-sync node-name=node-C snapshot-file=d.qcow2 snapshot-node-name=node-D format=qcow2
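If you want to (optionally) verify the chain QEMU now sees, you can
issue ``query-named-block-nodes`` and inspect the ``image`` /
``backing-image`` details in its reply (the reply is long, and is
omitted here)::

    (QEMU) query-named-block-nodes
    {
        "execute": "query-named-block-nodes",
        "arguments": {}
    }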
209 A note on points-in-time vs file names
210 --------------------------------------
212 In our disk image chain::
214 [A] <-- [B] <-- [C] <-- [D]
216 We have *three* points in time and an active layer:
218 - Point 1: Guest state when [B] was created is contained in file [A]
219 - Point 2: Guest state when [C] was created is contained in [A] + [B]
- Point 3: Guest state when [D] was created is contained in
  [A] + [B] + [C]
- Active layer: Current guest state is contained in [A] + [B] + [C] +
  [D]
Therefore, be careful with naming choices:
227 - Naming a file after the time it is created is misleading -- the
228 guest data for that point in time is *not* contained in that file
229 (as explained earlier)
230 - Rather, think of files as a *delta* from the backing file
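One way to see these relationships outside of QEMU is ``qemu-img info
--backing-chain``, which prints each image in the chain along with its
backing file. (A sketch; if the images are currently attached to a
running QEMU, run it after the guest is shut down, or use the ``-U``
(``--force-share``) option, with the usual caution about reading images
that are in use)::

    $ qemu-img info --backing-chain d.qcow2

The output lists [D] first, followed by its backing images [C], [B],
and [A], in that order.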
233 Live block streaming --- ``block-stream``
234 -----------------------------------------
The ``block-stream`` command allows you to do a live copy of data from
backing files into overlay images.
239 Given our original example disk image chain from earlier::
241 [A] <-- [B] <-- [C] <-- [D]
243 The disk image chain can be shortened in one of the following different
244 ways (not an exhaustive list).
.. _`Case-1`:

(1) Merge everything into the active layer: I.e. copy all contents from
    the base image, [A], and overlay images, [B] and [C], into [D],
    *while* the guest is running. The resulting chain will be a
    standalone image, [D] -- with contents from [A], [B] and [C] merged
    into it (where live QEMU writes go to)::

        [D]
.. _`Case-2`:

(2) Taking the same example disk image chain mentioned earlier, merge
    only images [B] and [C] into [D], the active layer. The result will
    be that the contents of images [B] and [C] are copied into [D], and
    the backing file pointer of image [D] will be adjusted to point to
    image [A]. The resulting chain will be::

        [A] <-- [D]
.. _`Case-3`:

(3) Intermediate streaming (available since QEMU 2.8): Starting afresh
    with the original example disk image chain, with a total of four
    images, it is possible to copy contents from image [B] into image
    [C]. Once the copy is finished, image [B] can now be (optionally)
    discarded; and the backing file pointer of image [C] will be
    adjusted to point to [A]. I.e. after performing "intermediate
    streaming" of [B] into [C], the resulting image chain will be (where
    live QEMU is writing to [D])::

        [A] <-- [C] <-- [D]
280 QMP invocation for ``block-stream``
281 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
283 For `Case-1`_, to merge contents of all the backing files into the
284 active layer, where 'node-D' is the current active image (by default
285 ``block-stream`` will flatten the entire chain); ``qmp-shell`` (and its
286 corresponding JSON output)::
288 (QEMU) block-stream device=node-D job-id=job0
    {
        "execute": "block-stream",
        "arguments": {
            "device": "node-D",
            "job-id": "job0"
        }
    }
297 For `Case-2`_, merge contents of the images [B] and [C] into [D], where
298 image [D] ends up referring to image [A] as its backing file::
300 (QEMU) block-stream device=node-D base-node=node-A job-id=job0
And for `Case-3`_, of "intermediate streaming", merge the contents of
image [B] into [C], where [C] ends up referring to [A] as its backing
file::
306 (QEMU) block-stream device=node-C base-node=node-A job-id=job0
Progress of a ``block-stream`` operation can be monitored via the QMP
command::
311 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
318 Once the ``block-stream`` operation has completed, QEMU will emit an
319 event, ``BLOCK_JOB_COMPLETED``. The intermediate overlays remain valid,
320 and can now be (optionally) discarded, or retained to create further
overlays based on them. Finally, ``block-stream`` jobs can be
restarted at any time.
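A running 'stream' job can also be throttled, paused, resumed, or
cancelled via the generic block job commands. A brief sketch, assuming
the job was started with ``job-id=job0`` (the ``speed`` value is in
bytes per second)::

    (QEMU) block-job-set-speed device=job0 speed=10485760
    (QEMU) block-job-pause device=job0
    (QEMU) block-job-resume device=job0
    (QEMU) block-job-cancel device=job0

Cancelling a 'stream' job simply stops the copy; the disk image chain
remains in a consistent state, and the job can be restarted later.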
325 Live block commit --- ``block-commit``
326 --------------------------------------
328 The ``block-commit`` command lets you merge live data from overlay
329 images into backing file(s). Since QEMU 2.0, this includes "live active
330 commit" (i.e. it is possible to merge the "active layer", the right-most
image in a disk image chain, to which live QEMU is writing, into the
base image). This is analogous to ``block-stream``, but in the opposite
direction.
335 Again, starting afresh with our example disk image chain, where live
336 QEMU is writing to the right-most image in the chain, [D]::
338 [A] <-- [B] <-- [C] <-- [D]
340 The disk image chain can be shortened in one of the following ways:
342 .. _`block-commit_Case-1`:
344 (1) Commit content from only image [B] into image [A]. The resulting
345 chain is the following, where image [C] is adjusted to point at [A]
    as its new backing file::

        [A] <-- [C] <-- [D]
350 (2) Commit content from images [B] and [C] into image [A]. The
351 resulting chain, where image [D] is adjusted to point to image [A]
    as its new backing file::

        [A] <-- [D]
356 .. _`block-commit_Case-3`:
358 (3) Commit content from images [B], [C], and the active layer [D] into
    image [A]. The resulting chain (in this case, a consolidated single
    image)::

        [A]
(4) Commit content from only image [C] into image [B]. The resulting
    chain::

        [A] <-- [B] <-- [D]
369 (5) Commit content from image [C] and the active layer [D] into image
    [B]. The resulting chain::

        [A] <-- [B]
375 QMP invocation for ``block-commit``
376 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
378 For :ref:`Case-1 <block-commit_Case-1>`, to merge contents only from
379 image [B] into image [A], the invocation is as follows::
381 (QEMU) block-commit device=node-D base=a.qcow2 top=b.qcow2 job-id=job0
    {
        "execute": "block-commit",
        "arguments": {
            "device": "node-D",
            "base": "a.qcow2",
            "top": "b.qcow2",
            "job-id": "job0"
        }
    }
392 Once the above ``block-commit`` operation has completed, a
393 ``BLOCK_JOB_COMPLETED`` event will be issued, and no further action is
394 required. As the end result, the backing file of image [C] is adjusted
to point to image [A], and the original 4-image chain will end up being
transformed to::

    [A] <-- [C] <-- [D]

The intermediate image [B] is invalid (as in: no further overlays based
on it can be created).
404 Reasoning: An intermediate image after a 'stream' operation still
405 represents that old point-in-time, and may be valid in that context.
406 However, an intermediate image after a 'commit' operation no longer
407 represents any point-in-time, and is invalid in any context.
410 However, :ref:`Case-3 <block-commit_Case-3>` (also called: "active
411 ``block-commit``") is a *two-phase* operation: In the first phase, the
412 content from the active overlay, along with the intermediate overlays,
413 is copied into the backing file (also called the base image). In the
second phase, the said backing file is made the current active image
-- this is done by issuing the command ``block-job-complete``. Optionally,
416 the ``block-commit`` operation can be cancelled by issuing the command
417 ``block-job-cancel``, but be careful when doing this.
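For instance, if you decide to abandon an in-progress commit job that
was started with ``job-id=job0``, the cancellation itself is just (a
sketch; note that any data already merged into the base image stays
there)::

    (QEMU) block-job-cancel device=job0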
419 Once the ``block-commit`` operation has completed, the event
420 ``BLOCK_JOB_READY`` will be emitted, signalling that the synchronization
421 has finished. Now the job can be gracefully completed by issuing the
422 command ``block-job-complete`` -- until such a command is issued, the
423 'commit' operation remains active.
425 The following is the flow for :ref:`Case-3 <block-commit_Case-3>` to
426 convert a disk image chain such as this::
428 [A] <-- [B] <-- [C] <-- [D]
Into::

    [A]

Where content from all the subsequent overlays, [B] and [C], including
the active layer, [D], is committed back to [A] -- which is where live
QEMU is performing all its current writes.
438 Start the "active ``block-commit``" operation::
440 (QEMU) block-commit device=node-D base=a.qcow2 top=d.qcow2 job-id=job0
    {
        "execute": "block-commit",
        "arguments": {
            "device": "node-D",
            "base": "a.qcow2",
            "top": "d.qcow2",
            "job-id": "job0"
        }
    }
Once the synchronization has completed, the event ``BLOCK_JOB_READY``
will be emitted.
455 Then, optionally query for the status of the active block operations.
456 We can see the 'commit' job is now ready to be completed, as indicated
457 by the line *"ready": true*::
459 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
480 Gracefully complete the 'commit' block device job::
482 (QEMU) block-job-complete device=job0
    {
        "execute": "block-job-complete",
        "arguments": {
            "device": "job0"
        }
    }
493 Finally, once the above job is completed, an event
494 ``BLOCK_JOB_COMPLETED`` will be emitted.
The invocations for the rest of the cases (2, 4, and 5), discussed in
the previous section, are omitted for brevity; one of them is sketched
below for illustration.
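As an illustration, case (4) from the previous section (commit only
image [C] into image [B]) follows the same pattern as Case-1, just with
a different ``base`` and ``top``; a sketch of the invocation::

    (QEMU) block-commit device=node-D base=b.qcow2 top=c.qcow2 job-id=job0

Since the ``top`` image here is not the active layer, the job ends on
its own with a ``BLOCK_JOB_COMPLETED`` event; no ``block-job-complete``
is needed.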
501 Live disk synchronization --- ``drive-mirror`` and ``blockdev-mirror``
502 ----------------------------------------------------------------------
Synchronize a running disk image chain (all or part of it) to a target
image.
507 Again, given our familiar disk image chain::
509 [A] <-- [B] <-- [C] <-- [D]
The ``drive-mirror`` command (and its newer equivalent
``blockdev-mirror``) allows you to copy data from the entire chain into
a single target image (which can be located on a different host), [E].
517 When you cancel an in-progress 'mirror' job *before* the source and
518 target are synchronized, ``block-job-cancel`` will emit the event
519 ``BLOCK_JOB_CANCELLED``. However, note that if you cancel a
520 'mirror' job *after* it has indicated (via the event
521 ``BLOCK_JOB_READY``) that the source and target have reached
522 synchronization, then the event emitted by ``block-job-cancel``
523 changes to ``BLOCK_JOB_COMPLETED``.
525 Besides the 'mirror' job, the "active ``block-commit``" is the only
526 other block device job that emits the event ``BLOCK_JOB_READY``.
527 The rest of the block device jobs ('stream', "non-active
528 ``block-commit``", and 'backup') end automatically.
530 So there are two possible actions to take, after a 'mirror' job has
531 emitted the event ``BLOCK_JOB_READY``, indicating that the source and
532 target have reached synchronization:
534 (1) Issuing the command ``block-job-cancel`` (after it emits the event
535 ``BLOCK_JOB_COMPLETED``) will create a point-in-time (which is at
536 the time of *triggering* the cancel command) copy of the entire disk
537 image chain (or only the top-most image, depending on the ``sync``
538 mode), contained in the target image [E]. One use case for this is
539 live VM migration with non-shared storage.
541 (2) Issuing the command ``block-job-complete`` (after it emits the event
542 ``BLOCK_JOB_COMPLETED``) will adjust the guest device (i.e. live
543 QEMU) to point to the target image, [E], causing all the new writes
544 from this point on to happen there.
546 About synchronization modes: The synchronization mode determines
547 *which* part of the disk image chain will be copied to the target.
548 Currently, there are four different kinds:
(1) ``full`` -- Synchronize the content of the entire disk image chain
    to the target
553 (2) ``top`` -- Synchronize only the contents of the top-most disk image
554 in the chain to the target
556 (3) ``none`` -- Synchronize only the new writes from this point on.
558 .. note:: In the case of ``drive-backup`` (or ``blockdev-backup``),
559 the behavior of ``none`` synchronization mode is different.
560 Normally, a ``backup`` job consists of two parts: Anything
561 that is overwritten by the guest is first copied out to
562 the backup, and in the background the whole image is
          copied from start to end. With ``sync=none``, it's only the
          first part.
(4) ``incremental`` -- Synchronize content that is described by the
    dirty bitmap
570 Refer to the :doc:`bitmaps` document in the QEMU source
571 tree to learn about the detailed workings of the ``incremental``
572 synchronization mode.
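As a brief sketch of how the ``incremental`` mode is typically driven
(the full workflow, including the required initial full backup, is in
the :doc:`bitmaps` document): a dirty bitmap is first created on the
node to be backed up; later, an incremental target is usually
pre-created as an overlay of the previous backup, and a backup job is
pointed at the bitmap. The names ``bitmap0``, ``full_backup.qcow2`` and
``inc0.qcow2`` below are only illustrative::

    (QEMU) block-dirty-bitmap-add node=node-D name=bitmap0

    $ qemu-img create -f qcow2 -b full_backup.qcow2 -F qcow2 inc0.qcow2

    (QEMU) drive-backup device=node-D sync=incremental bitmap=bitmap0 target=inc0.qcow2 format=qcow2 mode=existing job-id=job0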
575 QMP invocation for ``drive-mirror``
576 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
578 To copy the contents of the entire disk image chain, from [A] all the
579 way to [D], to a new target (``drive-mirror`` will create the destination
580 file, if it doesn't already exist), call it [E]::
582 (QEMU) drive-mirror device=node-D target=e.qcow2 sync=full job-id=job0
    {
        "execute": "drive-mirror",
        "arguments": {
            "device": "node-D",
            "job-id": "job0",
            "target": "e.qcow2",
            "sync": "full"
        }
    }
The ``"sync": "full"``, from the above, means: copy the *entire* chain
to the destination.
596 Following the above, querying for active block jobs will show that a
597 'mirror' job is "ready" to be completed (and QEMU will also emit an
598 event, ``BLOCK_JOB_READY``)::
600 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
And, as noted in the previous section, there are two possible actions
to take:
624 (a) Create a point-in-time snapshot by ending the synchronization. The
625 point-in-time is at the time of *ending* the sync. (The result of
626 the following being: the target image, [E], will be populated with
627 content from the entire chain, [A] to [D])::
629 (QEMU) block-job-cancel device=job0
    {
        "execute": "block-job-cancel",
        "arguments": {
            "device": "job0"
        }
    }
(b) Or, complete the operation and pivot the live QEMU to the target
    copy::
640 (QEMU) block-job-complete device=job0
In either of the above cases, if you once again run the
``query-block-jobs`` command, there should not be any active block
jobs.
Comparing 'commit' and 'mirror': In both the cases, the overlay images
647 can be discarded. However, with 'commit', the *existing* base image
648 will be modified (by updating it with contents from overlays); while in
649 the case of 'mirror', a *new* target image is populated with the data
650 from the disk image chain.
653 QMP invocation for live storage migration with ``drive-mirror`` + NBD
654 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
656 Live storage migration (without shared storage setup) is one of the most
657 common use-cases that takes advantage of the ``drive-mirror`` primitive
658 and QEMU's built-in Network Block Device (NBD) server. Here's a quick
659 walk-through of this setup.
661 Given the disk image chain::
663 [A] <-- [B] <-- [C] <-- [D]
665 Instead of copying content from the entire chain, synchronize *only* the
666 contents of the *top*-most disk image (i.e. the active layer), [D], to a
667 target, say, [TargetDisk].
670 The destination host must already have the contents of the backing
671 chain, involving images [A], [B], and [C], visible via other means
-- whether by ``cp``, ``rsync``, or by some storage array-specific
command.
675 Sometimes, this is also referred to as "shallow copy" -- because only
the "active layer", and not the rest of the image chain, is copied to
the destination.
680 In this example, for the sake of simplicity, we'll be using the same
681 ``localhost`` as both source and destination.
683 As noted earlier, on the destination host the contents of the backing
684 chain -- from images [A] to [C] -- are already expected to exist in some
685 form (e.g. in a file called, ``Contents-of-A-B-C.qcow2``). Now, on the
686 destination host, let's create a target overlay image (with the image
687 ``Contents-of-A-B-C.qcow2`` as its backing file), to which the contents
of image [D] (from the source QEMU) will be mirrored::
690 $ qemu-img create -f qcow2 -b ./Contents-of-A-B-C.qcow2 \
691 -F qcow2 ./target-disk.qcow2
And start the destination QEMU instance (we already have the source
QEMU running -- discussed in the section: `Interacting with a QEMU
instance`_) with the following invocation. (As noted earlier, for
696 simplicity's sake, the destination QEMU is started on the same host, but
it could be located elsewhere):

.. parsed-literal::

  $ |qemu_system| -display none -no-user-config -nodefaults \\
    -blockdev \\
    node-name=node-TargetDisk,driver=qcow2,file.driver=file,file.node-name=file,file.filename=./target-disk.qcow2 \\
704 -device virtio-blk,drive=node-TargetDisk,id=virtio0 \\
705 -S -monitor stdio -qmp unix:./qmp-sock2,server=on,wait=off \\
706 -incoming tcp:localhost:6666
708 Given the disk image chain on source QEMU::
710 [A] <-- [B] <-- [C] <-- [D]
712 On the destination host, it is expected that the contents of the chain
``[A] <-- [B] <-- [C]`` are *already* present, and therefore we copy
*only* the content of image [D].
716 (1) [On *destination* QEMU] As part of the first step, start the
    built-in NBD server on a given host (local host, represented by
    ``::``) and port::
720 (QEMU) nbd-server-start addr={"type":"inet","data":{"host":"::","port":"49153"}}
    {
        "execute": "nbd-server-start",
        "arguments": {
            "addr": {
                "type": "inet",
                "data": {
                    "host": "::",
                    "port": "49153"
                }
            }
        }
    }
734 (2) [On *destination* QEMU] And export the destination disk image using
735 QEMU's built-in NBD server::
737 (QEMU) nbd-server-add device=node-TargetDisk writable=true
    {
        "execute": "nbd-server-add",
        "arguments": {
            "device": "node-TargetDisk",
            "writable": true
        }
    }
745 (3) [On *source* QEMU] Then, invoke ``drive-mirror`` (NB: since we're
746 running ``drive-mirror`` with ``mode=existing`` (meaning:
747 synchronize to a pre-created file, therefore 'existing', file on the
    target host), with the synchronization mode as 'top' (``"sync":
    "top"``)::
751 (QEMU) drive-mirror device=node-D target=nbd:localhost:49153:exportname=node-TargetDisk sync=top mode=existing job-id=job0
    {
        "execute": "drive-mirror",
        "arguments": {
            "device": "node-D",
            "mode": "existing",
            "job-id": "job0",
            "target": "nbd:localhost:49153:exportname=node-TargetDisk",
            "sync": "top"
        }
    }
(4) [On *source* QEMU] Once ``drive-mirror`` has copied all the data, and the
764 event ``BLOCK_JOB_READY`` is emitted, issue ``block-job-cancel`` to
765 gracefully end the synchronization, from source QEMU::
767 (QEMU) block-job-cancel device=job0
    {
        "execute": "block-job-cancel",
        "arguments": {
            "device": "job0"
        }
    }
775 (5) [On *destination* QEMU] Then, stop the NBD server::
777 (QEMU) nbd-server-stop
    {
        "execute": "nbd-server-stop",
        "arguments": {}
    }
783 (6) [On *destination* QEMU] Finally, resume the guest vCPUs by issuing the
    QMP command ``cont``::

        (QEMU) cont
793 Higher-level libraries (e.g. libvirt) automate the entire above
794 process (although note that libvirt does not allow same-host
795 migrations to localhost for other reasons).
798 Notes on ``blockdev-mirror``
799 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
801 The ``blockdev-mirror`` command is equivalent in core functionality to
802 ``drive-mirror``, except that it operates at node-level in a BDS graph.
Also: for ``blockdev-mirror``, the 'target' image needs to be explicitly
created (using ``qemu-img``) and attached to live QEMU via
``blockdev-add``, which assigns a name to the to-be-created target node.
808 E.g. the sequence of actions to create a point-in-time backup of an
809 entire disk image chain, to a target, using ``blockdev-mirror`` would be:
(0) Create the QCOW2 overlays, to arrive at a backing chain of desired
    depth
814 (1) Create the target image (using ``qemu-img``), say, ``e.qcow2``
(2) Attach the above created file (``e.qcow2``), at run-time, to QEMU
    via ``blockdev-add``
(3) Perform ``blockdev-mirror`` (use ``"sync": "full"`` to copy the
    entire chain to the target), and notice the event
    ``BLOCK_JOB_READY``
(4) Optionally, query for active block jobs; there should be a 'mirror'
824 job ready to be completed
(5) Gracefully complete the 'mirror' block device job, and notice the
    event ``BLOCK_JOB_COMPLETED``
(6) Shut down the guest by issuing the QMP ``quit`` command so that
    caches are flushed
(7) Then, finally, compare the contents of the disk image chain with
    the target copy, using ``qemu-img compare`` (a sketch follows
    below). You should notice: "Images are identical"
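A sketch of that final comparison for this example, once the guest has
been shut down (reading ``d.qcow2`` pulls in its whole backing chain;
the size-mismatch warning only appears if the target was created with a
different virtual size)::

    $ qemu-img compare d.qcow2 e.qcow2
    Images are identical.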
837 QMP invocation for ``blockdev-mirror``
838 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
840 Given the disk image chain::
842 [A] <-- [B] <-- [C] <-- [D]
844 To copy the contents of the entire disk image chain, from [A] all the
845 way to [D], to a new target, call it [E]. The following is the flow.
847 Create the overlay images, [B], [C], and [D]::
849 (QEMU) blockdev-snapshot-sync node-name=node-A snapshot-file=b.qcow2 snapshot-node-name=node-B format=qcow2
850 (QEMU) blockdev-snapshot-sync node-name=node-B snapshot-file=c.qcow2 snapshot-node-name=node-C format=qcow2
851 (QEMU) blockdev-snapshot-sync node-name=node-C snapshot-file=d.qcow2 snapshot-node-name=node-D format=qcow2
853 Create the target image, [E]::
855 $ qemu-img create -f qcow2 e.qcow2 39M
857 Add the above created target image to QEMU, via ``blockdev-add``::
859 (QEMU) blockdev-add driver=qcow2 node-name=node-E file={"driver":"file","filename":"e.qcow2"}
    {
        "execute": "blockdev-add",
        "arguments": {
            "node-name": "node-E",
            "driver": "qcow2",
            "file": {
                "driver": "file",
                "filename": "e.qcow2"
            }
        }
    }
872 Perform ``blockdev-mirror``, and notice the event ``BLOCK_JOB_READY``::
    (QEMU) blockdev-mirror device=node-D target=node-E sync=full job-id=job0
    {
        "execute": "blockdev-mirror",
        "arguments": {
            "device": "node-D",
            "job-id": "job0",
            "target": "node-E",
            "sync": "full"
        }
    }
Query for active block jobs; there should be a 'mirror' job ready::
887 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
908 Gracefully complete the block device job operation, and notice the
909 event ``BLOCK_JOB_COMPLETED``::
911 (QEMU) block-job-complete device=job0
    {
        "execute": "block-job-complete",
        "arguments": {
            "device": "job0"
        }
    }
Shut down the guest by issuing the ``quit`` QMP command::

    (QEMU) quit
    {
        "execute": "quit",
        "arguments": {}
    }
931 Live disk backup --- ``drive-backup`` and ``blockdev-backup``
932 -------------------------------------------------------------
The ``drive-backup`` command (and its newer equivalent
``blockdev-backup``) allows you to create a point-in-time snapshot.
937 In this case, the point-in-time is when you *start* the ``drive-backup``
938 (or its newer equivalent ``blockdev-backup``) command.
941 QMP invocation for ``drive-backup``
942 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
944 Yet again, starting afresh with our example disk image chain::
946 [A] <-- [B] <-- [C] <-- [D]
948 To create a target image [E], with content populated from image [A] to
949 [D], from the above chain, the following is the syntax. (If the target
950 image does not exist, ``drive-backup`` will create it)::
952 (QEMU) drive-backup device=node-D sync=full target=e.qcow2 job-id=job0
    {
        "execute": "drive-backup",
        "arguments": {
            "device": "node-D",
            "job-id": "job0",
            "sync": "full",
            "target": "e.qcow2"
        }
    }
963 Once the above ``drive-backup`` has completed, a ``BLOCK_JOB_COMPLETED`` event
964 will be issued, indicating the live block device job operation has
965 completed, and no further action is required.
968 Notes on ``blockdev-backup``
969 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
971 The ``blockdev-backup`` command is equivalent in functionality to
``drive-backup``, except that it operates at node-level in a Block
Driver State (BDS) graph.
975 E.g. the sequence of actions to create a point-in-time backup
of an entire disk image chain, to a target, using ``blockdev-backup``
would be:
(0) Create the QCOW2 overlays, to arrive at a backing chain of desired
    depth
982 (1) Create the target image (using ``qemu-img``), say, ``e.qcow2``
(2) Attach the above created file (``e.qcow2``), at run-time, to QEMU
    via ``blockdev-add``
987 (3) Perform ``blockdev-backup`` (use ``"sync": "full"`` to copy the
988 entire chain to the target). And notice the event
989 ``BLOCK_JOB_COMPLETED``
(4) Shut down the guest, by issuing the QMP ``quit`` command, so that
    caches are flushed
994 (5) Then, finally, compare the contents of the disk image chain, and
995 the target copy with ``qemu-img compare``. You should notice:
996 "Images are identical"
The following section shows an example QMP invocation for
``blockdev-backup``.
1001 QMP invocation for ``blockdev-backup``
1002 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1004 Given a disk image chain of depth 1 where image [B] is the active
overlay (live QEMU is writing to it)::

    [A] <-- [B]
1009 The following is the procedure to copy the content from the entire chain
to a target image (say, [E]), which has the full content from [A] and
[B].
1013 Create the overlay [B]::
1015 (QEMU) blockdev-snapshot-sync node-name=node-A snapshot-file=b.qcow2 snapshot-node-name=node-B format=qcow2
    {
        "execute": "blockdev-snapshot-sync",
        "arguments": {
            "node-name": "node-A",
            "snapshot-file": "b.qcow2",
            "format": "qcow2",
            "snapshot-node-name": "node-B"
        }
    }
1027 Create a target image that will contain the copy::
1029 $ qemu-img create -f qcow2 e.qcow2 39M
1031 Then add it to QEMU via ``blockdev-add``::
1033 (QEMU) blockdev-add driver=qcow2 node-name=node-E file={"driver":"file","filename":"e.qcow2"}
    {
        "execute": "blockdev-add",
        "arguments": {
            "node-name": "node-E",
            "driver": "qcow2",
            "file": {
                "driver": "file",
                "filename": "e.qcow2"
            }
        }
    }
1046 Then invoke ``blockdev-backup`` to copy the contents from the entire
image chain, consisting of images [A] and [B], to the target image,
[E]::
1050 (QEMU) blockdev-backup device=node-B target=node-E sync=full job-id=job0
    {
        "execute": "blockdev-backup",
        "arguments": {
            "device": "node-B",
            "job-id": "job0",
            "sync": "full",
            "target": "node-E"
        }
    }
1061 Once the above 'backup' operation has completed, the event,
``BLOCK_JOB_COMPLETED`` will be emitted, signalling successful
completion of the backup operation.
1065 Next, query for any active block device jobs (there should be none)::
1067 (QEMU) query-block-jobs
    {
        "execute": "query-block-jobs",
        "arguments": {}
    }
Shut down the guest::

    (QEMU) quit
1084 The above step is really important; if forgotten, an error, "Failed
1085 to get shared "write" lock on e.qcow2", will be thrown when you do
1086 ``qemu-img compare`` to verify the integrity of the disk image
1087 with the backup content.
1090 The end result will be the image 'e.qcow2' containing a
1091 point-in-time backup of the disk image chain -- i.e. contents from
images [A] and [B] at the time the ``blockdev-backup`` command was
invoked.
One way to confirm that the backup disk image contains identical content
with the disk image chain is to compare the backup and the contents of
the chain; you should see "Images are identical". (NB: this is assuming
QEMU was launched with the ``-S`` option, which will not start the CPUs
at guest boot up)::
1101 $ qemu-img compare b.qcow2 e.qcow2
1102 Warning: Image size mismatch!
1103 Images are identical.
1105 NOTE: The "Warning: Image size mismatch!" is expected, as we created the
1106 target image (e.qcow2) with 39M size.