<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
<h1>libvirt architecture</h1>
<p>Currently libvirt supports two kinds of virtualization, and its
internal structure is based on a driver model which simplifies adding new
engines:</p>
<h2><a id="Xen">Xen support</a></h2>
<p>When running in a Xen environment, programs using libvirt have to execute
in "Domain 0", which is the primary Linux OS loaded on the machine. That OS
kernel provides most if not all of the actual drivers used by the set of
domains. It also runs the Xen Store, a database of information shared by the
hypervisor, the backend drivers, any running domains, and libxl (aka
libxenlight). libxl provides a set of APIs for creating and managing domains,
which can be used by applications such as the xl tool provided by Xen, or by
libvirt. The hypervisor, drivers, kernels and daemons communicate through a
shared system bus implemented in the hypervisor. The figure below tries to
provide a view of this environment:</p>
<img src="architecture.gif" alt="The Xen architecture" />
<p>The library will interact with libxl for all management operations.</p>
<p>Note that the libvirt libxl driver only supports root access.</p>
<h2><a id="QEMU">QEMU and KVM support</a></h2>
<p>The model for QEMU and KVM is very similar: KVM relies on QEMU for the
process controlling a new domain, and only small details differ between the
two. In both cases the libvirt API is provided by a controlling process
forked by libvirt in the background, which launches and controls the QEMU or
KVM process. That program, called libvirt_qemud, talks through a specific
protocol to the library, and connects to the console of the QEMU process in
order to control it and report on its status. Libvirt tries to expose all the
emulation models of QEMU; the selection is done when creating the new domain,
by specifying the architecture and machine type targeted.</p>
<p>The code controlling the QEMU process is available in the
<code>qemud/</code> directory.</p>
<h2><a id="drivers">Driver based architecture</a></h2>
<p>As the previous section explains, libvirt can communicate with the current
hypervisor over different channels, and should also be able to use different
kinds of hypervisors. To simplify the internal design and code, ease
maintenance, and simplify the support of other virtualization engines, the
internals have been structured as one core component, the libvirt.c module
acting as a front-end for the library API, and a set of hypervisor drivers
defining a common set of routines. That way the Xen Daemon access, the Xen
Store access, and the hypervisor hypercalls are all isolated in separate C
modules implementing at least a subset of the common operations defined by
the drivers present in driver.h:</p>
<ul>
  <li>xend_internal: implements the driver functions through the Xen
    Daemon.</li>
  <li>xs_internal: implements the subset of the driver available through the
    Xen Store.</li>
  <li>xen_internal: provides the implementation of the functions possible via
    direct hypervisor access.</li>
  <li>proxy_internal: provides read-only Xen access via a proxy; the proxy
    code is in the <code>proxy/</code> directory.</li>
  <li>xm_internal: provides support for Xen domains which are defined but not
    running.</li>
  <li>qemu_internal: implements the driver functions for the QEMU and
    KVM virtualization engines. It also uses a qemud/ specific daemon
    which interacts with the QEMU process to implement the libvirt API.</li>
  <li>test: this is a test driver useful for regression tests of the
    front-end part of libvirt.</li>
</ul>
<p>Note that a given driver may implement only a subset of those functions
(for example, saving a Xen domain state to disk and restoring it is only
possible through the Xen Daemon); in that case the driver entry points for
unsupported functions are initialized to NULL.</p>