.\" Copyright (c) 2006, 2007
.\"	The DragonFly Project.  All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\"
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in
.\"    the documentation and/or other materials provided with the
.\"    distribution.
.\" 3. Neither the name of The DragonFly Project nor the names of its
.\"    contributors may be used to endorse or promote products derived
.\"    from this software without specific, prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
.\" FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
.\" COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING,
.\" BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
.\" LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
.\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
.\" OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
.\" OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.Nd virtual kernel architecture
.Cd "platform vkernel64	# for 64 bit vkernels"
.Pa /var/vkernel/boot/kernel/kernel
.Op Fl e Ar name Ns = Ns Li value : Ns Ar name Ns = Ns Li value : Ns ...
.Op Fl I Ar interface Ns Op Ar :address1 Ns Oo Ar :address2 Oc Ns Oo Ar /netmask Oc Ns Oo Ar =MAC Oc
.Op Fl n Ar numcpus Ns Op Ar :lbits Ns Oo Ar :cbits Oc
.Op Fl r Ar file Ns Op Ar :serno
.Op Fl R Ar file Ns Op Ar :serno
architecture allows for running
The following options are available:
.Bl -tag -width ".Fl m Ar size"
Specify a read-only CD-ROM image
to be used by the kernel, with the first
option specified on the command line will be the boot disk.
The CD9660 filesystem is assumed when booting from this media.
.It Fl e Ar name Ns = Ns Li value : Ns Ar name Ns = Ns Li value : Ns ...
Specify an environment to be used by the kernel.
This option can be specified more than once.
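.Pp
As an illustration, using tunable names that appear elsewhere in this
manual page (the values are examples only), variables may be set in one
colon-separated list or via repeated options:
.Bd -literal -offset indent
# Illustrative invocation; both tunables are documented in this page.
\&./boot/kernel/kernel -m 1g -r rootimg.01 -e hw.tsc_cputimer_enable=0 -e boot.netif.name=vke0
.Ed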
Shows a list of available options, each with a short description.
Specify a memory image
to be used by the virtual kernel.
option is given, the kernel will generate a name of the form
.Pa /var/vkernel/memimg.XXXXXX ,
being replaced by a sequential number, e.g.\&
.It Fl I Ar interface Ns Op Ar :address1 Ns Oo Ar :address2 Oc Ns Oo Ar /netmask Oc Ns Oo Ar =MAC Oc
Create a virtual network device, with the first
argument is the name of a
device node or the path to a
path prefix does not have to be specified and will be automatically prepended
will pick the first unused
arguments are the IP addresses of the
interface is added to the specified
address is not assigned until the interface is brought up in the guest.
argument applies to all interfaces for which an address is specified.
argument is the MAC address of the
If not specified, a pseudo-random one will be generated.
When running multiple vkernels it is often more convenient to simply
socket and let vknetd deal with the tap and/or bridge.
An example of this would be
.Pa /var/run/vknet:0.0.0.0:10.2.0.2/16 .
Specify which, if any, real CPUs to lock virtual CPUs to.
.Cm map Ns Op , Ns Ar startCPU ,
does not map virtual CPUs to real CPUs.
.Cm map Ns Op , Ns Ar startCPU
maps each virtual CPU to a real CPU starting with real CPU 0 or
locks all virtual CPUs to the real CPU specified by
Locking the vkernel to a set of CPUs is recommended on multi-socket systems
to improve NUMA locality of reference.
Specify the amount of memory to be used by the kernel in bytes,
Lowercase versions of
.It Fl n Ar numcpus Ns Op Ar :lbits Ns Oo Ar :cbits Oc
specifies the number of CPUs you wish to emulate.
Up to 16 CPUs are supported with 2 being the default unless otherwise
specified.
specifies the number of bits within APICID(=CPUID) needed for representing
Controls the number of threads per core (0 bits - 1 thread, 1 bit - 2 threads).
This parameter is optional (mandatory only if
specifies the number of bits within APICID(=CPUID) needed for representing
Controls the number of cores per package (0 bits - 1 core, 1 bit - 2 cores).
This parameter is optional.
Specify a pidfile in which to store the process ID.
Scripts can use this file to locate the vkernel pid for the purpose of
shutting down or killing it.
The vkernel will hold a lock on the pidfile while running.
Scripts may test for the lock to determine if the pidfile is valid or
stale so as to avoid accidentally killing a random process.
Something like
.Ql "/usr/bin/lockf -ks -t 0 pidfile echo -n"
may be used
A non-zero exit code indicates that the pidfile represents a running
An error is issued and the vkernel exits if this file cannot be opened for
writing or if it is already locked by an active vkernel process.
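.Pp
A sketch of such a liveness test, assuming the
.Xr lockf 1
invocation above (the pidfile path is illustrative):
.Bd -literal -offset indent
# lockf exits 0 only if it could take the lock, i.e. the pidfile
# is stale; a non-zero exit means an active vkernel holds the lock.
if /usr/bin/lockf -ks -t 0 /var/run/vkernel0.pid echo -n; then
	echo "stale pidfile; safe to remove"
else
	echo "a vkernel is running with this pidfile"
fi
.Ed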
.It Fl r Ar file Ns Op Ar :serno
Specify a R/W disk image
to be used by the kernel, with the first
A serial number for the virtual disk can be specified in
option specified on the command line will be the boot disk.
.It Fl R Ar file Ns Op Ar :serno
but treats the disk image as copy-on-write.
This allows a private copy of the image to be modified but does not
modify the image file.
The image file will not be locked in this situation and multiple
vkernels can run off the same image file if desired.
Since modifications are thrown away, any data you wish
to retain across invocations needs to be exported over
the network prior to shutdown.
This gives you the flexibility to mount the disk image
either read-only or read-write depending on what is
However, keep in mind that when mounting a COW image
read-write, modifications will eat system memory and
swap space until the vkernel is shut down.
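.Pp
For example (paths, sizes and pidfile names are illustrative), two
vkernels sharing a single image file via copy-on-write, each with its
own pidfile:
.Bd -literal -offset indent
# Both instances see a private copy; the image file itself is unchanged.
\&./boot/kernel/kernel -m 512m -R /var/vkernel/rootimg.01 -p /var/run/vk1.pid
\&./boot/kernel/kernel -m 512m -R /var/vkernel/rootimg.01 -p /var/run/vk2.pid
.Ed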
Boot into single-user mode.
Tell the vkernel to use a precise host timer when calculating clock values.
If the TSC isn't used, this will impose higher overhead on the vkernel as it
will have to make a system call to the real host every time it wants to get
However, the more precise timer might be necessary for your application.
By default, the vkernel uses the TSC CPU timer if possible, or an imprecise
(host-tick-resolution) timer which uses a user-mapped kernel page and does
not have any syscall overhead.
To disable the TSC CPU timer, use the
.Fl e Ar hw.tsc_cputimer_enable=0
Enable writing to kernel memory and module loading.
By default, these are disabled for security reasons.
Turn on verbose booting.
Force the vkernel's RAM to be pre-zeroed.
Useful for benchmarking on
single-socket systems where the memory allocation does not have to be
This option is not recommended on multi-socket systems or when the
A number of virtual device drivers exist to supplement the virtual kernel.
driver allows for up to 16
The root device will be
for further information on how to prepare a root image).
driver allows for up to 16 virtual CD-ROM devices.
Basically this is a read-only
device with a block size of 2048.
.Ss Network interface
driver supports up to 16 virtual network interfaces which are associated with
device, the per-interface read-only
.Va hw.vke Ns Em X Ns Va .tap_unit
holds the unit number of the associated
By default, half of the total mbuf clusters available are distributed equally
among all the vke devices, up to 256 each.
This can be overridden with the tunable
.Va hw.vke.max_ringsize .
Note that the number passed will be rounded down to the nearest power of two.
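.Pp
The tunable can be passed via the kernel environment when starting the
vkernel; the value of 128 below is merely an example and will be rounded
down to a power of two:
.Bd -literal -offset indent
\&./boot/kernel/kernel -m 1g -r rootimg.01 -e hw.vke.max_ringsize=128
.Ed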
The virtual kernel only enables
while operating in regular console mode.
to the virtual kernel causes the virtual kernel to enter its internal
debugger and re-enable all other terminal signals.
to the virtual kernel triggers a clean shutdown by passing a
to the virtual kernel's
It is possible to directly gdb the virtual kernel's process.
It is recommended that you do a
.Ql handle SIGSEGV noprint
to ignore page faults processed by the virtual kernel itself and
.Ql handle SIGUSR1 noprint
to ignore signals used for simulating inter-processor interrupts.
.Bl -tag -width ".It Pa /sys/config/VKERNEL64" -compact
.It Pa /sys/config/VKERNEL64
configuration file, for
.Sh CONFIGURATION FILES
Your virtual kernel is a complete
system, but you might not want to run all the services a normal kernel runs.
Here is what a typical virtual kernel's
file looks like, with some additional possibilities commented out.
network_interfaces="lo0 vke0"
.Sh BOOT DRIVE SELECTION
You can override the default boot drive selection and filesystem
using a kernel environment variable.
Note that the filesystem selected must be compiled into the vkernel
and not loaded as a module.
You need to escape some quotes around the variable data to avoid
misinterpretation of the colon in the
vfs.root.mountfrom=\\"hammer:vkd0s1d\\"
.Sh DISKLESS OPERATION
from an NFS root, a number of tunables need to be set:
.Bl -tag -width indent
IP address to be set in the vkernel interface.
.It Va boot.netif.netmask
Netmask for the IP to be set.
.It Va boot.netif.name
Network interface name inside the vkernel.
.It Va boot.nfsroot.server
.It Va boot.nfsroot.path
Host path where the world and distribution
targets are properly installed.
See an example on how to boot a diskless
A couple of steps are necessary in order to prepare the system to build and
run a virtual kernel.
.Ss Setting up the filesystem
architecture needs a number of files which reside in
Since these files tend to get rather big and the
partition is usually of limited size, we recommend the directory to be
partition with a link to it in
mkdir -p /home/var.vkernel/boot
ln -s /home/var.vkernel /var/vkernel
Next, a filesystem image to be used by the virtual kernel has to be
created and populated (assuming world has been built previously).
If the image is created on a UFS filesystem you might want to pre-zero it.
On a HAMMER filesystem you should just truncate-extend to the image size,
as HAMMER does not re-use data blocks already present in the file.
vnconfig -c -S 2g -T vn0 /var/vkernel/rootimg.01
disklabel -r -w vn0s0 auto
disklabel -e vn0s0	# add `a' partition with fstype `4.2BSD'
mount /dev/vn0s0a /mnt
make installworld DESTDIR=/mnt
make distribution DESTDIR=/mnt
echo '/dev/vkd0s0a / ufs rw 1 1' >/mnt/etc/fstab
echo 'proc /proc procfs rw 0 0' >>/mnt/etc/fstab
entry with the following line and turn off all other gettys.
console	"/usr/libexec/getty Pc"	cons25	on	secure
if you would like to automatically log in as root.
Then, unmount the disk.
.Ss Compiling the virtual kernel
In order to compile a virtual kernel use the
kernel configuration file residing in
(or a configuration file derived from it):
make -DNO_MODULES buildkernel KERNCONF=VKERNEL64
make -DNO_MODULES installkernel KERNCONF=VKERNEL64 DESTDIR=/var/vkernel
.Ss Enabling virtual kernel operation
.Va vm.vkernel_enable ,
must be set to enable
sysctl vm.vkernel_enable=1
.Ss Configuring the network on the host system
In order to access a network interface of the host system from the
you must add the interface to a
device which will then be passed to the
ifconfig bridge0 create
ifconfig bridge0 addm re0	# assuming re0 is the host's interface
.Ss Running the kernel
Finally, the virtual kernel can be run:
\&./boot/kernel/kernel -m 1g -r rootimg.01 -I auto:bridge0
commands from inside a virtual kernel.
After doing a clean shutdown the
command will re-exec the virtual kernel binary while the other two will
cause the virtual kernel to exit.
.Ss Diskless operation (vkernel as an NFS client)
network configuration.
The line continuation backslashes have been omitted.
For convenience and to reduce confusion we recommend mounting
the server's remote vkernel root onto the host running the vkernel binary
using the same path as the NFS mount.
It is assumed that a full system install has been made to
.Pa /var/vkernel/root
using KERNCONF=VKERNEL64 for the kernel build.
\&/var/vkernel/root/boot/kernel/kernel
-m 1g -n 4 -I /var/run/vknet
-e boot.netif.ip=10.100.0.2
-e boot.netif.netmask=255.255.0.0
-e boot.netif.gateway=10.100.0.1
-e boot.netif.name=vke0
-e boot.nfsroot.server=10.0.0.55
-e boot.nfsroot.path=/var/vkernel/root
In this example vknetd is assumed to have been started as shown below, before
running the vkernel, using an unbridged TAP configuration routed through
IP forwarding must be turned on, and in this example the server resides
on a different network accessible to the host executing the vkernel but not
directly on the vkernel's subnet.
sysctl net.inet.ip.forwarding=1
vknetd -t tap0 10.100.0.1/16
You can run multiple vkernels trivially with the same NFS root as long as
you assign each one a different IP on the subnet (2, 3, 4, etc.).
You should also be careful with certain directories, particularly /var/run
and possibly also /var/db, depending on what your vkernels are going to be
This can complicate matters with /var/db/pkg.
.Sh BUILDING THE WORLD UNDER A VKERNEL
The virtual kernel platform does not have all the header files expected
by a world build, so the easiest thing to do right now is to specify a
pc64 (in a 64 bit vkernel) target when building the world under a virtual
vkernel# make MACHINE_PLATFORM=pc64 buildworld
vkernel# make MACHINE_PLATFORM=pc64 installworld
.%A Aggelos Economopoulos
.%T "A Peek at the DragonFly Virtual Kernel"
Virtual kernels were introduced in
thought up and implemented the
architecture and wrote the
This manual page was written by