.\"	The DragonFly Project.  All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\"
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in
.\"    the documentation and/or other materials provided with the
.\"    distribution.
.\" 3. Neither the name of The DragonFly Project nor the names of its
.\"    contributors may be used to endorse or promote products derived
.\"    from this software without specific, prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
.\" FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
.\" COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING,
.\" BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
.\" LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
.\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
.\" OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
.\" OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $DragonFly: src/share/man/man5/hammer.5,v 1.15 2008/11/02 18:56:47 swildner Exp $
.\"
.Dd November 2, 2008
.Dt HAMMER 5
.Os
.Sh NAME
.Nm HAMMER
.Nd HAMMER file system
.Sh SYNOPSIS
To compile this driver into the kernel,
place the following line in your
kernel configuration file:
.Bd -ragged -offset indent
.Cd options HAMMER
.Ed
.Pp
Alternatively, to load the driver as a
module at boot time, place the following line in
.Xr loader.conf 5 :
.Bd -literal -offset indent
hammer_load="YES"
.Ed
.Pp
To mount via
.Xr fstab 5 :
.Bd -literal -offset indent
/dev/ad0s1d[:/dev/ad1s1d:...] /mnt hammer rw 2 0
.Ed
.Sh DESCRIPTION
The
.Nm
file system provides facilities to store file system data onto disk devices
and is intended to replace
UFS
as the default file system for
.Dx .
Among its features are instant crash recovery,
large file systems spanning multiple volumes,
data integrity checking,
fine grained history retention,
mirroring capability, and pseudo file systems.
.Pp
All functions related to managing
.Nm
file systems are provided by the
.Xr newfs_hammer 8 ,
.Xr mount_hammer 8 ,
and
.Xr hammer 8
utilities.
.Pp
For a more detailed introduction refer to the paper and slides listed in the
.Sx SEE ALSO
section.
For some common usages of
.Nm
see the
.Sx EXAMPLES
section below.
.Ss Instant Crash Recovery
After a non-graceful system shutdown,
.Nm
file systems will be brought back into a fully coherent state
when mounting the file system, usually within a few seconds.
.Ss Large File Systems & Multi Volume
A
.Nm
file system can span up to 256 volumes.
Each volume occupies a
disk slice or partition, or another special file,
and can be up to 4096 TB in size.
For volumes over 2 TB in size
.Xr gpt 8
and
.Xr disklabel64 8
normally need to be used.
.Ss Data Integrity Checking
.Nm
has a strong focus on data integrity;
CRC checks are made for all major structures and data.
Additionally,
.Nm
snapshots implement features that make data integrity checking easier:
The atime and mtime fields are locked to the ctime for files accessed via a snapshot.
The
.Fa st_dev
field is based on the PFS
.Ar shared-uuid
and not on any real device.
This means that archiving the contents of a snapshot with e.g.\&
.Xr tar 1
and piping it to something like
.Xr md5 1
will yield a consistent result.
The consistency is also retained on mirroring targets.
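.Pp
For example, a reproducible checksum of a snapshot's contents can be
taken like this (a sketch; the snapshot path is a placeholder):
.Bd -literal -offset indent
tar cf - /snaps/snap1 | md5
.Ed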
.Ss Transaction IDs
The
.Nm
file system uses 64 bit, hexadecimal transaction IDs to refer to historical
file or directory data.
An example ID is
.Li 0x00000001061a8ba6 .
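.Pp
A transaction ID representing the file system's current state can be
obtained with the
.Nm hammer
.Ar synctid
directive (a sketch):
.Bd -literal -offset indent
hammer synctid /home
.Ed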
.Ss History & Snapshots
History metadata on the media is written with every sync operation, so that
by default the resolution of a file's history is 30-60 seconds until the next
prune operation.
Prior versions of files or directories are generally accessible by appending
.Li @@
and a transaction ID to the name.
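.Pp
For example, a prior version of a file can be read directly by combining
its name with a transaction ID obtained from a snapshot or from
.Nm hammer
.Ar synctid
(a sketch; the path and ID are placeholders):
.Bd -literal -offset indent
cat /home/user/file@@0x00000001061a8ba6
.Ed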
.Pp
The common way of accessing history, however, is by taking snapshots.
Snapshots are softlinks to prior versions of directories and their files.
Their data will be retained across prune operations for as long as the
softlink exists.
Removing the softlink enables the file system to reclaim the space
again upon the next prune & reblock operations.
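.Pp
For instance, history held by an unwanted snapshot could be released like
this (a sketch, using the
.Pa /snaps
snapshot directory from the
.Sx EXAMPLES
section):
.Bd -literal -offset indent
rm /snaps/snap1
hammer prune /snaps
.Ed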
.Ss Pruning & Reblocking
Pruning is the act of deleting file system history.
Only history used by the given snapshots and history from after the latest
snapshot will be retained.
All other history is deleted.
Reblocking will reorder all elements and thus defragment the file system and
free space for reuse.
After pruning, a file system must be reblocked to recover all available space.
Reblocking is needed even when using the
.Nm hammer
.Ar prune-everything
directive.
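.Pp
A sketch of removing all history and then recovering the space (both
directives are described in
.Xr hammer 8 ):
.Bd -literal -offset indent
hammer prune-everything /home
hammer reblock /home
.Ed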
.Ss Mirroring & Pseudo File Systems
In order to allow inode numbers to be duplicated on the slaves,
.Nm Ap s
mirroring feature uses
.Dq Pseudo File Systems
(PFSs).
A
.Nm
file system supports up to 65535 PFSs.
Multiple slaves per master are supported, but multiple masters per slave
are not.
Slaves are always read-only.
Upgrading slaves to masters and downgrading masters to slaves are supported.
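.Pp
A sketch of swapping the roles of a master and a slave PFS (directives as
documented in
.Xr hammer 8 ;
the paths are those used in the
.Sx EXAMPLES
section):
.Bd -literal -offset indent
hammer pfs-downgrade /home/pfs/master
hammer pfs-upgrade /home/pfs/slave
.Ed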
.Pp
It is recommended to use a
.Xr null 4
mount to access a PFS;
this way no tools are confused by the PFS root being a symlink
and inodes not being unique across a
.Nm
file system.
.Pp
The
.Nm hammer
utility's low-level mirroring directives include
.Ar mirror-read ,
.Ar mirror-read-stream ,
and
.Ar mirror-write ;
see
.Xr hammer 8
for details.
.Ss NFS Export
.Nm
file systems support NFS export.
NFS export of PFSs is done using
.Xr mount_null 8
mounts.
For example, to export the PFS
.Pa /hammer/pfs/data ,
null mount it to
.Pa /hammer/data ,
and export the latter path.
.Pp
Don't export a directory containing a PFS (e.g.\&
.Pa /hammer/pfs ) ;
export only the null mount itself
(a PFS subdirectory may be escaped if its parent is exported).
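.Pp
A minimal
.Xr exports 5
entry for the null mounted path might look like this (a sketch; the
network and options are placeholders):
.Bd -literal -offset indent
/hammer/data -ro -network 10.0.0.0 -mask 255.255.255.0
.Ed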
.Sh EXAMPLES
.Ss Preparing the File System
To create and mount a
.Nm
file system, use the
.Xr newfs_hammer 8
and
.Xr mount_hammer 8
commands.
Note that all
.Nm
file systems must have a unique name on a per-machine basis.
.Bd -literal -offset indent
newfs_hammer -L HOME /dev/ad0s1d
mount_hammer /dev/ad0s1d /home
.Ed
.Pp
Similarly, multi volume file systems can be created and mounted by
specifying additional arguments.
.Bd -literal -offset indent
newfs_hammer -L MULTIHOME /dev/ad0s1d /dev/ad1s1d
mount_hammer /dev/ad0s1d /dev/ad1s1d /home
.Ed
.Pp
Once created and mounted,
.Nm
file systems need periodic clean up by taking snapshots, pruning, and
reblocking, in order to retain access to history and to keep the file
system from filling up.
For this it is recommended to use the
.Xr periodic 8
command, which by default runs the
.Nm hammer Ar cleanup
directive nightly for all mounted
.Nm
file systems.
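.Pp
The same directive can be run manually; without arguments it cleans up
all mounted
.Nm
file systems and PFSs (a sketch):
.Bd -literal -offset indent
hammer cleanup
.Ed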
.Pp
It is also possible to perform these operations individually via
.Xr crontab 5 .
For example, to reblock the
.Pa /home
file system every night at 2:15 for up to 5 minutes:
.Bd -literal -offset indent
15 2 * * * hammer -c /var/run/HOME.reblock -t 300 reblock /home \e
	>/dev/null 2>&1
.Ed
.Pp
The
.Nm hammer
utility's
.Ar snapshot
command provides several ways of taking snapshots.
They all assume a directory where snapshots are kept.
.Bd -literal -offset indent
mkdir /snaps
hammer snapshot /home /snaps/snap1
(...after some changes in /home...)
hammer snapshot /home /snaps/snap2
.Ed
.Pp
The snapshot softlinks
point to the state of the
.Pa /home
directory at the time each snapshot was taken, and could now be used to copy
the data somewhere else for backup purposes.
.Pp
By default,
.Xr periodic 8
is set up to create nightly snapshots of all
.Nm
file systems and PFSs
and to keep them for 60 days.
.Pp
A snapshot directory is also the argument to the
.Nm hammer
.Ar prune
command, which frees historical data from the file system that is not
pointed to by any snapshot link and is not from after the latest snapshot.
.Bd -literal -offset indent
hammer prune /snaps
.Ed
.Pp
Mirroring can be set up using
.Nm
pseudo file systems.
To associate the slave with the master, its shared UUID should be set to
the master's shared UUID as output by the
.Nm hammer Ar pfs-master
command.
.Bd -literal -offset indent
hammer pfs-master /home/pfs/master
hammer pfs-slave /home/pfs/slave shared-uuid=<master's shared uuid>
.Ed
.Pp
The
.Pa /home/pfs/slave
link is unusable for as long as no mirroring operation has taken place.
.Pp
To mirror the master's data, either pipe a
.Ar mirror-read
command into a
.Ar mirror-write
command,
or, as a short-cut, use the
.Ar mirror-copy
command (which works across a
.Xr ssh 1
connection).
The initial mirroring operation has to be done to the PFS path (as
.Xr mount_null 8
can't access it yet).
.Bd -literal -offset indent
hammer mirror-copy /home/pfs/master /home/pfs/slave
.Ed
.Pp
After this initial step a null
mount can be set up for
.Pa /home/pfs/slave .
Further operations can use
the null mounts:
.Bd -literal -offset indent
mount_null /home/pfs/master /home/master
mount_null /home/pfs/slave /home/slave

hammer mirror-copy /home/master /home/slave
.Ed
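.Pp
Alternatively, the low-level directives can be piped into each other
directly (a sketch of the equivalent operation):
.Bd -literal -offset indent
hammer mirror-read /home/master | hammer mirror-write /home/slave
.Ed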
.Pp
To NFS export from the
.Nm
file system
.Pa /hammer
the directory
.Pa /hammer/non-pfs
without PFSs, and the PFS
.Pa /hammer/pfs/data ,
the latter is null mounted to
.Pa /hammer/data
and both are exported.
.Pp
Add to
.Pa /etc/fstab
(see
.Xr fstab 5 ):
.Bd -literal -offset indent
/hammer/pfs/data /hammer/data null rw
.Ed
.Pp
Add to
.Pa /etc/exports
(see
.Xr exports 5 ):
.Bd -literal -offset indent
/hammer/non-pfs
/hammer/data
.Ed
.Sh SEE ALSO
.Xr md5 1 ,
.Xr ssh 1 ,
.Xr tar 1 ,
.Xr crontab 5 ,
.Xr exports 5 ,
.Xr fstab 5 ,
.Xr loader.conf 5 ,
.Xr disklabel64 8 ,
.Xr gpt 8 ,
.Xr hammer 8 ,
.Xr mount_hammer 8 ,
.Xr mount_null 8 ,
.Xr newfs_hammer 8 ,
.Xr periodic 8
.Rs
.%A Matthew Dillon
.%T "The HAMMER Filesystem"
.%O http://www.dragonflybsd.org/hammer/hammer.pdf
.Re
.Rs
.%A Matthew Dillon
.%T "Slideshow from NYCBSDCon 2008"
.%O http://www.dragonflybsd.org/hammer/nycbsdcon/
.Re
.Sh FILESYSTEM PERFORMANCE
The
.Nm
file system has a front-end which processes VNOPS and issues necessary
block reads from disk, and a back-end which handles meta-data updates
on-media and performs all meta-data write operations.
Bulk file write operations are handled by the front-end.
.Pp
Because
.Nm
defers meta-data updates, virtually no meta-data read operations will be
issued by the frontend while writing large amounts of data to the filesystem
or even when creating new files or directories.
And even though the kernel prioritizes reads over writes, the fact that
writes are cached by the drive itself tends to lead to excessive priority
being given to writes.
.Pp
There are four bioq sysctls which can be adjusted to give reads a higher
priority:
.Bd -literal -offset indent
kern.bioq_reorder_minor_bytes: 262144
kern.bioq_reorder_burst_bytes: 3000000
kern.bioq_reorder_minor_interval: 5
kern.bioq_reorder_burst_interval: 60
.Ed
.Pp
If a higher read priority is desired it is recommended that the
.Fa kern.bioq_reorder_minor_interval
be increased to 15, 30, or even 60, and the
.Fa kern.bioq_reorder_burst_bytes
be decreased to 262144 or 524288.
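.Pp
For example, one such tuning could be applied with
.Xr sysctl 8
(the values are illustrative, within the ranges suggested above):
.Bd -literal -offset indent
sysctl kern.bioq_reorder_minor_interval=30
sysctl kern.bioq_reorder_burst_bytes=262144
.Ed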
.Sh HISTORY
The
.Nm
file system first appeared in
.Dx 2.0 .
.Sh AUTHORS
.An -nosplit
The
.Nm
file system was designed and implemented by
.An Matthew Dillon Aq dillon@backplane.com .
This manual page was written by
.An Sascha Wildner .