OCFS2 is a general purpose extent based shared disk cluster file
system with many similarities to ext3.  It supports 64 bit inode
numbers, and has automatically extending metadata groups which may
also make it attractive for non-clustered use.
You'll want to install the ocfs2-tools package in order to at least
get "mount.ocfs2" and "ocfs2_hb_ctl".
Project web page:    http://oss.oracle.com/projects/ocfs2
Tools web page:      http://oss.oracle.com/projects/ocfs2-tools
OCFS2 mailing lists: http://oss.oracle.com/projects/ocfs2/mailman/
All code copyright 2005 Oracle except when otherwise noted.

Lots of code taken from ext3 and other projects.
Authors in alphabetical order:
Joel Becker		<joel.becker@oracle.com>
Zach Brown		<zach.brown@oracle.com>
Mark Fasheh		<mfasheh@suse.com>
Kurt Hackel		<kurt.hackel@oracle.com>
Tao Ma			<tao.ma@oracle.com>
Sunil Mushran		<sunil.mushran@oracle.com>
Manish Singh		<manish.singh@oracle.com>
Tiger Yang		<tiger.yang@oracle.com>
Features which OCFS2 does not support yet:
	- Directory change notification (F_NOTIFY)
	- Distributed Caching (F_SETLEASE/F_GETLEASE/break_lease);
	  see the probe sketched below.
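
As a rough illustration (this sketch is not part of the original
documentation), an application can probe the lease caveat with
fcntl(2).  On an OCFS2 file the F_SETLEASE request is expected to
fail; the exact errno may vary by kernel version, and the default
"testfile" path below is only a placeholder.

	#define _GNU_SOURCE
	#include <errno.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		/* Path to a file on the OCFS2 mount under test. */
		const char *path = argc > 1 ? argv[1] : "testfile";
		int fd = open(path, O_RDWR | O_CREAT, 0644);

		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* Request a write lease; OCFS2 is expected to refuse it. */
		if (fcntl(fd, F_SETLEASE, F_WRLCK) < 0)
			printf("F_SETLEASE failed: %s\n", strerror(errno));
		else
			printf("F_SETLEASE granted\n");

		close(fd);
		return 0;
	}
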
OCFS2 supports the following mount options (a sketch of passing a few
of them at mount time follows the table):
(*) == default
barrier=1		This enables/disables barriers. barrier=0 disables it,
			barrier=1 enables it.
errors=remount-ro(*)	Remount the filesystem read-only on an error.
errors=panic		Panic and halt the machine if an error occurs.
intr (*)		Allow signals to interrupt cluster operations.
nointr			Do not allow signals to interrupt cluster
			operations.
noatime			Do not update access time.
relatime(*)		Update atime if the previous atime is older than
			mtime or ctime.
strictatime		Always update atime, but the minimum update interval
			is specified by atime_quantum.
atime_quantum=60(*)	OCFS2 will not update atime unless this number
			of seconds has passed since the last update.
			Set to zero to always update atime. This option needs
			to be used together with strictatime.
data=ordered (*)	All data are forced directly out to the main file
			system prior to its metadata being committed to the
			journal.
data=writeback		Data ordering is not preserved, data may be written
			into the main file system after its metadata has been
			committed to the journal.
preferred_slot=0(*)	During mount, try to use this filesystem slot first. If
			it is in use by another node, the first empty one found
			will be chosen. Invalid values will be ignored.
commit=nrsec (*)	OCFS2 can be told to sync all its data and metadata
			every 'nrsec' seconds. The default value is 5 seconds.
			This means that if you lose your power, you will lose
			as much as the latest 5 seconds of work (your
			filesystem will not be damaged though, thanks to the
			journaling). This default value (or any low value)
			will hurt performance, but it's good for data-safety.
			Setting it to 0 will have the same effect as leaving
			it at the default (5 seconds).
			Setting it to very large values will improve
			performance.
localalloc=8(*)		Allows custom localalloc size in MB. If the value is
			too large, the fs will silently revert it to the
			default.
localflocks		This disables cluster-aware flock.
inode64			Indicates that OCFS2 is allowed to create inodes at
			any location in the filesystem, including those which
			will result in inode numbers occupying more than 32
			bits of significance.
user_xattr (*)		Enables Extended User Attributes.
nouser_xattr		Disables Extended User Attributes.
acl			Enables POSIX Access Control Lists support.
noacl (*)		Disables POSIX Access Control Lists support.
resv_level=2 (*)	Set how aggressive allocation reservations will be.
			Valid values range from 0 (reservations off) to 8
			(maximum space for reservations).
dir_resv_level= (*)	By default, directory reservations will scale with file
			reservations - users should rarely need to change this
			value. If allocation reservations are turned off, this
			option will have no effect.
coherency=full (*)	Disallow concurrent O_DIRECT writes; a cluster inode
			lock will be taken to force other nodes to drop their
			caches, so full cluster coherency is guaranteed even
			for O_DIRECT writes.
coherency=buffered	Allow concurrent O_DIRECT writes without an EX lock
			among nodes, which gains higher performance at the risk
			of getting stale data on other nodes.
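
The following is a minimal sketch (not part of the original
documentation) of passing a few of the options above through the
data argument of mount(2).  The device, mount point and option string
are placeholders; in practice mount(8) is the usual way to mount
OCFS2, it accepts the same comma-separated string with -o, and on a
clustered volume the cluster stack and heartbeat must already be set
up (mount.ocfs2 normally takes care of that) or the call will fail.

	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		/* Placeholder device and mount point for this sketch. */
		const char *dev  = "/dev/sdb1";
		const char *dir  = "/mnt/ocfs2";
		/* Filesystem-specific options, comma separated, taken from
		 * the table above. */
		const char *opts = "data=writeback,noatime,commit=30,localalloc=16";

		if (mount(dev, dir, "ocfs2", 0, opts) < 0) {
			perror("mount");
			return 1;
		}
		printf("mounted %s on %s with '%s'\n", dev, dir, opts);
		return 0;
	}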