<!--#set var="TITLE" value="Configuring CTDB" -->
<!--#include virtual="header.html" -->

<H2 align="center">Configuring CTDB</H2>

<h2>Clustering Model</h2>
The setup instructions on this page are modelled on setting up a cluster of N
nodes that function in nearly all respects as a single multi-homed node.
The cluster will export N IP interfaces, each of which is equivalent
(same shares) and offers coherent CIFS file access across all
nodes.<p>

The clustering model utilizes IP takeover techniques to ensure that
the full set of public IP addresses assigned to services on the
cluster will remain available to clients even when some nodes
have failed and become unavailable.
<h2>CTDB Cluster Configuration</h2>

These are the primary configuration files for CTDB.<p>

When CTDB is installed, template versions of these files are
installed, which you need to edit to suit your system.
<h3>/etc/sysconfig/ctdb</h3>

This file contains the startup parameters for ctdb.<p>

When you installed ctdb, a template config file should have been
installed in /etc/sysconfig/ctdb.<p>

Edit this file, following the instructions in the template.<p>

The most important options are:
<ul>
<li>CTDB_NODES
<li>CTDB_RECOVERY_LOCK
<li>CTDB_PUBLIC_ADDRESSES
</ul>

Please verify these parameters carefully.
<h4>CTDB_RECOVERY_LOCK</h4>

This parameter specifies the lock file that the CTDB daemons use to arbitrate
which node is acting as recovery master.<br>

This file MUST be held on shared storage so that all CTDB daemons in the cluster access/lock the same file.<br><br>

You <strong>must</strong> specify this parameter.<br>
There is no default for this parameter.
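For example, assuming the cluster filesystem is mounted at /shared on every node (the mount point and file name here are only an illustration; use a path on your own shared storage):
<pre>
CTDB_RECOVERY_LOCK=/shared/ctdb_recovery_lock
</pre>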
<h3>CTDB_NODES</h3>

This file needs to be created and should contain a list of the private
IP addresses that the CTDB daemons will use in your cluster, one IP
address for each node in the cluster.<p>

These addresses should be on a private non-routable subnet which is only used for
internal cluster traffic. This file must be the same on all nodes in
the cluster.<p>

Make sure that these IP addresses are brought up automatically when the
cluster node boots and that each node can ping each other node.<p>

Example 4 node cluster:
<pre>
CTDB_NODES=/etc/ctdb/nodes
</pre>
Content of /etc/ctdb/nodes:
<pre>
10.1.1.1
10.1.1.2
10.1.1.3
10.1.1.4
</pre>

The default for this file is /etc/ctdb/nodes.
<h3>CTDB_PUBLIC_ADDRESSES</h3>

Each node in a CTDB cluster contains a list of public addresses which that
particular node can host.<p>

While the cluster is running, CTDB will assign each public address that exists in the entire cluster to one node, which will host that public address.<p>

These are the addresses that the SMBD daemons and other services will
bind to and which clients will use to connect to the cluster.<p>

<h4>Example 4 node cluster:</h4>
<pre>
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
</pre>
Content of /etc/ctdb/public_addresses:
<pre>
192.168.1.1/24 eth0
192.168.1.2/24 eth0
192.168.2.1/24 eth1
192.168.2.2/24 eth1
</pre>

These are the IP addresses that you should configure in DNS for the
name of the clustered samba server and are the addresses that CIFS
clients will connect to.<p>

Configure them as one DNS A record (one name) with multiple IP addresses
and let round-robin DNS distribute the clients across the nodes of the
cluster.<p>

The CTDB cluster utilizes IP takeover techniques to ensure that as long as at least one node in the cluster is available, all the public IP addresses will always be available to clients.<p>

This means that if one physical node fails, the public addresses that
node was serving will be taken over by a different node in the cluster. This
guarantees that all IP addresses exposed to clients will remain
reachable as long as at least one node capable of hosting a given public address (i.e. a node whose public_addresses file contains that address) remains available in the cluster.<p>

Do not assign these addresses to any of the interfaces on the
host. CTDB will add and remove these addresses automatically at
runtime.<p>

This parameter is used when CTDB operates in IP takeover mode.<p>

The usual location for this file is /etc/ctdb/public_addresses.<p>
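As an illustration only, such a round-robin entry in a BIND-style zone file might look like the following, using the public addresses from the example above (the host name smbcluster is a placeholder):
<pre>
smbcluster   IN A   192.168.1.1
smbcluster   IN A   192.168.1.2
smbcluster   IN A   192.168.2.1
smbcluster   IN A   192.168.2.2
</pre>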
<h4>Example 2:</h4>
By using different public_addresses files on different nodes it is possible to
partition the cluster into subsets of nodes.

<pre>
Node 0 : /etc/ctdb/public_addresses
10.1.1.1/24 eth0
10.1.2.1/24 eth1
</pre>

<pre>
Node 1 : /etc/ctdb/public_addresses
10.1.2.1/24 eth1
10.1.3.1/24 eth2
</pre>

<pre>
Node 2 : /etc/ctdb/public_addresses
10.1.3.2/24 eth2
</pre>

In this example we have three nodes but a total of 4 public addresses.<p>

10.1.2.1 can be hosted by either node 0 or node 1 and will be available to clients as long as at least one of these nodes is available. Only if both node 0 and node 1 fail will this public address become unavailable to clients.<p>

Each of the other public addresses can only be served by one single node and will therefore only be available if that node is also available.
<h2>Event scripts</h2>

CTDB comes with a number of application specific event scripts that
are used to do service specific tasks when the cluster has been
reconfigured. These scripts are stored in /etc/ctdb/events.d/<p>

You do not need to modify these scripts if you just want to use
clustered Samba or NFS, but they serve as examples in case you want to
add clustering support for other application servers we do not yet
provide event scripts for.<p>

Please see the service scripts that are installed by ctdb in
/etc/ctdb/events.d for examples of how to configure other services to
be aware of the HA features of CTDB.<p>

Also see /etc/ctdb/events.d/README for additional documentation on how to
create and manage event scripts.
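As an illustrative sketch only (not one of the scripts shipped with ctdb), a minimal event script dispatches on the event name that CTDB passes as the first argument. The service name myservice and the exact set of events handled below are assumptions for the example; check /etc/ctdb/events.d/README for the authoritative list of events and their arguments.

```shell
#!/bin/sh
# Hypothetical /etc/ctdb/events.d/99.myservice -- a sketch, not shipped code.
# CTDB calls each event script with the event name as the first argument.

handle_event() {
    event="$1"
    case "$event" in
        startup)
            # Start the service when CTDB starts up, e.g.:
            # service myservice start
            echo "myservice: startup"
            ;;
        monitor)
            # Check service health here; a non-zero exit status would
            # make CTDB mark this node unhealthy.
            echo "myservice: monitor"
            ;;
        takeip|releaseip)
            # For these events the interface, address and netmask
            # follow the event name.
            echo "myservice: $event iface=$2 ip=$3 mask=$4"
            ;;
        *)
            # Ignore events this script does not care about.
            ;;
    esac
    return 0
}

handle_event "$@"
```

Scripts run in lexical order, so the numeric prefix controls when this script runs relative to the ones installed by ctdb.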
<h2>TCP port to use for CTDB</h2>

CTDB defaults to using TCP port 4379 for its traffic.<p>

To configure a different port for CTDB traffic, add
a ctdb entry to the /etc/services file.<p>

Example: to change CTDB to use port 9999, add the following line to /etc/services
<pre>
ctdb 9999/tcp
</pre>

Note: all nodes in the cluster MUST use the same port or else CTDB
will not start correctly.
<h2>Name resolution</h2>

You need to set up some method for your Windows and NFS clients to find
the nodes of the cluster, and to automatically balance the load between
the nodes.<p>

We recommend that you use public IP addresses via
CTDB_PUBLIC_INTERFACE/CTDB_PUBLIC_ADDRESSES and that you set up a
round-robin DNS entry for your cluster, listing all the public IP
addresses that CTDB will be managing as a single DNS A record.<p>

You may also wish to set up a static WINS server entry listing all of
your cluster nodes' IP addresses.

<!--#include virtual="footer.html" -->