1 <?xml version="1.0" encoding="iso-8859-1"?>
3 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
4 "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
8 <refentrytitle>ctdb</refentrytitle>
9 <manvolnum>7</manvolnum>
10 <refmiscinfo class="source">ctdb</refmiscinfo>
11 <refmiscinfo class="manual">CTDB - clustered TDB database</refmiscinfo>
16 <refname>ctdb</refname>
17 <refpurpose>Clustered TDB</refpurpose>
21 <title>DESCRIPTION</title>
24 CTDB is a clustered database component in clustered Samba that
25 provides a high-availability load-sharing CIFS server cluster.
29 The main functions of CTDB are:
35 Provide a clustered version of the TDB database with automatic
36 rebuild/recovery of the databases upon node failures.
42 Monitor nodes in the cluster and services running on each node.
48 Manage a pool of public IP addresses that are used to provide
services to clients. Alternatively, CTDB can be used with LVS.
Combined with a cluster filesystem, CTDB provides a full
high-availability (HA) environment for services such as clustered
Samba and NFS.
63 <title>ANATOMY OF A CTDB CLUSTER</title>
66 A CTDB cluster is a collection of nodes with 2 or more network
67 interfaces. All nodes provide network (usually file/NAS) services
68 to clients. Data served by file services is stored on shared
storage (usually a cluster filesystem) that is accessible by all
nodes.
73 CTDB provides an "all active" cluster, where services are load
74 balanced across all nodes.
79 <title>Recovery Lock</title>
82 CTDB uses a <emphasis>recovery lock</emphasis> to avoid a
83 <emphasis>split brain</emphasis>, where a cluster becomes
84 partitioned and each partition attempts to operate
85 independently. Issues that can result from a split brain
86 include file data corruption, because file locking metadata may
87 not be tracked correctly.
91 CTDB uses a <emphasis>cluster leader and follower</emphasis>
92 model of cluster management. All nodes in a cluster elect one
93 node to be the leader. The leader node coordinates privileged
94 operations such as database recovery and IP address failover.
95 CTDB refers to the leader node as the <emphasis>recovery
96 master</emphasis>. This node takes and holds the recovery lock
97 to assert its privileged role in the cluster.
101 By default, the recovery lock is implemented using a file
102 (specified by <parameter>recovery lock</parameter> in the
103 <literal>[cluster]</literal> section of
104 <citerefentry><refentrytitle>ctdb.conf</refentrytitle>
105 <manvolnum>5</manvolnum></citerefentry>) residing in shared
storage, usually on a cluster filesystem. To support a
recovery lock the cluster filesystem must support lock
coherence. See
109 <citerefentry><refentrytitle>ping_pong</refentrytitle>
110 <manvolnum>1</manvolnum></citerefentry> for more details.
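<para>
  For example, lock coherence can be checked by running
  <command>ping_pong</command> simultaneously on every node against
  the same file in the cluster filesystem. A common suggestion is to
  use one more lock than the number of nodes; the path below is
  purely illustrative:
</para>
<screen format="linespecific">
# For a 4-node cluster, run this on every node at the same time
ping_pong /clusterfs/.ctdb/ping_pong_test 5
</screen>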
114 The recovery lock can also be implemented using an arbitrary
115 cluster mutex helper (or call-out). This is indicated by using
116 an exclamation point ('!') as the first character of the
117 <parameter>recovery lock</parameter> parameter. For example, a
118 value of <command>!/usr/local/bin/myhelper recovery</command>
119 would run the given helper with the specified arguments. The
120 helper will continue to run as long as it holds its mutex. See
121 <filename>ctdb/doc/cluster_mutex_helper.txt</filename> in the
122 source tree, and related code, for clues about writing helpers.
126 When a file is specified for the <parameter>recovery
127 lock</parameter> parameter (i.e. no leading '!') the file lock
128 is implemented by a default helper
129 (<command>/usr/local/libexec/ctdb/ctdb_mutex_fcntl_helper</command>).
130 This helper has arguments as follows:
132 <!-- cmdsynopsis would not require long line but does not work :-( -->
134 <command>ctdb_mutex_fcntl_helper</command> <parameter>FILE</parameter> <optional><parameter>RECHECK-INTERVAL</parameter></optional>
137 <command>ctdb_mutex_fcntl_helper</command> will take a lock on
138 FILE and then check every RECHECK-INTERVAL seconds to ensure
139 that FILE still exists and that its inode number is unchanged
140 from when the lock was taken. The default value for
141 RECHECK-INTERVAL is 5.
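<para>
  For example, the <literal>[cluster]</literal> section of
  <citerefentry><refentrytitle>ctdb.conf</refentrytitle>
  <manvolnum>5</manvolnum></citerefentry> might contain one of the
  following settings. The file path and helper command are
  illustrative only:
</para>
<screen format="linespecific">
[cluster]
# File on the cluster filesystem, locked via the default helper
recovery lock = /clusterfs/.ctdb/reclock

# Or: an external cluster mutex helper
#recovery lock = !/usr/local/bin/myhelper recovery
</screen>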
145 If a cluster becomes partitioned (for example, due to a
146 communication failure) and a different recovery master is
147 elected by the nodes in each partition, then only one of these
148 recovery masters will be able to take the recovery lock. The
149 recovery master in the "losing" partition will not be able to
150 take the recovery lock and will be excluded from the cluster.
151 The nodes in the "losing" partition will elect each node in turn
152 as their recovery master so eventually all the nodes in that
153 partition will be excluded.
CTDB does sanity checks to ensure that the recovery lock is held
as expected.
162 CTDB can run without a recovery lock but this is not recommended
163 as there will be no protection from split brains.
168 <title>Private vs Public addresses</title>
Each node in a CTDB cluster has multiple IP addresses assigned
to it:
A single private IP address that is used for communication
between nodes.
183 One or more public IP addresses that are used to provide
184 NAS or other services.
191 <title>Private address</title>
194 Each node is configured with a unique, permanently assigned
195 private address. This address is configured by the operating
196 system. This address uniquely identifies a physical node in
197 the cluster and is the address that CTDB daemons will use to
198 communicate with the CTDB daemons on other nodes.
202 Private addresses are listed in the file
<filename>/usr/local/etc/ctdb/nodes</filename>. This file
204 contains the list of private addresses for all nodes in the
cluster, one per line. This file must be the same on all nodes
in the cluster.
210 Some users like to put this configuration file in their
cluster filesystem. A symbolic link should be used in this
case.
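<para>
  For example, assuming the cluster filesystem is mounted at
  <filename>/clusterfs</filename> (illustrative only):
</para>
<screen format="linespecific">
ln -s /clusterfs/ctdb/nodes /usr/local/etc/ctdb/nodes
</screen>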
216 Private addresses should not be used by clients to connect to
217 services provided by the cluster.
220 It is strongly recommended that the private addresses are
221 configured on a private network that is separate from client
222 networks. This is because the CTDB protocol is both
223 unauthenticated and unencrypted. If clients share the private
224 network then steps need to be taken to stop injection of
225 packets to relevant ports on the private addresses. It is
226 also likely that CTDB protocol traffic between nodes could
227 leak sensitive information if it can be intercepted.
Example <filename>/usr/local/etc/ctdb/nodes</filename> for a four node
cluster:
<screen format="linespecific">
192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.4
</screen>
243 <title>Public addresses</title>
246 Public addresses are used to provide services to clients.
247 Public addresses are not configured at the operating system
248 level and are not permanently associated with a particular
249 node. Instead, they are managed by CTDB and are assigned to
250 interfaces on physical nodes at runtime.
253 The CTDB cluster will assign/reassign these public addresses
254 across the available healthy nodes in the cluster. When one
255 node fails, its public addresses will be taken over by one or
256 more other nodes in the cluster. This ensures that services
257 provided by all public addresses are always available to
clients, as long as there are nodes available that are capable
of hosting these addresses.
263 The public address configuration is stored in
264 <filename>/usr/local/etc/ctdb/public_addresses</filename> on
265 each node. This file contains a list of the public addresses
266 that the node is capable of hosting, one per line. Each entry
267 also contains the netmask and the interface to which the
268 address should be assigned. If this file is missing then no
269 public addresses are configured.
273 Some users who have the same public addresses on all nodes
274 like to put this configuration file in their cluster
275 filesystem. A symbolic link should be used in this case.
279 Example <filename>/usr/local/etc/ctdb/public_addresses</filename> for a
node that can host 4 public addresses, on 2 different
interfaces:
<screen format="linespecific">
10.1.1.1/24 eth1
10.1.1.2/24 eth1
10.1.2.1/24 eth2
10.1.2.2/24 eth2
</screen>
291 In many cases the public addresses file will be the same on
292 all nodes. However, it is possible to use different public
293 address configurations on different nodes.
297 Example: 4 nodes partitioned into two subgroups:
299 <screen format="linespecific">
Node 0:/usr/local/etc/ctdb/public_addresses
    10.1.1.1/24 eth1
    10.1.1.2/24 eth1

Node 1:/usr/local/etc/ctdb/public_addresses
    10.1.1.1/24 eth1
    10.1.1.2/24 eth1

Node 2:/usr/local/etc/ctdb/public_addresses
    10.1.2.1/24 eth2
    10.1.2.2/24 eth2

Node 3:/usr/local/etc/ctdb/public_addresses
    10.1.2.1/24 eth2
    10.1.2.2/24 eth2
</screen>
317 In this example nodes 0 and 1 host two public addresses on the
318 10.1.1.x network while nodes 2 and 3 host two public addresses
319 for the 10.1.2.x network.
322 Public address 10.1.1.1 can be hosted by either of nodes 0 or
323 1 and will be available to clients as long as at least one of
these two nodes is available.
327 If both nodes 0 and 1 become unavailable then public address
10.1.1.1 also becomes unavailable. 10.1.1.1 cannot be failed
over to nodes 2 or 3 since these nodes do not have this public
address configured.
333 The <command>ctdb ip</command> command can be used to view the
334 current assignment of public addresses to physical nodes.
341 <title>Node status</title>
344 The current status of each node in the cluster can be viewed by the
345 <command>ctdb status</command> command.
349 A node can be in one of the following states:
<term>OK</term>
This node is healthy and fully functional. It hosts public
358 addresses to provide services.
364 <term>DISCONNECTED</term>
367 This node is not reachable by other nodes via the private
368 network. It is not currently participating in the cluster.
369 It <emphasis>does not</emphasis> host public addresses to
370 provide services. It might be shut down.
376 <term>DISABLED</term>
379 This node has been administratively disabled. This node is
380 partially functional and participates in the cluster.
381 However, it <emphasis>does not</emphasis> host public
382 addresses to provide services.
388 <term>UNHEALTHY</term>
391 A service provided by this node has failed a health check
392 and should be investigated. This node is partially
393 functional and participates in the cluster. However, it
394 <emphasis>does not</emphasis> host public addresses to
395 provide services. Unhealthy nodes should be investigated
396 and may require an administrative action to rectify.
<term>BANNED</term>
CTDB is not behaving as designed on this node. For example,
406 it may have failed too many recovery attempts. Such nodes
407 are banned from participating in the cluster for a
408 configurable time period before they attempt to rejoin the
409 cluster. A banned node <emphasis>does not</emphasis> host
410 public addresses to provide services. All banned nodes
should be investigated and may require an administrative
action to rectify.
<term>STOPPED</term>
This node has been administratively excluded from the
cluster. A stopped node does not participate in the cluster
423 and <emphasis>does not</emphasis> host public addresses to
424 provide services. This state can be used while performing
425 maintenance on a node.
431 <term>PARTIALLYONLINE</term>
434 A node that is partially online participates in a cluster
435 like a healthy (OK) node. Some interfaces to serve public
436 addresses are down, but at least one interface is up. See
437 also <command>ctdb ifaces</command>.
446 <title>CAPABILITIES</title>
449 Cluster nodes can have several different capabilities enabled.
450 These are listed below.
456 <term>RECMASTER</term>
459 Indicates that a node can become the CTDB cluster recovery
460 master. The current recovery master is decided via an
461 election held by all active nodes with this capability.
<term>LMASTER</term>
Indicates that a node can be the location master (LMASTER)
474 for database records. The LMASTER always knows which node
475 has the latest copy of a record in a volatile database.
486 The RECMASTER and LMASTER capabilities can be disabled when CTDB
487 is used to create a cluster spanning across WAN links. In this
488 case CTDB acts as a WAN accelerator.
<title>LVS</title>
LVS is a mode where CTDB presents one single IP address for the
498 entire cluster. This is an alternative to using public IP
addresses and round-robin DNS to load-balance clients across the
cluster.
This is similar to using a layer-4 load-balancing switch but with
some restrictions.
509 One extra LVS public address is assigned on the public network
510 to each LVS group. Each LVS group is a set of nodes in the
cluster that presents the same LVS public address to the
512 outside world. Normally there would only be one LVS group
513 spanning an entire cluster, but in situations where one CTDB
514 cluster spans multiple physical sites it might be useful to have
515 one LVS group for each site. There can be multiple LVS groups
in a cluster but each node can only be a member of one LVS group.
520 Client access to the cluster is load-balanced across the HEALTHY
nodes in an LVS group. If no HEALTHY nodes exist then all
nodes in the group are used, regardless of health status. CTDB
will, however, never load-balance LVS traffic to nodes that are
524 BANNED, STOPPED, DISABLED or DISCONNECTED. The <command>ctdb
lvs</command> command is used to show the nodes across which LVS
traffic is currently load-balanced.
530 In each LVS group, one of the nodes is selected by CTDB to be
531 the LVS master. This node receives all traffic from clients
532 coming in to the LVS public address and multiplexes it across
533 the internal network to one of the nodes that LVS is using.
534 When responding to the client, that node will send the data back
535 directly to the client, bypassing the LVS master node. The
536 command <command>ctdb lvs master</command> will show which node
537 is the current LVS master.
541 The path used for a client I/O is:
545 Client sends request packet to LVSMASTER.
LVSMASTER passes the request on to one node across the
internal network.
556 Selected node processes the request.
561 Node responds back to client.
568 This means that all incoming traffic to the cluster will pass
through one physical node, which limits scalability. You cannot
send more data to the LVS address than one physical node can
571 multiplex. This means that you should not use LVS if your I/O
572 pattern is write-intensive since you will be limited in the
573 available network bandwidth that node can handle. LVS does work
574 very well for read-intensive workloads where only smallish READ
575 requests are going through the LVSMASTER bottleneck and the
576 majority of the traffic volume (the data in the read replies)
577 goes straight from the processing node back to the clients. For
read-intensive I/O patterns you can achieve very high throughput
rates in this mode.
583 Note: you can use LVS and public addresses at the same time.
587 If you use LVS, you must have a permanent address configured for
588 the public interface on each node. This address must be routable
589 and the cluster nodes must be configured so that all traffic
back to client hosts is routed through this interface. This is
also required in order to allow samba/winbind on the node to
talk to the domain controller. This LVS IP address cannot be
593 used to initiate outgoing traffic.
596 Make sure that the domain controller and the clients are
597 reachable from a node <emphasis>before</emphasis> you enable
598 LVS. Also ensure that outgoing traffic to these hosts is routed
599 out through the configured public interface.
603 <title>Configuration</title>
606 To activate LVS on a CTDB node you must specify the
607 <varname>CTDB_LVS_PUBLIC_IFACE</varname>,
608 <varname>CTDB_LVS_PUBLIC_IP</varname> and
609 <varname>CTDB_LVS_NODES</varname> configuration variables.
610 <varname>CTDB_LVS_NODES</varname> specifies a file containing
the private addresses of all nodes in the current node's LVS
group.
617 <screen format="linespecific">
618 CTDB_LVS_PUBLIC_IFACE=eth1
619 CTDB_LVS_PUBLIC_IP=10.1.1.237
620 CTDB_LVS_NODES=/usr/local/etc/ctdb/lvs_nodes
625 Example <filename>/usr/local/etc/ctdb/lvs_nodes</filename>:
<screen format="linespecific">
192.168.1.2
192.168.1.3
192.168.1.4
</screen>
634 Normally any node in an LVS group can act as the LVS master.
Nodes that are highly loaded due to other demands may be
flagged with the "slave-only" option in the
637 <varname>CTDB_LVS_NODES</varname> file to limit the LVS
638 functionality of those nodes.
LVS nodes file that excludes 192.168.1.4 from being
the LVS master node:
<screen format="linespecific">
192.168.1.2
192.168.1.3
192.168.1.4 slave-only
</screen>
655 <title>TRACKING AND RESETTING TCP CONNECTIONS</title>
658 CTDB tracks TCP connections from clients to public IP addresses,
659 on known ports. When an IP address moves from one node to
660 another, all existing TCP connections to that IP address are
661 reset. The node taking over this IP address will also send
gratuitous ARPs (for IPv4) or neighbour advertisements (for
IPv6). This allows clients to reconnect quickly, rather than
664 waiting for TCP timeouts, which can be very long.
668 It is important that established TCP connections do not survive
669 a release and take of a public IP address on the same node.
670 Such connections can get out of sync with sequence and ACK
671 numbers, potentially causing a disruptive ACK storm.
677 <title>NAT GATEWAY</title>
680 NAT gateway (NATGW) is an optional feature that is used to
681 configure fallback routing for nodes. This allows cluster nodes
682 to connect to external services (e.g. DNS, AD, NIS and LDAP)
when they do not host any public addresses (e.g. when they are
unhealthy).
687 This also applies to node startup because CTDB marks nodes as
688 UNHEALTHY until they have passed a "monitor" event. In this
689 context, NAT gateway helps to avoid a "chicken and egg"
situation where a node needs to access an external service to
become healthy.
694 Another way of solving this type of problem is to assign an
695 extra static IP address to a public interface on every node.
696 This is simpler but it uses an extra IP address per node, while
697 NAT gateway generally uses only one extra IP address.
701 <title>Operation</title>
704 One extra NATGW public address is assigned on the public
705 network to each NATGW group. Each NATGW group is a set of
706 nodes in the cluster that shares the same NATGW address to
707 talk to the outside world. Normally there would only be one
708 NATGW group spanning an entire cluster, but in situations
709 where one CTDB cluster spans multiple physical sites it might
710 be useful to have one NATGW group for each site.
713 There can be multiple NATGW groups in a cluster but each node
can only be a member of one NATGW group.
717 In each NATGW group, one of the nodes is selected by CTDB to
be the NATGW master and the other nodes are considered to be
719 NATGW slaves. NATGW slaves establish a fallback default route
720 to the NATGW master via the private network. When a NATGW
721 slave hosts no public IP addresses then it will use this route
722 for outbound connections. The NATGW master hosts the NATGW
723 public IP address and routes outgoing connections from
724 slave nodes via this IP address. It also establishes a
725 fallback default route.
730 <title>Configuration</title>
NATGW is usually configured similarly to the following example:
735 <screen format="linespecific">
736 CTDB_NATGW_NODES=/usr/local/etc/ctdb/natgw_nodes
737 CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
738 CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
739 CTDB_NATGW_PUBLIC_IFACE=eth0
740 CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
744 Normally any node in a NATGW group can act as the NATGW
745 master. Some configurations may have special nodes that lack
746 connectivity to a public network. In such cases, those nodes
747 can be flagged with the "slave-only" option in the
748 <varname>CTDB_NATGW_NODES</varname> file to limit the NATGW
749 functionality of those nodes.
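<para>
  For illustration, a <varname>CTDB_NATGW_NODES</varname> file to
  match the example configuration above might list the private
  addresses of the group's nodes, with one node excluded from
  becoming the NATGW master. The addresses are illustrative only:
</para>
<screen format="linespecific">
192.168.1.1
192.168.1.2
192.168.1.3 slave-only
</screen>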
753 See the <citetitle>NAT GATEWAY</citetitle> section in
754 <citerefentry><refentrytitle>ctdb-script.options</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for more details of
NATGW configuration.
762 <title>Implementation details</title>
765 When the NATGW functionality is used, one of the nodes is
766 selected to act as a NAT gateway for all the other nodes in
767 the group when they need to communicate with the external
768 services. The NATGW master is selected to be a node that is
769 most likely to have usable networks.
773 The NATGW master hosts the NATGW public IP address
774 <varname>CTDB_NATGW_PUBLIC_IP</varname> on the configured public
775 interfaces <varname>CTDB_NATGW_PUBLIC_IFACE</varname> and acts as
776 a router, masquerading outgoing connections from slave nodes
777 via this IP address. If
778 <varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is set then it
also establishes a fallback default route to the configured
gateway with a metric of 10. A metric 10 route is used
so it can co-exist with other default routes that may be
available.
786 A NATGW slave establishes its fallback default route to the
787 NATGW master via the private network
<varname>CTDB_NATGW_PRIVATE_NETWORK</varname> with a metric of 10.
789 This route is used for outbound connections when no other
790 default route is available because the node hosts no public
addresses. A metric 10 route is used so that it can co-exist
792 with other default routes that may be available when the node
793 is hosting public addresses.
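<para>
  Conceptually, using the example configuration above, the resulting
  routes are roughly equivalent to the following commands. This is
  only an illustrative sketch, not the literal commands run by the
  NATGW eventscript, and the master's private address (192.168.1.1)
  is assumed:
</para>
<screen format="linespecific">
# On the NATGW master
ip addr add 10.0.0.227/24 dev eth0
ip route add default via 10.0.0.1 metric 10
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE

# On a NATGW slave that hosts no public addresses
ip route add default via 192.168.1.1 metric 10
</screen>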
797 <varname>CTDB_NATGW_STATIC_ROUTES</varname> can be used to
have NATGW create more specific routes instead of just default
routes.
803 This is implemented in the <filename>11.natgw</filename>
804 eventscript. Please see the eventscript file and the
805 <citetitle>NAT GATEWAY</citetitle> section in
806 <citerefentry><refentrytitle>ctdb-script.options</refentrytitle>
807 <manvolnum>5</manvolnum></citerefentry> for more details.
814 <title>POLICY ROUTING</title>
817 Policy routing is an optional CTDB feature to support complex
818 network topologies. Public addresses may be spread across
819 several different networks (or VLANs) and it may not be possible
820 to route packets from these public addresses via the system's
821 default route. Therefore, CTDB has support for policy routing
822 via the <filename>13.per_ip_routing</filename> eventscript.
823 This allows routing to be specified for packets sourced from
824 each public address. The routes are added and removed as CTDB
825 moves public addresses between nodes.
829 <title>Configuration variables</title>
832 There are 4 configuration variables related to policy routing:
833 <varname>CTDB_PER_IP_ROUTING_CONF</varname>,
834 <varname>CTDB_PER_IP_ROUTING_RULE_PREF</varname>,
835 <varname>CTDB_PER_IP_ROUTING_TABLE_ID_LOW</varname>,
836 <varname>CTDB_PER_IP_ROUTING_TABLE_ID_HIGH</varname>. See the
837 <citetitle>POLICY ROUTING</citetitle> section in
838 <citerefentry><refentrytitle>ctdb-script.options</refentrytitle>
839 <manvolnum>5</manvolnum></citerefentry> for more details.
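<para>
  For example, the sample configuration shown later in this section
  could be enabled with settings like the following. The routing
  table ID range is illustrative only:
</para>
<screen format="linespecific">
CTDB_PER_IP_ROUTING_CONF=/usr/local/etc/ctdb/policy_routing
CTDB_PER_IP_ROUTING_RULE_PREF=100
CTDB_PER_IP_ROUTING_TABLE_ID_LOW=1000
CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=9000
</screen>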
844 <title>Configuration</title>
847 The format of each line of
848 <varname>CTDB_PER_IP_ROUTING_CONF</varname> is:
&lt;public_address&gt; &lt;network&gt; [ &lt;gateway&gt; ]
856 Leading whitespace is ignored and arbitrary whitespace may be
857 used as a separator. Lines that have a "public address" item
858 that doesn't match an actual public address are ignored. This
859 means that comment lines can be added using a leading
character such as '#', since this will never match an IP
address.
865 A line without a gateway indicates a link local route.
869 For example, consider the configuration line:
873 192.168.1.99 192.168.1.1/24
877 If the corresponding public_addresses line is:
881 192.168.1.99/24 eth2,eth3
885 <varname>CTDB_PER_IP_ROUTING_RULE_PREF</varname> is 100, and
886 CTDB adds the address to eth2 then the following routing
887 information is added:
891 ip rule add from 192.168.1.99 pref 100 table ctdb.192.168.1.99
892 ip route add 192.168.1.0/24 dev eth2 table ctdb.192.168.1.99
This causes traffic from 192.168.1.99 to 192.168.1.0/24 to go via
eth2.
The <command>ip rule</command> command will show something
like the following (depending on other public addresses and other
routes on the system):
907 0: from all lookup local
908 100: from 192.168.1.99 lookup ctdb.192.168.1.99
909 32766: from all lookup main
910 32767: from all lookup default
914 <command>ip route show table ctdb.192.168.1.99</command> will show:
918 192.168.1.0/24 dev eth2 scope link
922 The usual use for a line containing a gateway is to add a
923 default route corresponding to a particular source address.
924 Consider this line of configuration:
928 192.168.1.99 0.0.0.0/0 192.168.1.1
932 In the situation described above this will cause an extra
933 routing command to be executed:
937 ip route add 0.0.0.0/0 via 192.168.1.1 dev eth2 table ctdb.192.168.1.99
941 With both configuration lines, <command>ip route show table
942 ctdb.192.168.1.99</command> will show:
946 192.168.1.0/24 dev eth2 scope link
947 default via 192.168.1.1 dev eth2
952 <title>Sample configuration</title>
955 Here is a more complete example configuration.
959 /usr/local/etc/ctdb/public_addresses:
192.168.1.98/24 eth2,eth3
192.168.1.99/24 eth2,eth3
964 /usr/local/etc/ctdb/policy_routing:
966 192.168.1.98 192.168.1.0/24
967 192.168.1.98 192.168.200.0/24 192.168.1.254
968 192.168.1.98 0.0.0.0/0 192.168.1.1
969 192.168.1.99 192.168.1.0/24
970 192.168.1.99 192.168.200.0/24 192.168.1.254
971 192.168.1.99 0.0.0.0/0 192.168.1.1
This routes local packets as expected; the default route is as
976 previously discussed, but packets to 192.168.200.0/24 are
977 routed via the alternate gateway 192.168.1.254.
984 <title>NOTIFICATIONS</title>
987 When certain state changes occur in CTDB, it can be configured
988 to perform arbitrary actions via notifications. For example,
sending SNMP traps or emails when a node becomes unhealthy or
similar.
The notification mechanism runs all executable files ending in
".script" in
<filename>/usr/local/etc/ctdb/events/notification/</filename>,
ignoring any failures and continuing to run all files.
CTDB currently generates notifications after CTDB changes to
these states:
1006 <member>init</member>
1007 <member>setup</member>
1008 <member>startup</member>
1009 <member>healthy</member>
1010 <member>unhealthy</member>
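<para>
  As an illustration, a minimal notification handler could look like
  the following. The script name is hypothetical, and it assumes the
  notification event is passed as the first argument:
</para>
<screen format="linespecific">
#!/bin/sh
# Hypothetical /usr/local/etc/ctdb/events/notification/99.mail.script
event="$1"
case "$event" in
unhealthy|healthy)
    echo "CTDB on $(hostname) changed state to $event" | \
        mail -s "CTDB notification: $event" admin@example.com
    ;;
esac
exit 0
</screen>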
1016 <title>LOG LEVELS</title>
1019 Valid log levels, in increasing order of verbosity, are:
1023 <member>ERROR</member>
1024 <member>WARNING</member>
1025 <member>NOTICE</member>
1026 <member>INFO</member>
1027 <member>DEBUG</member>
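<para>
  For example, the log level can be set in the
  <literal>[logging]</literal> section of
  <citerefentry><refentrytitle>ctdb.conf</refentrytitle>
  <manvolnum>5</manvolnum></citerefentry>:
</para>
<screen format="linespecific">
[logging]
log level = NOTICE
</screen>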
1033 <title>REMOTE CLUSTER NODES</title>
It is possible to have a CTDB cluster that spans a WAN link.
For example, you might have a CTDB cluster in your datacentre but also
want one additional CTDB node located at a remote branch site.
This is similar to how a WAN accelerator works, but with the difference
that while a WAN accelerator often acts as a proxy or a man-in-the-middle,
in the CTDB remote cluster node configuration the Samba instance at the remote site
IS the genuine server, not a proxy and not a man-in-the-middle, and thus provides 100%
correct CIFS semantics to clients.
1046 See the cluster as one single multihomed samba server where one of
1047 the NICs (the remote node) is very far away.
1051 NOTE: This does require that the cluster filesystem you use can cope
1052 with WAN-link latencies. Not all cluster filesystems can handle
1053 WAN-link latencies! Whether this will provide very good WAN-accelerator
performance or perform very poorly depends entirely
on how well your cluster filesystem handles high latency
1056 for data and metadata operations.
To configure a node as a remote cluster node you need to
1061 set the following two parameters in
1062 /usr/local/etc/ctdb/ctdb.conf for the remote node:
1063 <screen format="linespecific">
[legacy]
lmaster capability = false
1066 recmaster capability = false
Verify with the command <command>ctdb getcapabilities</command> that the node
no longer has the recmaster or lmaster capabilities.
1079 <title>SEE ALSO</title>
1082 <citerefentry><refentrytitle>ctdb</refentrytitle>
1083 <manvolnum>1</manvolnum></citerefentry>,
1085 <citerefentry><refentrytitle>ctdbd</refentrytitle>
1086 <manvolnum>1</manvolnum></citerefentry>,
1088 <citerefentry><refentrytitle>ctdbd_wrapper</refentrytitle>
1089 <manvolnum>1</manvolnum></citerefentry>,
1091 <citerefentry><refentrytitle>ctdb_diagnostics</refentrytitle>
1092 <manvolnum>1</manvolnum></citerefentry>,
1094 <citerefentry><refentrytitle>ltdbtool</refentrytitle>
1095 <manvolnum>1</manvolnum></citerefentry>,
1097 <citerefentry><refentrytitle>onnode</refentrytitle>
1098 <manvolnum>1</manvolnum></citerefentry>,
1100 <citerefentry><refentrytitle>ping_pong</refentrytitle>
1101 <manvolnum>1</manvolnum></citerefentry>,
1103 <citerefentry><refentrytitle>ctdb.conf</refentrytitle>
1104 <manvolnum>5</manvolnum></citerefentry>,
1106 <citerefentry><refentrytitle>ctdb-script.options</refentrytitle>
1107 <manvolnum>5</manvolnum></citerefentry>,
1109 <citerefentry><refentrytitle>ctdb.sysconfig</refentrytitle>
1110 <manvolnum>5</manvolnum></citerefentry>,
1112 <citerefentry><refentrytitle>ctdb-statistics</refentrytitle>
1113 <manvolnum>7</manvolnum></citerefentry>,
1115 <citerefentry><refentrytitle>ctdb-tunables</refentrytitle>
1116 <manvolnum>7</manvolnum></citerefentry>,
1118 <ulink url="http://ctdb.samba.org/"/>
1125 This documentation was written by
1134 <holder>Andrew Tridgell</holder>
1135 <holder>Ronnie Sahlberg</holder>
1139 This program is free software; you can redistribute it and/or
1140 modify it under the terms of the GNU General Public License as
1141 published by the Free Software Foundation; either version 3 of
1142 the License, or (at your option) any later version.
1145 This program is distributed in the hope that it will be
1146 useful, but WITHOUT ANY WARRANTY; without even the implied
1147 warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
1148 PURPOSE. See the GNU General Public License for more details.
1151 You should have received a copy of the GNU General Public
1152 License along with this program; if not, see
1153 <ulink url="http://www.gnu.org/licenses"/>.