1 <?xml version="1.0" encoding="UTF-8"?>
3 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
4 "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
6 <refentry id="ctdb-script.options.5">
9 <refentrytitle>ctdb-script.options</refentrytitle>
10 <manvolnum>5</manvolnum>
11 <refmiscinfo class="source">ctdb</refmiscinfo>
12 <refmiscinfo class="manual">CTDB - clustered TDB database</refmiscinfo>
16 <refname>ctdb-script.options</refname>
17 <refpurpose>CTDB scripts configuration files</refpurpose>
21 <title>DESCRIPTION</title>
24 <title>Location</title>
26 Each CTDB script has 2 possible locations for its configuration options:
33 <filename>/usr/local/etc/ctdb/script.options</filename>
37 This is a catch-all global file for general purpose
38 scripts and for options that are used in multiple event
46 <parameter>SCRIPT</parameter>.options
51 <filename><parameter>SCRIPT</parameter></filename> are
placed in a file alongside the script, with a ".options"
53 suffix added. This style is usually recommended for event
58 Options in this script-specific file override those in
68 <title>Contents</title>
71 These files should include simple shell-style variable
72 assignments and shell-style comments.
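    <para>
      For example, an options file might contain settings like the
      following (the values are illustrative only):
    </para>
    <screen>
# Skip the share check because there are many shares
CTDB_SAMBA_SKIP_SHARE_CHECK=yes

# Report partially online nodes instead of failing monitoring
CTDB_PARTIALLY_ONLINE_INTERFACES=yes
    </screen>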
77 <title>Monitoring Thresholds</title>
80 Event scripts can monitor resources or services. When a
81 problem is detected, it may be better to warn about a problem
82 rather than to immediately fail monitoring and mark a node as
83 unhealthy. CTDB provides support for event scripts to do
84 threshold-based monitoring.
88 A threshold setting looks like
89 <parameter>WARNING_THRESHOLD<optional>:ERROR_THRESHOLD</optional></parameter>.
If the number of problems is ≥ WARNING_THRESHOLD then the
script will log a warning and continue. If the number of
problems is ≥ ERROR_THRESHOLD then the script will log an
error and exit with failure, causing monitoring to fail. Note
that ERROR_THRESHOLD is optional, and follows the optional
colon.
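    <para>
      For example, the following setting of the
      <varname>CTDB_VSFTPD_MONITOR_THRESHOLDS</varname> option
      described below (with illustrative values) logs a warning after
      2 consecutive problems and fails monitoring after 5:
    </para>
    <screen>
CTDB_VSFTPD_MONITOR_THRESHOLDS=2:5
    </screen>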
102 <title>NETWORK CONFIGURATION</title>
105 <title>10.interface</title>
This event script handles monitoring of interfaces used by
116 CTDB_PARTIALLY_ONLINE_INTERFACES=yes|no
120 Whether one or more offline interfaces should cause a
121 monitor event to fail if there are other interfaces that
122 are up. If this is "yes" and a node has some interfaces
123 that are down then <command>ctdb status</command> will
124 display the node as "PARTIALLYONLINE".
128 Note that CTDB_PARTIALLY_ONLINE_INTERFACES=yes is not
129 generally compatible with NAT gateway or LVS. NAT
130 gateway relies on the interface configured by
CTDB_NATGW_PUBLIC_IFACE to be up and LVS relies on
132 CTDB_LVS_PUBLIC_IFACE to be up. CTDB does not check if
133 these options are set in an incompatible way so care is
134 needed to understand the interaction.
147 <title>11.natgw</title>
150 Provides CTDB's NAT gateway functionality.
154 NAT gateway is used to configure fallback routing for nodes
155 when they do not host any public IP addresses. For example,
156 it allows unhealthy nodes to reliably communicate with
157 external infrastructure. One node in a NAT gateway group will
158 be designated as the NAT gateway leader node and other (follower)
159 nodes will be configured with fallback routes via the NAT
160 gateway leader node. For more information, see the
161 <citetitle>NAT GATEWAY</citetitle> section in
162 <citerefentry><refentrytitle>ctdb</refentrytitle>
163 <manvolnum>7</manvolnum></citerefentry>.
169 <term>CTDB_NATGW_DEFAULT_GATEWAY=<parameter>IPADDR</parameter></term>
172 IPADDR is an alternate network gateway to use on the NAT
173 gateway leader node. If set, a fallback default route
174 is added via this network gateway.
No default. Setting this variable is optional - if not
set then no route is created on the NAT gateway leader
185 <term>CTDB_NATGW_NODES=<parameter>FILENAME</parameter></term>
188 FILENAME contains the list of nodes that belong to the
189 same NAT gateway group.
194 <parameter>IPADDR</parameter> <optional>follower-only</optional>
198 IPADDR is the private IP address of each node in the NAT
202 If "follower-only" is specified then the corresponding node
203 can not be the NAT gateway leader node. In this case
204 <varname>CTDB_NATGW_PUBLIC_IFACE</varname> and
205 <varname>CTDB_NATGW_PUBLIC_IP</varname> are optional and
210 <filename>/usr/local/etc/ctdb/natgw_nodes</filename> when enabled.
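	  <para>
	    An illustrative <filename>natgw_nodes</filename> file for a
	    group of 3 nodes, where the third node can not become the
	    NAT gateway leader, might look like this (addresses are
	    examples only):
	  </para>
	  <screen>
192.168.1.1
192.168.1.2
192.168.1.3 follower-only
	  </screen>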
216 <term>CTDB_NATGW_PRIVATE_NETWORK=<parameter>IPADDR/MASK</parameter></term>
219 IPADDR/MASK is the private sub-network that is
220 internally routed via the NAT gateway leader node. This
221 is usually the private network that is used for node
231 <term>CTDB_NATGW_PUBLIC_IFACE=<parameter>IFACE</parameter></term>
234 IFACE is the network interface on which the
235 CTDB_NATGW_PUBLIC_IP will be configured.
244 <term>CTDB_NATGW_PUBLIC_IP=<parameter>IPADDR/MASK</parameter></term>
247 IPADDR/MASK indicates the IP address that is used for
248 outgoing traffic (originating from
249 CTDB_NATGW_PRIVATE_NETWORK) on the NAT gateway leader
250 node. This <emphasis>must not</emphasis> be a
251 configured public IP address.
260 <term>CTDB_NATGW_STATIC_ROUTES=<parameter>IPADDR/MASK[@GATEWAY]</parameter> ...</term>
263 Each IPADDR/MASK identifies a network or host to which
264 NATGW should create a fallback route, instead of
265 creating a single default route. This can be used when
266 there is already a default route, via an interface that
267 can not reach required infrastructure, that overrides
268 the NAT gateway default route.
271 If GATEWAY is specified then the corresponding route on
the NATGW leader node will be via GATEWAY. Such routes
are created even if
<varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is not
275 specified. If GATEWAY is not specified for some
276 networks then routes are only created on the NATGW
277 leader node for those networks if
278 <varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is
282 This should be used with care to avoid causing traffic
283 to unnecessarily double-hop through the NAT gateway
284 leader, even when a node is hosting public IP addresses.
285 Each specified network or host should probably have a
286 corresponding automatically created link route or static
298 <title>Example</title>
300 CTDB_NATGW_NODES=/usr/local/etc/ctdb/natgw_nodes
301 CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
302 CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
303 CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
304 CTDB_NATGW_PUBLIC_IFACE=eth0
308 A variation that ensures that infrastructure (ADS, DNS, ...)
309 directly attached to the public network (10.0.0.0/24) is
310 always reachable would look like this:
313 CTDB_NATGW_NODES=/usr/local/etc/ctdb/natgw_nodes
314 CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
315 CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
316 CTDB_NATGW_PUBLIC_IFACE=eth0
317 CTDB_NATGW_STATIC_ROUTES=10.0.0.0/24
320 Note that <varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is
328 <title>13.per_ip_routing</title>
331 Provides CTDB's policy routing functionality.
335 A node running CTDB may be a component of a complex network
336 topology. In particular, public addresses may be spread
337 across several different networks (or VLANs) and it may not be
338 possible to route packets from these public addresses via the
339 system's default route. Therefore, CTDB has support for
340 policy routing via the <filename>13.per_ip_routing</filename>
341 eventscript. This allows routing to be specified for packets
342 sourced from each public address. The routes are added and
343 removed as CTDB moves public addresses between nodes.
347 For more information, see the <citetitle>POLICY
348 ROUTING</citetitle> section in
349 <citerefentry><refentrytitle>ctdb</refentrytitle>
350 <manvolnum>7</manvolnum></citerefentry>.
355 <term>CTDB_PER_IP_ROUTING_CONF=<parameter>FILENAME</parameter></term>
358 FILENAME contains elements for constructing the desired
359 routes for each source address.
363 The special FILENAME value
364 <constant>__auto_link_local__</constant> indicates that no
365 configuration file is provided and that CTDB should
366 generate reasonable link-local routes for each public IP
373 <parameter>IPADDR</parameter> <parameter>DEST-IPADDR/MASK</parameter> <optional><parameter>GATEWAY-IPADDR</parameter></optional>
379 <filename>/usr/local/etc/ctdb/policy_routing</filename>
387 CTDB_PER_IP_ROUTING_RULE_PREF=<parameter>NUM</parameter>
391 NUM sets the priority (or preference) for the routing
392 rules that are added by CTDB.
396 This should be (strictly) greater than 0 and (strictly)
397 less than 32766. A priority of 100 is recommended, unless
this conflicts with a priority already in use on the
system. See
400 <citerefentry><refentrytitle>ip</refentrytitle>
401 <manvolnum>8</manvolnum></citerefentry>, for more details.
408 CTDB_PER_IP_ROUTING_TABLE_ID_LOW=<parameter>LOW-NUM</parameter>,
409 CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=<parameter>HIGH-NUM</parameter>
413 CTDB determines a unique routing table number to use for
414 the routing related to each public address. LOW-NUM and
415 HIGH-NUM indicate the minimum and maximum routing table
416 numbers that are used.
420 <citerefentry><refentrytitle>ip</refentrytitle>
421 <manvolnum>8</manvolnum></citerefentry> uses some
422 reserved routing table numbers below 255. Therefore,
CTDB_PER_IP_ROUTING_TABLE_ID_LOW should be (strictly)
greater than 255.
428 CTDB uses the standard file
429 <filename>/etc/iproute2/rt_tables</filename> to maintain
430 a mapping between the routing table numbers and labels.
431 The label for a public address
432 <replaceable>ADDR</replaceable> will look like
433 ctdb.<replaceable>addr</replaceable>. This means that
434 the associated rules and routes are easy to read (and
439 No default, usually 1000 and 9000.
446 <title>Example</title>
448 CTDB_PER_IP_ROUTING_CONF=/usr/local/etc/ctdb/policy_routing
449 CTDB_PER_IP_ROUTING_RULE_PREF=100
450 CTDB_PER_IP_ROUTING_TABLE_ID_LOW=1000
451 CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=9000
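      <para>
	The <filename>policy_routing</filename> file referenced above
	might then contain entries like the following (addresses are
	illustrative only), giving the public address 192.168.1.99 a
	link-local route to its own network and a default route via a
	gateway:
      </para>
      <screen>
192.168.1.99 192.168.1.0/24
192.168.1.99 0.0.0.0/0 192.168.1.1
      </screen>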
458 <title>91.lvs</title>
461 Provides CTDB's LVS functionality.
465 For a general description see the <citetitle>LVS</citetitle>
466 section in <citerefentry><refentrytitle>ctdb</refentrytitle>
467 <manvolnum>7</manvolnum></citerefentry>.
474 CTDB_LVS_NODES=<parameter>FILENAME</parameter>
478 FILENAME contains the list of nodes that belong to the
484 <parameter>IPADDR</parameter> <optional>follower-only</optional>
488 IPADDR is the private IP address of each node in the LVS
492 If "follower-only" is specified then the corresponding node
493 can not be the LVS leader node. In this case
494 <varname>CTDB_LVS_PUBLIC_IFACE</varname> and
495 <varname>CTDB_LVS_PUBLIC_IP</varname> are optional and
500 <filename>/usr/local/etc/ctdb/lvs_nodes</filename> when enabled.
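	  <para>
	    An illustrative combination of the options in this
	    section, including those described below (addresses and
	    interface name are examples only), might be:
	  </para>
	  <screen>
CTDB_LVS_NODES=/usr/local/etc/ctdb/lvs_nodes
CTDB_LVS_PUBLIC_IFACE=eth0
CTDB_LVS_PUBLIC_IP=10.1.1.100
	  </screen>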
507 CTDB_LVS_PUBLIC_IFACE=<parameter>INTERFACE</parameter>
511 INTERFACE is the network interface that clients will use
to connect to <varname>CTDB_LVS_PUBLIC_IP</varname>.
513 This is optional for follower-only nodes.
521 CTDB_LVS_PUBLIC_IP=<parameter>IPADDR</parameter>
525 CTDB_LVS_PUBLIC_IP is the LVS public address. No
537 <title>SERVICE CONFIGURATION</title>
540 CTDB can be configured to manage and/or monitor various NAS (and
541 other) services via its eventscripts.
545 In the simplest case CTDB will manage a service. This means the
546 service will be started and stopped along with CTDB, CTDB will
547 monitor the service and CTDB will do any required
548 reconfiguration of the service when public IP addresses are
553 <title>20.multipathd</title>
556 Provides CTDB's Linux multipathd service management.
560 It can monitor multipath devices to ensure that active paths
567 CTDB_MONITOR_MPDEVICES=<parameter>MP-DEVICE-LIST</parameter>
MP-DEVICE-LIST is a list of multipath devices for CTDB to monitor.
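	  <para>
	    For example, to monitor two multipath devices (the device
	    names are hypothetical):
	  </para>
	  <screen>
CTDB_MONITOR_MPDEVICES="mpatha mpathb"
	  </screen>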
582 <title>31.clamd</title>
This event script provides CTDB's ClamAV anti-virus service
590 This eventscript is not enabled by default. Use <command>ctdb
591 enablescript</command> to enable it.
598 CTDB_CLAMD_SOCKET=<parameter>FILENAME</parameter>
FILENAME is the socket used to monitor ClamAV.
615 <title>40.vsftpd</title>
618 Provides CTDB's vsftpd service management.
624 CTDB_VSFTPD_MONITOR_THRESHOLDS=<parameter>THRESHOLDS</parameter>
628 THRESHOLDS indicates how many consecutive monitoring
629 attempts need to report that vsftpd is not listening on
630 TCP port 21 before a warning is logged and before
631 monitoring fails. See the <citetitle>Monitoring
Thresholds</citetitle> section for a description of how
633 monitoring thresholds work.
646 <title>48.netbios</title>
649 Provides CTDB's NetBIOS service management.
655 CTDB_SERVICE_NMB=<parameter>SERVICE</parameter>
659 Distribution specific SERVICE for managing nmbd.
Default is distribution-dependent.
672 <title>49.winbind</title>
675 Provides CTDB's Samba winbind service management.
682 CTDB_SERVICE_WINBIND=<parameter>SERVICE</parameter>
686 Distribution specific SERVICE for managing winbindd.
689 Default is "winbind".
699 <title>50.samba</title>
702 Provides the core of CTDB's Samba file service management.
709 CTDB_SAMBA_CHECK_PORTS=<parameter>PORT-LIST</parameter>
713 When monitoring Samba, check TCP ports in
714 space-separated PORT-LIST.
717 Default is to monitor ports that Samba is configured to listen on.
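	  <para>
	    For example, to check the NetBIOS session and SMB over TCP
	    ports:
	  </para>
	  <screen>
CTDB_SAMBA_CHECK_PORTS="139 445"
	  </screen>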
724 CTDB_SAMBA_SKIP_SHARE_CHECK=yes|no
728 As part of monitoring, should CTDB skip the check for
the existence of each directory configured as a share in
730 Samba. This may be desirable if there is a large number
741 CTDB_SERVICE_SMB=<parameter>SERVICE</parameter>
745 Distribution specific SERVICE for managing smbd.
Default is distribution-dependent.
758 <title>60.nfs</title>
761 This event script (along with 06.nfs) provides CTDB's NFS
766 This includes parameters for the kernel NFS server.
767 Alternative NFS subsystems (such as <ulink
768 url="https://github.com/nfs-ganesha/nfs-ganesha/wiki">NFS-Ganesha</ulink>)
769 can be integrated using <varname>CTDB_NFS_CALLOUT</varname>.
776 CTDB_NFS_CALLOUT=<parameter>COMMAND</parameter>
780 COMMAND specifies the path to a callout to handle
781 interactions with the configured NFS system, including
startup, shutdown and monitoring.
785 Default is the included
786 <command>nfs-linux-kernel-callout</command>.
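	  <para>
	    For example, to use a custom callout instead of the
	    default (the path is purely illustrative):
	  </para>
	  <screen>
CTDB_NFS_CALLOUT=/usr/local/etc/ctdb/nfs-ganesha-callout
	  </screen>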
793 CTDB_NFS_CHECKS_DIR=<parameter>DIRECTORY</parameter>
797 Specifies the path to a DIRECTORY containing files that
798 describe how to monitor the responsiveness of NFS RPC
799 services. See the README file for this directory for an
800 explanation of the contents of these "check" files.
803 CTDB_NFS_CHECKS_DIR can be used to point to different
804 sets of checks for different NFS servers.
807 One way of using this is to have it point to, say,
808 <filename>/usr/local/etc/ctdb/nfs-checks-enabled.d</filename>
809 and populate it with symbolic links to the desired check
810 files. This avoids duplication and is upgrade-safe.
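	  <para>
	    A minimal sketch of this approach, where the check file
	    name is hypothetical, might be:
	  </para>
	  <screen>
mkdir -p /usr/local/etc/ctdb/nfs-checks-enabled.d
ln -s ../nfs-checks.d/20.nfs.check \
      /usr/local/etc/ctdb/nfs-checks-enabled.d/
	  </screen>
	  <para>
	    with the options file then containing:
	  </para>
	  <screen>
CTDB_NFS_CHECKS_DIR=/usr/local/etc/ctdb/nfs-checks-enabled.d
	  </screen>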
814 <filename>/usr/local/etc/ctdb/nfs-checks.d</filename>,
815 which contains NFS RPC checks suitable for Linux kernel
823 CTDB_NFS_SKIP_SHARE_CHECK=yes|no
827 As part of monitoring, should CTDB skip the check for
828 the existence of each directory exported via NFS. This
829 may be desirable if there is a large number of exports.
839 CTDB_RPCINFO_LOCALHOST=<parameter>IPADDR</parameter>|<parameter>HOSTNAME</parameter>
843 IPADDR or HOSTNAME indicates the address that
<command>rpcinfo</command> should connect to when doing an
<command>rpcinfo</command> check on IPv4 RPC services during
846 monitoring. Optimally this would be "localhost".
847 However, this can add some performance overheads.
850 Default is "127.0.0.1".
857 CTDB_RPCINFO_LOCALHOST6=<parameter>IPADDR</parameter>|<parameter>HOSTNAME</parameter>
861 IPADDR or HOSTNAME indicates the address that
<command>rpcinfo</command> should connect to when doing an
<command>rpcinfo</command> check on IPv6 RPC services
864 during monitoring. Optimally this would be "localhost6"
865 (or similar). However, this can add some performance
876 CTDB_NFS_STATE_FS_TYPE=<parameter>TYPE</parameter>
880 The type of filesystem used for a clustered NFS' shared
888 CTDB_NFS_STATE_MNT=<parameter>DIR</parameter>
892 The directory where a clustered NFS' shared state will be
903 <title>70.iscsi</title>
906 Provides CTDB's Linux iSCSI tgtd service management.
913 CTDB_START_ISCSI_SCRIPTS=<parameter>DIRECTORY</parameter>
917 DIRECTORY on shared storage containing scripts to start
918 tgtd for each public IP address.
936 CTDB checks the consistency of databases during startup.
940 <title>00.ctdb</title>
945 <term>CTDB_MAX_CORRUPT_DB_BACKUPS=<parameter>NUM</parameter></term>
948 NUM is the maximum number of volatile TDB database
949 backups to be kept (for each database) when a corrupt
950 database is found during startup. Volatile TDBs are
951 zeroed during startup so backups are needed to debug
952 any corruption that occurs before a restart.
966 <title>SYSTEM RESOURCE MONITORING</title>
974 Provides CTDB's filesystem and memory usage monitoring.
978 CTDB can experience seemingly random (performance and other)
979 issues if system resources become too constrained. Options in
980 this section can be enabled to allow certain system resources
to be checked. They allow warnings to be logged and nodes to
982 be marked unhealthy when system resource usage reaches the
983 configured thresholds.
987 Some checks are enabled by default. It is recommended that
988 these checks remain enabled or are augmented by extra checks.
989 There is no supported way of completely disabling the checks.
996 CTDB_MONITOR_FILESYSTEM_USAGE=<parameter>FS-LIMIT-LIST</parameter>
1000 FS-LIMIT-LIST is a space-separated list of
1001 <parameter>FILESYSTEM</parameter>:<parameter>WARN_LIMIT</parameter><optional>:<parameter>UNHEALTHY_LIMIT</parameter></optional>
1002 triples indicating that warnings should be logged if the
1003 space used on FILESYSTEM reaches WARN_LIMIT%. If usage
1004 reaches UNHEALTHY_LIMIT then the node should be flagged
1005 unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be
left blank, meaning that the corresponding check will be omitted.
Default is to warn for each filesystem containing a database directory
1012 (<literal>volatile database directory</literal>,
1013 <literal>persistent database directory</literal>,
1014 <literal>state database directory</literal>)
1015 with a threshold of 90%.
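	  <para>
	    For example, the following illustrative setting warns at
	    80% usage of <filename>/var</filename> and flags the node
	    unhealthy at 90%, while only warning at 90% usage of the
	    root filesystem:
	  </para>
	  <screen>
CTDB_MONITOR_FILESYSTEM_USAGE="/var:80:90 /:90"
	  </screen>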
1022 CTDB_MONITOR_MEMORY_USAGE=<parameter>MEM-LIMITS</parameter>
1026 MEM-LIMITS takes the form
1027 <parameter>WARN_LIMIT</parameter><optional>:<parameter>UNHEALTHY_LIMIT</parameter></optional>
1028 indicating that warnings should be logged if memory
1029 usage reaches WARN_LIMIT%. If usage reaches
1030 UNHEALTHY_LIMIT then the node should be flagged
1031 unhealthy. Either WARN_LIMIT or UNHEALTHY_LIMIT may be
left blank, meaning that the corresponding check will be omitted.
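	  <para>
	    For example, to warn at 85% memory usage and flag the node
	    unhealthy at 95% (illustrative values):
	  </para>
	  <screen>
CTDB_MONITOR_MEMORY_USAGE=85:95
	  </screen>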
1035 Default is 80, so warnings will be logged when memory
1048 <title>EVENT SCRIPT DEBUGGING</title>
1052 debug-hung-script.sh
1058 <term>CTDB_DEBUG_HUNG_SCRIPT_STACKPAT=<parameter>REGEXP</parameter></term>
1061 REGEXP specifies interesting processes for which stack
1062 traces should be logged when debugging hung eventscripts
1063 and those processes are matched in pstree output.
1064 REGEXP is an extended regexp so choices are separated by
1065 pipes ('|'). However, REGEXP should not contain
1066 parentheses. See also the <citerefentry><refentrytitle>ctdb.conf</refentrytitle>
1067 <manvolnum>5</manvolnum></citerefentry>
1068 [event] "debug script" option.
1071 Default is "exportfs|rpcinfo".
1082 <title>FILES</title>
1085 <member><filename>/usr/local/etc/ctdb/script.options</filename></member>
1090 <title>SEE ALSO</title>
1092 <citerefentry><refentrytitle>ctdbd</refentrytitle>
1093 <manvolnum>1</manvolnum></citerefentry>,
1095 <citerefentry><refentrytitle>ctdb</refentrytitle>
1096 <manvolnum>7</manvolnum></citerefentry>,
1098 <ulink url="http://ctdb.samba.org/"/>
1105 This documentation was written by
1113 <holder>Andrew Tridgell</holder>
1114 <holder>Ronnie Sahlberg</holder>
1118 This program is free software; you can redistribute it and/or
1119 modify it under the terms of the GNU General Public License as
1120 published by the Free Software Foundation; either version 3 of
1121 the License, or (at your option) any later version.
1124 This program is distributed in the hope that it will be
1125 useful, but WITHOUT ANY WARRANTY; without even the implied
1126 warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
1127 PURPOSE. See the GNU General Public License for more details.
1130 You should have received a copy of the GNU General Public
1131 License along with this program; if not, see
1132 <ulink url="http://www.gnu.org/licenses"/>.