<!--#set var="TITLE" value="CTDB Testing" -->
<!--#include virtual="header.html" -->
<h2 align="center">Starting and testing CTDB</h2>
The CTDB log is in /var/log/log.ctdb so look in this file if something
did not start correctly.
<p>
You can ensure that ctdb is running on all nodes using
<pre>
  onnode all service ctdb start
</pre>
Verify that the CTDB daemon started properly. There should normally be at
least 2 processes started for CTDB, one for the main daemon and one for
the recovery daemon.
<pre>
  onnode all pidof ctdbd
</pre>
Once all CTDB nodes have started, verify that they are correctly
talking to each other.
<p>
There should be one TCP connection from the private ip address on each
node to TCP port 4379 on each of the other nodes in the cluster.
<pre>
  onnode all netstat -tn | grep 4379
</pre>
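As a quick sanity check, each node in an N-node cluster should show N-1 such connections (one per peer). The sketch below illustrates that arithmetic against canned netstat-style output; the node count and addresses are made-up examples, and on a real node you would pipe the actual <code>netstat -tn</code> output instead of the sample function.

```shell
#!/bin/sh
# Sketch: on each node of an N-node cluster we expect N-1 established
# TCP connections to port 4379 (one per peer node).
# NODES and the sample output below are illustrative, not from a real cluster.
NODES=4

# Stand-in for "netstat -tn" output on one node.
sample_netstat() {
cat <<'EOF'
tcp  0  0 10.1.1.1:45678  10.1.1.2:4379  ESTABLISHED
tcp  0  0 10.1.1.1:45679  10.1.1.3:4379  ESTABLISHED
tcp  0  0 10.1.1.1:45680  10.1.1.4:4379  ESTABLISHED
EOF
}

count=$(sample_netstat | grep -c ':4379 ')
expected=$((NODES - 1))

if [ "$count" -eq "$expected" ]; then
    echo "OK: $count connections to port 4379"
else
    echo "WARNING: expected $expected connections to port 4379, found $count"
fi
```

A node showing fewer than N-1 connections usually cannot reach one of its peers over the private network.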
<h2>Automatically restarting CTDB</h2>
If you wish to cope with software faults in ctdb, or want ctdb to
automatically restart when an administrator kills it, then you may
wish to add a cron entry for root like this:
<pre>
 * * * * * /etc/init.d/ctdb cron > /dev/null 2>&1
</pre>
Once your cluster is up and running, you may wish to know how to test
that it is functioning correctly. The following tests may help with that.
<h3>The ctdb tool</h3>
The ctdb package comes with a utility called ctdb that can be used to
view the behaviour of the ctdb cluster.
<p>
If you run it with no options it will provide some terse usage
information. The most commonly used commands are:
<pre>
  ctdb status
  ctdb ip
  ctdb ping
</pre>
<h3>ctdb status</h3>
The status command provides basic information about the cluster and the
status of the nodes. When you run it you will get some output like:
<pre>
<strong>Number of nodes:4
vnn:0 10.1.1.1       OK (THIS NODE)
vnn:1 10.1.1.2       OK
vnn:2 10.1.1.3       OK
vnn:3 10.1.1.4       OK</strong>

<strong>Recovery mode:NORMAL (0)</strong>
</pre>
The important parts are in bold. This tells us that all 4 nodes are in
a healthy state.
<p>
It also tells us that recovery mode is normal, which means that the
cluster has finished a recovery and is running in a normal fully
operational state.
Recovery state will briefly change to "RECOVERY" when there has been a
node failure or something is wrong with the cluster.
<p>
If the cluster remains in RECOVERY state for very long (many seconds)
there might be something wrong with the configuration.
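The health check described above can be scripted. The sketch below parses status output of the shape shown earlier and reports whether all nodes are OK and recovery mode is NORMAL; the here-document is sample output for illustration, and on a real cluster you would run <code>ctdb status</code> in its place.

```shell
#!/bin/sh
# Sketch: check that every node reports OK and that recovery mode is
# NORMAL. The sample_status function stands in for real "ctdb status"
# output; the addresses are illustrative.
sample_status() {
cat <<'EOF'
Number of nodes:4
vnn:0 10.1.1.1       OK (THIS NODE)
vnn:1 10.1.1.2       OK
vnn:2 10.1.1.3       OK
vnn:3 10.1.1.4       OK
Recovery mode:NORMAL (0)
EOF
}

nodes=$(sample_status | sed -n 's/^Number of nodes://p')
ok=$(sample_status | grep -c '^vnn:.* OK')
mode=$(sample_status | sed -n 's/^Recovery mode:\([A-Z]*\).*/\1/p')

echo "$ok of $nodes nodes OK, recovery mode $mode"
if [ "$ok" -eq "$nodes" ] && [ "$mode" = "NORMAL" ]; then
    echo "cluster healthy"
else
    echo "cluster needs attention"
fi
```

A check like this is easy to drop into a monitoring system, since it reduces the status output to a single healthy/unhealthy decision.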
<h3>ctdb ip</h3>
This command prints the current status of the public ip addresses and
which physical node is currently serving that ip.
<h3>ctdb ping</h3>
This command tries to "ping" each of the CTDB daemons in the cluster.
<pre>
response from 0 time=0.000050 sec  (13 clients)
response from 1 time=0.000154 sec  (27 clients)
response from 2 time=0.000114 sec  (17 clients)
response from 3 time=0.000115 sec  (59 clients)
</pre>
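To confirm that no daemon was left out, you can count the response lines against the expected node count. The sketch below does this over the sample output shown above; EXPECTED_NODES is an assumed value, and in practice you would capture the real ping output rather than the here-document.

```shell
#!/bin/sh
# Sketch: verify that every node answered the ping. The sample_ping
# function reuses the example output from the text; the node count is
# an assumption for illustration.
EXPECTED_NODES=4

sample_ping() {
cat <<'EOF'
response from 0 time=0.000050 sec  (13 clients)
response from 1 time=0.000154 sec  (27 clients)
response from 2 time=0.000114 sec  (17 clients)
response from 3 time=0.000115 sec  (59 clients)
EOF
}

responses=$(sample_ping | grep -c '^response from ')
if [ "$responses" -eq "$EXPECTED_NODES" ]; then
    echo "all $EXPECTED_NODES nodes responded"
else
    echo "only $responses of $EXPECTED_NODES nodes responded"
fi
```

A missing response line points at the daemon (or node) that did not answer, which narrows down where to look in /var/log/log.ctdb.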
<!--#include virtual="footer.html" -->