<!--#set var="TITLE" value="CTDB and ClamAV Daemon" -->
<!--#include virtual="header.html" -->
<h1>Setting up ClamAV with CTDB</h1>
Configure CTDB as above and set it up to use public IP addresses.
<br>
Verify that the CTDB cluster works.
<h2>Configuration</h2>
Configure clamd on each node in the cluster.
<br><br>
For details on how to configure clamd, check its documentation.
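<br><br>
The setting that matters most for CTDB integration is the local socket. As a rough sketch (LocalSocket is a standard clamd.conf directive, but the socket path shown here is only a placeholder and must match the CTDB_CLAMD_SOCKET value set below):
<pre>
# fragment of clamd.conf -- the path is illustrative
LocalSocket /path/to/clamd.sock
FixStaleSocket yes
</pre>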
<h2>/etc/sysconfig/ctdb</h2>
Add the following lines to the /etc/sysconfig/ctdb configuration file.
<pre>
CTDB_MANAGES_CLAMD=yes
CTDB_CLAMD_SOCKET="/path/to/clamd.sock"
</pre>
Disable clamd in chkconfig so that it does not start by default. Instead CTDB will start/stop clamd as required.
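<br><br>
On a chkconfig-based distribution this would look something like the following, assuming the init script is installed under the name clamd:
<pre>
chkconfig clamd off
</pre>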
<h2>Events script</h2>
The CTDB distribution already comes with an events script for clamd in the file /etc/ctdb/events.d/31.clamd
<br><br>
There should not be any need to edit this file. You only need to make it executable, with a command like this:
<pre>
chmod +x /etc/ctdb/events.d/31.clamd
</pre>
To check that CTDB is monitoring and managing clamd, you can inspect the event script status.
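<br><br>
A likely command for this, assuming a CTDB version that provides the scriptstatus subcommand, is:
<pre>
ctdb scriptstatus
</pre>
Look for a line for 31.clamd with an OK status.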
<h2>Restart your cluster</h2>
Next time your cluster restarts, CTDB will start managing the clamd service.
<br><br>
If the cluster is already in production you may not want to restart the entire cluster since this would disrupt services.
<br>
Instead you can just disable/enable the nodes one by one. Once a node becomes enabled again it will start the clamd service.
<br><br>
Follow the procedure below for each node, one node at a time:
<h3>1 Disable the node</h3>
Use the ctdb command to disable the node:
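<br><br>
For example, run this on the node you want to take out of service (or address a remote node with the -n option; the node number depends on your cluster):
<pre>
ctdb disable
</pre>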
<h3>2 Wait until the cluster has recovered</h3>
Use the ctdb tool to monitor until the cluster has recovered, i.e. until the Recovery mode is NORMAL. This should happen within seconds of when you disabled the node.
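<br><br>
A simple way to watch for this is to re-run the status command until the recovery-mode line settles:
<pre>
ctdb status
</pre>
The output should include a line such as "Recovery mode:NORMAL (0)" once recovery has completed.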
<h3>3 Enable the node again</h3>
Re-enable the node, which will start the newly configured clamd service.
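<br><br>
Again run this on the node itself (or use the -n option to address it remotely):
<pre>
ctdb enable
</pre>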
<h2>See also</h2>
The CLAMAV section in the ctdbd manpage.
<!--#include virtual="footer.html" -->