1 <?xml version="1.0" encoding="iso-8859-1"?>
3 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
4 "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd">
8 <refentrytitle>ctdb</refentrytitle>
9 <manvolnum>7</manvolnum>
10 <refmiscinfo class="source">ctdb</refmiscinfo>
11 <refmiscinfo class="manual">CTDB - clustered TDB database</refmiscinfo>
16 <refname>ctdb</refname>
17 <refpurpose>Clustered TDB</refpurpose>
21 <title>DESCRIPTION</title>
24 CTDB is a clustered database component in clustered Samba that
25 provides a high-availability load-sharing CIFS server cluster.
29 The main functions of CTDB are:
35 Provide a clustered version of the TDB database with automatic
36 rebuild/recovery of the databases upon node failures.
42 Monitor nodes in the cluster and services running on each node.
48 Manage a pool of public IP addresses that are used to provide
services to clients. Alternatively, CTDB can be used with LVS.
56 Combined with a cluster filesystem CTDB provides a full
high-availability (HA) environment for services such as clustered
58 Samba, NFS and other services.
63 <title>ANATOMY OF A CTDB CLUSTER</title>
66 A CTDB cluster is a collection of nodes with 2 or more network
67 interfaces. All nodes provide network (usually file/NAS) services
68 to clients. Data served by file services is stored on shared
storage (usually a cluster filesystem) that is accessible by all nodes.
73 CTDB provides an "all active" cluster, where services are load
74 balanced across all nodes.
79 <title>Recovery Lock</title>
82 CTDB uses a <emphasis>recovery lock</emphasis> to avoid a
83 <emphasis>split brain</emphasis>, where a cluster becomes
84 partitioned and each partition attempts to operate
85 independently. Issues that can result from a split brain
86 include file data corruption, because file locking metadata may
87 not be tracked correctly.
91 CTDB uses a <emphasis>cluster leader and follower</emphasis>
92 model of cluster management. All nodes in a cluster elect one
93 node to be the leader. The leader node coordinates privileged
94 operations such as database recovery and IP address failover.
95 CTDB refers to the leader node as the <emphasis>recovery
96 master</emphasis>. This node takes and holds the recovery lock
97 to assert its privileged role in the cluster.
101 The recovery lock is implemented using a file residing in shared
storage (usually on a cluster filesystem). To support a
recovery lock, the cluster filesystem must support lock
coherence. See
105 <citerefentry><refentrytitle>ping_pong</refentrytitle>
106 <manvolnum>1</manvolnum></citerefentry> for more details.
110 If a cluster becomes partitioned (for example, due to a
111 communication failure) and a different recovery master is
112 elected by the nodes in each partition, then only one of these
113 recovery masters will be able to take the recovery lock. The
114 recovery master in the "losing" partition will not be able to
115 take the recovery lock and will be excluded from the cluster.
116 The nodes in the "losing" partition will elect each node in turn
117 as their recovery master so eventually all the nodes in that
118 partition will be excluded.
CTDB performs sanity checks to ensure that the recovery lock is held as expected.
127 CTDB can run without a recovery lock but this is not recommended
as there will be no protection against split brain situations.
133 <title>Private vs Public addresses</title>
136 Each node in a CTDB cluster has multiple IP addresses assigned
A single private IP address that is used for communication with
other nodes in the cluster.
148 One or more public IP addresses that are used to provide
149 NAS or other services.
156 <title>Private address</title>
159 Each node is configured with a unique, permanently assigned
160 private address. This address is configured by the operating
161 system. This address uniquely identifies a physical node in
162 the cluster and is the address that CTDB daemons will use to
163 communicate with the CTDB daemons on other nodes.
166 Private addresses are listed in the file specified by the
167 <varname>CTDB_NODES</varname> configuration variable (see
168 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
169 <manvolnum>5</manvolnum></citerefentry>, default
170 <filename>/etc/ctdb/nodes</filename>). This file contains the
171 list of private addresses for all nodes in the cluster, one
per line. This file must be the same on all nodes in the cluster.
176 Private addresses should not be used by clients to connect to
177 services provided by the cluster.
180 It is strongly recommended that the private addresses are
configured on a private network that is separate from client networks.
Example <filename>/etc/ctdb/nodes</filename> for a four node
cluster (the addresses shown are illustrative):

<screen format="linespecific">
192.168.2.201
192.168.2.202
192.168.2.203
192.168.2.204
</screen>
198 <title>Public addresses</title>
201 Public addresses are used to provide services to clients.
202 Public addresses are not configured at the operating system
203 level and are not permanently associated with a particular
204 node. Instead, they are managed by CTDB and are assigned to
205 interfaces on physical nodes at runtime.
208 The CTDB cluster will assign/reassign these public addresses
209 across the available healthy nodes in the cluster. When one
210 node fails, its public addresses will be taken over by one or
211 more other nodes in the cluster. This ensures that services
212 provided by all public addresses are always available to
213 clients, as long as there are nodes available capable of
hosting these addresses.
217 The public address configuration is stored in a file on each
218 node specified by the <varname>CTDB_PUBLIC_ADDRESSES</varname>
219 configuration variable (see
220 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
221 <manvolnum>5</manvolnum></citerefentry>, recommended
222 <filename>/etc/ctdb/public_addresses</filename>). This file
223 contains a list of the public addresses that the node is
224 capable of hosting, one per line. Each entry also contains
the netmask and the interface to which the address should be assigned.
Example <filename>/etc/ctdb/public_addresses</filename> for a
node that can host 4 public addresses, on 2 different
interfaces (addresses and interface names are illustrative):

<screen format="linespecific">
10.1.1.1/24 eth1
10.1.1.2/24 eth1
10.1.2.1/24 eth2
10.1.2.2/24 eth2
</screen>
242 In many cases the public addresses file will be the same on
243 all nodes. However, it is possible to use different public
244 address configurations on different nodes.
Example: 4 nodes partitioned into two subgroups (addresses and
interface names are illustrative):

<screen format="linespecific">
Node 0:/etc/ctdb/public_addresses
	10.1.1.1/24 eth1
	10.1.1.2/24 eth1

Node 1:/etc/ctdb/public_addresses
	10.1.1.1/24 eth1
	10.1.1.2/24 eth1

Node 2:/etc/ctdb/public_addresses
	10.1.2.1/24 eth2
	10.1.2.2/24 eth2

Node 3:/etc/ctdb/public_addresses
	10.1.2.1/24 eth2
	10.1.2.2/24 eth2
</screen>
268 In this example nodes 0 and 1 host two public addresses on the
269 10.1.1.x network while nodes 2 and 3 host two public addresses
on the 10.1.2.x network.
273 Public address 10.1.1.1 can be hosted by either of nodes 0 or
274 1 and will be available to clients as long as at least one of
these two nodes is available.
278 If both nodes 0 and 1 become unavailable then public address
10.1.1.1 also becomes unavailable. 10.1.1.1 cannot be failed
over to nodes 2 or 3 since these nodes do not have this public
address configured.
284 The <command>ctdb ip</command> command can be used to view the
285 current assignment of public addresses to physical nodes.
292 <title>Node status</title>
295 The current status of each node in the cluster can be viewed by the
296 <command>ctdb status</command> command.
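<para>
  On a healthy four node cluster the output of <command>ctdb
  status</command> might look something like the following. This is
  only illustrative; the exact fields vary between CTDB versions:
</para>
<screen format="linespecific">
Number of nodes:4
pnn:0 192.168.2.201      OK (THIS NODE)
pnn:1 192.168.2.202      OK
pnn:2 192.168.2.203      OK
pnn:3 192.168.2.204      OK
Generation:1362079228
Size:4
hash:0 lmaster:0
hash:1 lmaster:1
hash:2 lmaster:2
hash:3 lmaster:3
Recovery mode:NORMAL (0)
Recovery master:0
</screen>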
300 A node can be in one of the following states:
308 This node is healthy and fully functional. It hosts public
309 addresses to provide services.
315 <term>DISCONNECTED</term>
318 This node is not reachable by other nodes via the private
319 network. It is not currently participating in the cluster.
320 It <emphasis>does not</emphasis> host public addresses to
321 provide services. It might be shut down.
327 <term>DISABLED</term>
330 This node has been administratively disabled. This node is
331 partially functional and participates in the cluster.
332 However, it <emphasis>does not</emphasis> host public
333 addresses to provide services.
339 <term>UNHEALTHY</term>
342 A service provided by this node has failed a health check
343 and should be investigated. This node is partially
344 functional and participates in the cluster. However, it
345 <emphasis>does not</emphasis> host public addresses to
346 provide services. Unhealthy nodes should be investigated
347 and may require an administrative action to rectify.
356 CTDB is not behaving as designed on this node. For example,
357 it may have failed too many recovery attempts. Such nodes
358 are banned from participating in the cluster for a
359 configurable time period before they attempt to rejoin the
360 cluster. A banned node <emphasis>does not</emphasis> host
361 public addresses to provide services. All banned nodes
should be investigated and may require an administrative action to rectify.
This node has been administratively excluded from the
cluster. A stopped node does not participate in the cluster
374 and <emphasis>does not</emphasis> host public addresses to
375 provide services. This state can be used while performing
376 maintenance on a node.
382 <term>PARTIALLYONLINE</term>
385 A node that is partially online participates in a cluster
386 like a healthy (OK) node. Some interfaces to serve public
387 addresses are down, but at least one interface is up. See
388 also <command>ctdb ifaces</command>.
397 <title>CAPABILITIES</title>
400 Cluster nodes can have several different capabilities enabled.
401 These are listed below.
407 <term>RECMASTER</term>
410 Indicates that a node can become the CTDB cluster recovery
411 master. The current recovery master is decided via an
412 election held by all active nodes with this capability.
424 Indicates that a node can be the location master (LMASTER)
425 for database records. The LMASTER always knows which node
426 has the latest copy of a record in a volatile database.
Indicates that a node is configured in Linux Virtual Server
439 (LVS) mode. In this mode the entire CTDB cluster uses one
440 single public address for the entire cluster instead of
441 using multiple public addresses in failover mode. This is
442 an alternative to using a load-balancing layer-4 switch.
See the <citetitle>LVS</citetitle> section for more details.
453 Indicates that this node is configured to become the NAT
454 gateway master in a NAT gateway group. See the
<citetitle>NAT GATEWAY</citetitle> section for more details.
464 The RECMASTER and LMASTER capabilities can be disabled when CTDB
465 is used to create a cluster spanning across WAN links. In this
466 case CTDB acts as a WAN accelerator.
475 LVS is a mode where CTDB presents one single IP address for the
476 entire cluster. This is an alternative to using public IP
addresses and round-robin DNS to load balance clients across the cluster.
This is similar to using a layer-4 load balancing switch but with
some limitations.
In this mode the cluster selects a set of nodes and load
balances all client access to the LVS address across this set
of nodes. This set consists of all LVS-capable nodes that are
HEALTHY, or of all LVS-capable nodes regardless of health
status if no HEALTHY nodes exist. LVS will, however, never
load balance traffic to nodes that are BANNED, STOPPED,
DISABLED or DISCONNECTED. The <command>ctdb lvs</command>
command is used to show which nodes traffic is currently load
balanced across.
One of these nodes is elected as the LVSMASTER. This node
receives all traffic from clients coming in to the LVS address
and multiplexes it across the internal network to one of the
nodes that LVS is using. When responding to the client, that
node will send the data back directly to the client, bypassing
the LVSMASTER node. The command <command>ctdb
lvsmaster</command> will show which node is the current
LVSMASTER.
509 The path used for a client I/O is:
513 Client sends request packet to LVSMASTER.
LVSMASTER passes the request on to one node across the internal network.
524 Selected node processes the request.
529 Node responds back to client.
This means that all incoming traffic to the cluster will pass
through one physical node, which limits scalability. You cannot
send more data to the LVS address than one physical node can
multiplex. This means that you should not use LVS if your I/O
pattern is write-intensive, since you will be limited by the
network bandwidth that one node can handle. LVS does work
very well for read-intensive workloads, where only smallish READ
requests go through the LVSMASTER bottleneck and the
majority of the traffic volume (the data in the read replies)
goes straight from the processing node back to the clients. For
read-intensive I/O patterns you can achieve very high throughput
rates in this mode.
551 Note: you can use LVS and public addresses at the same time.
555 If you use LVS, you must have a permanent address configured for
556 the public interface on each node. This address must be routable
557 and the cluster nodes must be configured so that all traffic
back to client hosts is routed through this interface. This is
559 also required in order to allow samba/winbind on the node to
talk to the domain controller. The LVS IP address cannot be
used to initiate outgoing traffic.
564 Make sure that the domain controller and the clients are
565 reachable from a node <emphasis>before</emphasis> you enable
566 LVS. Also ensure that outgoing traffic to these hosts is routed
567 out through the configured public interface.
571 <title>Configuration</title>
574 To activate LVS on a CTDB node you must specify the
575 <varname>CTDB_PUBLIC_INTERFACE</varname> and
576 <varname>CTDB_LVS_PUBLIC_IP</varname> configuration variables.
Setting the latter variable also enables the LVS capability on
the node at startup.
583 <screen format="linespecific">
584 CTDB_PUBLIC_INTERFACE=eth1
585 CTDB_LVS_PUBLIC_IP=10.1.1.237
593 <title>NAT GATEWAY</title>
596 NAT gateway (NATGW) is an optional feature that is used to
597 configure fallback routing for nodes. This allows cluster nodes
598 to connect to external services (e.g. DNS, AD, NIS and LDAP)
when they do not host any public addresses (e.g. when they are unhealthy).
603 This also applies to node startup because CTDB marks nodes as
604 UNHEALTHY until they have passed a "monitor" event. In this
605 context, NAT gateway helps to avoid a "chicken and egg"
situation where a node needs to access an external service to
become healthy.
610 Another way of solving this type of problem is to assign an
611 extra static IP address to a public interface on every node.
612 This is simpler but it uses an extra IP address per node, while
613 NAT gateway generally uses only one extra IP address.
617 <title>Operation</title>
620 One extra NATGW public address is assigned on the public
621 network to each NATGW group. Each NATGW group is a set of
622 nodes in the cluster that shares the same NATGW address to
623 talk to the outside world. Normally there would only be one
624 NATGW group spanning an entire cluster, but in situations
625 where one CTDB cluster spans multiple physical sites it might
626 be useful to have one NATGW group for each site.
629 There can be multiple NATGW groups in a cluster but each node
can only be a member of one NATGW group.
633 In each NATGW group, one of the nodes is selected by CTDB to
be the NATGW master and the other nodes are considered to be
635 NATGW slaves. NATGW slaves establish a fallback default route
636 to the NATGW master via the private network. When a NATGW
637 slave hosts no public IP addresses then it will use this route
638 for outbound connections. The NATGW master hosts the NATGW
639 public IP address and routes outgoing connections from
640 slave nodes via this IP address. It also establishes a
641 fallback default route.
646 <title>Configuration</title>
NATGW is usually configured similarly to the following example:
651 <screen format="linespecific">
652 CTDB_NATGW_NODES=/etc/ctdb/natgw_nodes
653 CTDB_NATGW_PRIVATE_NETWORK=192.168.1.0/24
654 CTDB_NATGW_PUBLIC_IP=10.0.0.227/24
655 CTDB_NATGW_PUBLIC_IFACE=eth0
656 CTDB_NATGW_DEFAULT_GATEWAY=10.0.0.1
660 Normally any node in a NATGW group can act as the NATGW
661 master. Some configurations may have special nodes that lack
662 connectivity to a public network. In such cases,
663 <varname>CTDB_NATGW_SLAVE_ONLY</varname> can be used to limit the
NATGW functionality of those nodes.
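<para>
  For example, adding the following line to the configuration of
  such a node (a sketch, using the same configuration file as in the
  example above) prevents it from ever becoming the NATGW master:
</para>
<screen format="linespecific">
CTDB_NATGW_SLAVE_ONLY=yes
</screen>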
668 See the <citetitle>NAT GATEWAY</citetitle> section in
669 <citerefentry><refentrytitle>ctdb.conf</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for more details of NATGW configuration.
677 <title>Implementation details</title>
680 When the NATGW functionality is used, one of the nodes is
681 selected to act as a NAT gateway for all the other nodes in
682 the group when they need to communicate with the external
683 services. The NATGW master is selected to be a node that is
684 most likely to have usable networks.
688 The NATGW master hosts the NATGW public IP address
689 <varname>CTDB_NATGW_PUBLIC_IP</varname> on the configured public
interface <varname>CTDB_NATGW_PUBLIC_IFACE</varname> and acts as
691 a router, masquerading outgoing connections from slave nodes
692 via this IP address. If
693 <varname>CTDB_NATGW_DEFAULT_GATEWAY</varname> is set then it
also establishes a fallback default route to the configured
gateway with a metric of 10. A metric 10 route is used so it
can co-exist with other default routes that may be available.
701 A NATGW slave establishes its fallback default route to the
702 NATGW master via the private network
<varname>CTDB_NATGW_PRIVATE_NETWORK</varname> with a metric of 10.
704 This route is used for outbound connections when no other
705 default route is available because the node hosts no public
addresses. A metric 10 route is used so that it can co-exist
707 with other default routes that may be available when the node
708 is hosting public addresses.
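<para>
  Using the example configuration above, the effect is roughly
  equivalent to the following commands. This is only a simplified
  sketch of what the eventscript does, and 192.168.1.1 as the
  master's private address is an assumption:
</para>
<screen format="linespecific">
# On the NATGW master:
ip addr add 10.0.0.227/24 dev eth0
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
ip route add default via 10.0.0.1 metric 10

# On a NATGW slave (192.168.1.1 being the master's private address):
ip route add default via 192.168.1.1 metric 10
</screen>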
712 <varname>CTDB_NATGW_STATIC_ROUTES</varname> can be used to
have NATGW create more specific routes instead of just default routes.
718 This is implemented in the <filename>11.natgw</filename>
719 eventscript. Please see the eventscript file and the
720 <citetitle>NAT GATEWAY</citetitle> section in
721 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
722 <manvolnum>5</manvolnum></citerefentry> for more details.
729 <title>POLICY ROUTING</title>
732 Policy routing is an optional CTDB feature to support complex
733 network topologies. Public addresses may be spread across
734 several different networks (or VLANs) and it may not be possible
735 to route packets from these public addresses via the system's
736 default route. Therefore, CTDB has support for policy routing
737 via the <filename>13.per_ip_routing</filename> eventscript.
738 This allows routing to be specified for packets sourced from
739 each public address. The routes are added and removed as CTDB
740 moves public addresses between nodes.
744 <title>Configuration variables</title>
747 There are 4 configuration variables related to policy routing:
748 <varname>CTDB_PER_IP_ROUTING_CONF</varname>,
749 <varname>CTDB_PER_IP_ROUTING_RULE_PREF</varname>,
750 <varname>CTDB_PER_IP_ROUTING_TABLE_ID_LOW</varname>,
751 <varname>CTDB_PER_IP_ROUTING_TABLE_ID_HIGH</varname>. See the
752 <citetitle>POLICY ROUTING</citetitle> section in
753 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
754 <manvolnum>5</manvolnum></citerefentry> for more details.
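<para>
  A typical configuration might look something like the following
  sketch. The values are illustrative; the table ID range only needs
  to avoid routing table IDs already in use on the system:
</para>
<screen format="linespecific">
CTDB_PER_IP_ROUTING_CONF=/etc/ctdb/policy_routing
CTDB_PER_IP_ROUTING_RULE_PREF=100
CTDB_PER_IP_ROUTING_TABLE_ID_LOW=1000
CTDB_PER_IP_ROUTING_TABLE_ID_HIGH=9000
</screen>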
759 <title>Configuration</title>
762 The format of each line of
763 <varname>CTDB_PER_IP_ROUTING_CONF</varname> is:
767 <public_address> <network> [ <gateway> ]
771 Leading whitespace is ignored and arbitrary whitespace may be
772 used as a separator. Lines that have a "public address" item
773 that doesn't match an actual public address are ignored. This
774 means that comment lines can be added using a leading
character such as '#', since this will never match an IP address.
780 A line without a gateway indicates a link local route.
784 For example, consider the configuration line:
788 192.168.1.99 192.168.1.1/24
792 If the corresponding public_addresses line is:
796 192.168.1.99/24 eth2,eth3
800 <varname>CTDB_PER_IP_ROUTING_RULE_PREF</varname> is 100, and
801 CTDB adds the address to eth2 then the following routing
802 information is added:
806 ip rule add from 192.168.1.99 pref 100 table ctdb.192.168.1.99
807 ip route add 192.168.1.0/24 dev eth2 table ctdb.192.168.1.99
This causes traffic from 192.168.1.99 to 192.168.1.0/24 to go via eth2.
816 The <command>ip rule</command> command will show (something
like - depending on other public addresses and other routes on
the system):
822 0: from all lookup local
823 100: from 192.168.1.99 lookup ctdb.192.168.1.99
824 32766: from all lookup main
825 32767: from all lookup default
829 <command>ip route show table ctdb.192.168.1.99</command> will show:
833 192.168.1.0/24 dev eth2 scope link
837 The usual use for a line containing a gateway is to add a
838 default route corresponding to a particular source address.
839 Consider this line of configuration:
843 192.168.1.99 0.0.0.0/0 192.168.1.1
847 In the situation described above this will cause an extra
848 routing command to be executed:
852 ip route add 0.0.0.0/0 via 192.168.1.1 dev eth2 table ctdb.192.168.1.99
856 With both configuration lines, <command>ip route show table
857 ctdb.192.168.1.99</command> will show:
861 192.168.1.0/24 dev eth2 scope link
862 default via 192.168.1.1 dev eth2
867 <title>Sample configuration</title>
870 Here is a more complete example configuration.
874 /etc/ctdb/public_addresses:
192.168.1.98/24 eth2,eth3
192.168.1.99/24 eth2,eth3
879 /etc/ctdb/policy_routing:
881 192.168.1.98 192.168.1.0/24
882 192.168.1.98 192.168.200.0/24 192.168.1.254
883 192.168.1.98 0.0.0.0/0 192.168.1.1
884 192.168.1.99 192.168.1.0/24
885 192.168.1.99 192.168.200.0/24 192.168.1.254
886 192.168.1.99 0.0.0.0/0 192.168.1.1
This routes local packets as expected and the default route is
as previously discussed, but packets to 192.168.200.0/24 are
routed via the alternate gateway 192.168.1.254.
899 <title>NOTIFICATION SCRIPT</title>
902 When certain state changes occur in CTDB, it can be configured
903 to perform arbitrary actions via a notification script. For
example, sending SNMP traps or emails when a node becomes
unhealthy.
908 This is activated by setting the
909 <varname>CTDB_NOTIFY_SCRIPT</varname> configuration variable.
910 The specified script must be executable.
913 Use of the provided <filename>/etc/ctdb/notify.sh</filename>
914 script is recommended. It executes files in
915 <filename>/etc/ctdb/notify.d/</filename>.
CTDB currently generates notifications after CTDB changes to
these states:
923 <member>init</member>
924 <member>setup</member>
925 <member>startup</member>
926 <member>healthy</member>
927 <member>unhealthy</member>
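<para>
  As an illustration, a hook placed in
  <filename>/etc/ctdb/notify.d/</filename> might look something like
  the following sketch. It assumes the event name is passed as the
  first argument; the file name and mail recipient are hypothetical:
</para>
<screen format="linespecific">
#!/bin/sh
# /etc/ctdb/notify.d/50.mail-admin (hypothetical example)
event="$1"
echo "CTDB on $(hostname) entered state: ${event}" |
    mail -s "CTDB notification: ${event}" admin@example.com
</screen>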
933 <title>DEBUG LEVELS</title>
936 Valid values for DEBUGLEVEL are:
940 <member>ERR (0)</member>
941 <member>WARNING (1)</member>
942 <member>NOTICE (2)</member>
943 <member>INFO (3)</member>
944 <member>DEBUG (4)</member>
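<para>
  For example, the debug level of a running node can usually be
  inspected and changed at runtime with the <command>ctdb
  getdebug</command> and <command>ctdb setdebug</command> commands:
</para>
<screen format="linespecific">
ctdb getdebug
ctdb setdebug NOTICE
</screen>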
950 <title>REMOTE CLUSTER NODES</title>
It is possible to have a CTDB cluster that spans across a WAN link.
For example, where you have a CTDB cluster in your datacentre but you also
want to have one additional CTDB node located at a remote branch site.
This is similar to how a WAN accelerator works, but with the difference
that while a WAN accelerator often acts as a proxy or a MitM, in
the CTDB remote cluster node configuration the Samba instance at the remote site
IS the genuine server, not a proxy and not a MitM, and thus provides 100%
correct CIFS semantics to clients.
Think of the cluster as a single multihomed samba server where
one of the NICs (the remote node) is very far away.
968 NOTE: This does require that the cluster filesystem you use can cope
969 with WAN-link latencies. Not all cluster filesystems can handle
WAN-link latencies! Whether this will provide very good
WAN-accelerator performance or perform very poorly depends
entirely on how well your cluster filesystem handles high
latency for data and metadata operations.
To activate a node as a remote cluster node you need to set
978 the following two parameters in /etc/sysconfig/ctdb for the remote node:
979 <screen format="linespecific">
980 CTDB_CAPABILITY_LMASTER=no
981 CTDB_CAPABILITY_RECMASTER=no
Verify with the command "ctdb getcapabilities" that the node no longer
987 has the recmaster or the lmaster capabilities.
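<para>
  On such a node the output might look something like the following
  (illustrative; the exact capability list depends on the CTDB
  version and configuration):
</para>
<screen format="linespecific">
RECMASTER: NO
LMASTER: NO
LVS: NO
NATGW: NO
</screen>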
994 <title>SEE ALSO</title>
997 <citerefentry><refentrytitle>ctdb</refentrytitle>
998 <manvolnum>1</manvolnum></citerefentry>,
1000 <citerefentry><refentrytitle>ctdbd</refentrytitle>
1001 <manvolnum>1</manvolnum></citerefentry>,
1003 <citerefentry><refentrytitle>ctdbd_wrapper</refentrytitle>
1004 <manvolnum>1</manvolnum></citerefentry>,
1006 <citerefentry><refentrytitle>ltdbtool</refentrytitle>
1007 <manvolnum>1</manvolnum></citerefentry>,
1009 <citerefentry><refentrytitle>onnode</refentrytitle>
1010 <manvolnum>1</manvolnum></citerefentry>,
1012 <citerefentry><refentrytitle>ping_pong</refentrytitle>
1013 <manvolnum>1</manvolnum></citerefentry>,
1015 <citerefentry><refentrytitle>ctdbd.conf</refentrytitle>
1016 <manvolnum>5</manvolnum></citerefentry>,
1018 <citerefentry><refentrytitle>ctdb-statistics</refentrytitle>
1019 <manvolnum>7</manvolnum></citerefentry>,
1021 <citerefentry><refentrytitle>ctdb-tunables</refentrytitle>
1022 <manvolnum>7</manvolnum></citerefentry>,
1024 <ulink url="http://ctdb.samba.org/"/>
1031 This documentation was written by
1040 <holder>Andrew Tridgell</holder>
1041 <holder>Ronnie Sahlberg</holder>
1045 This program is free software; you can redistribute it and/or
1046 modify it under the terms of the GNU General Public License as
1047 published by the Free Software Foundation; either version 3 of
1048 the License, or (at your option) any later version.
1051 This program is distributed in the hope that it will be
1052 useful, but WITHOUT ANY WARRANTY; without even the implied
1053 warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
1054 PURPOSE. See the GNU General Public License for more details.
1057 You should have received a copy of the GNU General Public
1058 License along with this program; if not, see
1059 <ulink url="http://www.gnu.org/licenses"/>.