# Migrating to Automatic Clustering and Configuration Sharing
Clearwater now supports automatic clustering and configuration sharing, which makes Clearwater deployments much easier to manage. However, deployments created before the ‘For Whom The Bell Tolls’ release do not use this feature. This article explains how to migrate such a deployment to take advantage of it.
## Upgrade the Deployment
Upgrade to the latest stable Clearwater release. You will also need to update your firewall settings to support the new Clearwater management packages: open ports 2380 and 4000 between every node (see here for the complete list).
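As an illustration (not an authoritative firewall recipe), with an iptables-based firewall and nodes on a private network, the extra rules might look like the fragment below. The `10.0.0.0/24` subnet and the iptables tooling are assumptions for the example; adapt them to your own deployment and firewall management system.

```
# Hypothetical fragment for an iptables rules file (e.g. /etc/iptables/rules.v4).
# 10.0.0.0/24 is an invented private subnet covering the deployment's nodes.
-A INPUT -p tcp -s 10.0.0.0/24 --dport 2380 -j ACCEPT   # management traffic between nodes
-A INPUT -p tcp -s 10.0.0.0/24 --dport 4000 -j ACCEPT   # management traffic between nodes
```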
## Verify Configuration Files
Do the following on each node in turn:
- Run `/usr/share/clearwater/infrastructure/migration-utils/configlint.py`. This examines the existing `/etc/clearwater/config` file and checks that the migration scripts can handle all the settings defined in it.
- If `configlint.py` produces a warning about a config option, this can mean one of two things:
    - The config option is invalid (for example, because there is a typo, or the option has been retired). Check the configuration options reference for a list of valid options.
    - The config option is valid, but the migration script doesn’t recognise it and won’t automatically migrate it. In this case, make a note of the option now, and add it back in after the rest of the migration has run. (A later step in this process covers that.)
Once you have checked your configuration file and taken a note of any unrecognised settings, continue with the next step.
## Prepare Local Configuration Files
Do the following on each node in turn:
- Run `sudo /usr/share/clearwater/infrastructure/migration-utils/migrate_local_config /etc/clearwater/config`. This examines the existing `/etc/clearwater/config` file and produces a new `/etc/clearwater/local_config` containing only the settings relevant to this node. Check that this file looks sensible.
- Edit `/etc/clearwater/local_config` to add a line `etcd_cluster="<NodeIPs>"`, where `NodeIPs` is a comma-separated list of the private IP addresses of the nodes in the deployment. For example, if your deployment contained nodes with IP addresses 10.0.0.1 to 10.0.0.6, `NodeIPs` would be `10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6`. If your deployment is geographically redundant, this should include the IP addresses of the nodes in both sites.
- If your deployment is geographically redundant, choose arbitrary names for each site (e.g. ‘site1’ and ‘site2’), and set the `local_site_name` and `remote_site_name` settings in `/etc/clearwater/local_config` accordingly. For example, if the node is in ‘site1’, you should have `local_site_name=site1` and `remote_site_name=site2`.
- If the node is a Sprout or Ralf node, run `sudo /usr/share/clearwater/bin/chronos_configuration_split.py`. This examines the existing `/etc/chronos/chronos.conf` file and extracts the clustering settings into a new file called `/etc/chronos/chronos_cluster.conf`. Check each of these files by hand to make sure they look sensible. If the `chronos_cluster.conf` file already exists, the script will exit with a warning; in this case, check the configuration files by hand, and either delete the `chronos_cluster.conf` file and re-run the script, or split the configuration manually. Details of the expected configuration are here.
- Run `sudo touch /etc/clearwater/no_cluster_manager` on all nodes. This temporarily disables the cluster manager (which is installed in the next step) so that you can program it with the current deployment topology.
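To illustrate where the steps above end up, a migrated `/etc/clearwater/local_config` for a node in a two-site geographically redundant deployment might look something like the sketch below. All addresses, hostnames, and the exact set of node-specific settings are invented for the example; your file will contain whatever `migrate_local_config` actually extracted for your node.

```
# Node-specific settings produced by migrate_local_config (values are illustrative)
local_ip=10.0.0.1
public_ip=1.2.3.4

# Settings added by hand in the steps above
etcd_cluster="10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6"
local_site_name=site1
remote_site_name=site2
```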
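If you end up splitting the Chronos configuration by hand, the clustering settings that move into `/etc/chronos/chronos_cluster.conf` might look like the following sketch. The addresses are invented, and the exact format should be checked against the Chronos configuration documentation referred to above.

```
# Hypothetical /etc/chronos/chronos_cluster.conf (addresses are illustrative)
[cluster]
localhost = 10.0.0.1
node = 10.0.0.1
node = 10.0.0.2
```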
## Install Clustering and Configuration Management Services
On each node, run `sudo apt-get install clearwater-management`.
## Upload the Current Cluster Settings
Now you need to tell the cluster manager about the current topology of the various database clusters that exist in a Clearwater deployment. For each of the node types listed below, log onto one of the nodes of that type and run the specified commands.
### Sprout
```
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_memcached_cluster sprout
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_chronos_cluster sprout
```
### Ralf
```
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_memcached_cluster ralf
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_chronos_cluster ralf
```
### Homestead
```
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_cassandra_cluster homestead
```
### Homer
```
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_cassandra_cluster homer
```
### Memento
```
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_memcached_cluster memento
/usr/share/clearwater/clearwater-cluster-manager/scripts/load_from_cassandra_cluster memento
```
## Tidy Up
The final step is to re-enable the cluster manager by running the following command on each node:
```
sudo rm /etc/clearwater/no_cluster_manager
```