DB Cluster Utils

Sometimes it is necessary to carry out maintenance tasks on a cluster of database servers. Servers break down, disks fail from time to time, and new servers need to be added as the load on a cluster grows.

Typing commands in the MySQL command prompt on each server is a cumbersome way to deal with these kinds of tasks.  I'd rather have a Banana Pi set up as a control node where I can use simple commands to manage the entire cluster.

I recently wrote a simple script for setting up a cluster of databases, and I decided to extend it into a more complete program that I can use to do the following:

  • add new servers to the cluster,
  • demote the master,
  • promote a slave,
  • move the database to a new server,
  • start and stop replication,
  • remove a server.

You can download DB Cluster Utils from GitHub at https://github.com/Pyplate/db-cluster-utils.

If anything goes wrong with my database cluster, I'll use this tool to fix it.  This script also provides a Python API which will probably be useful at some point in the future.

DB Cluster Utils is intended to work with MySQL 5.5.  In MySQL 5.6, Global Transaction Identifiers (GTIDs) are used to synchronize servers, so there's no need to keep track of the binary logs on each server.

I don't recommend using this tool with databases that contain critical data.  This program was written mainly for demonstration purposes.  If you see any errors or omissions in it, feel free to comment below.

Install

I've set up a cluster of four Banana Pi servers, and a fifth Pi to use as a control node. Each one is running Raspbian and is set up as described on this page.

All nodes have static IP addresses.  The IP address of my control node is 192.168.0.8, and the IP addresses of the server nodes range from 192.168.0.35 to 192.168.0.38.

I logged into each server using ssh and logged into the MySQL shell so that I could run this query:

GRANT ALL ON *.* TO 'root'@'192.168.0.8' IDENTIFIED BY 'rootpassword';

This allows the root user to access each server from the control node.  On the control node, install the MySQL client and the Python bindings:

sudo apt-get install python-mysqldb mysql-client -y

Next install DB Cluster Utils:

cd ~
git clone https://github.com/Pyplate/db-cluster-utils.git
cd db-cluster-utils

Configuration

There is a configuration file containing the following parameters:

  • database name - the name of the database to be managed
  • master ip - the IP address of the master server
  • control host - the IP address of the node used for controlling the cluster
  • slave ip list - a comma separated list of slave servers
  • root password - the password for the MySQL root account
  • db user pw - the password for the MySQL user account that's used for queries
  • slave user pw - the password for the MySQL user account that's used for replication
  • host username - the username of the account used to log into the server hosts
  • server pool - a list of servers that are available to join the cluster

You can find this configuration file in ~/db-cluster-utils/cluster_utils.conf.  I set it up as follows:

[cluster_utils]

# the name of the database to create
database name = my_db

# IP address of the master server
master ip = 192.168.0.35

# the IP address of the control node
control host = 192.168.0.8

# a list of IP addresses of the slave servers
slave ip list = 192.168.0.36, 192.168.0.37

# root user's password
root password = rootpwd

# database user's password
db user pw = mypassword

# slave user's password
slave user pw = mypassword

# server host username
host username = bananapi

# server pool
server pool = 192.168.0.35,192.168.0.36,192.168.0.37,192.168.0.38

The server pool includes all the servers in the cluster.  The server at 192.168.0.35 is the master server, and the servers at 192.168.0.36 and 192.168.0.37 are slaves.  I left 192.168.0.38 unused to begin with.
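The configuration file above is plain INI, so it can be read with Python's standard configparser module.  Here's a minimal sketch showing how the comma-separated fields break down into lists (the tool itself may parse the file differently):

```python
from configparser import ConfigParser

# A trimmed copy of the cluster_utils.conf shown above
SAMPLE = """\
[cluster_utils]
database name = my_db
master ip = 192.168.0.35
control host = 192.168.0.8
slave ip list = 192.168.0.36, 192.168.0.37
server pool = 192.168.0.35,192.168.0.36,192.168.0.37,192.168.0.38
"""

config = ConfigParser()
config.read_string(SAMPLE)
section = config["cluster_utils"]

# comma-separated values become Python lists
slaves = [ip.strip() for ip in section["slave ip list"].split(",")]
pool = [ip.strip() for ip in section["server pool"].split(",")]
```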

Using DB Cluster Utils

On the control node, open a terminal window, change directory to ~/db-cluster-utils/ and use this command to display the available options:

./db_cluster_utils.py -h
Usage:
db_cluster_utils.py -option [argument]
 -a    --add <slave ip>        add a new slave to the cluster
 -c    --config                view cluster configuration information
 -d    --demote                demote a master to a slave
 -h    --help                  display this help message
 -i    --init                  initialize the cluster based on settings 
                               in cluster_utils.conf
 -m    --move <destination ip> move the database to another server
 -p    --promote <slave ip>    promote a slave to master
 -r    --remove <slave ip>     remove this node from the cluster
 -s    --start                 start replication
 -t    --stop                  stop replication
 -w    --wipe [IP address]     wipe a database from the entire cluster,
                               or from a single server if an IP address 
                               is specified

Commands need to be used in the correct sequence.  At the time of writing, combinations of options haven't been tested, so use one option at a time.

The first time you use DB Cluster Utils, you need to use the '-i' option to initialize the cluster:

./db_cluster_utils.py -i
Database name: my_db
Master server IP: 192.168.0.35
Control Host: 192.168.0.8
Slave IP address list:
['192.168.0.36', '192.168.0.37']
Root password: rootpwd
User password: mypassword
Slave user password: mypassword
192.168.0.35 Created database my_db
192.168.0.35 Created user db_user
192.168.0.35 Granted privileges
192.168.0.35 Created user slave_user
192.168.0.35 Granted privileges
192.168.0.35 Got master status: mysql-bin.000007, 64322
192.168.0.35 Granted replication privilege to slave_user@192.168.0.36 id'd by mypassword
192.168.0.35 Granted replication privilege to slave_user@192.168.0.37 id'd by mypassword
Init server 192.168.0.36
192.168.0.36 Created database my_db
192.168.0.36 Created user db_user
192.168.0.36 Granted privileges
192.168.0.36 Created user slave_user
192.168.0.36 Granted privileges
192.168.0.36 Got master status: mysql-bin.000031, 27173
192.168.0.36 Set read only
CHANGE MASTER TO MASTER_HOST='192.168.0.35', 
                    MASTER_USER='slave_user', 
                    MASTER_PASSWORD='mypassword', 
                    MASTER_LOG_FILE='mysql-bin.000007', 
                    MASTER_LOG_POS=64322;
192.168.0.36 Set master
Init server 192.168.0.37
192.168.0.37 Created database my_db
192.168.0.37 Created user db_user
192.168.0.37 Granted privileges
192.168.0.37 Created user slave_user
192.168.0.37 Granted privileges
192.168.0.37 Got master status: mysql-bin.000030, 25414
192.168.0.37 Set read only
CHANGE MASTER TO MASTER_HOST='192.168.0.35', 
                    MASTER_USER='slave_user', 
                    MASTER_PASSWORD='mypassword', 
                    MASTER_LOG_FILE='mysql-bin.000007', 
                    MASTER_LOG_POS=64322;
192.168.0.37 Set master

This command initializes each server, creating the database, creating users and granting privileges.  

On the master server, replication privileges are granted for the replication users on the other servers.  On each slave, the CHANGE MASTER directive is used to tell the slave which node to use as master.  The slaves are then set read only.
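The CHANGE MASTER statement shown in the output is built from the master's recorded binary log file and position.  A sketch of how the statement could be rendered in Python (the function name and defaults are mine, not the tool's):

```python
def change_master_sql(master_host, log_file, log_pos,
                      user="slave_user", password="mypassword"):
    # Renders the statement each slave runs to point at the master.
    return ("CHANGE MASTER TO MASTER_HOST='%s', MASTER_USER='%s', "
            "MASTER_PASSWORD='%s', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d;"
            % (master_host, user, password, log_file, log_pos))

sql = change_master_sql("192.168.0.35", "mysql-bin.000007", 64322)
```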

This command also creates a file containing information about each node.  The log file and log position for each server are recorded in a file called status_log_records.dat, one line per server in the following format:

ip address:binlog file:position:role

A typical example of the contents of status_log_records.dat might look like this:

192.168.0.35:mysql-bin.000032:8813:master
192.168.0.36:mysql-bin.000031:34486:slave
192.168.0.37:mysql-bin.000030:31273:slave
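A few lines of Python are enough to parse these records back into a structured form; this is just a sketch of the format, not necessarily how the tool reads the file:

```python
def parse_record(line):
    # Each line is ip address:binlog file:position:role
    ip, binlog, position, role = line.strip().split(":")
    return {"ip": ip, "binlog": binlog, "position": int(position), "role": role}

# The example records from status_log_records.dat above
SAMPLE = """\
192.168.0.35:mysql-bin.000032:8813:master
192.168.0.36:mysql-bin.000031:34486:slave
192.168.0.37:mysql-bin.000030:31273:slave"""

records = [parse_record(line) for line in SAMPLE.splitlines()]
master = next(r for r in records if r["role"] == "master")
```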

Insert Data into the cluster

I wrote a script called insert_data.py which creates some tables and stores sample data on the master server.  When I run it, the script connects to the master server, and runs some queries to create tables and insert records:

./insert_data.py
Done
Done
Done
Done
Done
Done
Done
Done
Done
Done
Done
DB Connection closed
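A minimal sketch of what a script like insert_data.py might contain.  The MySQLdb usage and the db_user credentials are assumptions based on the setup described in this article; the table layout matches the users table shown later:

```python
QUERIES = [
    """CREATE TABLE IF NOT EXISTS users (
           username   VARCHAR(32),
           password   VARCHAR(64),
           email_addr VARCHAR(64),
           joined     DATETIME
       )""",
    "INSERT INTO users VALUES ('Anna', '123456', 'anna@email.com', NOW())",
    "INSERT INTO users VALUES ('Bob', '333456', 'bob@addr.com', NOW())",
]

def run_queries(host="192.168.0.35", user="db_user", passwd="mypassword"):
    """Connect to the master and run each query, printing Done per query."""
    import MySQLdb  # from the python-mysqldb package installed earlier
    conn = MySQLdb.connect(host=host, user=user, passwd=passwd, db="my_db")
    cur = conn.cursor()
    for q in QUERIES:
        cur.execute(q)
        print("Done")
    conn.commit()
    conn.close()
    print("DB Connection closed")
```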

Start replication

At this point the master server contains data, but the slaves don't.  When I run db_cluster_utils.py with the '-s' option, the slaves start replicating data on the master server:

./db_cluster_utils.py -s
192.168.0.36 Started slave thread
192.168.0.37 Started slave thread
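Under the hood, starting replication only requires issuing START SLAVE on each slave.  A hedged sketch of how that could be done (this is an assumed helper, not the tool's actual code):

```python
def start_replication(slave_ips, root_password):
    """Issue START SLAVE on each slave in the list, as root."""
    import MySQLdb  # from the python-mysqldb package installed earlier
    for ip in slave_ips:
        conn = MySQLdb.connect(host=ip, user="root", passwd=root_password)
        conn.cursor().execute("START SLAVE")
        conn.close()
        print("%s Started slave thread" % ip)
```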

Now each node contains the sample data.  I used ssh to log into each node in a separate terminal window, then logged into the MySQL shell as root:

mysql -u root -p

I can run the following query in the MySQL shell of any of the servers that has been initialized so far (192.168.0.35 to 192.168.0.37), and I will see the same result:

mysql> SELECT * FROM users;
+----------+----------+-----------------+---------------------+
| username | password | email_addr      | joined              |
+----------+----------+-----------------+---------------------+
| Anna     | 123456   | anna@email.com  | 2014-10-20 00:00:00 |
| Bob      | 333456   | bob@addr.com    | 2014-10-20 00:00:00 |
| Claire   | 444456   | claire@addr.com | 2014-10-20 00:00:00 |
| Dave     | 555456   | dave@addr.com   | 2014-10-20 00:00:00 |
| Emma     | 666456   | emma@addr.com   | 2014-10-20 00:00:00 |
| Fred     | 111456   | fred@addr.com   | 2014-10-20 00:00:00 |
| Gwen     | 222456   | gwen@addr.com   | 2014-10-20 00:00:00 |
| Harry    | 123123   | harry@mail.com  | 2014-10-20 00:00:00 |
+----------+----------+-----------------+---------------------+
8 rows in set (0.00 sec)

This shows that the database has been set up, and replication is working.  I can confirm this by adding a record on the master server at IP address 192.168.0.35:

mysql> INSERT INTO users VALUES ('Isabelle', '123456', 'email@addr.com', CURRENT_DATE());
Query OK, 1 row affected (0.02 sec)

mysql> COMMIT;
Query OK, 0 rows affected (0.01 sec)

Again, when I select all records in the users table, I should see the same result on all servers:

mysql> SELECT * FROM users;
+----------+----------+-----------------+---------------------+
| username | password | email_addr      | joined              |
+----------+----------+-----------------+---------------------+
| Anna     | 123456   | anna@email.com  | 2014-10-20 00:00:00 |
| Bob      | 333456   | bob@addr.com    | 2014-10-20 00:00:00 |
| Claire   | 444456   | claire@addr.com | 2014-10-20 00:00:00 |
| Dave     | 555456   | dave@addr.com   | 2014-10-20 00:00:00 |
| Emma     | 666456   | emma@addr.com   | 2014-10-20 00:00:00 |
| Fred     | 111456   | fred@addr.com   | 2014-10-20 00:00:00 |
| Gwen     | 222456   | gwen@addr.com   | 2014-10-20 00:00:00 |
| Harry    | 123123   | harry@mail.com  | 2014-10-20 00:00:00 |
| Isabelle | 123456   | email@addr.com  | 2014-10-20 00:00:00 |
+----------+----------+-----------------+---------------------+
9 rows in set (0.01 sec)
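Instead of eyeballing query results on every node, replication health can also be checked from the control node with SHOW SLAVE STATUS.  A sketch of such a check (an assumed helper, not part of the tool):

```python
def slave_is_healthy(ip, root_password):
    """Return True if both replication threads are running on the slave."""
    import MySQLdb
    import MySQLdb.cursors
    conn = MySQLdb.connect(host=ip, user="root", passwd=root_password)
    cur = conn.cursor(MySQLdb.cursors.DictCursor)
    cur.execute("SHOW SLAVE STATUS")
    status = cur.fetchone()
    conn.close()
    return (status is not None
            and status["Slave_IO_Running"] == "Yes"
            and status["Slave_SQL_Running"] == "Yes")
```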
