Automate MySQL replication on four nodes

In the previous post I talked about setting up MySQL replication so that two database servers can be kept synchronized with each other.

Sometimes it's necessary to replicate a database across several servers: the more servers there are, the more read queries the cluster can handle.  I'm going to set up a cluster of four database servers and use three of them as slaves, each replicating from the master.

Instead of setting up the databases on all servers by entering a lot of commands at the MySQL prompt, I have written a Python script to set them up.  The script will also set up replication.

The master is at IP address 192.168.0.35, and the slaves are at 192.168.0.36, 192.168.0.37 and 192.168.0.38.  On each server, I created two users - one for replicating data between servers, and another user for executing queries.

I'm using a fifth Banana Pi as a control node to manage the cluster.  This Banana Pi is at IP address 192.168.0.8, and I connected a keyboard, monitor, and mouse to it.  I opened four terminal windows and used each one to ssh to one of the MySQL servers.

Each Banana Pi had an 8GB class 10 memory card with Raspbian installed on it.

Set up the control node

I set the host name and password using bpi-config, and expanded the root partition.  Next I installed the MySQL client, and the Python bindings for it:

sudo apt-get install python-mysqldb mysql-client -y

Set up each server

You can set up one server and copy the SD card image to the others.  The downside is that copying an 8GB image to each SD card takes a long time, and you still have to log into each Pi to change its IP address and host name.  Instead, I installed the default Raspbian image for the Banana Pi on four separate SD cards and set up each one individually.

You need to go through the following steps on each server.

Set a static IP address in /etc/network/interfaces 

auto lo

iface lo inet loopback
iface eth0 inet static
address 192.168.0.35
netmask 255.255.255.0
gateway 192.168.0.1

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp
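Since only the address line differs from node to node, a small helper can generate the static section of /etc/network/interfaces for each server.  This is a hypothetical sketch in the spirit of the post's automation scripts, not part of the downloadable code:

```python
# Generate the /etc/network/interfaces content for one node.
# Only the address differs between servers; the netmask and
# gateway are the same for the whole cluster.

TEMPLATE = """auto lo

iface lo inet loopback
iface eth0 inet static
address {address}
netmask 255.255.255.0
gateway 192.168.0.1

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp
"""

def interfaces_for(address):
    """Return the interfaces file content for a node at the given IP."""
    return TEMPLATE.format(address=address)
```

You could loop over the four addresses and scp each generated file into place.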

Run bpi-config to expand the root partition and change the hostname and password:

sudo bpi-config

Set DNS servers in /etc/resolv.conf

nameserver 8.8.8.8
nameserver 8.8.4.4

(These are Google's DNS servers.  You can use different name servers if you want.)

Update Linux:

sudo apt-get update && sudo apt-get upgrade -y

Install MySQL: 

sudo apt-get install python-mysqldb mysql-server -y

Set up MySQL configuration files

Copy the MySQL server config file from one of the servers to the control node:

scp bananapi@192.168.0.35:/etc/mysql/my.cnf ./my.cnf

Open my.cnf in a text editor and set the bind address:

bind-address        = 0.0.0.0

Uncomment the following variables, and set the values of server-id and binlog_do_db:

server-id        = 1
log_bin            = /var/log/mysql/mysql-bin.log
binlog_do_db        = my_db

Add this line just below the lines with binlog_do_db:

relay-log = /var/log/mysql/mysql-relay-bin.log

Now copy my.cnf from the control node back to each server, including the master.  Make sure to update the server-id value for each server first - it must be unique, e.g. 1 for the master and 2, 3, 4 for the slaves:

scp ./my.cnf bananapi@192.168.0.35:./my.cnf
scp ./my.cnf bananapi@192.168.0.36:./my.cnf
scp ./my.cnf bananapi@192.168.0.37:./my.cnf
scp ./my.cnf bananapi@192.168.0.38:./my.cnf
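Editing the server-id by hand for each copy is error-prone, so it can also be scripted.  This is a hypothetical helper, not part of the article's scripts:

```python
import re

def set_server_id(cnf_text, server_id):
    """Replace the server-id value in my.cnf text.

    Assumes the server-id line has already been uncommented,
    as described above.
    """
    return re.sub(r"server-id\s*=\s*\d+",
                  "server-id        = %d" % server_id,
                  cnf_text)
```

You would read the template my.cnf once, then write out one copy per node with its own id before running scp.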

On each server, change the owner of the config file to root, and copy it to the correct place:

sudo chown root:root my.cnf ; sudo cp my.cnf /etc/mysql/my.cnf

Restart the MySQL server on each node:

sudo service mysql restart

The next step is to grant privileges to the MySQL root user on each server for requests from the control node at IP address 192.168.0.8.  You can do this by logging in to each server and using the GRANT command:

mysql -u root -p

mysql> GRANT ALL ON *.* TO 'root'@'192.168.0.8' IDENTIFIED BY 'T4nk3r!12' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
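If you'd rather not retype the statement on each server, a one-line helper can build it for you.  This is a hypothetical sketch (the password here is a placeholder - substitute your own):

```python
def grant_sql(control_host, root_pw):
    """Build the GRANT statement shown above for a given
    control-node address (MySQL 5.x syntax, as used in this post)."""
    return ("GRANT ALL ON *.* TO 'root'@'%s' IDENTIFIED BY '%s' "
            "WITH GRANT OPTION;" % (control_host, root_pw))
```

Paste the result into each server's MySQL shell, followed by FLUSH PRIVILEGES.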

Now each server can be managed from the control node.  Check that you can connect from the control node to a server by typing this command:

bananapi@control ~ $ mysql -h 192.168.0.35 -u root -p

Once you've entered the root user's database password, you should see a MySQL command prompt.

In the last post on setting up replication, I described how to dump the contents of the master into a file and load that file into the slave database. You only need to do this if you're setting up replication on an existing database.  In this case, I'm setting up replication before any data has been added to the master database, so I can skip this step.

At this point the servers have been set up, but there's no database on any of them, and replication hasn't been configured yet.  The next step is to create a database and its tables on each server.  I could do this by logging into the MySQL shell on each server and typing the commands manually, but it's easier to write a script to do it.

You can download the code that I used here:

Note that this is sample code to illustrate the basics of automating MySQL replication; it is not a finished application.  There are two scripts: db-cluster-init.py, which creates the databases on each node and sets up replication, and clean_cluster.py, which deletes the databases and returns the servers to their original state.

Configuring the Python script

There are some configuration options that you need to set in db-cluster-init.py before you run it.  There's no GUI for this program; it's configured by editing global variables at the top of the file.

The first one is the name of the database, and the next two are the IP address of the control node and the master server:

# the name of the database to create
database_name = "my_db"

# the IP address of the control node
control_host = "192.168.0.8"

# the IP address of the master server
master_ip = "192.168.0.35"

The next three variables are passwords for the three users that will be used on this server:

# root user's password
root_pw = "somepassword"

# database user's password
db_user_pw = "mypassword"

# slave user's password
slave_user_pw = "mypassword"

The IP addresses of the slave servers need to be contained in a list:

# a list of IP addresses of the slave servers
slave_ip_list = ["192.168.0.36", "192.168.0.37", "192.168.0.38"]

There are two variables that you shouldn't change: mysqlbin and binlog.  They store the master server's binary log file name and position.  The code that initializes the master captures these values from SHOW MASTER STATUS and stores them in these variables:

# These strings will store information about the master server's
# log position. Leave them set to "". 
mysqlbin = ""
binlog = ""

The slave servers will start replicating data from this point in the log.  Earlier entries in the master's log will be ignored.

The program starts at the main function, which calls the function init_server for each server.  This function creates a database, creates some tables, and creates three users:

def init_server (host, root_pw, master):
    """
    Initialize a MySQL server as either a master or slave
    """

    conn = MySQLdb.connect(host, 'root', root_pw, '')
    curs = conn.cursor()

    create_db (curs, database_name)
    print "Created database"

    create_users (curs)
    print "Created users"

    # create the database tables
    try:
        create_tables(curs)
        conn.commit()
    except Exception, e:
        conn.rollback()
        print "Failed to create tables"
        print_exception(e)

    if master == True:
        # set up this server as the master
        set_as_master (curs, host)

    else:
        # this server is a slave
        set_read_only(curs)
        # tell the slave which master to use
        use_master (curs, master_ip, slave_user_pw, mysqlbin, binlog)
        start_slave (curs)

    conn.commit()

    # close the connection to the database
    conn.close()
    print "DB Connection closed\n"

Note that create_tables creates some sample tables, not tables from a real application. This function can be customized to create tables for different applications.
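As a concrete illustration, here is a sketch of what the sample create_tables might look like.  Only the column names are grounded in this post (they appear in the SELECT output further down); the types and sizes are assumptions:

```python
# DDL for the sample tables. The 'users' columns match the ones
# queried later in this post; the types are illustrative guesses.
TABLE_DDL = [
    """CREATE TABLE users (
        username   VARCHAR(32),
        password   VARCHAR(64),
        email_addr VARCHAR(128),
        joined     DATETIME
    )""",
]

def create_tables(curs):
    """Create the sample tables through an open cursor."""
    for ddl in TABLE_DDL:
        curs.execute(ddl)
```

Swapping in your own application's DDL here is the main customization point.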

The master parameter passed to init_server is a boolean variable which determines whether the server will be set up as a master or slave.  If the server is a master, a function called set_as_master will be called:


def set_as_master (curs, host):
    """
    Make a host a master
    """

    global mysqlbin
    global binlog

    # Grant replication rights to slaves
    for slave_host in slave_ip_list:
        grant_replication (curs, 'slave_user', slave_host, slave_user_pw)

    # get master status
    curs.execute("FLUSH TABLES WITH READ LOCK")
    curs.execute("SHOW MASTER STATUS")
    master_status = curs.fetchall()
    curs.execute("UNLOCK TABLES")

    mysqlbin = str(master_status[0][0])
    binlog = str(master_status[0][1])

This function uses the SHOW MASTER STATUS query to get the position of the master server's log file.  It also grants replication privileges to the slave_user on the master server.
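The body of grant_replication isn't shown in the excerpt, but judging from the script's output it presumably issues a statement like this (MySQL 5.x syntax, as used elsewhere in this post) - a sketch, not the actual function:

```python
def replication_grant_sql(user, host, password):
    """Build the replication grant that grant_replication
    presumably executes on the master for each slave host."""
    return ("GRANT REPLICATION SLAVE ON *.* TO '%s'@'%s' "
            "IDENTIFIED BY '%s';" % (user, host, password))
```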

If the server is a slave, the following functions are called:

        # this server is a slave
        set_read_only(curs)
        # tell the slave which master to use
        use_master (curs, master_ip, slave_user_pw, mysqlbin, binlog)
        start_slave (curs)

These functions make the slave servers read-only.  The use_master function tells a slave which server to use as its master and where in the master's log to start reading.  The start_slave function starts the slave's replication threads.
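The use_master function presumably builds a CHANGE MASTER TO statement from the captured log coordinates.  Here's a sketch of what that statement would look like (the function body isn't shown in the post, so this is an assumption):

```python
def change_master_sql(master_ip, slave_user_pw, log_file, log_pos):
    """Build the CHANGE MASTER TO statement a slave would run,
    pointing it at the master's captured log file and position."""
    return ("CHANGE MASTER TO MASTER_HOST='%s', "
            "MASTER_USER='slave_user', MASTER_PASSWORD='%s', "
            "MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;"
            % (master_ip, slave_user_pw, log_file, log_pos))
```

After executing this on a slave, START SLAVE begins replication from exactly that point.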

When you run db-cluster-init.py on the control node, you should see output like this:

./db-cluster-init.py 
Init Master
Created database my_db
Created database
Created user db_user
Granted privileges
Created user slave_user
Granted privileges
Created users
Created tables
Granted replication privilege to slave_user@192.168.0.36 id'd by mypassword
Granted replication privilege to slave_user@192.168.0.37 id'd by mypassword
Granted replication privilege to slave_user@192.168.0.38 id'd by mypassword
DB Connection closed

MySQL bin: mysql-bin.000001
Bin log: 3771
Init server 192.168.0.36
Created database my_db
Created database
Created user db_user
Granted privileges
Created user slave_user
Granted privileges
Created users
Created tables
Set Master
DB Connection closed

Init server 192.168.0.37
Created database my_db
Created database
Created user db_user
Granted privileges
Created user slave_user
Granted privileges
Created users
Created tables
Set Master
DB Connection closed

Init server 192.168.0.38
Created database my_db
Created database
Created user db_user
Granted privileges
Created user slave_user
Granted privileges
Created users
Created tables
Set Master
DB Connection closed

At this point, the databases are set up, and replication is enabled.  Check the status of each slave using the following command:

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.0.35
                  Master_User: slave_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 107
               Relay_Log_File: mysql-relay-bin.000008
                Relay_Log_Pos: 210
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 107
              Relay_Log_Space: 512
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
1 row in set (0.00 sec)
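The two fields to watch in that output are Slave_IO_Running and Slave_SQL_Running - both must be Yes.  If you fetch the status rows into a dict, a small check (a hypothetical helper, not part of the article's scripts) could look like this:

```python
def slave_healthy(status):
    """Return True if a SHOW SLAVE STATUS row (as a dict of
    field name -> value) shows both replication threads running
    with no recent error."""
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes"
            and status.get("Last_Errno", 0) in (0, "0"))
```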

There still isn't any data in the database, so on the master, add a record to one of the tables:

USE my_db;
INSERT INTO users VALUES ('Dave', '123456', 'email@addr.com', CURRENT_DATE());
COMMIT;

On any of the slaves, use these commands to make sure that the entry has been replicated:

mysql> USE my_db
Database changed
mysql> SELECT * FROM users;
+----------+----------+----------------+---------------------+
| username | password | email_addr     | joined              |
+----------+----------+----------------+---------------------+
| Dave     | 123456   | email@addr.com | 2014-09-23 00:00:00 |
+----------+----------+----------------+---------------------+
1 row in set (0.00 sec)

 
