In the previous post I talked about setting up MySQL replication so that two database servers can be kept synchronized with each other.
Sometimes it's necessary to replicate a database across several servers: the more servers there are, the more queries can be handled. I'm going to set up a cluster of four database servers, with one master and three slaves. Each slave replicates from the master.
Instead of setting up the databases on all servers by entering a lot of commands at the MySQL prompt, I have written a Python script to set them up. The script will also set up replication.
The master is at IP address 192.168.0.35, and the slaves are at 192.168.0.36, 192.168.0.37 and 192.168.0.38. On each server, I created two users - one for replicating data between servers, and another user for executing queries.
I'm using a fifth Banana Pi as a control node to manage the cluster. This Banana Pi is at IP address 192.168.0.8, and I connected a keyboard, monitor and mouse to it. I opened four terminal windows and used each one to ssh into one of the MySQL servers.
Each Banana Pi had an 8GB class 10 memory card with Raspbian installed on it.
I set the host name and password using bpi-config, and expanded the root partition. Next I installed the MySQL client and the Python bindings for it:
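On Raspbian these come from apt; python-mysqldb is the package that provides the MySQLdb module the scripts use:

```
sudo apt-get update
sudo apt-get install mysql-client python-mysqldb
```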
You could set up one server and copy its SD card image to the others. The downside is that copying an 8GB image to each of the other SD cards takes a long time, and you still need to log into each Pi afterwards to change its IP address and host name. Instead, I installed the default Raspbian image for the Banana Pi on four SD cards and set up each one individually.
You need to go through the following steps on each server.
Set a static IP address in /etc/network/interfaces
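For example, on the master (the address changes per server; the netmask and gateway below are assumptions — use your own network's values):

```
auto eth0
iface eth0 inet static
    address 192.168.0.35
    netmask 255.255.255.0
    # gateway is an assumption; use your router's address
    gateway 192.168.0.1
```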
Run bpi-config to expand the root partition and change the hostname and password:
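```
sudo bpi-config
```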
Set DNS servers in /etc/resolv.conf
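```
nameserver 8.8.8.8
nameserver 8.8.4.4
```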
(These are Google's DNS servers. You can use different name servers if you want.)
On the control node, get a copy of the MySQL server config file from one of the servers, and copy it to the control node:
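Assuming the default config location on Raspbian, /etc/mysql/my.cnf:

```
scp pi@192.168.0.35:/etc/mysql/my.cnf .
```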
Open my.cnf in a text editor and set the bind address:
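By default MySQL binds to 127.0.0.1, which blocks connections from the other Pis and from the control node. Since the same my.cnf will be copied to every server, binding to all interfaces is the simplest choice (an assumption on my part; you could instead set each server's own address):

```
bind-address = 0.0.0.0
```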
Uncomment the following variables, and set the values of server-id and binlog_do_db:
Add this line just below the lines with binlog_do_db:
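After those edits, the replication settings in my.cnf might look like the fragment below. The database name is a placeholder for whatever you'll use in the script, and the relay-log line is my assumption for the line added below binlog_do_db:

```
server-id       = 1                                   # unique per server: 1 on the master, 2-4 on the slaves
log_bin         = /var/log/mysql/mysql-bin.log
binlog_do_db    = mydb                                # replace with your database name
relay-log       = /var/log/mysql/mysql-relay-bin.log  # my assumption for the added line
```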
Now copy my.cnf from the control node back to each server (make sure to update the server-id value for each server):
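For example, sending the copy with server-id = 2 to the first slave (edit server-id and repeat the scp for each server):

```
scp my.cnf pi@192.168.0.36:/home/pi/
```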
On each server, change the owner of the config file to root, and copy it to the correct place:
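Assuming the file landed in the pi user's home directory:

```
sudo chown root:root my.cnf
sudo cp my.cnf /etc/mysql/my.cnf
```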
Restart the MySQL server on each node:
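```
sudo service mysql restart
```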
The next step is to grant privileges to the MySQL root user on each server for requests from the control node at IP address 192.168.0.8. You can do this by logging in to each server and using the GRANT command:
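The command looks like this (replace password with your actual root password):

```
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.0.8' IDENTIFIED BY 'password' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```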
Now each server can be managed from the control node. Check that you can connect from the control node to a server by typing this command:
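For example, to connect to the master:

```
mysql -h 192.168.0.35 -u root -p
```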
Once you've entered the root user's database password, you should see a MySQL command prompt.
In the last post on setting up replication, I described how to dump the contents of the master into a file and load that file into the slave database. You only need to do this if you're setting up replication on an existing database. In this case, I'm setting up replication before any data has been added to the master database, so I can skip this step.
At this point the servers have been set up, but there's no database on any of them, and replication hasn't been configured yet. The next step is to create a database and its tables on each server. I could do this by logging into the MySQL shell on each server and typing the commands manually, but it's easier to write a script to do it.
You can download the code that I used here:
Note that this is sample code to illustrate the basics of automating MySQL replication; it is not a finished application. There are two scripts: db_cluster_init.py, which sets up the databases on each node and configures replication, and clean_cluster.py, which deletes the databases and returns the servers to their original state.
There are some configuration options that you need to set in db_cluster_init.py before you run it. There's no GUI for this program; it's configured by editing the global variables at the top of the file.
The first one is the name of the database, and the next two are the IP address of the control node and the master server:
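In my file they look like this (the variable names here are illustrative — match them to the ones at the top of the script — and the database name is an example):

```python
dbname = "mydb"             # name of the database to create on every server
client_ip = "192.168.0.8"   # the control node
master_ip = "192.168.0.35"  # the master server
```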
The next three variables are passwords for the three users that will be used on this server:
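Again, the names are illustrative and the values are placeholders — set real passwords before running:

```python
# Placeholder passwords -- set real ones before running the script
root_pass = "rootpass"    # MySQL root user
repl_pass = "replpass"    # replication user (slave_user)
user_pass = "userpass"    # the user that runs ordinary queries
```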
The IP addresses of the slave servers need to be contained in a list:
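With the addresses from this cluster (the variable name is illustrative):

```python
slave_ips = ["192.168.0.36", "192.168.0.37", "192.168.0.38"]
```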
There are two variables that you shouldn't change, mysqlbin and binlog. These variables store a location in the master server's log file. The code to initialize a master server will capture values from the master and store them in these variables:
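They start out empty (the initial values shown here are my assumption):

```python
mysqlbin = ""   # master's current binary log file, e.g. "mysql-bin.000001"
binlog = 0      # position within that file
```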
The slave servers will start replicating data from this point in the log. Earlier entries in the master's log will be ignored.
The program starts at the main function, which calls the function init_server for each server. This function creates a database, creates some tables, and creates three users:
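I won't reproduce the script verbatim here, but the sketch below shows the shape of what init_server does: it builds statements like these and runs them over a MySQLdb connection to the target server. The function, table, and user names are illustrative, not the script's exact code:

```python
def init_server_sql(dbname, users):
    """Build the SQL an init_server-style function executes:
    create the database, a sample table, and the users.
    `users` maps user name -> (host, password)."""
    stmts = [
        "CREATE DATABASE IF NOT EXISTS %s" % dbname,
        "CREATE TABLE IF NOT EXISTS %s.products ("
        "id INT AUTO_INCREMENT PRIMARY KEY, "
        "name VARCHAR(64))" % dbname,
    ]
    for user, (host, password) in sorted(users.items()):
        stmts.append("CREATE USER '%s'@'%s' IDENTIFIED BY '%s'"
                     % (user, host, password))
    return stmts
```

The real function passes each statement to cursor.execute() on a connection to the server being initialized.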
Note that create_tables creates some sample tables, not tables from a real application. This function can be customized to create tables for different applications.
The master parameter passed to init_server is a boolean variable which determines whether the server will be set up as a master or slave. If the server is a master, a function called set_as_master will be called:
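A sketch of that logic (illustrative, not the script's exact code): grant replication rights to the replication user, then read the current log coordinates from SHOW MASTER STATUS. The row parsing is shown as a pure function over a fetched tuple, the kind MySQLdb's cursor.fetchone() returns:

```python
def master_setup_sql(repl_user, repl_pass):
    """Statements a set_as_master-style function runs on the master."""
    return [
        "GRANT REPLICATION SLAVE ON *.* TO '%s'@'%%' IDENTIFIED BY '%s'"
        % (repl_user, repl_pass),
        "FLUSH PRIVILEGES",
    ]

def parse_master_status(row):
    """Pull (file, position) out of a SHOW MASTER STATUS row,
    e.g. ('mysql-bin.000001', 107, 'mydb', '')."""
    return row[0], int(row[1])
```

The file name and position are what get stored in the mysqlbin and binlog globals.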
This function uses the SHOW MASTER STATUS query to get the position of the master server's log file. It also grants replication privileges to the slave_user on the master server.
If the server is a slave, the following functions are called:
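Sketched the same way (illustrative, not verbatim), they amount to making the server read only, pointing it at the master with CHANGE MASTER TO, and starting replication:

```python
def use_master_sql(master_ip, repl_user, repl_pass, logfile, position):
    """The CHANGE MASTER TO statement a use_master-style function runs."""
    return ("CHANGE MASTER TO MASTER_HOST='%s', MASTER_USER='%s', "
            "MASTER_PASSWORD='%s', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d"
            % (master_ip, repl_user, repl_pass, logfile, position))

def slave_setup_sql():
    """Make the slave read only, then start the replication threads."""
    return ["SET GLOBAL read_only = ON", "START SLAVE"]
```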
These functions make the slave servers read only. The use_master function tells a slave which server to use as its master, and where to start reading the master's log. The start_slave function starts the slave's replication threads.
When you run db_cluster_init.py on the control node, you should see output like this:
At this point, the databases are set up, and replication is enabled. Check the status of each slave using the following command:
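Log into a slave's MySQL prompt and run:

```
SHOW SLAVE STATUS\G
```

Slave_IO_Running and Slave_SQL_Running should both show Yes.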
There still isn't any data in the database, so on the master, add a record to one of the tables:
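For example, at the master's MySQL prompt (the database and table names here are placeholders for whatever you configured and create_tables made):

```
USE mydb;
INSERT INTO products (name) VALUES ('widget');
```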
On any of the slaves, use these commands to make sure that the entry has been replicated:
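Again with placeholder names — if replication is working, the row inserted on the master appears here:

```
USE mydb;
SELECT * FROM products;
```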