Overview

A Banana Pi can easily host a small website, but it could be overwhelmed by spikes in traffic.  One answer would be to host this site on a PC.  That would be simple, but it would use a lot more power.

I have found that a cluster of eight Raspberry Pi servers can handle relatively large amounts of traffic.  My Raspberry Pi cluster handled 10,000 hits in one day when my site was featured on Hackaday.

This isn't a huge amount of traffic, but it's much more than my Raspberry Pi site usually gets. Page load times stayed consistent while the cluster handled the spike.  The cluster can't serve as much traffic as a PC would be able to, but it fits my needs, and uses less power than a PC. 

How does it work?

My router is configured to forward incoming HTTP requests on port 80 to a load balancer.  The load balancer is a PC running Lubuntu, with Apache configured as a reverse proxy.  It has two network interfaces (both ethernet), one for receiving requests from the router, and another one to forward requests to the servers.  

The servers are on a different subnet from the rest of the local network.  Anything on the local network has a 192.168.0.x IP address, whereas the servers have 192.168.1.x addresses.  The load balancer interface connected to the router has an IP address on the local network, and the interface that communicates with the servers has an address on the servers' subnet.
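If the load balancer's network interfaces are configured through /etc/network/interfaces, the set-up looks something like this (a sketch: the eth0/eth1 names, the 192.168.1.1 address and the gateway line are assumptions, and 192.168.0.2 is the balancer's local address):

# Interface facing the router and the rest of the local network
auto eth0
iface eth0 inet static
address 192.168.0.2
netmask 255.255.255.0
gateway 192.168.0.1

# Interface facing the web server cluster
auto eth1
iface eth1 inet static
address 192.168.1.1
netmask 255.255.255.0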

The load balancer receives HTTP requests from the router, and forwards them to one of the servers in the cluster.  The server handles the request and sends the result back to the load balancer.  You can read more about the load balancer's configuration on my Raspberry Pi Cluster page.

I could use a Banana Pi as a load balancer, and it would probably perform quite well.  However, my load balancer needs to act as a gateway for three different web sites (this site, Pyplate.com and RaspberryWebserver.com), so it needs to be quite powerful.  For that reason, I'm using a PC as my load balancer.

The servers

There are four servers in the cluster.  Each one is simply a Banana Pi with Raspbian running on an SD card.  A copy of this site is installed on each server along with the Nginx webserver. 

One server is a master node.  The site is edited on the master node.  A web API is used to synchronize the files on each node.  This API creates a .tar.gz file containing a backup of the site on the master server, transfers it to the other servers, and unpacks it.

This site is built with the Pyplate CMS, which uses an SQLite database.  SQLite stores data in a file, so the servers' databases can be synchronized simply by copying the database file from the master node to the other servers.
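As a rough illustration, that kind of master-to-worker sync could be scripted along these lines (a minimal sketch with made-up node addresses, user name and archive path; this isn't Pyplate's actual API code):

#!/usr/bin/env python
# Minimal sketch of the master-to-worker sync step.
import subprocess
import tarfile

WORKERS = ["192.168.1.21", "192.168.1.22"]   # assumed worker addresses
SITE_DIR = "/usr/share/pyplate"              # site files, including the SQLite database
ARCHIVE = "/tmp/site-backup.tar.gz"          # hypothetical archive path

# Pack the whole site, database file included, into a .tar.gz archive.
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(SITE_DIR, arcname="pyplate")

# Copy the archive to each worker node and unpack it in place.
for node in WORKERS:
    subprocess.check_call(["scp", ARCHIVE, "pi@%s:/tmp/" % node])
    subprocess.check_call(["ssh", "pi@%s" % node,
                           "tar xzf /tmp/site-backup.tar.gz -C /usr/share/"])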

This wouldn't work for many types of sites, but it works in this situation because there's only one person editing this site, and it doesn't need to be updated very often.  

Setting up a Pyplate server

Pyplate admin area

Start by installing Raspbian on an SD card, and changing the default settings.  You can set up the operating system on a hard disk if you need your server to run as fast as possible, but this isn't necessary.  I'm just using SD cards in my servers.

Edit the interfaces file

I'm going to give each server a static IP address.  I could use DHCP, but then it would be difficult to tell which server has which IP address.  There are only four servers, and I only have to set the IP address once for each server.  

Open the interfaces file with this command:

sudo nano /etc/network/interfaces

On the line that starts with 'iface eth0', the word 'dhcp' needs to be replaced with the word 'static'.  You also need to add an IP address and netmask.  The interfaces file should look like this:

auto lo

iface lo inet loopback
iface eth0 inet static
address 192.168.1.2
netmask 255.255.255.0

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp

To save changes, press control-o and press return.  To close the file, press control-x.

Install Nginx and Pyplate

I'm using Nginx because it has a reputation for serving static content quickly.  Pyplate's page caching mechanism generates static versions of pages, so Pyplate sites are mostly made of static content.  Nginx can only serve static files by itself; it can't run Python code, so it must be used with a WSGI server.

uWSGI is a server which runs in parallel with Nginx.  Any request that Nginx can't handle by serving a static file is passed to the uWSGI server.  The uWSGI server executes the main script in Pyplate, and returns the result to Nginx.
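The sample config file installed later in this guide takes care of this, but conceptually the server block looks something like this (a simplified sketch with an assumed uWSGI address and port, not the exact file that the Pyplate installer ships):

server {
    listen 80;
    root /var/www;

    # Serve a static (cached) file if one exists for the requested URI...
    location / {
        try_files $uri $uri/index.html @pyplate;
    }

    # ...otherwise pass the request to the uWSGI server running Pyplate.
    location @pyplate {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:3031;
    }
}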

Install Nginx and some packages needed by uWSGI and Pyplate:

sudo apt-get install nginx python-libxml2 build-essential python-dev

Download and run the Pyplate installation script for Nginx:

curl http://pyplate.com/install_nginx.sh | sudo bash

Edit the webserver's crontab file:

sudo crontab -e -u www-data

Paste the following command at the end of the file, save the changes, and close the file.

@reboot /usr/share/pyplate/uwsgi /usr/share/pyplate/uwsgi_config.ini

In many Linux distributions the nano editor is used to edit crontab files.  To save changes, press control-o and press return.  To close the file, press control-x.

Back up Nginx's default config file:

sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/default.backup

Replace it with the sample config file:

sudo cp /usr/share/pyplate/sample_configs/nginx/default /etc/nginx/sites-available/default

Make sure that you've made a note of the password generated by the installation script, and then delete this file:

sudo rm /usr/share/pyplate/wsgi-scripts/create_passwd_file.py

Restart your server:

sudo reboot

Log in, and change the password to something more memorable than the random string used as a temporary password.

Note that Pyplate can also be installed with Apache or a simple Python webserver.

Tuning Nginx

I did a few simple tests to improve performance.  I did these tests using an ethernet connection rather than wifi.  I used this command to bombard my Banana Pi with HTTP GET requests:

siege -d1 -c350 -t1m http://192.168.0.5/main.html

When I tried using more than 350 concurrent users, Nginx started dropping requests.  With no tuning, Nginx handled 407.2 transactions per second.

Linux sets a limit on the number of file descriptors that a process can open.  There is a hard limit which can be set by root, and there is a soft limit that can be adjusted by users.  Users can't increase the soft limit beyond the hard limit.  

You can find out what the limit is set to by typing this command (ulimit -n prints the soft limit, and ulimit -Hn prints the hard limit):

ulimit -n
1024

Open /etc/nginx/nginx.conf in a text editor and find worker_connections.  Increase this value to 1024 (or whatever number is printed out by the ulimit command).  When I did this, I ran siege again, and this time the server handled 426.37 transactions per second.

I reduced the number of Nginx worker processes from 4 (the default) to 2.  Nginx only needs one worker process for each CPU core, and adding extra processes doesn't increase performance.  Extra Nginx worker processes are generally considered to be a harmless waste of memory.  However, when I retested the server's performance with only two worker processes, I found that getting rid of the spare processes improved performance: this time the server handled 443.96 transactions per second.

It's also recommended that you reduce the keepalive_timeout value from 65 to 30, although this didn't make any noticeable difference to performance.
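After these changes, the relevant lines in /etc/nginx/nginx.conf end up looking something like this (an excerpt; the rest of the file is left at its defaults):

worker_processes 2;

events {
    worker_connections 1024;
}

http {
    keepalive_timeout 30;

    # the rest of the http section is unchanged
}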

See also: Install Pyplate.

Building the cluster

Building the rack

I built a rack for the Banana Pi boards using blanking plates for mains sockets, and four large bolts.  I made a wooden template from a piece of MDF, and used the template to drill holes in each of the blanking plates.  I drilled small holes for plastic PCB supports which I use to hold each board in place.  The bolts are inserted through the holes at the corners of the blanking plates, and held in place with glue.

I placed the Pi computers in the rack, and connected them to the ethernet switch.

Set up rsync and ssh

The master node uses rsync over ssh to synchronize cached files.

All the pages on this site are cached, meaning each page is stored in a static file in the web root directory tree, so we just need to sync the web root folder.  There's no need to synchronize the databases or scripts on each node.

Set up the master node's ssh keys

This process is initiated when the admin user clicks on the 'Build Cache' button, so it runs as user www-data.  This means user www-data needs ssh keys to log into the other nodes.  

Normally a user's ssh keys would be stored in that user's home directory.  The www-data user's home directory is /var/www.  It isn't safe to put the ssh keys in the web root directory where anyone could download them, so I've put them in /usr/share/pyplate/.ssh.

I used this command to start a shell as user www-data:

exec sudo -u www-data -s

Then I created the .ssh directory and made sure it can only be accessed by www-data:

cd /usr/share/pyplate
mkdir ./.ssh
chmod 700 ./.ssh

The next step is to create the ssh rsa keys:

ssh-keygen -t rsa  -f ./.ssh/id_rsa

You'll be prompted to enter a pass phrase.  Just hit return to leave it blank.  This command will generate a pair of public and private keys in /usr/share/pyplate/.ssh.  

Setting up the worker nodes

On each server, there must be a user that can write to the folders in /var/www.  Rsync on the master node will log in as that user over ssh.

I changed the name of the default user on each server, and added that user to the www-data group:

sudo usermod -a -G www-data node0

Reboot for changes to take effect:

sudo reboot

Change the owner and permissions of /var/www:

sudo chown node0:www-data -R /var/www
sudo chmod g+rw -R /var/www

Create a directory where ssh keys will be stored:

mkdir ~/.ssh
chmod 700 ~/.ssh

Transfer the master node's public key to each of the worker nodes:

cat ./.ssh/id_rsa.pub | ssh node0@192.168.1.20 'cat >> ./.ssh/authorized_keys'
cat ./.ssh/id_rsa.pub | ssh node1@192.168.1.21 'cat >> ./.ssh/authorized_keys'
cat ./.ssh/id_rsa.pub | ssh node2@192.168.1.22 'cat >> ./.ssh/authorized_keys'

You'll be prompted for a password each time you execute these commands.  Once you've transferred the keys you should be able to ssh from the master node to the server nodes without being prompted for a password.  Test your ssh set up by running this command on the master node:

ssh node0@192.168.1.20

If you are prompted for a password, go back and check the previous steps.

Rsync

I modified the code in pyplate to execute a script when the cache is built.  When the admin user clicks on the button to 'Build the Cache' on the Caching page, the cache is built in the normal way, and then a script named sync.sh is called. This script uses rsync to copy the contents of /var/www to each of the worker nodes:

rsync -a --no-perms -e "ssh -i /usr/share/pyplate/.ssh/id_rsa" /var/www/. node0@192.168.1.20:/var/www
rsync -a --no-perms -e "ssh -i /usr/share/pyplate/.ssh/id_rsa" /var/www/. node1@192.168.1.21:/var/www
rsync -a --no-perms -e "ssh -i /usr/share/pyplate/.ssh/id_rsa" /var/www/. node2@192.168.1.22:/var/www

Now when I click on the build cache button in the admin UI, the cache is built and synchronized with the other servers.

At this point, all four nodes are up and running.

Connecting the cluster to the internet

I've built the cluster, I can access each node individually, and I've put the cluster in place in my miniature ARM server farm.  In these pictures, the Banana Pis are in the rack on the left, and the other two racks contain Raspberry Pis which host RaspberryWebserver.com.

ARM server farm

The next step is to make the cluster accessible from the internet.  

Configure the load balancer

The load balancer is a PC running Apache on Lubuntu.  I originally set it up to feed HTTP requests from my router to my Raspberry Pi cluster.  I needed to configure Apache to send traffic for the Raspberry Pi site to that cluster, and traffic for this site to the Banana Pi cluster.  I did this by adding a new virtual host to Apache.

I created a new virtual host file in /etc/apache2/sites-available/banoffeepiserver.conf.  I copied the contents of the default configuration file and pasted them into the new file, and made a few simple changes.  This is what the new virtual host file looks like:

<VirtualHost *:80>
    ServerName  banoffeepiserver.com
    ServerAlias  *.banoffeepiserver.com
    ProxyRequests Off
    <Proxy balancer://bpicluster>
        BalancerMember http://192.168.1.20:80
        BalancerMember http://192.168.1.21:80
        BalancerMember http://192.168.1.22:80
        BalancerMember http://192.168.1.24:80
        AllowOverride None
        Order allow,deny
        allow from all
        ProxySet lbmethod=byrequests
    </Proxy>
    <Location /bpi-balancer-manager>
        SetHandler balancer-manager
        Order allow,deny
        allow from 192.168.0
    </Location>
    <Location /admin/*>
        Order allow,deny
        allow from None
    </Location>
    ProxyPass /bpi-balancer-manager !
    ProxyPass / balancer://bpicluster/
    ErrorLog ${APACHE_LOG_DIR}/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

The ServerName directive tells Apache to use this virtual host for any HTTP request with banoffeepiserver.com in the host field.  

For local testing, the load balancer's IP address can be entered in the hosts file on your PC.  Adding this line to my laptop's hosts file allowed me to see my site when I typed its name into a browser:

192.168.0.2    banoffeepiserver.com

In Linux this file is /etc/hosts.  In Windows, it's C:\Windows\System32\drivers\etc\hosts.

ProxyRequests is turned off so that the load balancer can't be used as an open proxy.

The balancer is defined as bpicluster.  It contains four members defined by their IP addresses.

The balancer module can display a web UI that can be used to monitor and control the cluster. The location of this UI is defined as /bpi-balancer-manager, and access is limited to IP addresses on the local network.

Any path in Pyplate's admin directory is blocked from the outside world.  The admin area of the CMS only needs to be accessible locally.

Finally, the ProxyPass directives tell Apache to pass most requests to the cluster, apart from requests for /bpi-balancer-manager, which are handled by the load balancer.

After saving the new virtual host file, the site has to be enabled:

sudo a2ensite banoffeepiserver
sudo service apache2 restart

The a2ensite command creates a symbolic link from /etc/apache2/sites-available/banoffeepiserver.conf to /etc/apache2/sites-enabled/banoffeepiserver.conf. Apache must be restarted for these changes to take effect.
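Creating the link by hand would have the same effect:

sudo ln -s /etc/apache2/sites-available/banoffeepiserver.conf /etc/apache2/sites-enabled/banoffeepiserver.conf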

Follow this link to see detailed information on how to set up an Apache 2.4 reverse proxy on Ubuntu.

Testing

I did some benchmarking using a PC on my local network to generate HTTP GET requests.  I used the siege command to test the cluster:

siege -d1 -c800 -t1m http://banoffeepiserver.com/main.html

The cluster could handle 1050 transactions per second, with a throughput of 8.11MB/sec.

I haven't done a detailed comparison with my Raspberry Pi cluster because there are so many differences between them.

The Raspberry Pi cluster uses an older version of Pyplate CMS which used CGI scripting.  The latest version of Pyplate uses WSGI instead of CGI, making dynamic pages much faster.  I've also used Nginx on the nodes in this cluster, instead of Apache.

To complicate things further, the load balancer appears to be a bottleneck.  One Banana Pi is capable of handling 440 transactions per second.  With two nodes in the cluster, the transaction rate went up to 910 transactions per second.  When I added a third node to the cluster, there was a relatively small increase in performance, up to 1050 transactions per second.  Adding a fourth node didn't improve performance at all.  While siege was running, the load balancer's CPU utilization was 100%.

There is no benefit in allowing the master node to serve traffic, so I removed its entry from the bpicluster balancer in the banoffeepiserver.conf virtual host file.  In some ways this is a good thing because it makes the master node harder to hack into.

Until I get a new load balancer, it's going to be difficult to determine how powerful the cluster actually is.

Domain name

When I bought the domain name for this site, I pointed its DNS record at my router's public IP address.  If you don't have a static IP address, you should use a dynamic DNS service.

It can take time for DNS changes to propagate to every DNS server.  In theory it can take 24-48 hours for DNS settings to propagate, but in this case it seemed to happen in just a few hours.

Port forwarding

My router is configured to forward incoming HTTP requests on port 80 to the load balancer's local IP address.  I already had this set up for my Raspberry Pi site, so I didn't need to make any changes to my router.

Pyplate Multi-Site

I plan to scale the Banana Pi cluster in order to cope with more traffic.  I also want to use it to host more than one web site.  Before I scale up, I need to prepare the Pyplate CMS to run on a larger architecture.  I want to set up a mass blogging platform where any server in the cluster can serve any page from any site, a bit like Tumblr or Blogger.

Pyplate was originally developed to serve one site per installation.  I've modified it so that a single installation can now serve several sites from the same server.  Each site has its own templates, its own web root directory and database tables, but all sites share the same Python application code in /usr/share/pyplate/wsgi-scripts.  Once several sites have been set up on one server, replicating those sites across a cluster is relatively simple.

When a server receives a request, it now has to determine which site the requested page belongs to.  It does this by checking each HTTP request's host field.  This field contains the domain name of the site, which is used to work out the path to the site data.
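A simplified sketch of the idea, written as a bare WSGI application rather than Pyplate's real code, looks like this:

# Sketch of host-based site selection in a WSGI application (illustration only).
def application(environ, start_response):
    # The Host header identifies which site this request belongs to.
    host = environ.get("HTTP_HOST", "").split(":")[0]

    cms_root = "/usr/share/pyplate/%s" % host   # per-site CMS data
    web_root = "/var/www/%s" % host             # per-site web root

    start_response("200 OK", [("Content-Type", "text/plain")])
    return [("Serving %s from %s and %s\n" % (host, cms_root, web_root)).encode("utf-8")]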

All of a Pyplate site's information used to be stored in /usr/share/pyplate like this:

/usr/share/pyplate/
 |_ backup
 |_ content
 |_ database
 |_ sample_configs
 |_ template
 |_ themes
 |_ wsgi-scripts

Now /usr/share/pyplate contains a directory for each site in the cluster:

/usr/share/pyplate
 |_ 192.168.0.6
 |_ banoffeepiserver.com
 |_ blog.pyplate.com
 |_ linuxwebservers.net
 |_ wsgi-scripts
 |_ www.pyplate.com

These site directories contain all the data for each Pyplate site:

/usr/share/pyplate/banoffeepiserver.com/
 |_ backup
 |_ content
 |_ database
 |_ sample_configs
 |_ template
 |_ themes
 |_ wsgi-scripts

Each site needs its own web root directory.  I usually use /var/www as the web root directory, but now I need to create a new directory for each site in /var/www.  I modified Nginx's default server block in /etc/nginx/sites-available/default (equivalent to a virtual host in Apache) so that the $host variable is used in the root directive:

root /var/www/$host;

Now Nginx will append the host field from each HTTP request to /var/www to generate the document root.  This approach means I can use one generic configuration file for every site, so I don't have to create a new configuration file for each new site that I create.

Now the /var/www directory contains the web root for each of the sites on that server:

/var/www
 |_ 192.168.0.6
 |_ banoffeepiserver.com
 |_ blog.pyplate.com
 |_ linuxwebservers.net
 |_ www.pyplate.com

Pyplate needs to be able to access each site's files.  Pyplate.py contains functions named getCMSRoot and getWebRoot to return the path to a site's files and web root directory. These functions have been modified so that they now append the name of the site to the path that they return:

# get CMS path
# return CMS root directory
#
def getCMSRoot ():
    global host_name
    return "/usr/share/pyplate/%s" % host_name

# getWebRoot
# return web server root directory
#
def getWebRoot ():
    return "/var/www/%s" % host_name

Until now, I've used SQLite to store data in Pyplate sites. I want to use a network database that supports replication, so I modified Pyplate to work with MySQL.  This means I can now put the database and web servers on separate clusters for increased efficiency.

There is still one database, but it now contains data for several sites.  There is a set of tables for each site, and each set is prefixed with the name of the site.  In SQL, '.' is a special character, so any dot in the site name needs to be converted into an '_'.

When an instance of the database object is created, a site prefix is passed to the class constructor:

    def __init__(self, hostname, username, password, dbname, prefix):
        # connect to the MySQL server and keep a cursor for later queries
        self.conn = MySQLdb.connect(hostname, username, password, dbname)
        self.curs = self.conn.cursor()
        # '.' is a special character in SQL, so store the site prefix with
        # dots converted to underscores
        self.prefix = prefix.replace('.', '_')

In subsequent calls using the database object, queries are formatted with the prefix at the start of the table name:

    def get_page_from_db(self, uri_path):

        # substitute the site's table prefix into the query template
        query = "SELECT * FROM {prefix}_pages WHERE path= (%s)"
        sql_str = self.prefix_format(query)

        # look the page up by its URI path
        self.curs.execute(sql_str, (uri_path,))

        row = self.curs.fetchone()

        return row
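The prefix_format helper isn't shown here; presumably it just substitutes the site prefix into the query template, something along these lines (a hypothetical reconstruction, so Pyplate's actual helper may differ):

    def prefix_format(self, query_str):
        # insert the site's table prefix into the query template
        return query_str.format(prefix=self.prefix)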

Importing data

Pyplate databases can be exported to a set of XML files.  I took the XML files from the SQLite version of banoffeepiserver.com, and imported the data into a MySQL database.  The tables were created with the prefix banoffeepiserver_com.

I imported the data using a command line utility that I wrote.  The script can also be used to drop tables and set a random admin password.  The usage information for this tool is as follows:

Usage:
create_pyplate_db.py -site <site name> [options]
 -c    --create      create pyplate tables
 -d    --drop        drop pyplate tables
 -i    --init        init directories for a new site
 -p    --password    generate a password and save it in the database
 -s    --site        site name used as prefix for database tables

This is the command that I used to import the data:

./create_pyplate_db.py -s banoffeepiserver.com -c -p
Setting up database...banoffeepiserver.com
/usr/share/pyplate/banoffeepiserver.com
banoffeepiserver.com
Connected to database
Creating tables
Created tables
Committed tables
importing data
done importing data

This updated version of Pyplate is not available yet.  It still needs to be tested more thoroughly, and there are a few UI issues that need to be addressed.  In time it will be available to download from www.pyplate.com.

Scaling out

Usually the purpose of scaling out is to cope with a high volume of traffic by adding more servers.  I want to try this with my Banana Pi cluster in order to develop a system which I can use on clusters of more powerful machines.

The first step was to develop Pyplate Multi-Site, which allows several sites to be served from a single server.  The next step is to set up Pyplate multi-site on a cluster of servers.  I need to scale up database access, and make sure that all content can be accessed from each web server.

Scaling database access

Pyplate was originally written to work with SQLite, which is not a network database.  I modified Pyplate to work with MySQL so that I could offload the database to another set of servers. This means the web server nodes don't have to use CPU and memory processing database queries, so they can handle more concurrent connections.  

I'm using a four node database cluster that I used to develop a database cluster management utility.  I changed the value of binlog_do_db in /etc/mysql/my.cnf on each server from my_db to pyplate_db.  On the control node, I edited cluster-utils.conf (the configuration file for my database cluster management tool) to include settings for the Pyplate database, and I used the db_cluster_utils.py script's init option to set up the cluster:

bananapi@master ~/db-cluster-utils $ ./db_cluster_utils.py --init
Database name: pyplate_db
Master server IP: 192.168.0.35
Control Host: 192.168.0.8
Slave IP address list:
['192.168.0.36', '192.168.0.37', '192.168.0.38']
Root password: somepassword
User password: mypassword
Slave user password: mypassword
192.168.0.35 Created database pyplate_db
192.168.0.35 Created user db_user
192.168.0.35 Granted privileges
192.168.0.35 Created user slave_user
192.168.0.35 Granted privileges
192.168.0.35 Got master status: mysql-bin.000067, 2382
192.168.0.35 Granted replication privilege to slave_user@192.168.0.36 id'd by mypassword
192.168.0.35 Granted replication privilege to slave_user@192.168.0.37 id'd by mypassword
192.168.0.35 Granted replication privilege to slave_user@192.168.0.38 id'd by mypassword
Init server 192.168.0.36
192.168.0.36 Created database pyplate_db
192.168.0.36 Created user db_user
192.168.0.36 Granted privileges
192.168.0.36 Created user slave_user
192.168.0.36 Granted privileges
192.168.0.36 Got master status: mysql-bin.000032, 13863
192.168.0.36 Set read only
CHANGE MASTER TO MASTER_HOST='192.168.0.35', 
                    MASTER_USER='slave_user', 
                    MASTER_PASSWORD='mypassword', 
                    MASTER_LOG_FILE='mysql-bin.000067', 
                    MASTER_LOG_POS=2382;
192.168.0.36 Set master
Init server 192.168.0.37
192.168.0.37 Created database pyplate_db
192.168.0.37 Created user db_user
192.168.0.37 Granted privileges
192.168.0.37 Created user slave_user
192.168.0.37 Granted privileges
192.168.0.37 Got master status: mysql-bin.000088, 2382
192.168.0.37 Set read only
CHANGE MASTER TO MASTER_HOST='192.168.0.35', 
                    MASTER_USER='slave_user', 
                    MASTER_PASSWORD='mypassword', 
                    MASTER_LOG_FILE='mysql-bin.000067', 
                    MASTER_LOG_POS=2382;
192.168.0.37 Set master
Init server 192.168.0.38
192.168.0.38 Created database pyplate_db
192.168.0.38 Created user db_user
192.168.0.38 Granted privileges
192.168.0.38 Created user slave_user
192.168.0.38 Granted privileges
192.168.0.38 Got master status: mysql-bin.000082, 2382
192.168.0.38 Set read only
CHANGE MASTER TO MASTER_HOST='192.168.0.35', 
                    MASTER_USER='slave_user', 
                    MASTER_PASSWORD='mypassword', 
                    MASTER_LOG_FILE='mysql-bin.000067', 
                    MASTER_LOG_POS=2382;
192.168.0.38 Set master

This command starts database replication on the cluster:

bananapi@master ~/db-cluster-utils $ ./db_cluster_utils.py -s
192.168.0.36 Started slave thread
192.168.0.37 Started slave thread
192.168.0.38 Started slave thread

I exported the databases of a couple of Pyplate sites that I own to XML files, and imported the data from the XML files into the master MySQL server in the database cluster:

bananapi@lemaker /usr/share/pyplate/wsgi-scripts $ ./create_pyplate_db.py --site banoffeepiserver.com -c
Setting up database...banoffeepiserver.com
/usr/share/pyplate/banoffeepiserver.com
banoffeepiserver.com
Connected to database
Creating tables
Created tables
Committed tables
importing data
done importing data
bananapi@lemaker /usr/share/pyplate/wsgi-scripts $ ./create_pyplate_db.py --site pyplate.com -c
Setting up database...pyplate.com
/usr/share/pyplate/pyplate.com
pyplate.com
Connected to database
Creating tables
Created tables
Committed tables
importing data
done importing data

The imported data is replicated on the other MySQL servers in the cluster.

Scaling file access

Each web server needs to be able to access the files (themes, templates and content) for every site hosted on the cluster.  I considered two approaches:

  1. use rsync to copy files across to each server,
  2. store files on network storage.

The rsync option is quite simple to set up.  The downside is that there has to be a complete copy of every site's /var/www directory on each server.  My sites are quite small (in the range of tens of megabytes), so making a complete copy of all sites on each server isn't a problem.  Even using 8GB SD cards, there's enough space to run several Pyplate sites on each server.  In this configuration, one web server is a master which executes rsync to copy data to all the slave servers.  All modifications to files on the web servers should be made on the master server, and rsync will propagate changes to the other servers.

Copying each site's data to every server is not an efficient use of disk space, so it's not a sensible approach for large web sites with huge datasets.  A web site the size of Wikipedia or Facebook cannot be contained on a single server, so using rsync to copy the site's content to each node won't work.  Instead, files that need to be accessed by several servers can be stored on a network storage device.  This can be as simple as a single node with an NFS share, or a group of nodes with a distributed file-system like GlusterFS.
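For example, a single NFS server exporting a shared web root could be mounted on every web server node with an /etc/fstab entry along these lines (the server name and export path here are made up):

nfs-server:/export/www    /var/www    nfs    defaults    0    0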

I decided against using Gluster because it would have been an extra system to maintain and debug.  This site's storage requirements aren't that complicated, and Gluster isn't really necessary.  The Banana Pi nodes that I would have used to run Gluster can be used as web servers instead.  If I was setting up a mass hosting system capable of running thousands of sites, I would use Gluster.

I'm using rsync as described on this page about building the Banana Pi cluster.  I've set up ssh keys so that rsync can run without prompting for a user password.

I considered using rsync in daemon mode so that changes to the master node are flushed through to the rest of the cluster immediately, but this has some disadvantages. When I'm modifying a site, I don't want changes to be flushed through to the cluster while I'm halfway through making changes.  I want to be able to make changes on the master node, and only synchronize the master node with the rest of the cluster when I've finished making updates.  After making changes, I always clear the cache and rebuild it, so I added some code to trigger rsync every time the cache is built.  

I wrote a bash script named sync.sh to run rsync:

#!/bin/bash

rsync -a -r --delete --no-perms -e "ssh -i /path/to/ssh/keys/.ssh/id_rsa"   /usr/share/pyplate/. node0@192.168.1.20:/usr/share/pyplate
rsync -a -r --delete --no-perms -e "ssh -i /path/to/ssh/keys/.ssh/id_rsa"   /var/www/. node0@192.168.1.20:/var/www
rsync -a -r --delete --no-perms -e "ssh -i /path/to/ssh/keys/.ssh/id_rsa"   /usr/share/pyplate/. node1@192.168.1.21:/usr/share/pyplate
rsync -a -r --delete --no-perms -e "ssh -i /path/to/ssh/keys/.ssh/id_rsa"   /var/www/. node1@192.168.1.21:/var/www
rsync -a -r --delete --no-perms -e "ssh -i /path/to/ssh/keys/.ssh/id_rsa"   /usr/share/pyplate/. node2@192.168.1.22:/usr/share/pyplate
rsync -a -r --delete --no-perms -e "ssh -i /path/to/ssh/keys/.ssh/id_rsa"   /var/www/. node2@192.168.1.22:/var/www
rsync -a -r --delete --no-perms -e "ssh -i /path/to/ssh/keys/.ssh/id_rsa"   /usr/share/pyplate/. node3@192.168.1.23:/usr/share/pyplate
rsync -a -r --delete --no-perms -e "ssh -i /path/to/ssh/keys/.ssh/id_rsa"   /var/www/. node3@192.168.1.23:/var/www

This code synchronizes the CMS and web root directories on all nodes in the web server cluster. If caching is always enabled for all sites, then only the /var/www directory needs to be synchronized.  I've updated this script so that it also synchronizes the CMS directories. Sync.sh is called from a Python function which I added to the code that handles caching:

def sync ():

    page_str = ""

    page_str += "Syncing!<br>"

    # run sync.sh and include its output in the page shown in the admin UI
    rsync_cmd = ["/usr/share/pyplate/wsgi-scripts/sync.sh"]
    page_str += subprocess.check_output (rsync_cmd)

    return page_str

When the cache is built, the sync function is called, and the slave servers are synchronized with the master server.  

Testing the cluster with live traffic

My reconfigured Banana Pi cluster has been running for a few weeks, and I have moved several sites to it, including www.pyplate.com, blog.pyplate.com and linuxwebservers.net.

None of these sites gets a lot of traffic, so most of the traffic served by the cluster is for this site, banoffeepiserver.com.

The cluster has been getting a trickle of traffic from Google, but not enough to really test it properly.  At the time of writing, it only serves about 20 hits an hour at peak time. I've done some testing with Siege, but I want to see how the cluster copes with real traffic.

I posted a link to my site in the sysadmin section on reddit.com to generate a surge in traffic. Traffic started ramping up quickly, and peaked at 479 hits per hour.  There were 2000 page views on Sunday evening, and 4507 in the following 24 hours.  

This isn't a lot of traffic (my Raspberry Pi site handles this many hits on a daily basis), but it's best to start small so that I can spot issues and fix them before testing with larger amounts of traffic.

These Ganglia screenshots show the cluster's performance four hours after I posted the link on Reddit.  I posted the link at about 5.15pm on Sunday the 4th of January, and Ganglia shows an increase in traffic at about this time.  These graphs show aggregate statistics for the entire server farm:

ARM server farm statistics

These graphs show statistics for the database cluster and the web server cluster:

Information for the database cluster and the web server cluster

This screenshot shows statistics for the entire cluster during the 24 hour period after traffic started coming in:

Cluster information for 24 hours

And the statistics for the database cluster and web server cluster over 24 hours:

web server cluster statistics over 24 hours

There's an increase in the amount of traffic being served by the web server cluster, but the CPUs appear to be under very little load.  The database cluster wasn't really affected at all. All pages on each site are cached on the web servers, and the CMS only generates pages dynamically when someone requests a page that doesn't exist and gets a 404 response.  

According to Google Analytics the average page load time on Sunday was 6.99 seconds, and 4.30 seconds on Monday. These timing measurements include the time taken to render pages in a browser and download advertisements and images, not just the time taken to download pages from the cluster.

I use UptimeRobot.com to monitor my servers' performance from the outside.  It gets the head section of my site's home page every 5 minutes and shows a graph of response times. This graph stayed pretty flat:

Response times measured externally with UptimeRobot.com

This was not enough traffic to test the cluster thoroughly, but there weren't any major problems.  The increased network traffic was visible in Ganglia, and the cluster handled the load the way I expected it to.
