Wednesday, April 22, 2015

Arm Nginx With 'Keepalived' For High Availability (HA)

Prerequisite

After getting a grasp of the basics of Nginx deployed in a Vagrant environment, which can be found at this post, today I'm going to enhance my load balancer with a tool named 'Keepalived' for the purpose of keeping it highly available (HA).

Keepalived is routing software whose main goal is to provide simple and robust facilities for load balancing and high availability to Linux systems and Linux-based infrastructures.

There are 4 Vagrant VMs, node1 (192.168.10.10) through node4 (192.168.10.13). I intend to deploy Nginx and Keepalived on node1 and node2, whereas NodeJS resides on node3 and node4 to supply the actual web service. A minimal Vagrantfile for this topology is sketched below.
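For reference, the topology can be described with a Vagrantfile roughly like the following sketch (the box name is an assumption; use whatever CentOS box the prerequisite post was built on):

# Vagrantfile (sketch): four CentOS VMs on a private network
Vagrant.configure("2") do |config|
  config.vm.box = "centos65-x86_64"   # assumption: any CentOS 6 box will do

  (1..4).each do |i|
    config.vm.define "node#{i}" do |node|
      node.vm.hostname = "node#{i}"
      node.vm.network "private_network", ip: "192.168.10.#{9 + i}"   # node1 -> .10 ... node4 -> .13
    end
  end
end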

The network interfaces on every node look like this:
[root@node1 vagrant]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ae:97:4f brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
    inet6 fe80::a00:27ff:feae:974f/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:61:c2:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.10/24 brd 192.168.10.255 scope global eth1
    inet6 fe80::a00:27ff:fe61:c2b2/64 scope link 
       valid_lft forever preferred_lft forever

As we can see, all our internal IP communication happens on the 'eth1' interface ('eth0' is the NAT interface Vagrant sets up). Thus we will create our virtual IP on 'eth1'.

Install Keepalived

It's quite easy to deploy on my Vagrant environment (CentOS): `yum install keepalived` does all the work for you.
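Concretely, on both node1 and node2 (CentOS 6 with SysV init scripts, as in my setup), something like:

# on node1 and node2
yum install -y keepalived     # install the package
chkconfig keepalived on       # optional: start it automatically on boot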

Vim '/etc/keepalived/keepalived.conf' to configure Keepalived on both node1 and node2. The two files are identical EXCEPT for the 'priority' parameter: set it to 101 on node1 and 100 on node2 (the node with the higher priority wins the MASTER election).
vrrp_instance VI_1 {
        interface eth1
        state MASTER
        virtual_router_id 51
        priority 101
        authentication {
            auth_type PASS
            auth_pass Add-Your-Password-Here
        }
        virtual_ipaddress {
            192.168.10.100/24 dev eth1 label eth1:vi1
        }
}

In this way, we've set up a virtual interface 'eth1:vi1' with the IP address 192.168.10.100.

Start Keepalived by issuing `/etc/init.d/keepalived start` on both nodes. Running `ip addr` on whichever node started Keepalived first (it becomes the MASTER), we should see an additional, secondary IP address configured on the 'eth1' interface:
[root@node2 conf.d]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:ae:97:4f brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
    inet6 fe80::a00:27ff:feae:974f/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:3d:42:e0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.11/24 brd 192.168.10.255 scope global eth1
    inet 192.168.10.100/24 scope global secondary eth1:vi1
    inet6 fe80::a00:27ff:fe3d:42e0/64 scope link 
       valid_lft forever preferred_lft forever

In my case, node2 is currently the Keepalived MASTER. When node2 is suspended from the host machine with `vagrant suspend node2`, the secondary IP address '192.168.10.100' fails over to node1. After the switch, we should still be able to `ping 192.168.10.100` successfully from any node; a quick check is sketched below.
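To verify the failover by hand (a sketch of the steps; the log grep assumes Keepalived's default syslog output on CentOS):

# on the host machine: suspend the current MASTER
vagrant suspend node2

# on node1: the virtual IP should now appear on eth1
ip addr show eth1 | grep 192.168.10.100

# on node1: VRRP state transitions are logged to syslog
grep Keepalived /var/log/messages | tail

# from any remaining node: the virtual IP should still answer
ping -c 3 192.168.10.100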

Combine With Nginx

Now it's time to take full advantage of Keepalived to keep Nginx (our load balancer) from being a single point of failure (SPOF). `vim /etc/nginx/conf.d/virtual.conf` on both node1 and node2 to revise the Nginx configuration:
upstream nodejs {
    server 192.168.10.12:1337;
    server 192.168.10.13:1337;
    keepalive 64;    # keep up to 64 idle connections to the upstream servers cached in each worker process
}

server {
    listen 80;
    server_name 192.168.10.100
                127.0.0.1;
    access_log /var/log/nginx/test.log;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host  $http_host;
        proxy_set_header X-Nginx-Proxy true;
        proxy_http_version 1.1;            # upstream keepalive requires HTTP/1.1...
        proxy_set_header Connection "";    # ...plus a cleared Connection header
        proxy_pass      http://nodejs;

    }
}

In the 'server_name' directive, we set both the virtual IP managed by Keepalived and the localhost IP. The former is for HA, whereas the latter is for Vagrant port forwarding.
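Before starting (or reloading) Nginx, it's worth validating the configuration; a quick check, assuming the stock CentOS init scripts:

nginx -t               # syntax-check the configuration, including conf.d/*.conf
service nginx start    # or 'service nginx reload' if Nginx is already running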

Launch the Nginx service on both node1 and node2; meanwhile, run NodeJS on node3 and node4. Then we should be able to retrieve the web content from any node via `curl http://192.168.10.100` as well as `curl http://127.0.0.1`. Shutting down either node1 or node2 will not stop any node from retrieving web content via `curl http://192.168.10.100`; a rough end-to-end check follows.
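Putting it all together (a sketch run from any node, with the node names from my setup; assume node1 currently holds the virtual IP):

# the virtual IP answers while both load balancers are up
curl http://192.168.10.100

# on the host machine: take down the load balancer that currently holds the VIP
vagrant suspend node1

# the virtual IP should still answer, now served by the surviving Nginx on node2
curl http://192.168.10.100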



Related Post:
1. Guide On Deploying Nginx And NodeJS Upon Vagrant On MAC


Reference:
1. keepalived Vagrant demo setup - GitHub
2. How to assign multiple IP addresses to one network interface on CentOS
3. Nginx - Server names



