Creating a Highly Available Load Balancer in OpenStack (instead of LBaaS)

Introduction

In our previous article, we introduced the option of using an OpenStack instance as a substitute for the built-in LBaaS (Load Balancer as a Service). One of the basic flaws in the solution described there was that the HAproxy instance it created was itself a single point of failure.

In this article we describe a method for building a highly available load balancer, suitable for production workloads, using the same HAproxy instances.

Architecture

HAproxy can be deployed in a highly available manner. The solution is described in the diagram below:

[Diagram: highly available HAproxy architecture with keepalived and a virtual IP]

There are two instances. Both have HAproxy installed, along with a component named keepalived. Keepalived implements a set of checkers that dynamically and adaptively maintain and manage a load-balanced server pool according to the health of its members.

Keepalived uses the VRRP protocol to maintain the high availability state between the nodes. The protocol creates virtual routers that operate as a group, with one master and one backup; it monitors the health of each node, assigns the appropriate role, and moves the virtual IP (VIP) between the two nodes as needed.
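Because VRRP advertisements are sent as multicast over IP protocol 112, you can observe the election traffic directly from either node. A minimal check, assuming eth0 is the interface carrying VRRP (and that the security group rule described later in this article is in place, so the peer's advertisements actually reach the instance):

tcpdump -i eth0 -n 'ip proto 112'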


Port Creation

The VIP will need a valid IP address – one that does not belong to any other instance.

A port (and IP address) can be created through OpenStack Neutron:

openstack port create --network 2199c2df-c8b9-4487-a67b-23e28c3ff2d3 vip

We will use this port later on for configuration.
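To see which IP address Neutron assigned to the new port (you will need it for the keepalived configuration below), you can inspect the port. A quick check, assuming the port name vip is unique in your project:

openstack port show vip -c fixed_ips

In this article we assume the address that comes back is 192.168.1.3.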

Installation

Since we already described how to install HAproxy in our previous article, we will dive straight into the setup of keepalived; this is a simple step.

yum install -y keepalived
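To survive a reboot, both services should also start automatically at boot. A hedged example, since the exact mechanism depends on your image's init system:

# SysV-init images (e.g. CentOS 6):
chkconfig haproxy on
chkconfig keepalived on

# systemd-based images (e.g. CentOS 7):
systemctl enable haproxy keepalived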

The configuration of the two instances is almost identical, with one slight difference.

On haproxy1, create the keepalived configuration file in /etc/keepalived/keepalived.conf:

! Configuration File for keepalived

global_defs {
    notification_email {
        youremail@yourdomain.com
    }
    notification_email_from haproxy1@yourdomain.com
    smtp_server yoursmtp.yourdomain.com
    smtp_connect_timeout 30
    router_id haproxy
}
vrrp_script haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 71
    priority 100
    advert_int 1
    smtp_alert
    authentication {
        auth_type PASS
        auth_pass YourSecretPassword
    }
    virtual_ipaddress {
        192.168.1.3 dev eth0
    }
    track_script {
        haproxy
    }
}

Here is some more information on the options available in the configuration file (line numbers refer to the file above):

Lines 4-9: Mail settings to allow outbound notifications on state changes.

Line 10: A friendly name for your (virtual) router.

Lines 12-16: Definition of a script used to perform a health check.

Line 20: Network interface definition; here we choose the first network interface, but it can be an adapter of your choice.

Line 21: Definition of the virtual router ID.

Line 22: Priority defines which node is the master and which is the slave; the node with the higher value becomes the master.

Line 24: Notification through SMTP.

Lines 25-28: Authentication mechanism between the two nodes.

Lines 29-31: Definition of the virtual IP address.

Lines 32-34: As soon as the track_script returns a code other than 0 twice in a row, the VRRP instance changes its state to FAULT, removes the 192.168.1.3 VIP from eth0, and stops sending multicast VRRP packets.
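The "killall -0 haproxy" check used in the vrrp_script is worth a quick test. Signal 0 is never actually delivered; killall simply returns exit code 0 if a haproxy process exists and non-zero otherwise, which is exactly what keepalived's track_script expects. You can try it on either node:

killall -0 haproxy && echo "haproxy is running" || echo "haproxy is down"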

On haproxy2, the file will be exactly the same except for Line 22. Set a lower priority than the value set on haproxy1 to make this node the slave:

priority 99

Restart the services on both nodes:

service haproxy restart
service keepalived restart
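At this point you can confirm which node won the election by watching keepalived's state transitions in the system log. A hedged example; on a CentOS-style image keepalived logs to /var/log/messages, but the location varies by distribution:

grep -i keepalived /var/log/messages | tail

On the master you should see the VRRP instance entering MASTER state, while the other node stays in BACKUP state.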


OpenStack configuration

If we had deployed the two HAproxy nodes on physical machines, this would have been sufficient, but since we are deploying on OpenStack, a number of additional steps are required.

Security Groups

By default, traffic in and out of your instances is blocked unless you have explicitly allowed it. You therefore have to define rules that allow the VRRP traffic, that is, open IP protocol 112 (not port 112) to all instances in the LB_group security group.

This can be accomplished with the following commands:

openstack security group create LB_group


openstack security group rule create --protocol 112 --ingress --ethertype IPv4 --src-group LB_group LB_group
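To confirm the rule was created as intended, you can list the rules in the group:

openstack security group rule list LB_group

Remember that both instances' ports must actually be members of LB_group for the rule to apply.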

 

This is what it will look like in Horizon:

[Screenshot: VRRP security group rule in Horizon]

Neutron configuration

By default, Neutron only allows each port to send and receive traffic on a single IP address, tied to the port's MAC address.

You can see this with the following command:

openstack port show 5a8cf410-0fec-438f-8323-77fad89729e8

But the haproxy pair needs to have a VIP, which is an additional IP address. By default, Neutron will not allow a port to respond to packets if it does not have the relevant IP address.

This can be changed by allowing the port to listen to more than one IP.

First we need to find the port IDs to configure. We know that we named our instances haproxy1 and haproxy2, but we need to get the port information for each instance.

nova interface-list haproxy1
 
nova interface-list haproxy2


We then look at each port itself:

openstack port show e19b49d1-86f7-4baa-b3b6-387a463d09ca

openstack port show 26c76001-7dc9-490d-afb4-10beebc471f9

 

Now we add an extra IP (the VIP that was created earlier) to each port.

neutron port-update e19b49d1-86f7-4baa-b3b6-387a463d09ca --allowed_address_pairs list=true type=dict ip_address=192.168.1.3
neutron port-update 26c76001-7dc9-490d-afb4-10beebc471f9 --allowed_address_pairs list=true type=dict ip_address=192.168.1.3
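To verify that the pair was added, you can check the allowed_address_pairs field on each port:

openstack port show e19b49d1-86f7-4baa-b3b6-387a463d09ca -c allowed_address_pairs

The VIP 192.168.1.3 should now appear in the output for both ports.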


Validation

No solution is complete without checking that it actually works. For this we need to go into the instances and verify that the VIP exists and moves between the two instances.
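The quickest way to do this is with the ip tool inside each instance; assuming eth0 as before:

ip addr show eth0

On the node currently holding the master role, the VIP should be listed as a secondary address on eth0.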

Let’s look at the IP addresses on haproxy1:

[Screenshot: ip addr output on haproxy1]

We see that eth0 has two IP addresses: 192.168.1.1 and 192.168.1.3.

On haproxy2 there is only one IP address.

[Screenshot: ip addr output on haproxy2]

When the haproxy service is stopped on haproxy1, we see that the VIP disappears.
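You can trigger the failover yourself; on haproxy1:

service haproxy stop

Within a couple of health-check intervals, keepalived on haproxy1 drops the VIP and the backup node takes it over.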

[Screenshot: VIP removed from haproxy1]

And immediately it moves over to haproxy2.

[Screenshot: VIP now active on haproxy2]

Summary

In this article, we provided a solution for supporting production applications with a redundant load balancing setup, using HAproxy and keepalived. Both components are tried and tested throughout the industry and are used in millions of deployments around the world.


OpenStack is the de facto private cloud solution today, but there are still pieces that are not ready for prime time. This gives you an opportunity to expand on the default capabilities and enhance your overall solution to meet the demanding needs of enterprise and mission-critical applications.
