Load Balancing in OpenStack without LBaaS



One of the fundamental use cases of any cloud platform is to enable a quick and easy way to scale your applications. The easiest way to do that is to place a Load Balancer (LB) in front of them and spread the load.

OpenStack currently offers Load Balancing as a Service (LBaaS), built into Neutron, and it has been in place for quite a while. Unfortunately there are a number of places where LBaaS falls short for those who rely on a load balancer as a critical part of their application stack.

Neutron LBaaS traditionally uses HAproxy, the de facto standard Linux load balancer, as the underlying infrastructure for spreading traffic between instances.


LBaaS Shortcomings

Resiliency

The basic concept of putting things behind a load balancer is twofold. The first goal is to provide the capability to scale your application in a linear fashion. The second is to provide resiliency and redundancy to your application. The way LBaaS (v1) is currently implemented in OpenStack moves your single point of failure out of the realm of the application, only to make the load balancer itself the single point of failure.

Initially LBaaS was provisioned as a single process on the network nodes, with no redundancy. There was no monitoring of the HAproxy process and no way of knowing whether it was functioning correctly. If it crashed, or the network node crashed, all the traffic that was supposed to flow through the LB would stop, and there was no real way of restoring the traffic flow.

This has improved with v2 (also known as Octavia), where instead of a single process, the load balancer function has been moved into instances in their own right – in fact two instances, with a controller managing them. Each time you request a new load balancer, the controller spawns a new set. But as of today that controller is only a single node – again, a single point of failure.


One Port Only

Many people are interested in managing traffic between their instances, and it is not uncommon for an instance to serve traffic on more than one port. With LBaaS today, each and every port that you want to manage requires a separate load balancer. If you have a pair of instances serving different applications on different ports, it soon becomes apparent that the amount of resources used for load balancing can be larger than the resources actually providing the services themselves. In plain HAproxy, by contrast, serving an additional port is a mere 4-5 lines in the haproxy.cfg file.
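Those few lines might look like the following sketch – an extra frontend/backend pair in haproxy.cfg (the port 8080 and the server addresses here are hypothetical, chosen only for illustration):

    frontend app2
        bind *:8080
        default_backend app2-servers

    backend app2-servers
        balance     roundrobin
        server      web1 192.168.100.29:8080 check
        server      web2 192.168.100.30:8080 check

The same HAproxy instance keeps serving port 80 alongside this; no second load balancer is needed.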

Flexibility

The flexibility available with LBaaS today to manage traffic in the load balancers is very limited. There are many options, such as traffic shaping and SSL offloading, that have been available in HAproxy for many years but are not possible when using LBaaS in OpenStack.

Logging

One of the basic principles of management is the ability to see the traffic that is flowing; this is usually accomplished by examining logs. This functionality is not available in OpenStack today.
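With a self-managed HAproxy, by contrast, enabling traffic logs is a couple of lines in haproxy.cfg – a sketch, assuming a syslog daemon is listening on the local socket:

    global
        log /dev/log local0

    defaults
        log     global
        option  httplog

Every request then lands in syslog with the client address, backend server, response code and timing information.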

Solution

How can you overcome all of the problems above? By using the tool the way it was meant to be used in the first place: provisioning your own instances running HAproxy and letting it do the work it was designed for.

It goes without saying that deploying a single HAproxy instance will still leave you with a single point of failure – but this can be addressed by providing a second instance and configuring them as a highly available pair (this configuration is beyond the scope of this article but will be addressed in a future post).

The tool of choice here is Terraform, which is used to provision two Linux instances and install a web server (httpd) on each. The web server hosts a simple static page showing the hostname of the instance. The IP addresses of the instances are then stored and populated on a third instance that is provisioned next. On this instance, HAproxy is installed and the IP addresses of the web servers are injected into the HAproxy configuration to allow load balancing of the web application.
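The web server part of such a Terraform configuration can be sketched roughly as below, using the OpenStack provider's openstack_compute_instance_v2 resource. The image, flavor, key pair and network names are hypothetical placeholders, not the ones from the linked repository:

    resource "openstack_compute_instance_v2" "web" {
      count           = 2
      name            = "web${count.index + 1}"
      image_name      = "centos-7"      # hypothetical image name
      flavor_name     = "m1.small"      # hypothetical flavor
      key_pair        = "mykey"
      security_groups = ["default"]

      network {
        name = "private"                # hypothetical tenant network
      }
    }

The two resulting IP addresses can then be fed into the template that renders haproxy.cfg on the third instance.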

The source code used for the example below can be found here.

After running Terraform, the instances are created.

Each of the web servers is provisioned with the correct software.

Each of the web servers now hosts a simple HTTP page displaying its hostname – one for Web1 and one for Web2.

When we look at the load balancer (lb1) we can see that /etc/haproxy/haproxy.cfg has the following configuration:

    frontend http
        bind *:80
        default_backend web-servers

Here we define HAproxy to listen on port 80; by default, traffic is forwarded to the backend named web-servers:

    backend web-servers
        balance     roundrobin
        option      httpchk GET /
        option      httplog
        server      web1 192.168.100.29:80 check
        server      web2 192.168.100.30:80 check

Here we define the load balancing mechanism and the two nodes that the traffic should be forwarded to, in our case web1 and web2.

When running a continuous query against the load balancer you can see that the traffic alternates between the two servers.
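The roundrobin behaviour is easy to picture. As a tiny illustration (not part of the setup above), this Python sketch mimics how HAproxy rotates requests between the two backends:

```python
from itertools import cycle

# the two backend servers defined in haproxy.cfg
servers = ["web1", "web2"]

# round-robin: each new request goes to the next server in the rotation
rotation = cycle(servers)

def handle_request():
    """Pick the backend that serves the next request."""
    return next(rotation)

# eight consecutive "requests" alternate strictly between web1 and web2
responses = [handle_request() for _ in range(8)]
print(responses)
```

Running it shows the strict alternation you also see when curling the load balancer repeatedly.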

There are more sophisticated ways of automating this setup and automating the registration (and removal) of the web servers from HAproxy, but that is beyond the scope of this article.


Summary

A load balancing solution is a crucial part of your cloud infrastructure. As such, it should be a tool that you can manage in the ways that suit your needs, with the maximum amount of flexibility to accomplish your goals.

Above we showed a basic example of how to use an OpenStack instance with HAproxy installed to load balance your applications, without having to rely on the built-in LBaaS in Neutron.
This solution can be expanded to support a highly available solution – both for your workloads and for the load balancers themselves.
July 12, 2016
