Bootstrapping Your OpenStack Data Center with Terraform

OpenStack has been deployed in the data center for over six years. Today, many of us already know how to provision instances and manage them.

The more complex your deployments become, the more you should be looking at standardizing them. Many organizations today treat their applications as code and maintain them in version control (which is the right thing to do). You should treat your infrastructure as code as well: the definitions of all your underlying resources should live in version control, just like the rest of your applications. Terraform is one of the tools that can help you accomplish this goal (others of note are Ansible, CloudFormation, Chef, Puppet, and SaltStack).

Our usual day-to-day operations assume that the infrastructure and the tenant have already been set up for us: we have our networks, routers, users, and images from which we will provision our instances. The user logs into their environment, and all they need to do is start provisioning instances.

But that is not always the case. Sometimes all you get is a blank slate, and you need to create all of the items above on your own. Usually every user will do this for themselves, using either the CLI or the UI.

Manually setting up all this underlying infrastructure in the tenant is fine as a one-off task. But for operators provisioning multiple tenants daily (and sometimes multiple times per day), this manual work is not sustainable. In keeping with the “Infrastructure as Code” motto from above, this task really should be automated.


Automating Tenant Creation

Terraform has the capability to provision most resources in your OpenStack environment today.

Now let’s get down to bootstrapping your OpenStack tenant. The following code will provision a router and five networks:

  • Router – my-router
  • 192.168.219.0/24 – DHCP addresses 192.168.219.10-250, gateway – 192.168.219.254
  • 192.168.220.0/23 – DHCP addresses 192.168.220.10-221.250, gateway – 192.168.221.254
  • 192.168.222.0/24 – DHCP addresses 192.168.222.10-250, gateway – 192.168.222.254
  • 192.168.223.0/24 – DHCP addresses 192.168.223.10-250, gateway – 192.168.223.254
  • 192.168.224.0/24 – DHCP addresses 192.168.224.10-250, gateway – 192.168.224.254

In order to access the OpenStack API, you first need to source your OpenStack credential file. The file usually resembles the following:

#!/bin/bash

# To use an Openstack cloud you need to authenticate against keystone, which
# returns a **Token** and **Service Catalog**.  The catalog contains the
# endpoint for all services the user/tenant has access to - including nova,
# glance, keystone, swift.
#
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0.  We
# will use the 1.1 *compute api*
export OS_AUTH_URL=https://<my_openstack_API>:5000

# With the addition of Keystone we have standardized on the term **tenant**
# as the entity that owns the resources.
export OS_TENANT_ID=e3a94abefde4438dbde224c5c1ad0
export OS_TENANT_NAME="My-Tenant"

# In addition to the owning entity (tenant), openstack stores the entity
# performing the action as the **user**.
export OS_USERNAME="myuser"
export OS_PASSWORD="mypassword"

You can confirm your access to the OpenStack API by running:

nova list

Moving over to the Terraform provisioning, let’s define some parameters.

# Declare Provider for OpenStack
provider "openstack" {
}

This is the Terraform syntax for declaring which provider you are using; in this case, OpenStack.
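
The empty block tells Terraform to pick up the credentials from the OS_* environment variables you sourced earlier. If you prefer, the provider also accepts the credentials directly; a minimal sketch (the values are placeholders for your own environment):

# Alternative: pass the credentials explicitly instead of relying on
# the OS_* environment variables (placeholder values shown).
provider "openstack" {
  auth_url    = "https://<my_openstack_API>:5000"
  tenant_name = "My-Tenant"
  user_name   = "myuser"
  password    = "mypassword"
}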

#Define Remote State Local backend
data "terraform_remote_state" "remote_state" {
    backend = "local"
    config {
        path = "terraform.tfstate"
    }
}

The remote state backend describes where the results of a Terraform run are stored. In this case, it is a local file named terraform.tfstate in the working directory, and the terraform_remote_state data source lets other configurations (or this one) read values from it.
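
As a hedged example of how that state can later be consumed, a second, separate Terraform configuration (say, one that launches instances into the freshly bootstrapped tenant) could read any outputs exported by this run through the same data source. The path and output name below are hypothetical and assume an output block such as the one sketched later in this article:

# Hypothetical follow-up configuration that reads the bootstrap state file.
# Assumes the bootstrap run defined an output named "router_id".
data "terraform_remote_state" "bootstrap" {
    backend = "local"
    config {
        path = "../bootstrap/terraform.tfstate"
    }
}

# Pre-0.12 syntax: remote state outputs are read as attributes of the data source.
output "bootstrap_router_id" {
    value = "${data.terraform_remote_state.bootstrap.router_id}"
}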

variable "router_name" {
    description = "The name for external router"
    default  = "my-router"
}
variable "network_219" {
  type = "map"
  description = "Variables for the 219 Network"
  default = {
      network_name = "219"
      subnet_name = "219_Subnet"
      cidr = "192.168.219.0/24"
      gateway = "192.168.219.254"
      start_ip = "192.168.219.10"
      end_ip = "192.168.219.250"
  }
}

variable "network_220" {
  type = "map"
  description = "Variables for the 220 Network"
  default = {
      network_name = "220"
      subnet_name = "220_Subnet"
      cidr = "192.168.220.0/23"
      gateway = "192.168.221.254"
      start_ip = "192.168.220.10"
      end_ip = "192.168.221.250"
  }
}
variable "223" {
  type = "map"
  description = "Variables for the 223 Network"
  default = {
      network_name = "223"
      subnet_name = "223_Subnet"
      cidr = "192.168.223.0/24"
      gateway = "192.168.223.254"
      start_ip = "192.168.223.10"
      end_ip = "192.168.223.250"
  }
}

variable "224" {
  type = "map"
  description = "Variables for the 224 Network"
  default = {
      network_name = "224"
      subnet_name = "224_Subnet"
      cidr = "192.168.224.0/24"
      gateway = "192.168.224.254"
      start_ip = "192.168.224.10"
      end_ip = "192.168.224.250"
  }
}

# Declare variable for external network ID
variable "external_netid" {
  description = "Defines if the UUID of the external network"
  default = "c331d4d9-ce0a-472c-944b-347a4656970d3"
}

These are the variables that will be used for each and every subnet. Each subnet needs a network_name, a subnet_name, a network address (cidr), a gateway, and a range of IP addresses that will be allocated by DHCP.

The last variable is one that you will need to provide from your own OpenStack environment: the external network that provides access to the outside world. Its UUID can be retrieved by running the following command, which loops over the networks in your environment and prints the ID of the one defined as your external network:

for i in $(neutron net-list | awk '{print $2}' | grep -v -e '^$' -e '^id$'); do
    neutron net-show $i | grep -q 'router:external.*True' && echo $i
done

Next, we use Terraform to actually provision the resources.

resource "openstack_networking_router_v2" "my_router" {
  name = "${var.router_name}"
  external_gateway = "${var.external_netid}"
}

resource "openstack_networking_network_v2" "219" {
  name = "${var.219["network_name"]}"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "219_Subnet" {
  name = "${var.219["subnet_name"]}"
  network_id = "${openstack_networking_network_v2.219.id}"
  cidr = "${var.219["cidr"]}"
  gateway_ip = "${var.219["gateway"]}"
  enable_dhcp = "true"
  allocation_pools {
    start = "${var.219["start_ip"]}"
    end = "${var.219["end_ip"]}"
  }
}

resource "openstack_networking_router_interface_v2" "219_interface" {
  router_id = "${openstack_networking_router_v2.my_router.id}"
  subnet_id = "${openstack_networking_subnet_v2.219_Subnet.id}"
}

resource "openstack_networking_network_v2" "220" {
  name = "${var.220["network_name"]}"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "220_Subnet" {
  name = "${var.220["subnet_name"]}"
  network_id = "${openstack_networking_network_v2.220.id}"
  cidr = "${var.220["cidr"]}"
  gateway_ip = "${var.220["gateway"]}"
  enable_dhcp = "true"
  allocation_pools {
    start = "${var.220["start_ip"]}"
    end = "${var.220["end_ip"]}"
  }
}

resource "openstack_networking_router_interface_v2" "220_interface" {
  router_id = "${openstack_networking_router_v2.my_router.id}"
  subnet_id = "${openstack_networking_subnet_v2.220_Subnet.id}"
}

resource "openstack_networking_network_v2" "222" {
  name = "${var.222["network_name"]}"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "222_Subnet" {
  name = "${var.222["subnet_name"]}"
  network_id = "${openstack_networking_network_v2.222.id}"
  cidr = "${var.222["cidr"]}"
  gateway_ip = "${var.222["gateway"]}"
  enable_dhcp = "true"
  allocation_pools {
    start = "${var.222["start_ip"]}"
    end = "${var.222["end_ip"]}"
  }
}

resource "openstack_networking_router_interface_v2" "222_interface" {
  router_id = "${openstack_networking_router_v2.my_router.id}"
  subnet_id = "${openstack_networking_subnet_v2.222_Subnet.id}"
}

resource "openstack_networking_network_v2" "223" {
  name = "${var.223["network_name"]}"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "223_Subnet" {
  name = "${var.223["subnet_name"]}"
  network_id = "${openstack_networking_network_v2.223.id}"
  cidr = "${var.223["cidr"]}"
  gateway_ip = "${var.223["gateway"]}"
  enable_dhcp = "true"
  allocation_pools {
    start = "${var.223["start_ip"]}"
    end = "${var.223["end_ip"]}"
  }
}

resource "openstack_networking_router_interface_v2" "223_interface" {
  router_id = "${openstack_networking_router_v2.my_router.id}"
  subnet_id = "${openstack_networking_subnet_v2.223_Subnet.id}"
}

resource "openstack_networking_network_v2" "224" {
  name = "${var.224["network_name"]}"
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "22_Subnet" {
  name = "${var.224["subnet_name"]}"
  network_id = "${openstack_networking_network_v2.224.id}"
  cidr = "${var.224["cidr"]}"
  gateway_ip = "${var.224["gateway"]}"
  enable_dhcp = "true"
  allocation_pools {
    start = "${var.224["start_ip"]}"
    end = "${var.224["end_ip"]}"
  }
}

resource "openstack_networking_router_interface_v2" "224_interface" {
  router_id = "${openstack_networking_router_v2.my_router.id}"
  subnet_id = "${openstack_networking_subnet_v2.224_Subnet.id}"
}

The code above uses the variables defined earlier. The openstack_networking_network_v2 resource creates each network, openstack_networking_subnet_v2 creates its subnet, openstack_networking_router_v2 creates the router, and openstack_networking_router_interface_v2 attaches each subnet to that router.
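
If you want the IDs of the newly created resources to be easy to consume, whether through the terraform_remote_state data source shown earlier or simply by whoever reads the apply output, you can also add output blocks. A minimal sketch:

# Expose the bootstrap resource IDs as Terraform outputs.
output "router_id" {
  value = "${openstack_networking_router_v2.my_router.id}"
}

output "network_219_id" {
  value = "${openstack_networking_network_v2.219.id}"
}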

Unfortunately, it is not currently possible to manage all of the OpenStack resources through Terraform (for example, interaction with the Glance API).

Terraform can still be used to interact with Glance (for example, to upload images), but you will have to use a different mechanism for this purpose: the local-exec provisioner. This is basically a provisioner, attached here to a null_resource, that runs a command on your local machine.


Let’s take the scenario where you would like to upload a custom image into Glance so that your newly created tenants can begin their work without having to import it themselves.

This is accomplished by using the glance client installed on your local machine and having Terraform run a bash command for you (you need to ensure that the client is installed on your machine beforehand).

This is the code that will do this for you:

## Upload image to Glance
resource "null_resource" "my_img" {
  provisioner "local-exec" {
      command = "source /root/openstack.rc; glance image-create --name my_image
 --disk-format=qcow2 --container-format=ovf < /root/my-image.qcow2"
  }
}
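
Note that a null_resource provisioner only runs when the resource is first created. If you want the upload to run again when something changes, for example the image name, you can add a triggers map; a hedged variant of the same resource (it would replace the block above rather than sit alongside it, since the resource name is the same):

# Variant of the block above: re-run the upload whenever the image name changes.
resource "null_resource" "my_img" {
  triggers {
    image_name = "my_image"
  }
  provisioner "local-exec" {
      command = "source /root/openstack.rc; glance image-create --name my_image --disk-format=qcow2 --container-format=ovf < /root/my-image.qcow2"
  }
}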

To run the Terraform configuration, all you need to do is type the following command:

terraform apply

If there is one thing to take away from this article, it is that there are many tools available today to automate the creation of resources in the cloud. Terraform is an open-source tool that is actively developed and widely used across the industry; it enables you to safely and predictably create, update, and improve your production infrastructure.

Using scripts and methods like the ones above, you can provision a tenant with ease, preparing its entire network infrastructure in a repeatable and predictable way.


February 17, 2017
