Separation of Critical Resources in OpenStack. Tenants? Aggregates? Availability Zones?



OpenStack usage is on the rise, and not only for development or test workloads. Enterprises are entrusting their mission-critical applications to their OpenStack infrastructure. The basic assumption that any cloud architect will emphasize when designing your applications for the cloud is that things will fail, all the time. You cannot rely on the underlying cloud infrastructure, no matter who the provider is, to guarantee that your workloads will always be available.

It is important that you understand the different types of segregation options available to you within your OpenStack cloud, how you can use them – and when (and if) they provide an additional level of protection.


Tenants

In OpenStack, you will see the terms tenants and projects used interchangeably – they are one and the same.

One OpenStack tenant can be for Production, another for Test & Dev, a different one for financial applications, and yet another can be used as a staging area. Each of these tenants is completely separate and autonomous from the others, and this includes (but is not limited to) the following:


  • Users
  • Networks
  • Security groups
  • Volumes
  • Object Storage
  • Key pairs
  • Images
  • Instances

They will share the same underlying physical infrastructure, the same Horizon dashboard, and the same OpenStack ‘control plane’ and services – but they are essentially independent entities.

You can assign quotas to each and every one of your tenants, and define different roles for your users – for example, an admin and a regular user – as well as custom roles if you so wish.
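As a quick sketch (the project name, user, and quota values here are illustrative), creating a tenant, capping its quotas, and granting a user a role might look like this with the openstack client:

openstack project create --description "Production workloads" production
openstack quota set --instances 20 --cores 40 --ram 51200 production
openstack role add --project production --user alice member

This assumes a user alice and a member role already exist in your deployment.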

Host Aggregates

Host aggregates are a mechanism for carving an OpenStack deployment into logical units by grouping hosts according to criteria that you choose. This could be a specific model of hardware, servers equipped with SSDs, or machines located in a specific rack.

The aggregate contains compute nodes and whatever metadata you define. Aggregates are not visible to regular users – only to those with administrator rights in the OpenStack deployment.

A compute node can belong to more than one aggregate – which allows more flexibility in managing your cloud.



Placing resources onto specific aggregates is done by attaching metadata to the relevant aggregate and matching that metadata from a flavor. First, create the aggregate:

nova aggregate-create SSD nova
| Id | Name    | Availability Zone | Hosts | Metadata |
| 1  | SSD     | nova              |       |          |

Adding the appropriate metadata to the aggregate:

nova aggregate-set-metadata 1 ssd=true
| Id | Name    | Availability Zone | Hosts | Metadata          |
| 1  | SSD     | nova              | []    | {u'ssd': u'true'} |

Adding hosts to the aggregate:

nova aggregate-add-host 1 node1
| Id | Name    | Availability Zone | Hosts      | Metadata          |
| 1  | SSD     | nova              | [u'node1'] | {u'ssd': u'true'} |

nova aggregate-add-host 1 node2
| Id | Name    | Availability Zone | Hosts                | Metadata          |
| 1  | SSD     | nova              | [u'node1', u'node2'] | {u'ssd': u'true'} |

You then need to associate certain flavors with the aggregate:

nova flavor-create ssd.large 6 8192 80 4
| ID | Name      | Memory_MB | Disk | Ephemeral | VCPUs | RXTX_Factor | Is_Public |
| 6  | ssd.large | 8192      | 80   | 0         | 4     | 1.0         | True      |

Associate the flavor with the correct metadata:

nova flavor-key ssd.large set aggregate_instance_extra_specs:ssd=true

Any instance launched with this flavor will automatically be scheduled onto the hosts defined in the aggregate (provided the AggregateInstanceExtraSpecsFilter is enabled in the Nova scheduler's filter list).


Availability Zones

The end user is interested in the logical segregation of the cloud. They do not care whether a group of hardware is located in Rack #5 on the second floor of a datacenter in Utah. That is what the abstraction of Availability Zones is for.

The cloud admin groups together a set of resources, which can be based on a number of criteria, such as:

  • Geographic Location
  • Hardware
  • Datacenter
  • Rack
  • Chassis

Choose the logical groups that suit your operational model and infrastructure deployment.

Creating an availability zone is a simple task:

nova aggregate-create <aggregate_name> <availability_zone_name>
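For example (the names here are illustrative), exposing a rack of hosts as an Availability Zone and booting an instance into it might look like:

nova aggregate-create rack5 az-rack5
nova aggregate-add-host rack5 node3
nova boot --image IMAGE_ID --flavor 1 --availability-zone az-rack5 server-1

The aggregate and host names are assumptions – substitute values from your own deployment.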

It might seem confusing to create an aggregate when what you actually want is an Availability Zone, but remember: an Availability Zone is just another representation of an aggregate – one that the users of your cloud can actually see and use.

By default, all compute nodes belong to an Availability Zone (named nova), even if the host does not belong to any aggregate – and this default is configurable.
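The relevant option lives in nova.conf; the value shown below is the upstream default:

[DEFAULT]
default_availability_zone = nova

Hosts that are not part of any aggregate with an Availability Zone set are reported under this default zone.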


Affinity / Anti-Affinity Rules

There are cases where you will need more granular control over where the instances you provision are placed, and this is where the Nova scheduler comes into play.

Grouping Instances Together

The ServerGroupAffinityFilter allows you to group certain instances together on a single host. This can be very useful when you would like to have two instances using the shortest possible path of communication to reduce latency and improve performance of your application.

Separating Instances

The ServerGroupAntiAffinityFilter allows you to ensure that the instances will never reside on the same physical server. This is a necessity when you are provisioning parts of a cluster and you want to make sure that the application will continue to operate, even if a single physical host disappears.
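Both filters must be enabled in the Nova scheduler configuration for these policies to take effect. A nova.conf snippet might look like the following (the filter list is abbreviated for clarity – keep whatever other filters your deployment already uses):

[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter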

ServerGroups are managed through the command line – there is no UI option for this. The following command creates a ServerGroup with the affinity policy, meaning its members should stay together on the same host:

nova server-group-create --policy affinity besties

This command creates a ServerGroup that will ensure the instances will never run on the same host:

nova server-group-create --policy anti-affinity enemies

To add an instance to a ServerGroup, you need to pass a scheduler hint referencing the appropriate ServerGroup by its UUID (displayed by nova server-group-list):

nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1
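Similarly, booting two members of a cluster with the anti-affinity group created earlier guarantees they land on different physical hosts. The UUID below is a placeholder for the one nova server-group-list shows for the enemies group:

nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID cluster-node-1
nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID cluster-node-2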


We have described a number of ways you can protect, segregate, group, or ‘shield’ your critical workloads in an OpenStack cloud. It is important to understand the different options you have at your disposal, how to make use of them, and which of these options suits your requirements for providing the highest possible availability for your applications running in the cloud.


August 15, 2016
