[Step-by-Step Guide] How to Export Your EC2 Instances to Your Openstack Cloud



We are very much aware that all public cloud providers are interested in our business and therefore make it relatively simple to migrate our workloads into their cloud. But migrating from any cloud provider to your on-premises cloud is not as straightforward as you might think. In fact, the process is complicated.

You might ask yourself: why would I want to migrate instances off AWS into my private cloud?

Isn’t the world moving in the opposite direction, i.e., moving everything to the public cloud? Well, yes, the general trend today is to move workloads from on-premises locations into AWS. But there is a caveat you need to be aware of.

Not all of your workloads are suitable for running on a public cloud provider. An increasing number of companies have decided that the public cloud is not a suitable solution for their products and services (Gitlab is one such example). This could be because of cost, or because of performance requirements. When that happens, you will need a plan of action to migrate workloads that currently reside on AWS to your private OpenStack on-premises cloud.

This article walks through the steps required to export an instance out of AWS and into your OpenStack cloud.

Deploying across multiple OpenStack providers? Discover how Heat templates can simplify the process.


There are several basic requirements for exporting your instances from AWS.

AWS CLI: A number of operations performed in this article cannot be done from the AWS console UI and require API access using the AWS CLI.

S3 Bucket: An AWS S3 bucket must be available for your use in order to export and download the instance images from AWS.


The biggest limitation is stated in plain sight in the AWS documentation.

When attempting to export an instance that was not imported into AWS, you will encounter an error:

[root@jumpbox ~]# aws ec2 create-instance-export-task --instance-id i-02651f8e6d0b4378c --target-environment vmware --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=my-b1-bucket,S3Prefix=prefix

An error occurred (NotExportable) when calling the CreateInstanceExportTask operation: Only imported instances can be exported.

To overcome this, you need to export the image in a different way. (If the image was previously imported into AWS, then following the documentation steps above will suffice.)

Use the “old-fashioned” way of exporting a disk image with dd.


Create a new volume and attach it to the instance. The volume must be at least as big as the original disk that you would like to export.
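Since EBS volumes are sized in whole GiB and the image file will live on a filesystem (which has its own overhead), it is worth sizing the export volume slightly larger than the source disk. A minimal sketch of that arithmetic; the function name and the one-GiB overhead figure are our own illustrative choices:

```python
import math

def min_export_volume_gib(source_disk_bytes, overhead_gib=1):
    """Smallest EBS volume (in GiB) that can hold a raw image of the
    source disk, plus a little headroom for filesystem overhead."""
    # EBS volumes are allocated in whole GiB, so round up.
    image_gib = math.ceil(source_disk_bytes / 2**30)
    return image_gib + overhead_gib

# An 8 GiB root disk needs at least 9 GiB by this rule; the 16 GiB
# volume used below leaves comfortable headroom.
print(min_export_volume_gib(8 * 2**30))
```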

Create the volume with the following command:

[root@jumpbox ~]# aws ec2 create-volume --size 16 --region us-west-1 --availability-zone us-west-1b --volume-type gp2
{
    "AvailabilityZone": "us-west-1b",
    "Encrypted": false,
    "VolumeType": "gp2",
    "VolumeId": "vol-0c6df10a4d2d2a408",
    "State": "creating",
    "Iops": 100,
    "SnapshotId": "",
    "CreateTime": "2016-12-22T20:08:54.131Z",
    "Size": 16
}

Attach the volume to your instance:

[root@jumpbox ~]# aws ec2 attach-volume --volume-id vol-0c6df10a4d2d2a408 --instance-id i-09ec9d3f6f91eead5 --device /dev/sdf
{
    "AttachTime": "2016-12-22T20:11:21.068Z",
    "InstanceId": "i-09ec9d3f6f91eead5",
    "VolumeId": "vol-0c6df10a4d2d2a408",
    "State": "attaching",
    "Device": "/dev/sdf"
}

Partition the newly added disk (for example, with fdisk or parted), creating a single partition, /dev/xvdf1.

Create a file system on the new disk:
mkfs.ext4 /dev/xvdf1

Create a temporary directory and mount the disk:
mkdir /tmp/disk
mount /dev/xvdf1 /tmp/disk

Important: Ensure you know the root password for the instance being exported; this can be critical. If there are complications with the export, you will need to log in locally and fix potential problems. Usually, the root password on a provisioned instance in AWS is not known because all authentication is done with SSH keys.

Copy the disk to an img file (note that the source disk is live, so stop any services that write to it and run sync first to reduce the risk of an inconsistent image):
dd if=/dev/xvda of=/tmp/disk/disk.img
16777216+0 records in
16777216+0 records out
8589934592 bytes (8.6 GB) copied, 133.599 s, 64.3 MB/s
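The dd output above lets you estimate how long a copy will take before you start one: the duration is simply the disk size divided by the observed throughput (dd reports decimal megabytes per second). A small sketch, with the function name being our own:

```python
def dd_duration_seconds(disk_bytes, throughput_mb_per_s):
    """Rough time estimate for a raw dd copy.
    dd reports throughput in decimal MB/s (1 MB = 1,000,000 bytes)."""
    return disk_bytes / (throughput_mb_per_s * 1_000_000)

# The 8589934592-byte disk above at the observed 64.3 MB/s:
print(round(dd_duration_seconds(8589934592, 64.3), 1))  # ~133.6 s, matching the dd output
```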

This will take some time (and the following steps as well) depending on the size of the image involved, so be prepared to wait until it completes.

Export the disk file to S3:
This should be run from the instance where the AWS CLI is installed:

[root@ip-192-168-224-150 ~]# aws s3 cp /tmp/disk/disk.img s3://my-b1-bucket/
upload: ../tmp/disk/disk.img to s3://my-b1-bucket/disk.img
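A side note on the upload above: the AWS CLI performs large S3 transfers as multipart uploads, with a documented default part size of 8 MB (configurable via the s3.multipart_chunksize setting), so a failed part is retried individually rather than restarting the whole transfer. A sketch of the arithmetic, with the function name ours for illustration:

```python
import math

# 8 MB is the AWS CLI's documented default multipart chunk size.
DEFAULT_CHUNK = 8 * 1024 * 1024

def multipart_part_count(file_bytes, chunk=DEFAULT_CHUNK):
    """Number of parts the AWS CLI would split an upload into."""
    return math.ceil(file_bytes / chunk)

# The 8589934592-byte disk.img above:
print(multipart_part_count(8589934592))  # 1024 parts
```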

Copy to your OpenStack environment:
Next, you need to copy the file to your OpenStack environment so that the image can be imported.

Using the same AWS CLI, copy the disk image from S3:
[root@openstack]$ aws s3 cp s3://my-b1-bucket/disk.img /tmp/ami/
download: s3://my-b1-bucket/disk.img to ../tmp/ami/disk.img

Import image into OpenStack

Using the OpenStack Glance client, import the image into your OpenStack environment:
glance image-create --disk-format raw --container-format bare --name myami --file /tmp/ami/disk.img --progress

Once the image has been imported, provision a new instance from the Glance image:

[root@openstack]$ nova boot --image ac2e35bf-11a1-445f-b0dc-fd56e382a46b \
--flavor ac13f762-9eed-44a3-a620-302f51d86933 \
--nic net-id=83294637-b4a9-44b4-a113-9c1ec050cfde mytest

You can see the console of your instance within OpenStack:

After applying the correct security groups, you should be able to SSH into the instance as well.

[root@openstack]$ ssh ec2-user@
Last login: Thu Dec 22 21:35:18 2016 from host-192-168-104-10.mydomain.com

   __| __|_ )
   _| (   /  Amazon Linux AMI

[ec2-user@host-192-168-104-16 ~]$

Learn more about high availability in OpenStack.


The process works, but it is currently cumbersome. There are a number of things that you must take into account when embarking on such an exercise:

  1. Depending on the size of your instance, this could take hours per instance. You are copying the disk block-by-block from one platform to another, and the images are not small.
  2. The process is not automated. In an upcoming post, we will demonstrate a way to wrap all these steps into an automated process.
  3. There could be differences in the underlying platforms and hypervisors. AWS and OpenStack are not the same, so migrated instances may run into driver or device issues, both during the move and over time.
  4. Cost. The use of resources in AWS costs money:
    1. S3 to store the disk image
    2. Traffic to copy the file
  5. This will not work on all operating systems. Some Linux flavors have been tested; Windows operating systems were not.
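To put rough numbers on the cost point above, a back-of-the-envelope estimate covers S3 storage for the image plus the one-time data transfer out. The per-GB rates below are illustrative placeholders only, not current AWS pricing; check the S3 and data-transfer price lists for your region:

```python
def egress_and_storage_cost(image_gb, storage_days,
                            s3_gb_month=0.023, egress_gb=0.09):
    """Back-of-the-envelope AWS cost (USD) for one exported image.
    The default per-GB rates are placeholder figures for illustration;
    actual prices vary by region and change over time."""
    # Storage is billed per GB-month; prorate by days held.
    storage = image_gb * s3_gb_month * (storage_days / 30)
    # Transfer out of AWS is billed once, per GB downloaded.
    egress = image_gb * egress_gb
    return round(storage + egress, 2)

# An 8.6 GB image kept in S3 for 3 days, then downloaded once:
print(egress_and_storage_cost(8.6, 3))  # well under a dollar at these rates
```

The takeaway is that egress, not storage, dominates, and the bill scales linearly with the number and size of instances you export.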


As you can see, the process of migrating from AWS to your own private OpenStack cloud is not a simple one. It is cumbersome and involves multiple steps, multiple tools, and a fair amount of time. Still, the ability to migrate your data and your workloads out of the public cloud is an important capability to consider as part of your enterprise hybrid cloud strategy, allowing greater flexibility in the day-to-day operations of your business.

January 5, 2017
