How to Automate the Export of AWS EC2 to Your Openstack Cloud


In the previous article, [Step-by-Step Guide] How to Export Your EC2 Instances to Your Openstack Cloud, we described the steps required to export an instance out of AWS and import it into your OpenStack cloud.

The process required a significant number of manual steps, making it error prone and impractical for anything more than a one-off export, which is of little use in a cloud management strategy. To keep your options open, you need a way to perform this export procedure on a regular basis, with little or no manual intervention.

In this article, we provide an example of a script that automates the export of an instance from AWS to OpenStack.

Prerequisites

You will need the following elements in order to execute this script:

  1. A Linux instance that you can access through SSH. We will refer to this instance as the Worker.
  2. The Worker instance must:
    1. Be able to access both your OpenStack cloud API endpoints and the AWS endpoints. (If your OpenStack cloud is not publicly accessible, then it makes sense that this instance resides on the corporate network with access to both.)
    2. Have the following software installed (with all their prerequisites)
      1. AWS CLI
      2. OpenStack Glance and Nova clients
    3. Have enough disk space available to download the exported image from AWS to a local disk.
  3. Credentials to AWS, already sourced and loaded into the environment variables.
  4. Credentials to OpenStack, already sourced and loaded into the environment variables. (A quick way to verify the tooling and credentials on the Worker is sketched just after this list.)
  5. The ID of the instance that you want to export from AWS (in the format i-09ec9d3f6f91eead5).
  6. An S3 bucket that already exists in the same region where your instance is located.
  7. The instance that is being exported must also have the AWS CLI installed. (This is usually the case when the instance is an AWS AMI – if not – then the AWS CLI needs to be installed on the instance.)
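
Before running the script, it is worth confirming that the Worker actually has the tooling and credentials in place. The following is a minimal sketch of such a check; the exact client commands may vary with the versions of the AWS CLI and OpenStack clients installed on your Worker.

## Quick prerequisite check on the Worker (sketch only)
aws --version || echo "AWS CLI is not installed"
glance --version || echo "Glance client is not installed"
nova --version || echo "Nova client is not installed"

## Verify that the AWS and OpenStack credentials are loaded into the environment
env | grep -E 'AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY|AWS_DEFAULT_REGION'
env | grep -E 'OS_AUTH_URL|OS_USERNAME|OS_PROJECT_NAME|OS_PASSWORD'

## Confirm that both APIs are reachable with the loaded credentials
aws sts get-caller-identity
glance image-list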

Learn more about how to monitor EC2 instances.

Export Script

The script should be run from the Worker instance:

#!/bin/bash
##
## This script automates the process of exporting a single AWS instance and
## importing it as a Glance image into OpenStack
##


progname=`basename "$0"`

##Check for parameters
if [ $# -ne 5 ]
then
   echo "Not all arguments were supplied. Please provide all the arguments to the script."
   echo "Usage: $progname <instance-id> <bucket-name> <aws_access_key_id> <aws_secret_access_key> <default_region>"
   echo "For example: $progname i-09ec9d3gd731eead5 my-bucket AKIXVSDESIIDIH5OBXZA s4Zss3tF0fFF9FOOISIISwHrWVs3VG us-west-1"
   echo "Script now exiting....."
   exit 1
fi

INSTANCE_ID=$1
BUCKET_NAME=$2
AWS_ACCESS_KEY_ID=$3
AWS_SECRET_ACCESS_KEY=$4
AWS_DEFAULT_REGION=$5

The script requires five parameters and will fail to launch if any of them is missing:

  1. The AWS ID of the instance
  2. The S3 Bucket Name where the exports will be stored
  3. Your AWS Access KEY ID
  4. Your AWS Secret Access Key
  5. Your Default AWS region

## Get Volume and PublicIP information from instance
VOL_ID=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query 'Reservations[].Instances[].BlockDeviceMappings[].Ebs.VolumeId' | grep vol | sed -e 's/^[ \t]*//' -e 's/[ \t]*$//' -e 's/\"//g')

PUBLIC_IP=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID | grep PublicIpAddress | sed -e 's/^[ \t]*//' -e 's/[ \t]*$//' -e 's/\"//g' -e 's/PublicIpAddress: //g' -e 's/,//g')

— The AWS CLI calls return a JSON object with all the information about the instance; sed is used to zoom in on the volume ID and the public IP address of the instance.
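
As an aside, the same values can usually be retrieved without grep and sed by letting the AWS CLI do the filtering itself with --query and --output text. A hedged alternative, assuming the instance has a single EBS volume and a public IP address:

## Alternative: have the AWS CLI return plain text instead of JSON
VOL_ID=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query 'Reservations[].Instances[].BlockDeviceMappings[].Ebs.VolumeId' --output text)
PUBLIC_IP=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query 'Reservations[].Instances[].PublicIpAddress' --output text)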

## Get volume size and type
VOL_TYPE=$(aws ec2 describe-volumes --volume-ids $VOL_ID | grep VolumeType | sed -e 's/^[ \t]*//' -e 's/[ \t]*$//' -e 's/\"//g' -e 's/VolumeType: //g' -e 's/,//g')

VOL_SIZE=$(aws ec2 describe-volumes --volume-ids $VOL_ID | grep Size | sed -e 's/^[ \t]*//' -e 's/[ \t]*$//' -e 's/\"//g' -e 's/Size: //g' -e 's/,//g')

VOL_AZ=$(aws ec2 describe-volumes --volume-ids $VOL_ID | grep AvailabilityZone | sed -e 's/^[ \t]*//' -e 's/[ \t]*$//' -e 's/\"//g' -e 's/AvailabilityZone: //g' -e 's/,//g')

— The commands above use the AWS CLI to retrieve specific information about the volume that is attached to the instance: its type, size, and Availability Zone. These are all required for the creation of another volume with the same characteristics.

## Create new Volume
NEW_VOL=$(aws ec2 create-volume --size $VOL_SIZE --region $AWS_DEFAULT_REGION --availability-zone $VOL_AZ --volume-type $VOL_TYPE | grep VolumeId | sed -e 's/^[ \t]*//' -e 's/[ \t]*$//' -e 's/\"//g' -e 's/VolumeId: //g' -e 's/,//g')

— A new volume with the same characteristics is created and attached to the instance as a second disk. We then need to partition, format, and mount it so the Linux operating system can use it as a temporary storage location for cloning the original disk.

## Attach Volume to instance
aws ec2 attach-volume --volume-id $NEW_VOL --instance-id $INSTANCE_ID --device /dev/xvdf

## Partition Disk

ssh ec2-user@$PUBLIC_IP '(echo n; echo p; echo 1; echo ""; echo ""; echo w) | sudo fdisk /dev/xvdf'

## Format disk and Mount

ssh ec2-user@$PUBLIC_IP 'sudo mkfs.ext4 /dev/xvdf1; sudo mkdir /tmp/disk; sudo mount /dev/xvdf1 /tmp/disk'

## Copy disk to Image file

ssh ec2-user@$PUBLIC_IP 'sudo dd if=/dev/xvda of=/tmp/disk/disk.img'

— The commands above attach the new volume to the instance and then, over SSH, partition, format, and mount it, and copy an image of the original disk to an image file on the new volume.
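
Note that create-volume and attach-volume return before the volume is actually ready, so a more robust version of the script would wait for the volume to settle before partitioning it. A minimal sketch using the AWS CLI's built-in waiters, to be placed after the create and attach calls respectively:

## Wait until the new volume is created, and then until it is fully attached (sketch only)
aws ec2 wait volume-available --volume-ids $NEW_VOL
aws ec2 wait volume-in-use --volume-ids $NEW_VOL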

## Add AWS credentials to remote instance

ssh ec2-user@$PUBLIC_IP "aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID; aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY; aws configure set default.region $AWS_DEFAULT_REGION"

## Copy image to S3 *FROM* instance
ssh ec2-user@$PUBLIC_IP "aws s3 cp /tmp/disk/disk.img s3://$BUCKET_NAME/"

— In order to copy the newly created image to S3, the instance needs the correct credentials to push a file to the S3 bucket. These are injected into the instance above, and then the file is copied.
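
If you want the script to fail fast when the upload does not succeed, you can also verify from the Worker that the object actually landed in the bucket before moving on. A small sketch:

## Confirm the exported image is present in the bucket before cleaning up (sketch only)
aws s3 ls s3://$BUCKET_NAME/disk.img || { echo "Upload to S3 failed"; exit 1; }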

## unmount disk and remove from instance

ssh ec2-user@$PUBLIC_IP 'sudo umount /tmp/disk'
aws ec2 detach-volume --volume-id $NEW_VOL
aws ec2 delete-volume --volume-id $NEW_VOL

— Next we clean up: there is no reason to keep this additional volume around and keep on paying for it. It was only a temporary measure and can now be removed.
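
As with the attach step, the detach operation is asynchronous, and delete-volume will fail if the volume is still attached. A hedged refinement is to wait for the detach to complete before deleting:

## Wait for the detach to finish before deleting the temporary volume (sketch only)
aws ec2 wait volume-available --volume-ids $NEW_VOL
aws ec2 delete-volume --volume-id $NEW_VOL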

## Copy image from S3 *TO* Worker

mkdir /tmp/ami
aws s3 cp s3://$BUCKET_NAME/disk.img /tmp/ami/

## Add image to glance

glance image-create --disk-format raw --container-format bare --name my-ami --file /tmp/ami/disk.img --progress

— Once the image is copied to the Worker, it can be imported into your OpenStack environment with the glance command.
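
Once the image appears in Glance, you can confirm the import and, if you like, boot a test instance from it. A minimal sketch, in which the flavor name and network ID are placeholders for values from your own OpenStack project:

## Verify the import and boot a test instance from the new image (sketch only)
glance image-list | grep my-ami
nova boot --image my-ami --flavor m1.medium --nic net-id=<your-network-id> my-ami-test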


You can build on the script above to automate the export of one or all of your AWS instances to your private OpenStack cloud, creating a hybrid environment. The script can be expanded with additional functionality and error handling, and integrated into your existing workflows, for example to run a regular export of all your AWS instances into your cloud.
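
For example, a scheduled job on the Worker could iterate over every running instance in a region and call the script for each one. A rough sketch, assuming the script above has been saved as export-instance.sh and that the bucket name, keys, and region have already been placed in the shell variables used below:

#!/bin/bash
## Export every running instance in the region (sketch only)
for id in $(aws ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[].Instances[].InstanceId' --output text)
do
    ./export-instance.sh "$id" "$BUCKET_NAME" "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$AWS_DEFAULT_REGION"
done

## Example cron entry to run the export every Sunday at 02:00
## 0 2 * * 0 /home/worker/export-all.sh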

Discover a solution that provides AWS functionality on premise.

Final Notes

This is not a comprehensive solution for seamlessly migrating your AWS environment, or even part of it, into your OpenStack private cloud. Whether and when you decide to migrate will vary according to your specific requirements, be they performance, security, or cost.

Many additional services that people enjoy on AWS do not have a decent counterpart in OpenStack. This script does not cover the network topology and security that you have already implemented in AWS for your instances (Network ACLs and Security Groups, for example). The process can also be quite time-consuming, since you are copying large amounts of data (even though a good part of that data is probably blank blocks, as instances are not usually filled to full capacity). Still, it does provide some degree of freedom and allows you to choose to move some of your resources back to your on-premises OpenStack environment.

To create a real hybrid cloud environment, you should consider an out-of-the-box solution such as Symphony, which offers a quick-to-deploy OpenStack private cloud for the enterprise’s data center. The platform ensures optimal utilization of your data center hardware and enables users to use all services via APIs. In addition, it has a native plugin to your AWS cloud environment. This inherent compatibility with AWS simply makes your private cloud your on-premises AWS region, providing your enterprise users a unified experience.

Learn more: OpenStack APIs versus AWS APIs
