EC2 UserData script that waits for volumes to be properly attached before proceeding

When you create EC2 instances with extra volumes, you may need to format those volumes in the UserData script. With large volumes you can run into the issue that a volume is not yet attached to the instance by the time you try to format it, so you need to add a wait condition in the UserData to deal with this.

The UserData script below does just that: it waits for the volume /dev/sdh to be properly attached before trying to format and mount it.

(This is a code snippet from a CloudFormation stack in YAML format)

UserData:
  "Fn::Base64": !Sub |
    #!/bin/bash -xe
    #
    # See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
    #
    # Make sure the volume has been created AND attached to this instance!
    #
    # We do not need a loop counter in the "until" statements below because
    # there is a 5 minute limit on the CreationPolicy for this EC2 instance already.

    EC2_INSTANCE_ID=$(curl -s http://instance-data/latest/meta-data/instance-id)

    ######################################################################
    # Volume /dev/sdh (which will get created as /dev/xvdh on Amazon Linux)

    DATA_STATE="unknown"
    until [ "${!DATA_STATE}" == "attached" ]; do
      DATA_STATE=$(aws ec2 describe-volumes \
        --region ${AWS::Region} \
        --filters \
            Name=attachment.instance-id,Values=${!EC2_INSTANCE_ID} \
            Name=attachment.device,Values=/dev/sdh \
        --query Volumes[].Attachments[].State \
        --output text)

      sleep 5
    done

    # Format /dev/xvdh if it does not contain a partition yet
    if [ "$(file -b -s /dev/xvdh)" == "data" ]; then
      mkfs -t ext4 /dev/xvdh
    fi

    mkdir -p /data
    mount /dev/xvdh /data

    # Persist the volume in /etc/fstab so it gets mounted again
    echo '/dev/xvdh /data ext4 defaults,nofail 0 2' >> /etc/fstab
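
The comments above mention the 5 minute limit from the instance’s CreationPolicy. That only works if the instance signals CloudFormation when the UserData script is done, which is not shown in the snippet. A minimal sketch of that final step, placed at the end of the same !Sub block so the substitutions apply (EC2Instance is a placeholder for the logical name of the instance resource):

    # Tell CloudFormation the UserData script has finished, so the
    # CreationPolicy stops waiting. EC2Instance is a placeholder name.
    /opt/aws/bin/cfn-signal -e $? \
      --stack ${AWS::StackName} \
      --resource EC2Instance \
      --region ${AWS::Region}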

That’s all.

Deploying a Hugo website to Amazon S3 using Bitbucket Pipelines

Atlassian recently released a new feature for their hosted Bitbucket product called “Pipelines”. It’s basically their version of Travis CI, which can do simple building, testing and deployment.

In this blog post I’ll show you how I use Pipelines to deploy my Hugo site to Amazon S3. It’s short and to the point: if you know AWS, this should tell you enough to set up your own deployment in about 5 minutes.

Create an AWS user for Pipelines

You need an AWS user that can deploy to your bucket. Do NOT use your admin user for this! Simply create a new user called “pipelines” and give it access only to your blog bucket.

This inline policy should give enough access for these deployments (replace BUCKETNAME with the name of your bucket):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:List*",
                "s3:Put*",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKETNAME",
                "arn:aws:s3:::BUCKETNAME/*"
            ],
            "Effect": "Allow",
            "Sid": "AllowPipelinesDeployAccess"
        }
    ]
}
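
If you prefer the command line over the IAM console, something along these lines should do it. This is a sketch: it assumes you saved the policy above as pipelines-policy.json.

# Create the deployment user and attach the inline policy above
aws iam create-user --user-name pipelines
aws iam put-user-policy --user-name pipelines \
  --policy-name AllowPipelinesDeployAccess \
  --policy-document file://pipelines-policy.json

# Generate the access key and secret key used in the next step
aws iam create-access-key --user-name pipelines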

Configure Pipelines with your AWS credentials

Generate an access key and secret key for this new user and add these three variables on the environment variables settings page in Bitbucket:

AWS Variables:

AWS_ACCESS_KEY_ID: xxx
AWS_SECRET_ACCESS_KEY: xxx
AWS_DEFAULT_REGION: (your bucket's region)
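
To check that the key actually works before wiring it into Pipelines, you can export the same three variables in a local shell and list the bucket. The region value here is only an example; BUCKETNAME is your bucket again.

export AWS_ACCESS_KEY_ID=xxx
export AWS_SECRET_ACCESS_KEY=xxx
export AWS_DEFAULT_REGION=eu-west-1

# This should list the bucket contents without an access denied error
aws s3 ls s3://BUCKETNAME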

Screenshot: the Bitbucket Pipelines environment variables settings page.

Create the Pipelines build config

I’m assuming your Hugo site lives in the root of your Git repository. In my case the repository looks like this:

karel:Hostile ~/KarelBemelmans/karelbemelmans-hugo$ tree -L 2
.
├── README.md
├── bitbucket-pipelines.yml
├── config.toml
├── content
│   ├── about-me.md
│   └── post
├── public
│   ├── 2015
│   ├── 2016
│   ├── 404.html
│   ├── CNAME
│   ├── about-me
│   ├── categories
│   ├── css
│   ├── favicon.png
│   ├── goals
│   ├── images
│   ├── index.html
│   ├── index.xml
│   ├── js
│   ├── page
│   ├── post
│   ├── sitemap.xml
│   ├── touch-icon-144-precomposed.png
│   └── wp-content
├── static
│   ├── CNAME
│   ├── css
│   ├── images
│   └── wp-content
└── themes
    └── hyde-x

Then create the file bitbucket-pipelines.yml in the root of your repository, replacing BUCKETNAME with the name of your blog’s bucket:

image: karelbemelmans/pipelines-hugo

pipelines:
  default:
    - step:
        script:
          - hugo
          - aws s3 sync --delete public s3://BUCKETNAME

The Docker image is available on Docker Hub and GitHub; feel free to fork and modify it.

That’s all.

One remark though

As you can see I use the aws s3 sync method to upload to S3. When I do this from my laptop, where the generated files persist between deployments, that actually makes sense and saves me some upload traffic.

On Pipelines, however, the Hugo site is always regenerated from scratch inside a Docker container, so the sync gains nothing: every file looks “new” and the entire site is uploaded on every deployment.
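
If you want sync to skip unchanged files even in that scenario, one possible workaround is to compare on file size only instead of on timestamps. I have not measured how much this helps, and it will skip a changed file that happens to keep exactly the same size, so treat it as an idea rather than a recommendation:

# Compare by size only, so files Hugo regenerated with identical content
# are not re-uploaded just because their timestamp changed
aws s3 sync --delete --size-only public s3://BUCKETNAME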

CloudFormation YAML support

CloudFormation recently added support for YAML, so I’ve updated my Drupal 7 stack with a YAML version. Check the GitHub repository for the new stack:

https://github.com/karelbemelmans/drupal7-on-aws/tree/master/AWS/CloudFormation

Running Drupal 7 on AWS with EFS

In two previous blog posts I talked about running Drupal 7 on AWS.

Since writing part 2, AWS has finally released Elastic File System (EFS), so I had to write an updated version of the stack that uses EFS instead of S3.

Elastic File System (EFS)

EFS is a shared NFS file system you can attach to one or more EC2 instances. While we can store user-uploaded content in S3 using the Drupal s3fs module, getting the CSS and JS aggregation cache to work across multiple servers was still an issue with S3.

If we use EFS instead of S3 and share the sites/default/files directory across every EC2 instance, that problem goes away.
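
To give an idea of what this looks like on an instance: EFS is mounted like any other NFSv4 file system. A rough sketch, where the file system ID, region and web root are placeholders for whatever your stack uses:

# Mount the EFS file system on the shared Drupal files directory.
# fs-12345678 and us-east-1 are placeholders.
mkdir -p /var/www/html/sites/default/files
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/html/sites/default/files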

The source code for this stack is on GitHub:

  • drupal7-efs.json: a very minimal Drupal 7 stack setup
  • drupal7-efs-realistic: a more realistic Drupal 7 site with a lot of contrib modules. This one also uses a Docker Hub container image instead of building an image in the Launch Configuration.

I will continue to work on the second one, so that is probably the stack you want to use.

A short note about this stack and Docker

While this stack uses Docker, it is not a complete container management system like ECS is intended to be. Rolling out a new version of a Docker image with this stack is pretty much a manual job: you scale the Auto Scaling Group down to 0 instances and then back up to the required number, and every instance created that way will run the new version of your Docker image. (Alternatively, you can scale up to double the normal size and then scale back down to remove the old instances.)
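
For reference, that manual rollout boils down to two AWS CLI calls. The Auto Scaling Group name and the desired size are placeholders, and this assumes the group’s minimum size allows scaling to 0:

# Scale down to 0 so all old instances are terminated...
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name drupal7-asg --desired-capacity 0

# ...then scale back up; the new instances pull the new Docker image
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name drupal7-asg --desired-capacity 2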

Docker cleanup commands

Running Docker containers also involves a little housekeeping to keep your Docker hosts running optimally and not wasting resources. This blog post provides an overview of the commands you can use.

There are currently a lot of blog posts and Stack Overflow questions describing cleanup commands for old Docker versions that are not very useful anymore. In this blog post I will try to keep the commands up to date with newer versions of Docker.

Current Docker version as of 2016/07/20: 1.11 (stable), 1.12 (beta)

Clean up old containers

Originally copied from this blog post: source

These commands can be dangerous! Don’t just copy and paste them without at least having a clue what they do.

# Kill all running containers:
docker kill $(docker ps -q)

# Delete all stopped containers (including data-only containers):
docker rm $(docker ps -a -q)

# Delete all exited containers
docker rm $(docker ps -q -f status=exited)

# Delete ALL images:
docker rmi $(docker images -q)

# Delete all 'untagged/dangling' (<none>) images:
docker rmi $(docker images -q -f dangling=true)

Clean up old volumes

When a container defines a VOLUME, that volume is not automatically deleted when the container itself is removed. Some manual cleanup is needed to get rid of these “dangling” volumes.

Originally found on Stackoverflow: source

# List all orphaned volumes:
docker volume ls -qf dangling=true

# Eliminate all of them with:
docker volume rm $(docker volume ls -qf dangling=true)
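
A related tip to avoid dangling volumes in the first place: pass -v to docker rm when you delete containers, and their anonymous volumes are removed along with them.

# Delete all stopped containers together with their anonymous volumes:
docker rm -v $(docker ps -a -q)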