
Continuous Delivery with CircleCI, ECR and Kubernetes

As well as being a great drop-in hosting system for many bare-metal and “legacy” cloud workloads, Kubernetes provides some spectacular developer tools and automation. Things that would be difficult or impossible on other platforms are often very easy.

Continuous Delivery is an example of something that, in practice, can prove difficult to orchestrate. Even with high automation, release processes often have enough moving parts, or sufficient latency, that operating them frequently is prohibitively expensive, difficult or error-prone.

Releases in Kubernetes, however, are generally so rapid and so well orchestrated that this is not a problem.

This week I put together a simple CD pipeline using CircleCI, and it is a good example of just how simple this can be.

There are three phases to a software update based on Docker images: build, push and update.

Build and push

CircleCI makes orchestrating this in a Continuous Integration system really easy. We’re storing our images in AWS Elastic Container Registry (ECR), which adds a little bit of complexity, but even then it’s pretty easy. Here’s the relevant part of the CircleCI configuration:

jobs:
  deploy:
    docker:
    - image: circleci/python:3.6.1
    working_directory: ~/repo
    steps:
    - checkout
    - setup_remote_docker:
        docker_layer_caching: true
    - run:
        name: Push to ECR
        command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install awscli
            TAG=0.1.$CIRCLE_BUILD_NUM
            docker build -t local:$TAG .
            eval `aws ecr get-login | sed -e's/-e none//'`
            docker tag local:$TAG $AWS_ECR_REGISTRY:$TAG
            docker push $AWS_ECR_REGISTRY:$TAG

This has to jump through a couple of hoops. First, install the awscli, needed to log in to ECR:

python3 -m venv venv
. venv/bin/activate
pip install awscli

This is why we base the build on a Python image: so that we have pip available.

Build the local copy of the actual deployment image:

docker build -t local:$TAG .

Then do the login with some nasty shell hackery, tag the remote image and push it:

eval `aws ecr get-login | sed -e's/-e none//'`
docker tag local:$TAG $AWS_ECR_REGISTRY:$TAG
docker push $AWS_ECR_REGISTRY:$TAG
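
The reason for the hackery: aws ecr get-login (AWS CLI v1) prints a ready-made docker login command which still includes the legacy -e none email flag, and newer Docker clients reject that flag. The generated command looks roughly like this (account ID elided):

docker login -u AWS -p <password> -e none https://<account-id>.dkr.ecr.eu-west-2.amazonaws.com

The sed strips the offending flag, and the eval then runs the cleaned-up command.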

At this point, we’ve got the image in the remote registry. Authentication with AWS is handled by setting the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, containing appropriate credentials, in the CircleCI project settings.
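
For reference, the full set of variables the job needs looks something like this (values illustrative; AWS_DEFAULT_REGION tells the awscli which regional registry to log in to, and AWS_ECR_REGISTRY is the full repository URI, since the tag is appended directly to it):

AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=eu-west-2
AWS_ECR_REGISTRY=<account-id>.dkr.ecr.eu-west-2.amazonaws.com/myapp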

Update

Actually deploying into the cluster uses a tool we wrote a while ago, k8ecr. This uses the AWS SDK, the Kubernetes client-go packages and the Docker client to coordinate various common operations on ECR repositories and Kubernetes. In particular, it can issue image updates to Kubernetes deployment resources.
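
The core operation, pointing a deployment's container at a new image tag, is the same one you would otherwise perform by hand with kubectl. A rough equivalent for a single deployment, using a hypothetical deployment myapp with a container named app, would be:

kubectl set image deployment/myapp app=$AWS_ECR_REGISTRY:0.1.42 --namespace stage

What k8ecr adds is discovering which deployments need this, and which version is newest.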

It has a mode where you can tell it to update every relevant deployment in a namespace:

> k8ecr deploy stage -

Running this command compares (using semver) all of the tags in all of the ECR repositories in your AWS account with all of the containers in all of the deployments in the specified namespace (in this case stage), and issues rolling updates for every container for which there is a newer version. Kubernetes then does whatever is necessary to get the new code running.

So if you have previously pushed a new version of an image, and there is a deployment using an earlier version of that image, then it will get updated.
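
If you want to see the candidate tags for a repository yourself, the awscli can list them (the repository name myapp is hypothetical):

aws ecr describe-images --repository-name myapp --query 'imageDetails[].imageTags[]' --output text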

The only missing piece of the orchestration is running this regularly, and we can do that, naturally, with another Kubernetes deployment. Here’s a Dockerfile:

FROM alpine
ENV AWS_REGION eu-west-2
RUN apk add --no-cache ca-certificates
ADD k8ecr /
CMD while true; do /k8ecr deploy $NAMESPACE -; sleep 60; done

and deployment resource:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: autodeploy-stage
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: autodeploy-stage
    spec:
      containers:
      - name: app
        image: isotoma/k8ecr-autodeploy
        env:
        - name: NAMESPACE
          value: stage
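
If you want to build this image yourself, note that k8ecr is a Go binary and the image is based on alpine, so it needs compiling as a static Linux binary first. Something like this should work (file names assumed):

CGO_ENABLED=0 GOOS=linux go build -o k8ecr .
docker build -t isotoma/k8ecr-autodeploy .
kubectl apply -f autodeploy-stage.yaml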

Every 60 seconds this will perform the checks and trigger any appropriate deployments. Voilà: an auto-updating stage namespace. Developers can now do whatever is necessary in CI and magically have their stage environment updated. CircleCI provides filters so that, for example, only tagged releases get deployed, and the git tag can be used as the image version.
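
A minimal sketch of such a filter, assuming version tags like v1.2.3, adds a workflow so that the deploy job only runs for tags:

workflows:
  version: 2
  build-and-deploy:
    jobs:
      - deploy:
          filters:
            tags:
              only: /^v.*/
            branches:
              ignore: /.*/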

If you want to use this, then an image built with that Dockerfile is available on Docker Hub.

Promotion to production then just requires running k8ecr with the appropriate arguments, and we’re done.
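
Going by the invocation shown earlier, that would presumably look something like this, although exactly how you scope production deploys is up to you:

> k8ecr deploy production -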