
Containerisation: tips for using Kubernetes with AWS

Containers have been a key part of developer toolkits for many years, but they are only now becoming common in production. Driving this adoption, in part, is the maturity of production-grade tooling and systems.

The leading container management product is Docker, but on its own Docker does not provide enough to deploy into production, which has led to a new product category: container orchestration.

The leading product in this space is Kubernetes, developed initially by Google, open-sourced in 2014 and reaching version 1.0 in 2015. Kubernetes differs from some of the competing container orchestration products in its design philosophy: a commitment to open source (with components like iptables, nginx and etcd as core moving parts) and an entirely API-first design.

Our experience is that Kubernetes is ridiculously easy to deploy and manage, and has many benefits over straight virtualisation for deploying mixed workloads, particularly in a public cloud environment.

Our services

We are working towards becoming a Kubernetes Certified Service Provider and are actively delivering Kubernetes solutions for customers, primarily on AWS. If you are interested in our consulting or implementation services please just drop us a line.

Why containers?

The primary benefits are cost and management effort. Cost because expensive compute resource can be efficiently shared between multiple workloads. Management because the container paradigm packages up an application with its dependencies in a way that facilitates flexible release and deployment.

A container cluster of two or three machines can host dozens of containers, all delivering different workloads. Kubernetes can scale containers within this cluster and can scale the overall cluster up and down depending on the needs of the workloads. This allows the cluster to be downsized to a minimum size when load is low. It also means containers that require very low levels of resources can remain in service without occupying a whole virtual machine.
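
To make that concrete, here is a minimal sketch of the in-cluster half of the scaling story: a HorizontalPodAutoscaler that scales a hypothetical "web" Deployment on CPU usage, with a cluster autoscaler then free to grow or shrink the node count to match. The names and thresholds are examples, not recommendations.

    # Minimal HorizontalPodAutoscaler sketch – the Deployment name ("web") and
    # the CPU target are placeholders.
    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 70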

Management time benefits enormously from the packaging of applications with their dependencies. It allows you to share compute resources even when the workloads have conflicting dependencies – a very common problem. It allows upgrades to progress incrementally across your estate, rather than requiring risky big-bang upgrades of underlying software.

Finally, it also allows you to safely upgrade the underlying operating system of your cluster without downtime. Workloads are automatically migrated around the cluster as nodes are taken out of service and new, upgraded nodes are brought in. We’ve done this a bunch of times now and it is honestly kind of magic.
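
A PodDisruptionBudget helps here: it tells Kubernetes how much of a workload must stay up while nodes are drained. A minimal sketch, assuming a hypothetical app label of web:

    # PodDisruptionBudget sketch – keeps at least two "web" pods running while
    # nodes are cordoned and drained during a rolling upgrade.
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: web-pdb
    spec:
      minAvailable: 2
      selector:
        matchLabels:
          app: web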

There are other benefits to do with ease of access, granular access control and federation, and I might deal with those in later posts.

Tips

Here are a few tips if you are considering getting started with Kubernetes.

Domains

Buy a new domain for every cluster. This makes your URLs so much nicer, and it really isn’t that expensive! 🙂

AWS accounts

We consider best practice to be a master account, where your user accounts sit, and then one sub-account for your production environment, with further sub-accounts for pre-production environments. Note that you can run staging sites in your production cluster – this pattern should become much more common, since you are not staging the cluster, but staging the sites.

A staging cluster is only needed to test cluster-wide upgrades and changes.
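
If you want to manage that account layout as code, the AWS Organizations resource types in CloudFormation can describe it. This is only a sketch – the organizational unit, account names, email addresses and root ID are all placeholders:

    # Sketch of the account structure as a CloudFormation template fragment.
    Resources:
      WorkloadsOU:
        Type: AWS::Organizations::OrganizationalUnit
        Properties:
          Name: workloads
          ParentId: r-examp                     # your organization root ID (placeholder)
      ProductionAccount:
        Type: AWS::Organizations::Account
        Properties:
          AccountName: production
          Email: aws-production@example.com     # placeholder
          ParentIds:
            - !Ref WorkloadsOU
      PreProductionAccount:
        Type: AWS::Organizations::Account
        Properties:
          AccountName: pre-production
          Email: aws-preprod@example.com        # placeholder
          ParentIds:
            - !Ref WorkloadsOU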

Security

When all your sites are in a single cluster, and behind a single AWS ELB (yes, you can do this), things such as Web Application Firewall automation and IP-restricted ELBs become more cost-effective. They only need to be applied once to provide benefit across your estate.
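
As a small illustration of what the single-ELB pattern buys you, IP restrictions can be declared once on the Service that fronts your ingress controller, and the ELB Kubernetes creates picks them up. This is only a sketch – the name, namespace, labels and CIDR range are placeholders:

    # Service sketch: Kubernetes creates one ELB for the ingress controller and
    # applies the source-range restriction to it.
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-ingress            # hypothetical ingress controller service
      namespace: kube-system
    spec:
      type: LoadBalancer
      loadBalancerSourceRanges:
        - 203.0.113.0/24             # example office IP range
      selector:
        app: nginx-ingress
      ports:
        - name: http
          port: 80
          targetPort: 80
        - name: https
          port: 443
          targetPort: 443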

Role-Based Access Control

This is a relatively new feature of Kubernetes, but it is solid and well-designed. I’d recommend turning this on from day one, so the capabilities are available to you later.
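
A minimal sketch of what this looks like in practice – a Role and RoleBinding giving a hypothetical user deployment rights in a single namespace:

    # Role/RoleBinding sketch – the namespace, resource list and user are examples.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: staging
      name: deployer
    rules:
      - apiGroups: ["", "apps"]
        resources: ["pods", "services", "deployments"]
        verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: staging
      name: deployer-binding
    subjects:
      - kind: User
        name: alice                  # placeholder user
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: deployer
      apiGroup: rbac.authorization.k8s.io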

Flannel and Calico, or Weave

Similarly, I’d recommend enabling an overlay network from day one. These are easily deployed into an AWS Kubernetes cluster using the kops tool, and they provide advanced network capabilities if you ever need them in the future.
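
For reference, the choice is just a few lines in the kops cluster spec (the part you see with kops edit cluster). This fragment is a sketch and the options available depend on your kops version:

    # Fragment of a kops cluster spec – pick one networking provider.
    spec:
      networking:
        calico: {}
        # or:
        #   weave: {}
        #   canal: {}     # flannel for networking plus Calico for policy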

Namespaces

Use namespaces to subdivide your estate into logical partitions. Production and staging are an obvious distinction, but you may well have user groups where namespaces make a sensible boundary for applying access control.
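
Namespaces themselves are trivial to declare; a sketch with example names:

    # Namespace sketch – the names are examples of logical partitions.
    apiVersion: v1
    kind: Namespace
    metadata:
      name: production
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: staging
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-data              # a user-group boundary for access control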

Tooling

Currently, integrating Kubernetes configuration with CloudFormation configuration means writing some custom tooling. Bite the bullet and dedicate some time to doing this well. I’m expecting to see Kubernetes become a first-class citizen within AWS at some point, but until then you are going to need to own your devops.
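
As a sketch of the kind of glue involved (the resource, export name and placeholder syntax here are all hypothetical): export a value from CloudFormation, then have your tooling substitute it into the Kubernetes manifests before they are applied.

    # CloudFormation side: export the endpoint of a database created alongside
    # the cluster (AppDatabase is a hypothetical AWS::RDS::DBInstance).
    Outputs:
      DatabaseEndpoint:
        Value: !GetAtt AppDatabase.Endpoint.Address
        Export:
          Name: app-database-endpoint

    # Kubernetes side: a ConfigMap template that your tooling renders before
    # running kubectl apply, replacing {{DATABASE_ENDPOINT}} with the export.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: app-config
    data:
      database_host: "{{DATABASE_ENDPOINT}}"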

Resource records

Create Route53 ALIAS records for all your exposed endpoints (which could be just the single ELB for your ingress controller), and use these in your CloudFront distributions. This makes upgrades a lot easier!
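
In CloudFormation terms the record looks something like this sketch – the hosted zone, record name, ELB DNS name and the ELB’s own hosted zone ID are all placeholders you would substitute:

    # ALIAS A record pointing at the ELB Kubernetes created for the ingress
    # controller; CloudFront then uses the friendly name as its origin.
    IngressAliasRecord:
      Type: AWS::Route53::RecordSet
      Properties:
        HostedZoneName: cluster.example.com.
        Name: www.cluster.example.com.
        Type: A
        AliasTarget:
          DNSName: abcdef123456789-0123456789.eu-west-2.elb.amazonaws.com.
          HostedZoneId: ZEXAMPLE12345   # the ELB's hosted zone ID, not your zone's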