Kubernetes is cluster orchestration and scheduling software to “manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops”.
Flocker does a great job of orchestrating data volumes around a cluster of machines and can automatically move those volumes between nodes when your containers move.
Combining these two tools means users can run stateful Docker containers and automatically recover from node failure. In other words: Highly Available Stateful Containers with Kubernetes! See a demo of this solution below.
HA Demo Video
Running applications in production means those applications and their services always need to be available to their users. If the host where an application is running fails, we expect our framework to recover from that failure. If it doesn't, we may lose data, be left with unusable services, or serve incorrect results to users.
A common way frameworks do this is to incorporate a scheduler. The scheduler's job is to place the application on a healthy host when a failure is detected; in other words, the scheduler owns the ability to re-schedule the application onto another host. In the case of Kubernetes, the Kubernetes Scheduler re-schedules our pod onto a healthy host, and Flocker responds by moving the data the pod needs to the correct host before the containers in that pod start.
This is hard to achieve with complex applications that have many moving parts, including external dependencies such as network-attached storage. If we are running an application that uses a persistent datastore, we expect our data to be available wherever the application comes back up. This is precisely the problem that using Flocker and Kubernetes together helps solve.
Below is the demo architecture used in the above video. There are:
- 4 nodes running an operating system called CoreOS on Amazon EC2.
- 1 node is running the Kube Pod Master, Kube API Service, Kube Proxy, Kube Scheduler, Kube Controller Manager, and the Flocker Control Service.
- The other 3 run Flocker agents, a Kubelet and Kube Proxy.
If you're not familiar with CoreOS, that's OK; domain knowledge isn't necessary for the demo. CoreOS maintains a lightweight host system and uses Docker containers for all applications. This provides process isolation and also allows applications to be moved around a cluster easily. You can also find more information here.
Note: Kubernetes v1.1 must be used in order for Flocker to work in this demo.
Try the demo yourself
Prepare the Demo
We can use the kube-aws tool from this guide, which will let you spin up Kubernetes on CoreOS running on EC2.
First, download the kube-aws tool from here.
After that, make sure your AWS credentials are in ~/.aws/credentials. Below is an example of this file.
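A ~/.aws/credentials file uses the standard AWS CLI format; the values below are placeholders for your own keys:

```ini
[default]
aws_access_key_id = <yourAccessKeyId>
aws_secret_access_key = <yourSecretAccessKey>
```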
Once the CLI is set up, you can use a configuration file called cluster.yaml. Make sure to replace <awsAvailabilityZone>. You can also reference the configuration guide for what each option means.
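As a sketch, a minimal cluster.yaml might look like the following. The field names follow the kube-aws configuration guide, but the specific values (cluster name, region, worker count) are assumptions for this demo; defer to the guide for the full option list:

```yaml
clusterName: flocker-demo            # assumed name for this demo
externalDNSName: k8s-master          # DNS name you will use to reach the API server
keyName: <yourAWSKeyPairName>
region: us-west-2                    # assumed; pick your own region
availabilityZone: <awsAvailabilityZone>
workerCount: 3                       # three worker nodes, matching the demo architecture
```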
Then, you can create your Kubernetes cluster by running the following command.
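With the kube-aws tool from the guide above, cluster creation is along these lines (run from the directory containing cluster.yaml; check your installed version's help output for the exact subcommand):

```shell
# Create the Kubernetes cluster described by cluster.yaml
# in the current working directory.
kube-aws up
```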
Open up port 80 in the Security Groups for the worker nodes and the master node.
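If you prefer the AWS CLI to the console, opening port 80 looks roughly like this; the security group IDs are placeholders you must look up for your own cluster:

```shell
# Allow inbound HTTP (port 80) on the worker and master security groups.
aws ec2 authorize-security-group-ingress \
  --group-id <workerSecurityGroupId> \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id <masterSecurityGroupId> \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```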
After that is done, create a flocker.yml file that looks like the example below. Replace everything within the < > characters with your own configuration. You can find public and private IP addresses, as well as DNS names, in your AWS Console.
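As a rough sketch, the file maps the control service and agent nodes to their addresses. The field names here are reconstructed from the Flocker installation tools' documentation and should be treated as assumptions; defer to the example shipped with the tools:

```yaml
cluster_name: flocker-demo
control_node: <PublicDNSNameOfMasterNode>
agent_nodes:
  - {public: <PublicIPOfNode1>, private: <PrivateIPOfNode1>}
  - {public: <PublicIPOfNode2>, private: <PrivateIPOfNode2>}
  - {public: <PublicIPOfNode3>, private: <PrivateIPOfNode3>}
os: coreos
private_key_path: <pathToYourAWSKeyPair>
```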
Get the Flocker installation tools.
Then install Flocker on your nodes.
Next, we’ll need an API certificate for Kubernetes to talk to Flocker.
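Flocker certificates are generated with the flocker-ca tool; creating a certificate for an API user named plugin (the user name is an assumption for this demo) looks like this:

```shell
# Run from the directory holding your cluster's certificate authority files.
# Creates plugin.key and plugin.crt for an API user named "plugin".
flocker-ca create-api-certificate plugin
```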
After you run this, upload /etc/flocker to every node in your Kubernetes cluster.
Next, create a file called /etc/flocker/env. Then, upload it to every node in your Kubernetes cluster.
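The Kubernetes Flocker volume plugin reads its connection settings from environment variables, so a plausible environment file looks like the following. The certificate file names assume the API user from the earlier flocker-ca step; 4523 is the Flocker control service REST API port:

```
FLOCKER_CONTROL_SERVICE_HOST=<PrivateIPOfMasterNode>
FLOCKER_CONTROL_SERVICE_PORT=4523
FLOCKER_CONTROL_SERVICE_CA_FILE=/etc/flocker/cluster.crt
FLOCKER_CONTROL_SERVICE_CLIENT_KEY_FILE=/etc/flocker/plugin.key
FLOCKER_CONTROL_SERVICE_CLIENT_CERT_FILE=/etc/flocker/plugin.crt
```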
Next, on every node in your Kubernetes cluster, you will also have to change the file /etc/systemd/system/kubelet.service to have EnvironmentFile=/etc/flocker/env and to use /root/kubelet instead of the configured kubelet binary. See below for an example.
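The relevant part of the unit file ends up looking something like this; every other directive in your generated unit stays as it is, and the elided flags are whatever your original ExecStart line passed:

```ini
[Service]
EnvironmentFile=/etc/flocker/env
ExecStart=/root/kubelet \
  <flags from your original ExecStart line>
```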
Finally, on every node in your Kubernetes cluster, run the following snippet to download the v1.1 kubelet and restart the service.
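A sketch of that snippet, pulling the v1.1.1 kubelet from the official Kubernetes release bucket:

```shell
# Download the v1.1.1 kubelet binary, make it executable,
# then restart the service so it picks up the new binary.
sudo wget -O /root/kubelet \
  https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/linux/amd64/kubelet
sudo chmod +x /root/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```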
Double-check and make sure you're running v1.1.1.
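One way to verify, assuming the binary path from the step above; the output should report v1.1.1:

```shell
# Print the version of the downloaded kubelet binary.
/root/kubelet --version
```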
Use Kubernetes with Flocker!
Download the kubectl command line tool. (This will work for Mac.)
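For Mac, fetching the matching v1.1.1 kubectl from the official release bucket looks like this:

```shell
# Download the v1.1.1 kubectl binary for OS X and put it on your PATH.
curl -O https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
```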
Before you can use kubectl, you must make the externalDNSName: used in your cluster.yaml (for example, externalDNSName: k8s-master) resolve to your master node from your workstation.
Then, from the directory where you ran kube-aws, you should see a clusters/ directory. You can run kubectl with the following.
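kube-aws writes a kubeconfig under clusters/; the exact path depends on your cluster name, so treat this as a sketch:

```shell
# Point kubectl at the kubeconfig generated by kube-aws
# and list the cluster's nodes as a connectivity check.
kubectl --kubeconfig=clusters/<clusterName>/kubeconfig get nodes
```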
Run the demo
You can use the demo Kubernetes definition below with the kubectl command. Save it to a file on your workstation.
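A minimal pod definition using a Flocker-backed volume might look like the following. The image, mount path, and dataset name are assumptions for illustration; datasetName must match the name metadata of the Flocker volume you create in the next section:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flocker-demo
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      flocker:
        # Must match the name of the Flocker volume you create.
        datasetName: demo
```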
Create a volume
You can get a NodeUUID for the following command by running $ flockerctl --control-service=<PublicDNSNameOfMasterNode> list-nodes and choosing any node UUID from the list.
Create the app
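Assuming you saved the pod definition as flocker-demo.yml (a file name chosen here for illustration), launching it and watching it come up looks like this:

```shell
# Create the demo pod and watch its status until it is Running.
kubectl create -f flocker-demo.yml
kubectl get pods --watch
```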
Once your pod is running, you can terminate the EC2 instance running it to simulate a failure, and watch your pod fail over to another node using Kubernetes and Flocker!
Note: shutting down your instance is different from terminating it and may not produce the same behavior. This demo works by terminating the EC2 instance.
We’d love to hear your feedback!