Demo: High Availability with Kubernetes and Flocker

Flocker + Kubernetes logo

Kubernetes is cluster orchestration and scheduling software to “manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops”.

Flocker does a great job of orchestrating data volumes around a cluster of machines and can automatically move those volumes between nodes when your containers move.

Combining these two tools means users can run stateful Docker containers and automatically recover from node failure. In other words: Highly Available Stateful Containers with Kubernetes! See a demo of this solution below.

Our previous demo used an intermediary solution called Powerstrip. This demo uses the native Flocker integration released with Kubernetes v1.1.

HA Demo Video

Walkthrough

Problem

Running applications in production means applications and their services always need to be available to their users. If the host where an application is running fails, we expect our framework to respond by recovering from that failure. If it doesn't, we may lose data, end up with unusable services, or leave incorrect services running for users.

Solution

A common way frameworks do this is to incorporate a scheduler. The scheduler's job is to place the application on a healthy host when a failure is detected; in other words, the scheduler owns the ability to re-schedule the application onto another host. In the case of Kubernetes, the Kubernetes Scheduler re-schedules our POD onto a healthy host and Flocker responds by moving the data the POD needs to the correct host before the containers in that POD start.

Why

This is hard to achieve with complex applications that have many moving parts, including external dependencies such as network-attached storage. If we are running an application which uses a persistent datastore, we expect our data to be available wherever our application comes back up. This is precisely the problem that using Flocker and Kubernetes together can help solve.

Demo Architecture

Below is the demo architecture used in the above video. There are:

  • 4 nodes running an operating system called CoreOS on Amazon EC2.
    • 1 node runs the Kube Pod Master, Kube API Server, Kube Proxy, Kube Scheduler, Kube Controller Manager and the Flocker Control Service.
    • The other 3 nodes run Flocker agents, a Kubelet and Kube Proxy.

If you're not familiar with CoreOS, that's OK; domain knowledge isn't necessary for the demo. CoreOS maintains a lightweight host system and uses Docker containers for all applications. This provides process isolation and also allows applications to be moved throughout a cluster easily. You can also find more information here.

Note: Kubernetes v1.1 must be used in order for Flocker to work in this demo.

Flocker + Kubernetes Architecture

Try the demo yourself

Prepare the Demo

We can use the kube-aws tool from this guide, which lets you spin up Kubernetes on CoreOS running on EC2.

First, download the kube-aws tool from here.

After that, make sure your AWS credentials are in ~/.aws/credentials. Below is an example of this file.

[default]
aws_access_key_id = MY-AWS-KEY
aws_secret_access_key = MY-SECRET-KEY
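
If you have the AWS CLI installed (the demo itself doesn't require it), a quick way to confirm those credentials work, and to list the availability zones you can use in the next step, is the following (replace the region with your own):

$ aws ec2 describe-availability-zones --region us-east-1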

Once the CLI is set up, you can use a configuration file called cluster.yaml. Make sure to replace <yourKeyname>, <awsRegion> and <awsAvailabilityZone>. You can also reference the configuration guide for what each option means.

clusterName: kubernetes-ha-demo
keyName: <yourKeyname, ex: if myname.pem the keyname is "myname">
region: <awsRegion, ex: us-east-1>
availabilityZone: <awsAvailabilityZone, ex : us-east-1b>
externalDNSName: k8s-master
controllerInstanceType: m3.medium
workerCount: 3
workerInstanceType: m3.medium

Then, you can create your Kubernetes cluster by running the following command.

./kube-aws up

Open up ports 4523, 4524 and 80 in the SecurityGroups for the worker nodes and the master node.
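
If you prefer the AWS CLI to the console, the same rules can be added roughly as follows. The security group ID is a placeholder, and the wide-open 0.0.0.0/0 source is only for the demo, so tighten it as appropriate; repeat for both the worker and master security groups.

$ aws ec2 authorize-security-group-ingress --group-id <securityGroupId> --protocol tcp --port 4523 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id <securityGroupId> --protocol tcp --port 4524 --cidr 0.0.0.0/0
$ aws ec2 authorize-security-group-ingress --group-id <securityGroupId> --protocol tcp --port 80 --cidr 0.0.0.0/0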

After that is done, create a flocker.yml file that looks like the below example. Replace everything within the < > characters with your own configuration.

You can find Public and Private IP addresses as well as DNS names from within your AWS Console.
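
If you'd rather script this than click through the console, something along these lines lists the addresses for all instances in your region (again assuming the AWS CLI):

$ aws ec2 describe-instances \
    --query 'Reservations[].Instances[].[PublicIpAddress,PrivateIpAddress,PublicDnsName]' \
    --output table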

cluster_name: cluster
agent_nodes:
 - {public: <publicIPNode1>, private: <privateIPNode1>}
 - {public: <publicIPNode2>, private: <privateIPNode2>}
 - {public: <publicIPNode3>, private: <privateIPNode3>}

control_node: <PublicDNSNameOfMasterNode>
users:
 - coreuser
os: coreos
private_key_path: </Local/Path/To/AWS_KEY_NAME.pem>
agent_config:
  version: 1
  control-service:
    hostname: <PublicDNSNameOfMasterNode>
    port: 4524
  dataset:
    backend: "aws"
    region: "<awsRegion>"
    zone: "<awsAvailabilityZone>"
    access_key_id: "<AccessKeyId>"
    secret_access_key: "<SecretAccessKey>"

Install Flocker

Get the Flocker installation tools.

curl https://get.flocker.io | sh

Then, install Flocker on your nodes.

$ uft-flocker-install flocker.yml
$ uft-flocker-config flocker.yml

Configure Kubernetes

Next, we’ll need an API certificate for Kubernetes to talk to Flocker.

$ uft-flocker-ca create-api-certificate api

After you run this, upload api.crt and api.key to /etc/flocker on every node in your Kubernetes cluster.

$ scp -i YOUR_AWS_KEY api.* root@<PublicIPofServer>:/etc/flocker/
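
As an optional sanity check before wiring up Kubernetes, you can call the Flocker control service REST API with those certificates. This assumes cluster.crt, api.crt and api.key are in your current directory, where the uft-flocker tooling generated them; a small JSON document with the Flocker version indicates everything is in place.

$ curl --cacert cluster.crt --cert api.crt --key api.key \
    https://<PublicDNSNameOfMasterNode>:4523/v1/version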

Next, create a file called ./env with the following contents.

FLOCKER_CONTROL_SERVICE_HOST=<PublicDNSNameOfMasterNode>
FLOCKER_CONTROL_SERVICE_PORT=4523
FLOCKER_CONTROL_SERVICE_CA_FILE=/etc/flocker/cluster.crt
FLOCKER_CONTROL_SERVICE_CLIENT_KEY_FILE=/etc/flocker/api.key
FLOCKER_CONTROL_SERVICE_CLIENT_CERT_FILE=/etc/flocker/api.crt

Then, upload it to /etc/flocker/ on every node in your Kubernetes cluster.

$ scp -i YOUR_AWS_KEY ./env root@<PublicIPofServer>:/etc/flocker/

Next, on every node in your Kubernetes cluster you will also have to change /etc/systemd/system/kubelet.service so that it includes EnvironmentFile=/etc/flocker/env and uses /root/kubelet instead of the configured kubelet binary. See the example below.

[Service]
EnvironmentFile=/etc/flocker/env
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStart=/root/kubelet   --api_servers=http://127.0.0.1:8080   --register-node=false   --allow-privileged=true   --config=/etc/kubernetes/manifests   --cluster_dns=10.3.0.10   --cluster_domain=cluster.local   --cadvisor-port=0
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Finally, on every node in your Kubernetes cluster, run the following snippet to download the v1.1 Kubelet and restart the service.

Make sure you are in the /root/ directory.

$ wget https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/linux/amd64/kubelet; chmod +x kubelet
$ systemctl daemon-reload
$ systemctl restart kubelet

Double-check that you're running v1.1.1.

$ /root/kubelet --version=true
Kubernetes v1.1.1

Use Kubernetes with Flocker!

Download the kubectl command line tool (the example below downloads the Mac binary).

wget https://storage.googleapis.com/kubernetes-release/release/v1.1.1/bin/darwin/amd64/kubectl
chmod +x kubectl
mv kubectl /usr/local/bin/

Before you can use kubectl, you must add the externalDNSName used in your cluster.yaml (in this example, k8s-master) to your /etc/hosts file.

$ echo "<Public IP of Kubernetes Master> k8s-master" | sudo tee -a /etc/hosts

Then, from the directory where you ran kube-aws you should see a clusters/ directory. You can run kubectl with the following.

$ kubectl --kubeconfig=clusters/kubernetes-ha-demo/kubeconfig get no

Run the demo

The demo uses a Kubernetes definition file, flocker-demo.yml, which you will download with the curl command in the "Create the app" step below.
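
If you want a sense of what that definition contains before downloading it, the important part is a pod that mounts a Flocker volume by dataset name, using the flocker volume source added in Kubernetes v1.1. The gist may differ in its details; below is a minimal sketch with a placeholder image and mount path, referencing the FlockerVolume1 dataset created in the next step.

apiVersion: v1
kind: Pod
metadata:
  name: flocker-demo
spec:
  containers:
  - name: demo
    image: redis
    volumeMounts:
    - name: flocker-volume
      mountPath: /data
  volumes:
  - name: flocker-volume
    flocker:
      datasetName: FlockerVolume1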

Create a volume

You can get a NodeUUID for the following command by running $ flockerctl --control-service=<PublicDNSNameOfMasterNode> list-nodes and choosing any Node UUID from the list.

$ flockerctl --control-service=<PublicDNSNameOfMasterNode> create -m name=FlockerVolume1 -s 10G --node=<NodeUUID>

List volumes

flockerctl --control-service=<PublicDNSNameOfMasterNode> list

Create the app

$ curl -O https://gist.githubusercontent.com/wallnerryan/331e40f7467d0910787a/raw/5c98bd68dd36be69e093b97a5e3e84141d924eca/flocker-demo.yml
$ kubectl --kubeconfig=clusters/kubernetes-ha-demo/kubeconfig create -f flocker-demo.yml

Get PODs

$ kubectl --kubeconfig=clusters/kubernetes-ha-demo/kubeconfig get po
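
To see which node the POD was scheduled onto and confirm the Flocker volume is attached, you can describe it (replace <podName> with the name from the previous command):

$ kubectl --kubeconfig=clusters/kubernetes-ha-demo/kubeconfig describe po <podName>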

Once your POD is running, you can terminate the EC2 instance running your POD to simulate a failure and watch your POD fail over to another node using Kubernetes and Flocker!

Note: shutting down your instance is different from terminating it and the cluster may not react the same way. This demo works by terminating the EC2 instance.
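
One way to watch the failover happen is to keep a watch on your PODs in one terminal while terminating the instance from another. The instance ID is a placeholder, and the terminate command assumes the AWS CLI; you can just as easily terminate the instance from the EC2 console.

$ kubectl --kubeconfig=clusters/kubernetes-ha-demo/kubeconfig get po --watch
$ aws ec2 terminate-instances --instance-ids <instanceIdRunningYourPod>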

Feedback

We’d love to hear your feedback!
