Tutorial: Deploying a Replicated Redis Cluster on Kubernetes with Flocker


Table of Contents

Background

Tutorial

Take-aways

Background

Redis is the most popular stateful service to run on Docker, with almost 18 million pulls on Docker Hub.

What is Redis? It is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

Flocker is a container data volume manager that is designed to allow databases like Redis to easily run inside containers in production. When running a database in production, it's important to think about things like recovering from host failure and availability of data. Flocker provides tools for managing data volumes across a cluster of machines like those in a production environment. For example, as a Redis container is scheduled between hosts in response to server failure, Flocker can automatically move its associated data volume between hosts at the same time. This operation can be scheduled using the Flocker API or CLI, or automatically by a container orchestration framework that Flocker integrates with, such as Docker Swarm, Kubernetes and Mesos.

In this example, we’ll be using Kubernetes to move Docker containers among nodes in a cluster.

Why run Redis in Docker?

Redis has a number of persistence options that offer advantages such as increased durability and advanced snapshots. By using these options along with Flocker, Redis gains increased flexibility and safety for persisted data in a container environment.

Also, as database workloads like Redis scale up or down, we need to make sure that Redis Server has as much (or as little) CPU, RAM, and network bandwidth as needed to handle our capacity needs. Running Redis Server in a container makes it portable, so it can be moved manually or automatically to a different, possibly more powerful server with ease. Additionally, running Redis in a container means it can be automatically rescheduled to a new host in failure scenarios such as crashed or failing servers.

This is where Flocker comes in. By running Redis in a Docker container managed by Flocker, when the container is rescheduled, its data moves to the new host along with the container, reducing downtime and headaches. Flocker can be used to manage volumes attached in a 1:1 fashion with replicated Redis slaves or individual Redis masters.

What you will learn in this blog

In this tutorial you’ll learn how to run a Redis master-slave cluster deployment on Kubernetes. Additionally, we’ll show you how to migrate a Redis container and its data stored on AWS EBS between hosts in case of scheduled maintenance or other operational tasks.

The tutorial takes advantage of Kubernetes to deploy Redis slaves and connect them to the cluster. We will create a Replication Controller and a Service for the Redis master and for every Redis slave. This lets Redis slaves use Kubernetes' built-in service discovery to join the cluster over the network.

In order to run the tutorial, we’ll first start by installing and configuring Kubernetes and Flocker. If you already have Kubernetes and Flocker installed, skip to the good bits.

NOTE: The Redis containers in this tutorial use AOF (Append Only File) persistence. To learn why you might want to do this, click the link.
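
For reference, AOF is what the Redis master container below enables on its command line. A minimal sketch of the same thing outside Kubernetes (the --appendfsync value shown is the Redis default and is only illustrative):

# Start Redis with the append-only file enabled, fsyncing roughly once per second
$ /usr/bin/redis-server --appendonly yes --appendfsync everysec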

Tutorial

Architecture

In this tutorial there are:

  • 5 nodes total.
  • 1 master node with the Kubernetes master and Docker installed.
  • 1 node for the Flocker Control Service.
  • 3 nodes with the Flocker agent services, Docker, and the Kubernetes node services installed (our Redis cluster will be deployed on, and moved between, these nodes inside PODs).

In this example we will be running our nodes on Amazon EC2 and creating and attaching volumes from Amazon’s EBS service.

Installing prerequisites

QuickStart: Setting up a Flocker + Kubernetes Cluster

Step 1

This QuickStart installs Kubernetes v1.1

Sign up for an Amazon AWS account

Install Kubernetes on Ubuntu using this guide

Step 2

The guide above uses some environment variables during installation. Export these optional settings if you would like to follow along.

NOTE: Using larger instance types will cost more. Also, make sure you have the AWS CLI installed and configured.

export KUBE_AWS_ZONE=us-east-1c
export NUM_MINIONS=3
export MINION_SIZE=m3.medium
export MASTER_SIZE=m3.medium
export AWS_S3_REGION=us-east-1

Create the Kubernetes Cluster

NOTE: You will need EC2 Full access and S3 Access for this process and it could take up to 10-15 minutes to complete.

export KUBERNETES_PROVIDER=aws; curl -sS https://get.k8s.io | bash

Example Output

Kubernetes cluster is running.  The master is running at:

  https://54.175.188.62

The user name and password to use is located in /Users/username/.kube/config.

... calling validate-cluster
Waiting for 2 ready nodes. 0 ready nodes, 2 registered. Retrying.
Found 2 node(s).
NAME                           LABELS                                                STATUS    AGE
ip-172-20-0-103.ec2.internal   kubernetes.io/hostname=ip-172-20-0-103.ec2.internal   Ready     1h
ip-172-20-0-104.ec2.internal   kubernetes.io/hostname=ip-172-20-0-104.ec2.internal   Ready     1h
ip-172-20-0-105.ec2.internal   kubernetes.io/hostname=ip-172-20-0-105.ec2.internal   Ready     1h
Validate output:
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   nil
etcd-0               Healthy   {"health": "true"}   nil
etcd-1               Healthy   {"health": "true"}   nil
controller-manager   Healthy   ok                   nil
Cluster validation succeeded
Done, listing cluster services:

Kubernetes master is running at https://54.175.188.62
Elasticsearch is running at https://54.175.188.62/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://54.175.188.62/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://54.175.188.62/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://54.175.188.62/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://54.175.188.62/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://54.175.188.62/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://54.175.188.62/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

Kubernetes binaries at /Users/username/kubernetes/cluster/
You may want to add this directory to your PATH in $HOME/.profile
Installation successful!

You can view your UI by going to the KubeUI URL above and entering your username and password from the .kube/config location.
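
If you need to dig those credentials out of the file, a quick (illustrative) way is to grep the kubeconfig the installer wrote:

# Print the basic-auth username and password stored by kube-up
$ grep -E 'username|password' /Users/username/.kube/config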

KubeUI

Download the kubectl tool to use Kubernetes

Use kubectl

The kubectl.sh path is from the installation output.

$ kubernetes/cluster/kubectl.sh get no
NAME                           LABELS                                                STATUS    AGE
ip-172-20-0-103.ec2.internal   kubernetes.io/hostname=ip-172-20-0-103.ec2.internal   Ready     1h
ip-172-20-0-104.ec2.internal   kubernetes.io/hostname=ip-172-20-0-104.ec2.internal   Ready     1h
ip-172-20-0-105.ec2.internal   kubernetes.io/hostname=ip-172-20-0-105.ec2.internal   Ready     1h

NOTE: We suggest having a separate node for your Flocker Control Service and Kubernetes master services.

We'll create a separate node for the Flocker Control Service.

You can use the AWS Console Launch Wizard for this.

Use the same AMI that the Kubernetes Install is using.

AMI

Launch in the correct VPC; you can find the VPC information by clicking any existing Kubernetes node.

VPC

Use the same kubernetes KeyPair created with the above tool. It should look something like this.

KEYPAIR

Add the security groups for kubernetes.

SECGROUPS

Finally, create an Elastic IP for the Flocker Control node and associate it with the control service instance. You can do this by selecting “Elastic IPs” from the menu on the left of the AWS console.

DNS
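
If you prefer the AWS CLI to the console for these steps, a rough sketch follows; every ID below is a placeholder you would substitute with the values gathered above:

# Launch the Flocker Control Service node into the Kubernetes VPC (illustrative values)
$ aws ec2 run-instances --image-id <kubernetes-ami-id> --instance-type m3.medium \
    --key-name kubernetes --security-group-ids <kubernetes-master-sg-id> \
    --subnet-id <kubernetes-subnet-id>

# Allocate an Elastic IP and associate it with the new instance
$ aws ec2 allocate-address --domain vpc
$ aws ec2 associate-address --instance-id <control-node-instance-id> --allocation-id <eipalloc-id>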

Step 3

Find the security group IDs for the master and worker nodes. You can find them with the AWS CLI (aws ec2) or in your AWS Console.

Examples

kubernetes-master-kubernetes --> id: sg-0b203872
kubernetes-minion-kubernetes --> id: sg-fb203882
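
You can also pull these IDs with a single AWS CLI call; a sketch, assuming the group names the installer creates:

# Look up the master and minion security group IDs by name
$ aws ec2 describe-security-groups \
    --filters Name=group-name,Values=kubernetes-master-kubernetes,kubernetes-minion-kubernetes \
    --query 'SecurityGroups[].{Name:GroupName,Id:GroupId}'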

Open up the Flocker ports using the AWS CLI.

(Flocker)
$ CONTROLLER_SEC_GROUP=sg-0b203872
$ WORKER_SEC_GROUP=sg-fb203882
$ aws ec2 authorize-security-group-ingress --group-id $CONTROLLER_SEC_GROUP --protocol tcp --port 4523-4524 --cidr 0.0.0.0/0

$ aws ec2 authorize-security-group-ingress --group-id $WORKER_SEC_GROUP --protocol tcp --port 4523-4524 --cidr 0.0.0.0/0

Step 4

Install the Flocker CLI and create a yaml file for Flocker installation.

Install CLI on your local machine.

$ curl -sSL https://get.flocker.io |sh

Create a flocker.yaml

Example Flocker YAML.

You will need to replace the Public IPs, Private IPs, Key location and Control Node DNS name.

cluster_name: my-cluster
agent_nodes:
 # Replace these PUBLIC and PRIVATE IPS!
 - {public: 54.88.115.196, private: 172.20.0.103}
 - {public: 54.208.144.177, private: 172.20.0.104}
 - {public: 54.86.193.203, private: 172.20.0.105}

control_node: ec2-52-72-122-10.compute-1.amazonaws.com # YOUR Control Node DNS
users:
 - ubuntu
os: ubuntu
private_key_path: /Users/username/.ssh/kube_aws_rsa # YOUR private key for the cluster (kube_aws_rsa)
agent_config:
  version: 1
  control-service:
     hostname: ec2-52-72-122-10.compute-1.amazonaws.com # YOUR Control Node DNS.
     port: 4524
  dataset:
    backend: "aws"
    region: "<Your Amazon Region>" # YOUR AWS Region
    zone: "<Your Amazon AvailabilityZone>" # YOUR AWS AvailabilityZone
    access_key_id: "<Your AWS Key>" # YOUR AWS AccessKey
    secret_access_key: "<Your AWS Secret Key>" # YOUR AWS SecretAccessKey

Hint: You can find the IPs easily with the AWS CLI if you want. Examples below.

# (Controller IP)
$ aws ec2 describe-instances --filter Name=tag:Name,Values=kubernetes-master | grep -A1 "PublicIpAddress"

# (Controller DNS)
$ aws ec2 describe-instances --filter Name=tag:Name,Values=kubernetes-master | grep -m 2 -e "PublicDnsName"

# (Worker Node IPs)
$ aws ec2 describe-instances --filter Name=tag:Name,Values=kubernetes-minion | grep -A1 "PublicIpAddress"

Once your flocker.yaml file is complete, install Flocker with these commands.

$ uft-flocker-install flocker.yaml
$ uft-flocker-config flocker.yaml
$ uft-flocker-plugin-install flocker.yaml

Configure your Flocker + K8S Cluster to use Flocker

NOTE: You can follow the manual instructions for configuring Flocker here, or use the script below for automation.

Run the configuration script

You will need Python Twisted and PyYAML installed to run this script.

$ git clone https://github.com/wallnerryan/kube-aws-flocker
$ ./kube-aws-flocker/config_k8s_flocker.py flocker.yaml

You should see output like the following if the script was successful.

.
.
Uploaded api certs
Enabled flocker ENVs for 54.86.193.203
Enabled flocker ENVs for 54.208.144.177
Enabled flocker ENVs for 54.88.115.196
Uploaded Flocker ENV file.
Configured Flocker ENV file.
Restarted Kubelet for 54.86.193.203
Restarted Kubelet for 54.208.144.177
Restarted Kubelet for 54.88.115.196
Restarted Kubelet
Completed

Make sure your certs are named correctly so you can use flockerctl against your cluster.

$ cp ubuntu.crt user.crt
$ cp ubuntu.key user.key
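
With the certs in place, a quick sanity check is to point flockerctl at the control service (the same command is used again later in the tutorial):

$ flockerctl --control-service=<Control Service DNS> list-nodes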

Once Flocker and Kubernetes are installed, install the Volume Hub Dashboard on the Flocker/Kubernetes nodes.

Setup Volume Hub Dashboard

The Volume Hub displays a catalog of all the data volumes in a Flocker cluster, gives visibility into your logs, and makes for a more seamless user experience.

  1. Create a free Volume Hub account

Volume Hub Signup

Configure your cluster for the Volume Hub

The Volume Hub Agents push the latest metadata about your cluster into the Volume Hub.

1. On the control service node
$ TARGET=control-service TOKEN="[your token]" \
    sh -c 'curl -ssL https://get-volumehub.clusterhq.com/ |sh'
2. On just one of your agent nodes
$ TARGET=agent-node RUN_FLOCKER_AGENT_HERE=1 \
    TOKEN="[your token]" \
    sh -c 'curl -ssL https://get-volumehub.clusterhq.com/ |sh'
3. On all the rest of your agent nodes
$ TARGET=agent-node TOKEN="[your token]" \
    sh -c 'curl -ssL https://get-volumehub.clusterhq.com/ |sh'

Go back to your Volume Hub account and you should see the cluster and agent nodes displayed with no attached volumes.

Volume Hub Dashboard

Getting Started Running Redis on Kubernetes

Now that Kubernetes and Flocker are installed we can begin the steps needed to deploy Redis.

The first thing that needs to be done is to create the volume resources so Kubernetes can use them when deploying PODs. Let’s start by creating two volumes, one for a Redis master and another for a Redis slave.

  1. Create 2 separate volumes, one for the Redis master and one for the Redis slave
  2. Deploy Redis Master Replication Controller and Service Endpoint
  3. Generate some sample data into Redis
  4. Deploy the Redis slave Replication Controller and Service Endpoint

Creating Volumes for our Kubernetes PODS

The first step is to create the volume resources that our Kubernetes PODS will use to store the Redis data.

List the nodes, and choose one UUID.

$ flockerctl --control-service=<Control Service DNS> list-nodes
SERVER     ADDRESS
695591f9   172.20.0.34
d74be175   172.20.0.35
f65019a5   172.20.0.36

Volume for Redis Master

$ flockerctl --control-service=<Control Service DNS> create -m name=redis-master -s 10G -n <Flocker Agent UUID>

Volume for Redis Slave

$ flockerctl --control-service=<Control Service DNS> create -m name=redis-slave -s 10G -n <Flocker Agent UUID>

You should be able to look at your Volume Hub dashboard and see the volumes being created and attached.

Volumes being created Flocker + Redis Created

Volumes attached Flocker + Redis Attached

Volumes View Flocker + Redis Vols View

Running a Redis Master Replication Controller

Now that we have created the Volumes we can deploy the Replication Controllers, PODs, and Services. The first thing we want to do is define a Replication Controller which defines the containers that will make up a POD and the minimum number of PODs to run at any given time.

This POD will have a replica count of 1 (because we only want one container using our one volume), a Redis master Docker container, and a Flocker volume used by the Redis master. Below is an example of how this looks; you can save this file as redis-controller.yaml.

kind: "ReplicationController"
apiVersion: "v1"
metadata: 
  name: "redis-master"
  labels: 
    app: "redis"
    role: "master"
spec: 
  replicas: 1
  selector: 
    app: "redis"
    role: "master"
  template: 
    metadata: 
      labels: 
        app: "redis"
        role: "master"
    spec: 
      containers: 
        - name: "redis-master"
          image: "wallnerryan/redis"
          env:
          - name: GET_HOSTS_FROM
            value: env
          command: ["/usr/bin/redis-server", "--appendonly","yes"]
          ports: 
            - name: "redis-server"
              containerPort: 6379
          volumeMounts:
            - name: redis-master-data
              mountPath: "/var/lib/redis"
      volumes:
        - name: redis-master-data
          flocker:
            datasetName: redis-master

To run this POD, use kubectl with the create command and pass it the YAML file from above.

NOTE: The script automatically uses the kubeconfig file created by the installation above. Read the documentation above on kubectl for more information.

$ kubernetes/cluster/kubectl.sh  create -f redis-controller.yaml
replicationcontroller "redis-master" created

The Master POD should become available if you issue the get po command.

$ kubernetes/cluster/kubectl.sh  get po
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-pxvuh   1/1       Running   0          7m

To verify the 10G volume is available inside your container, you can use the following command to check the /var/lib/redis directory and device.

NOTE: replace <Redis POD NAME> with the name from above command.

$ kubernetes/cluster/kubectl.sh exec <Redis POD NAME> -- df -h | grep redis
/dev/xvdf       9.8G   23M  9.2G   1% /var/lib/redis

Next, create a Kubernetes Service for the master to expose it within your cluster. Below is an example YAML you can use; save it as redis-service.yaml.

kind: "Service"
apiVersion: "v1"
metadata: 
  name: "redis-master"
  labels: 
    app: "redis"
    role: "master"
spec: 
  ports: 
    - 
      port: 6379
      targetPort: "redis-server"
  selector: 
    app: "redis"
    role: "master"

Then create the service the same way you created the controller.

$ kubernetes/cluster/kubectl.sh create -f redis-service.yaml
service "redis-master" created

Redis Master should then be available as a Service and Endpoint. Use the get svc and get ep commands to see this. Here is an example of get ep.

$ kubernetes/cluster/kubectl.sh get ep
NAME           ENDPOINTS        AGE
kubernetes     10.0.0.50:443    19h
redis-master   10.2.51.8:6379   11h

This endpoint can now be used by other PODs because it is exposed as a service. The service is available at the IP shown above, or via the environment variable REDIS_MASTER_SERVICE_HOST.

NOTE: Environment variables are only one of the ways Kubernetes exposes services. Please see the Kubernetes documentation for more detail.
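
For example, any POD created after the service exists will see the injected variables; one quick way to confirm this (the POD name and IP below are illustrative):

# Show the environment variables Kubernetes injected for the redis-master service
$ kubernetes/cluster/kubectl.sh exec <pod-created-after-the-service> -- env | grep REDIS_MASTER
REDIS_MASTER_SERVICE_HOST=10.3.0.60
REDIS_MASTER_SERVICE_PORT=6379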

Add data to Redis Master

To add data to Redis, log into the container and run the redis CLI. Below is an example of this.

NOTE: The IP (10.2.51.8) used in the example comes from the get ep command above.

$ kubernetes/cluster/kubectl.sh exec redis-master-pxvuh -c redis-master -it -- /bin/bash
root@redis-master-pxvuh:/var/lib/redis# redis-cli -h 10.2.51.8 rpush mylist A
root@redis-master-pxvuh:/var/lib/redis# redis-cli -h 10.2.51.8 rpush mylist B
root@redis-master-pxvuh:/var/lib/redis# redis-cli -h 10.2.51.8 lpush mylist first
root@redis-master-pxvuh:/var/lib/redis# redis-cli -h 10.2.51.8 lpush mylist last
root@redis-master-pxvuh:/var/lib/redis# redis-cli -h 10.2.51.8 lrange mylist 0 -1
1) "laste"
2) "first"
3) "A"
4) "B"
root@redis-master-pxvuh:/var/lib/redis# exit
exit

Adding a Redis Slave for Replication

The next step is to create a Redis slave and add it to our Redis master to achieve a small replicated Redis cluster.

NOTE: You may add one or more slaves at this point; adding a single slave is only an example.

To do this, repeat the process from above, except this time we define a Service Spec and a Replication Controller Spec for the slave.

Here is the Spec for the controller. Notice it uses the redis-slave volume created earlier. Save this one as redis-slave-controller.yaml.

kind: "ReplicationController"
apiVersion: "v1"
metadata: 
  name: "redis-slave"
  labels: 
    app: "redis"
    role: "slave"
spec: 
  replicas: 1
  selector: 
    app: "redis"
    role: "slave"
  template: 
    metadata: 
      labels: 
        app: "redis"
        role: "slave"
    spec: 
      containers: 
        - name: "redis-slave"
          image: "wallnerryan/redis-slave"
          env:
          - name: GET_HOSTS_FROM
            value: env
          ports: 
            - name: "redis-server"
              containerPort: 6379
          volumeMounts:
            - mountPath: "/var/lib/redis"
              name: redis-slave-data
      volumes:
        - name: redis-slave-data
          flocker:
            datasetName: redis-slave

Here is the spec for the service. Save this one as redis-slave-service.yaml

kind: "Service"
apiVersion: "v1"
metadata: 
  name: "redis-slave"
  labels: 
    app: "redis"
    role: "slave"
spec: 
  ports: 
    - 
      port: 6379
      targetPort: "redis-server"
  selector: 
    app: "redis"
    role: "slave"

Then, create them with kubectl.

$ kubernetes/cluster/kubectl.sh create -f redis-slave-controller.yaml
replicationcontroller "redis-slave" created

$ kubernetes/cluster/kubectl.sh create -f redis-slave-service.yaml
service "redis-slave" created

Verify that the slave is running with get po. This command should output both the Redis master and Redis slave PODs.

$ kubernetes/cluster/kubectl.sh get po
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-pxvuh   1/1       Running   0          26m
redis-slave-zszm7    1/1       Running   0          1m
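
If you want additional slaves, each one needs its own Flocker volume and its own controller; a hypothetical second slave would look something like this (names are illustrative):

# Create a dedicated volume for the second slave
$ flockerctl --control-service=<Control Service DNS> create -m name=redis-slave-2 -s 10G -n <Flocker Agent UUID>

# Copy redis-slave-controller.yaml, change the controller name, labels and datasetName to
# redis-slave-2, then create it the same way as before
$ kubernetes/cluster/kubectl.sh create -f redis-slave-2-controller.yaml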

Verify that our Redis Slave SYNCed with the Master

To check on the status of the Redis Slave, run the following.

$ kubernetes/cluster/kubectl.sh logs -c redis-slave redis-slave-zszm7

The logs should display output showing our Master/Slave syncing and replicating our sample data from our master into memory.

HINT: Notice the MASTER <-> SLAVE sync started portion of the logs.

...
[6] 26 Jan 05:11:38.894 # Server started, Redis version 2.8.4
[6] 26 Jan 05:11:38.894 * The server is now ready to accept connections on port 6379
[6] 26 Jan 05:11:38.894 * Connecting to MASTER 10.3.0.60:6379
[6] 26 Jan 05:11:38.894 * MASTER <-> SLAVE sync started
[6] 26 Jan 05:11:38.903 * Non blocking connect for SYNC fired the event.
[6] 26 Jan 05:11:38.904 * Master replied to PING, replication can continue...
[6] 26 Jan 05:11:38.904 * Partial resynchronization not possible (no cached master)
[6] 26 Jan 05:11:38.905 * Full resync from master: 552a27f5bb5f2284c436f278c64c16dad04c95ad:3417
[6] 26 Jan 05:11:38.937 * MASTER <-> SLAVE sync: receiving 60 bytes from master
[6] 26 Jan 05:11:38.938 * MASTER <-> SLAVE sync: Flushing old data
[6] 26 Jan 05:11:38.938 * MASTER <-> SLAVE sync: Loading DB in memory
[6] 26 Jan 05:11:38.938 * MASTER <-> SLAVE sync: Finished with success
[6] 26 Jan 05:11:38.939 * Background append only file rewriting started by pid 9
[9] 26 Jan 05:11:38.957 * SYNC append only file rewrite performed
[9] 26 Jan 05:11:38.958 * AOF rewrite: 6 MB of memory used by copy-on-write
[6] 26 Jan 05:11:38.997 * Background AOF rewrite terminated with success
[6] 26 Jan 05:11:38.997 * Parent diff successfully flushed to the rewritten AOF (0 bytes)
[6] 26 Jan 05:11:38.997 * Background AOF rewrite finished successfully

To see that our Redis Slave actually SYNCed the data, let's log in and list the contents.

NOTE: The IP (10.2.64.6) used in the example is the redis-slave endpoint; run the get ep command again now that the slave service exists to see it.

$ kubernetes/cluster/kubectl.sh exec redis-slave-zszm7 -c redis-slave -it -- /bin/bash
root@redis-slave-zszm7:/var/lib/redis# redis-cli -h 10.2.64.6 lrange mylist 0 -1
1) "last"
2) "first"
3) "A"
4) "B"

There you have it! All our data is in the database. We can also check that the data is persisted to disk by running the following command. The output is the Redis protocol as it is written to disk, which is handy because it can be replayed for recovery purposes.

$ kubernetes/cluster/kubectl.sh exec redis-slave-zszm7 -c redis-slave -it -- cat /var/lib/redis/appendonly.aof
*2
$6
SELECT
$1
0
*3
$5
rpush
$6
mylist
$1
A
*3
$5
rpush
$6
mylist
$1
B
*3
$5
lpush
$6
mylist
$5
first
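
Because the AOF is just the Redis protocol, it could, for example, be replayed into an empty Redis instance for recovery. A minimal sketch, assuming you have copied the file somewhere with network access to the target (the host is a placeholder):

# Stream the append-only file into another Redis instance
$ cat appendonly.aof | redis-cli -h <target-redis-host> -p 6379 --pipe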

Performing a data migration

A common operation might be to migrate your Redis containers / PODs in the case of scheduled downtime or upgrades. One way to do this is by labeling Kubernetes nodes and migrating our application, while letting Flocker worry about where the data should move.

Label a node

The first thing we want to do is label a node that won't be affected by the upgrade or maintenance. You can do this with kubectl. Let's label a node with servertype=staging.

$ kubernetes/cluster/kubectl.sh label nodes ip-172-20-0-104.ec2.internal servertype=staging
node "ip-172-20-0-104.ec2.internal" labeled

Now we can list the nodes and see the label is there.

$ kubernetes/cluster/kubectl.sh get no
NAME                           LABELS                                                                   STATUS    AGE
ip-172-20-0-103.ec2.internal   kubernetes.io/hostname=ip-172-20-0-103.ec2.internal                      Ready     1h
ip-172-20-0-104.ec2.internal   kubernetes.io/hostname=ip-172-20-0-104.ec2.internal,servertype=staging   Ready     1h
ip-172-20-0-105.ec2.internal   kubernetes.io/hostname=ip-172-20-0-105.ec2.internal                      Ready     1h

To move our POD we can add the nodeSelector attribute to the spec file. To move the Redis slave we open our redis-slave-controller.yaml file and add the nodeSelector section.

spec:
  containers:
    - name: "redis-slave"
      image: "wallnerryan/redis-slave"
      env:
      - name: GET_HOSTS_FROM
        value: env
      ports:
        - name: "redis-server"
          containerPort: 6379
      volumeMounts:
        - mountPath: "/var/lib/redis"
          name: redis-slave-data
  nodeSelector:
    servertype: "staging"

To redeploy the container and make sure it moves to the selected server along with its data, delete and re-create the replication controller (and thus the POD).

$ kubernetes/cluster/kubectl.sh delete -f redis-slave-controller.yaml

$ kubernetes/cluster/kubectl.sh create -f redis-slave-controller.yaml

This will cause the Redis slave POD to be rescheduled onto the specific node with servertype: "staging". During this rescheduling, Flocker will move the data volume and attach it to the host where the POD is started, so your data remains available!

NOTE: If you had more than one server with this label, Kubernetes would choose one of them.
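
To confirm which node the POD landed on after it was re-created, the wide output includes a NODE column:

# The NODE column should now show the node labeled servertype=staging
$ kubernetes/cluster/kubectl.sh get po -o wide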

You can view the fact that Flocker has moved your volume by using the ClusterHQ Volume Hub and taking note of the changes.

Volume is being moved Flocker + Redis ReAttached

Volumes re-attached to staging Kubernetes node Flocker + Redis ReAttached

Click Containers view and search for redis-slave Flocker + Redis Search

You can view your Redis Master and Slave in the Kube UI both configured with Flocker EBS Volumes Flocker + Redis In UI

Using nodeSelector is a useful scheduling filter, and in this case we use it for migration. Flocker and Kubernetes react the same way in the case of a node failure: Kubernetes recognizes the failed node and reschedules the container automatically, while Flocker places your data where it's needed.

Take-aways

Running Redis on Kubernetes with Flocker volumes is useful in single-instance examples, but to run Redis in a more production-like configuration, using its clustering options is preferable. The main takeaways from this post are:

  • Redis in a master-slave configuration works well with Flocker and Kubernetes.
  • Using the Volume Hub makes it easy to visualize containers, volumes and movements with Kubernetes.
  • Spinning up new Redis slaves and persisting their data to shared storage is easily achieved with Flocker.
  • An extra level of safety for your data comes from not only running a replicated cluster but also using shared storage behind the Redis database.
  • Migrations and movements done by the Kubernetes scheduler are atomic with Flocker; your data moves with the container.

We’d love to hear your feedback!
