SwarmWeek - Flocker Edition Part 1
Since it’s SwarmWeek, we wanted to do a series that gives a number of different examples of how you can use Docker Swarm and Flocker together!
This is the first post in the series for #SwarmWeek. The series will take you through using a Flocker + Swarm cluster in a number of different use-cases. These use-cases include the following:
- Setting up a Swarm Cluster with Consul for Service Discovery on a Flocker Cluster
- Creating a multi-service Twitter NodeJS app without overlay networking, then transitioning its configuration to use overlay networking
- Creating a multi-host Cassandra Cluster with overlay networking and Flocker Volumes
- Creating a single Redis server with a Flocker volume and testing experimental Swarm features
This series is jam packed with goodies and is meant to be read as a fun overview of how to use Flocker and Swarm together during your #SwarmWeek adventures.
This portion of the series focuses on a subset of the list above; since it’s the first part, we will also install and configure Swarm.
Setting up a Swarm Cluster with Consul
This portion assumes you have 3 Docker hosts already created and running Flocker. We are also using Ubuntu 14.04 in the examples. Learn how to install Flocker here.
On one of your Docker hosts, we will start a Consul server. Docker uses Consul as a key/value store to hold cluster state such as networking and manager/engine info. You can run Consul in a container, but I like to start it directly on the server so I don’t have to worry about stopping and starting the Docker daemon and killing my Consul server with it.
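A minimal sketch of starting Consul directly on the host. The Consul version and the placeholder addresses in angle brackets are assumptions; adjust them for your environment.

```shell
# Download the Consul binary (version is an assumption; use whatever is current)
curl -sSL https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip -o consul.zip
unzip consul.zip && sudo mv consul /usr/local/bin/

# Run a single-node Consul server, listening on all interfaces.
# Replace <consul-ip> with this host's private IP address.
consul agent -server -bootstrap-expect 1 \
  -data-dir /tmp/consul \
  -bind=<consul-ip> -client=0.0.0.0 &
```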
Once the Consul server is up and running, and before we enable Swarm to manage our cluster, we need to prep our Docker daemon. The first thing to do is add some DOCKER_OPTS to the daemon on every node.
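On Ubuntu 14.04 these options live in /etc/default/docker. A sketch, assuming Consul is on its default port 8500 and each host’s cluster-facing interface is eth0 (both are assumptions; the IP in angle brackets is a placeholder):

```shell
# /etc/default/docker
# <consul-ip> is the host running the Consul server.
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock \
  --cluster-store=consul://<consul-ip>:8500 \
  --cluster-advertise=eth0:2375"
```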
Make sure to restart the Docker daemon after you have made the above change to your Docker options. Then, on one of your nodes, start a Primary Swarm Manager.
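A sketch of starting the primary manager with the swarm image; the port (4000) and the placeholder IPs are illustrative choices, not values from the original post:

```shell
# --replication makes this manager part of a highly available pair.
docker run -d -p 4000:4000 swarm manage -H :4000 --replication \
  --advertise <manager0-ip>:4000 consul://<consul-ip>:8500
```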
On a second node, start a Secondary Swarm Manager Replica.
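The replica is started the same way on the second node; with --replication it can take over if the primary fails. Port 4000 and the IPs are placeholders:

```shell
docker run -d -p 4000:4000 swarm manage -H :4000 --replication \
  --advertise <manager1-ip>:4000 consul://<consul-ip>:8500
```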
Then, on every Docker host that will participate in the Swarm cluster, run the following:
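A sketch of the join command, assuming each Engine listens on TCP port 2375 as set in the DOCKER_OPTS earlier (addresses are placeholders):

```shell
# Run on every host joining the cluster; <node-ip> is that host's own address.
docker run -d swarm join --advertise=<node-ip>:2375 consul://<consul-ip>:8500
```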
Now, on your Primary Swarm Manager node, you can run the docker info command to see your cluster’s status.
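For example, assuming the manager listens on port 4000 (an illustrative choice):

```shell
# Lists each node in the cluster, its status, and the manager's replication role.
docker -H tcp://<manager0-ip>:4000 info
```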
That’s it! You’re ready to start deploying applications.
Warning: this configuration does not include setting up TLS. Learn more about Docker Security.
Creating a multi-host Cassandra Cluster
Now for our first application. We have Flocker and Docker Swarm set up to support overlay networking, so we can start on our first example. This example uses this repository for creating a multi-host Cassandra cluster.
What is Cassandra?
“Apache Cassandra™ is a massively scalable open source NoSQL database. Cassandra is perfect for managing large amounts of structured, semi-structured, and unstructured data across multiple data centers and the cloud. Cassandra delivers continuous availability, linear scalability, and operational simplicity across many commodity servers with no single point of failure, along with a powerful dynamic data model designed for maximum flexibility and fast response times.”
Cassandra has automatic data distribution and built-in, customizable data replication to support transparent partitioning and redundant copies of its data. Learn more about Cassandra here.
Why use Flocker with Cassandra?
The Cassandra project notes that in-memory approaches to data storage can give you “blazing speed,” but the cost of being limited to small data sets may not be so desirable. Cassandra instead implements a “commit-log based persistence design” that lets you tune durability and performance to your needs. Allowing Cassandra to write to disk keeps your data safer, and Flocker lets you do the same in containerized environments. To learn more about Cassandra and persistence, read the article What persistence is and what does it matter.
Running a multi-node Cassandra cluster with your Swarm cluster
The first thing we want to do is create an overlay network for our cluster to use. Docker multi-host networking allows containers to easily span multiple machines while being able to reach other containers by name over the same isolated network. Let’s create an overlay network in our setup.
Note: run these docker commands against your Swarm Manager!
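Creating the overlay network, here named overlay-net to match the Compose file used later:

```shell
# -d overlay selects the multi-host overlay driver; the Consul KV store
# makes the network visible on every node in the cluster.
docker network create -d overlay overlay-net
```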
Next, we need to create the persistent volume resources needed by our Cassandra cluster.
We will create three volumes, one for each Cassandra node.
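A sketch of creating the volumes through the Flocker plugin. The volume names and the size option are illustrative assumptions, not the values from the original post:

```shell
# The flocker volume driver provisions a Flocker dataset per volume.
docker volume create -d flocker --name cassandra_1 -o size=10G
docker volume create -d flocker --name cassandra_2 -o size=10G
docker volume create -d flocker --name cassandra_3 -o size=10G
```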
Once your network and volume resources are in place, you can copy this Docker Compose file or pull it from the repository linked earlier.
Notice in the below Docker Compose v2 file that we reference our Cassandra containers by name instead of by IP address. This is because the containers are deployed on our overlay network overlay-net and can access each other by name! We also reference a Flocker volume for each Cassandra container to store state. This makes our Cassandra cluster very flexible and means the Cassandra containers will always be able to connect to each other no matter where they are started, as long as they are part of the network.
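Here is a trimmed, illustrative sketch of what such a Compose v2 file can look like (two nodes shown for brevity; the service names, image tag, and environment variables are assumptions, not the repository’s exact file):

```yaml
version: '2'
services:
  cassandra-1:
    image: cassandra:2.2
    restart: always
    environment:
      CASSANDRA_BROADCAST_ADDRESS: cassandra-1
    volumes:
      - cassandra_1:/var/lib/cassandra
    networks:
      - overlay-net
  cassandra-2:
    image: cassandra:2.2
    restart: always
    environment:
      CASSANDRA_BROADCAST_ADDRESS: cassandra-2
      CASSANDRA_SEEDS: cassandra-1   # joins the cluster via the first node, by name
    volumes:
      - cassandra_2:/var/lib/cassandra
    networks:
      - overlay-net
volumes:
  cassandra_1:
    driver: flocker
  cassandra_2:
    driver: flocker
networks:
  overlay-net:
    external: true   # the overlay network created earlier
```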
Next, we can instruct Docker Compose to start our Cassandra cluster.
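Assuming your Docker client is pointed at the Swarm manager (address and port are placeholders):

```shell
export DOCKER_HOST=tcp://<manager0-ip>:4000
docker-compose up -d
```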
View the running containers. Notice that our Cassandra nodes are deployed to 2 different Docker hosts; this is because we are using Swarm to schedule our Cassandra containers.
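When listed through the Swarm manager, each container name is prefixed with the node it landed on:

```shell
# The NAMES column shows entries like <node>/<project>_<service>_1,
# so you can see which Docker host each Cassandra container runs on.
docker ps
```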
Note: We enable restart: always to keep our Cassandra containers up, and because Swarm may deploy the containers too fast for Cassandra to bootstrap correctly, causing an “Other bootstrapping/leaving/moving nodes detected” error. The restart will retry the bootstrap when this happens, in which case you would see a “Detected previous bootstrap failure; retrying” message.
Verify that the Cassandra containers are using the Flocker volumes.
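You can confirm this from the manager; the volume name passed to inspect is illustrative:

```shell
# The DRIVER column should read "flocker" for the Cassandra volumes.
docker volume ls
docker volume inspect cassandra_1
```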
SSH into one of your Docker hosts that is running a Cassandra container.
Next, let’s connect to our Cassandra cluster and interact with it. We can run a one-off CLI container on the same network and connect to any of our Cassandra nodes.
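A sketch of a one-off cqlsh container, assuming a Cassandra service reachable as cassandra-1 on the overlay-net network (the service name and image tag are placeholders):

```shell
# --rm discards the container on exit; --net puts it on the overlay network,
# so the Cassandra node can be reached by name.
docker run -it --rm --net=overlay-net cassandra:2.2 cqlsh cassandra-1
```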
There you have it, you’ve deployed Cassandra with Docker Swarm and Flocker with overlay networking using Docker Compose.
We’d love to hear your feedback!