This is Part 2 of 2 in a series about using Docker to deploy and migrate an Elasticsearch-Logstash-Kibana stack. Part 1 showed you how to deploy all three containers to a single host using the Docker CLI and Docker Compose. In Part 2, we will deploy the three containers across multiple hosts using Docker Swarm.
## Part 1 Summary
- Elasticsearch is a distributed, RESTful search engine usually used in conjunction with Logstash and Kibana.
- Docker is a great place to run Elasticsearch because Docker makes it easy to move your Elasticsearch, Logstash and Kibana containers between servers.
- While Logstash and Kibana are stateless, Elasticsearch is stateful. If you want to be able to migrate your Elasticsearch data along with your Elasticsearch container between hosts, you can use Flocker.
## What you will learn in Part 2

Tutorial 1 deployed all three containers in the ELK stack to a single host. While useful for exploratory purposes, this is not a very realistic production setup. Elasticsearch is notoriously memory-hungry, so you probably want to give it its own dedicated machine, perhaps one with access to high-performance storage for fast reads and writes.
In this tutorial, you will learn:
- How to deploy the ELK stack across multiple hosts.
- How to orchestrate your containers using Docker Swarm.
- How to automatically move your three ELK containers and Elasticsearch's data to a new server.
Like before, we will use the official Elasticsearch, Logstash and Kibana images from Docker Hub.
We'll use Docker Swarm for orchestration and Flocker, an open-source container data volume manager, for the deployment, networking and migration ops tasks.
We will be using this repository for the examples in this blog post; feel free to try it out.
## Multi-Node Deployment
In this tutorial there are 3 nodes:
- 1 master node with our Flocker Control Service and Docker installed.
- 2 nodes with our Flocker Agent services and Docker installed (our ELK stack is going to be deployed across these nodes via Swarm).
In this example, we will be running our nodes on Amazon EC2, creating and attaching volumes from Amazon's EBS service.
To run an ELK stack in a multi-node fashion, we can use Docker Swarm for orchestrating the placement of our containers.
First, make sure Swarm is set up on your cluster. Below is an example of what this may look like; notice we have 2 nodes in our cluster.
Swarm setup will not be covered in this post. See the Swarm docs for more on installing Swarm.
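As a quick sketch of what checking your cluster can look like (the manager address and port below are placeholders, not values from this tutorial):

```shell
# Point the Docker client at the classic Swarm manager (substitute your
# own manager address and port -- these are placeholders).
export DOCKER_HOST=tcp://<swarm-master-ip>:4000

# Run against a Swarm manager, `docker info` reports on the cluster as a
# whole; for this tutorial's setup it should show a "Nodes: 2" line with
# one entry per agent node.
docker info
```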
From this repository, you can `cd multi-node/` for an example `docker-compose-multi.yml` file you can use for deploying ELK against a Swarm cluster.
Edit `docker-compose-multi.yml` so that the Elasticsearch section of the Compose file contains the node #1 name from your `docker info` output. Also add the IP address of the same node in the Logstash and Kibana configuration sections.
This will make sure Kibana points correctly at the right IP address of the Elasticsearch node.
An alternate solution would be to enable Docker overlay networking, but that will not be covered in this post.
Here is a snippet of what your `docker-compose-multi.yml` may look like.
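The sketch below is illustrative only: it assumes the Elasticsearch image's default data path, a Logstash pipeline with a TCP input on port 5000, and classic Swarm's `constraint:node==` environment-variable syntax for placement. Substitute your own node name and IP from `docker info`.

```yaml
# Illustrative Compose v1 sketch -- the node name, IPs, ports and the
# Logstash pipeline are example values, not ones to copy verbatim.
elasticsearch:
  image: elasticsearch
  ports:
    - "9200:9200"
  volume_driver: flocker               # let Flocker manage the data volume
  volumes:
    - 'es-data:/usr/share/elasticsearch/data'
  environment:
    - "constraint:node==ip-10-0-70-72" # pin Elasticsearch to node #1

logstash:
  image: logstash
  command: logstash -e 'input { tcp { port => 5000 } }
           output { elasticsearch { hosts => ["10.0.70.72:9200"] } }'
  ports:
    - "5000:5000"

kibana:
  image: kibana
  ports:
    - "5601:5601"
  environment:
    - ELASTICSEARCH_URL=http://10.0.70.72:9200   # node #1's IP address
```

The placement constraint is what tells Swarm to schedule Elasticsearch on the node whose storage Flocker will manage; Logstash and Kibana carry no constraint, so Swarm is free to place them elsewhere.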
Now, run your ELK stack in the same fashion as you did in Part 1, but this time we're talking to our Swarm master.
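For example (a sketch; the manager endpoint is a placeholder for your own):

```shell
# Talk to the Swarm manager instead of a single Docker engine
# (manager address and port are placeholders).
export DOCKER_HOST=tcp://<swarm-master-ip>:4000

# Bring up the stack; Swarm schedules each service onto a cluster node.
docker-compose -f docker-compose-multi.yml up -d
```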
During this process you should see a volume getting created for the Elasticsearch container; here is the output of the Flocker CLI during the process.
Volume is being created
Volume is created and attached to the host
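If you want to watch this from the command line yourself, something like the following should work, assuming `flockerctl` is configured with your control service's address and certificates (the exact output shape will vary by Flocker version):

```shell
# List the Flocker agent nodes in the cluster.
flockerctl list-nodes

# List datasets; the new Elasticsearch volume should progress from a
# pending state to "attached" on the node running Elasticsearch.
flockerctl list
```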
Now, if we run a `docker ps` command, we can see our containers are running on different nodes in our cluster. Elasticsearch is on `ip-10-0-70-72`, and Logstash and Kibana are on the other node.
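With classic Swarm, the scheduling node shows up in the container name itself, so a quick way to check placement is (the name shown in the comment is illustrative):

```shell
# Classic Swarm prefixes each container's name with the node that runs
# it, e.g. "ip-10-0-70-72/elasticsearch", so the names column alone
# tells you where every container landed.
docker ps --format '{{.Names}}'
```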
Now, if we log in to our node running Logstash, we can add some logs.
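For example, assuming your Logstash pipeline has a TCP input listening on port 5000 (an assumption; adjust the host and port to your own configuration):

```shell
# Send a test log line to Logstash over TCP (the hostname is a
# placeholder for your Logstash node, and port 5000 assumes a TCP input
# configured on that port).
echo "hello from the ELK tutorial" | nc <logstash-node-ip> 5000
```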
Now we can visit our Kibana UI and see we have some data.
Now we can move our Elasticsearch database to perform a migration. You may want to do this as your workload scales up, or when the server doesn't have enough CPU, RAM, or network bandwidth to handle near- and long-term capacity needs. Running your ELK stack in containers makes it portable, so you can manually or automatically move a container to a more powerful machine with ease. You can also respond more easily to system failures like crashed servers. Flocker comes in by making sure your data directory moves to the new host along with your container, reducing downtime and headaches. The same thought process applies if you wanted to downgrade the host server to something more affordable with moderate performance attributes.
Migrating our ELK stack with Docker Swarm means re-deploying Docker Compose against Swarm with a new configuration.
Here is our new `docker-compose-multi.yml`.
Notice that the IP address and node name in the configuration have changed to the other node.
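As an illustrative sketch of the change, only the placement constraint and the Elasticsearch address need to point at the new node (the Logstash pipeline and ports here are illustrative assumptions; adjust to match your actual file):

```yaml
# Only the changed lines are shown; everything else stays the same.
elasticsearch:
  environment:
    - "constraint:node==ip-10-0-202-37"  # pin Elasticsearch to the new node

logstash:
  command: logstash -e 'input { tcp { port => 5000 } }
           output { elasticsearch { hosts => ["10.0.202.37:9200"] } }'

kibana:
  environment:
    - ELASTICSEARCH_URL=http://10.0.202.37:9200  # new node's IP address
```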
We can re-deploy our ELK stack by running the following with our new configuration.
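For example (the manager endpoint is again a placeholder):

```shell
# Re-run Compose against the Swarm manager; services whose configuration
# changed are recreated, and Swarm schedules them onto the new node.
export DOCKER_HOST=tcp://<swarm-master-ip>:4000
docker-compose -f docker-compose-multi.yml up -d
```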
Now, if we run a `docker ps` command, we can see our containers are running on different nodes in our cluster. Elasticsearch and Kibana are on `ip-10-0-202-37`, and Logstash is on the other node.
We can also see that Flocker responded to the migration by moving Elasticsearch data to our new node.
Notice the IP address is now 10.0.202.37.
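Assuming the same `flockerctl` setup as before, you can confirm this from the CLI as well (output shape will vary):

```shell
# The same Elasticsearch dataset should now report as attached to the
# new node (10.0.202.37) rather than the old one.
flockerctl list
```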
What we saw during this migration was:
- Docker Swarm re-scheduled the Elasticsearch container onto the new node, following the updated constraint.
- Flocker detached the EBS volume from the old node and attached it to the new one, so Elasticsearch came back up with all of its data.
So now, if we refresh our Kibana UI (it's on the same host as it was before), we should see the same data, even though our Elasticsearch database now lives on a different host.
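As a quick sanity check from the command line (the host is the new node from this example; `_cat/indices` is a standard Elasticsearch endpoint):

```shell
# Confirm the indices -- and their documents -- survived the move to the
# new Elasticsearch host.
curl http://10.0.202.37:9200/_cat/indices?v
```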
We’d love to hear your feedback!