Deploying and migrating an Elasticsearch-Logstash-Kibana stack using Docker Part 1

Flocker + ELK

This is Part 1 of 2 in a series about using Docker to deploy and migrate an Elasticsearch-Logstash-Kibana stack. Part 1 will deploy all three containers to a single host using the Docker CLI and Docker Compose. Part 2 will deploy the three containers across multiple hosts using Docker Swarm.

Elasticsearch is a distributed, RESTful search engine built for the cloud.
It gives you a distributed, highly available search engine for your sharded data and can be configured for multi-tenancy. Elasticsearch has exploded in popularity recently, and it seems like almost everyone is using it for something, big or small.

Elasticsearch is most often used in conjunction with two other tools: Logstash, a tool for managing events and logs, and Kibana, an open-source, browser-based analytics and search dashboard. For this reason, the combination is often referred to as the ELK stack, for Elasticsearch, Logstash and Kibana.

Why run Elasticsearch in Docker

Running Elasticsearch in production does not have a reputation for being easy. First, as mentioned, most people don’t just run Elasticsearch; they run the full ELK stack. Each of these services plays a role, and for real workloads you probably want to deploy each service to its own server, which introduces complexities around multi-node deployment and networking. Furthermore, once your ELK stack is out there doing its thing, you will probably need to upgrade your Elasticsearch box to a larger node, or bring it down for maintenance at some point, because Elasticsearch is notoriously memory hungry. With tools like Docker Compose, Docker Swarm and Flocker, all of these tasks are easily accomplished for your ELK stack.

What you will learn

In this tutorial, you will learn:

  • How to write a Docker Compose file that describes a standard ELK stack as a set of three connected Docker containers.
  • How to correctly set up Elasticsearch’s data volume so that it can be moved between hosts as operations demand.
  • How to deploy your Docker Compose file to a single server using the Docker CLI.
  • How to automatically move your three ELK containers and Elasticsearch’s data to a new server.

We will use the official Elasticsearch, Logstash and Kibana images from Docker Hub.

We’ll use Docker and Flocker, an open-source data volume manager, for deployment, networking and migration ops tasks.

If you are interested in running your ELK stack across multiple hosts, we’ve also created a tutorial using Docker Swarm. You can skip ahead to that.

Setting up ELK

First, a quick overview of the various ELK components and the roles that they play.

  • Logstash receives logged messages and relays them to Elasticsearch.
  • Elasticsearch stores the logged messages in a database.
  • Kibana connects to Elasticsearch to retrieve the logged messages and presents them in a web interface.

The first thing we need to do is package up our three applications, along with their dependencies, into three separate Docker images. We will use the official images, which are available on Docker Hub.

Deploying ELK

Now that we have our Docker images, we are ready to deploy our stack. We will be using this repository for the examples in this blog post; feel free to try it out.

If you are interested in the multi-node deployment, skip ahead to the multi-node deployment of ELK.

Single Node Deployment


In this tutorial there are three nodes:

  • 1 master node with the Flocker Control Service and Docker installed.
  • 2 nodes with the Flocker Agent services and Docker installed (our ELK stack is going to move between these two nodes).

In this example we will be running our nodes on Amazon EC2, creating and attaching volumes from Amazon’s EBS service.

Note: this example will only use one agent node to deploy ELK, as we are focused on how to do a single-node deployment.

To deploy an ELK stack where Elasticsearch uses Flocker volumes to store its data, we want to make sure we tell Docker to use the Flocker plugin for Docker, either on the command line or in your Compose file.
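On the plain Docker CLI, that means passing the `--volume-driver` flag when running the container. A sketch for the Elasticsearch container alone, using the same volume name and paths as the Compose file in this tutorial:

```shell
# Sketch: run Elasticsearch by itself with a Flocker-backed data volume.
docker run -d \
  --volume-driver flocker \
  -v elasticsearch1:/usr/share/elasticsearch/data \
  -p 9200:9200 -p 9300:9300 \
  elasticsearch:latest
```

In the Compose file, the equivalent is the `volume_driver: flocker` key shown below.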

In this example I will be using the docker-compose.yml file from this repository.

  elasticsearch:
    image: elasticsearch:latest
    command: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    volume_driver: flocker
    volumes:
      - elasticsearch1:/usr/share/elasticsearch/data

  logstash:
    image: logstash:latest
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    volumes:
      - ./logstash.conf:/etc/logstash/conf.d/logstash.conf
    ports:
      - "5000:5000"
    links:
      - elasticsearch

  kibana:
    image: kibana
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200

This Docker Compose YAML file defines three containers:

  • Elasticsearch
    • This configuration includes volume_driver: flocker so that its volume is provisioned by Flocker
    • Exposed ports: 9200 and 9300
  • Logstash
    • This configuration links to the Elasticsearch container on the same host
    • Exposed port: 5000
    • Uses a custom logstash.conf from the same repository
  • Kibana
    • This configuration links to the Elasticsearch container on the same host
    • Exposed port: 5601
    • Adds an environment variable that points at the Elasticsearch container with a URL
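For reference, a minimal logstash.conf for this setup might look like the following. This is a sketch; the actual file ships with the repository, and the `hosts` option on the elasticsearch output assumes a Logstash 2.x-era plugin:

```conf
# Listen for raw log lines on TCP port 5000 (the port the Compose file maps).
input {
  tcp {
    port => 5000
  }
}

# Forward events to the linked Elasticsearch container.
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
```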

To deploy this stack on a single Docker host, run the following on “node1”.

$ user@node1:~/elk-flocker-compose# docker-compose -f docker-compose.yml up -d
Recreating elkflockercompose_elasticsearch_1
Recreating elkflockercompose_logstash_1
Recreating elkflockercompose_kibana_1

You should then have three containers running your ELK stack:

$ user@node1:~/elk-flocker-compose# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                            NAMES
fb11285f1b94        kibana                 "/docker-entrypoint.s"   4 seconds ago       Up 3 seconds>5601/tcp                           elkflockercompose_kibana_1
e4c8fd4bc8b5        logstash:latest        "/docker-entrypoint.s"   4 seconds ago       Up 3 seconds>5000/tcp                           elkflockercompose_logstash_1
4b45e21bdf66        elasticsearch:latest   "/docker-entrypoint.s"   27 seconds ago      Up 4 seconds>9200/tcp,>9300/tcp   elkflockercompose_elasticsearch_1

To add some log data to the Logstash container, you can run the following.

$ user@node1:~/elk-flocker-compose#  nc localhost 5000 < /var/log/flocker/flocker-dataset-agent.log
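You can also send events programmatically. Here is a minimal Python sketch that builds a JSON event and writes it to the Logstash TCP input; the host, port and field names are illustrative, matching the port mapping in the Compose file:

```python
import datetime
import json
import socket


def make_event(message, source="app1"):
    """Build a JSON log event; the field names here are illustrative."""
    return json.dumps({
        "@timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "message": message,
        "source": source,
    })


def send_event(event, host="localhost", port=5000):
    """Send one newline-terminated event to Logstash's TCP input."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall((event + "\n").encode("utf-8"))


if __name__ == "__main__":
    send_event(make_event("hello from Python"))
```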

Logstash relays this log data to Elasticsearch, where it is stored. When we started our ELK stack, Flocker provisioned a volume for Elasticsearch, so your data is actually stored on shared storage. We can see the volume provisioned to Elasticsearch by running the following.

$ user@localhost:-> flockerctl list
DATASET                                SIZE     METADATA              STATUS         SERVER
08159d0b-d876-4f8a-85ee-a6b17c0a8a95   75.00G   name=elasticsearch1   attached ✅   c194ff2a (

If we visit our Kibana dashboard we can browse this log data.

Flocker ELK data 1

Now it’s time to migrate our ELK stack. You may want to do this as your workload scales up, or when the server doesn’t have enough CPU, RAM or network bandwidth to handle near- and long-term capacity needs. Running your ELK stack in containers makes it portable, so you can manually or automatically move those containers to a more powerful machine with ease. You can also respond more easily to system failures like crashed servers. Flocker comes in by making sure your data directory moves to the new host along with your container, reducing downtime and headaches. The same thought process applies if you want to downgrade the host server to something more affordable with moderate performance.

First, let’s kill and remove our ELK stack from “node1”.

$ user@node1:~/elk-flocker-compose# docker-compose -f docker-compose.yml stop
Stopping elkflockercompose_kibana_1 ... done
Stopping elkflockercompose_logstash_1 ... done
Stopping elkflockercompose_elasticsearch_1 ... done

$ user@node1:~/elk-flocker-compose# docker-compose -f docker-compose.yml rm
Going to remove elkflockercompose_kibana_1, elkflockercompose_logstash_1, elkflockercompose_elasticsearch_1
Are you sure? [yN] y
Removing elkflockercompose_kibana_1 ... done
Removing elkflockercompose_logstash_1 ... done
Removing elkflockercompose_elasticsearch_1 ... done

Next, let’s start our ELK stack on “node2” using the same YAML file we had before.

$ user@node2:~/elk-flocker-compose# docker-compose -f docker-compose.yml up -d
Recreating elkflockercompose_elasticsearch_1
Recreating elkflockercompose_logstash_1
Recreating elkflockercompose_kibana_1

During the redeployment, Flocker will realize that we are asking for the volume on a different node (node2) and will move the volume Elasticsearch expects to the right node. While this happens, you will see the volume’s destination server change, and the volume become detached and then attached again.

During migration

$ user@localhost:-> flockerctl list
DATASET                                SIZE     METADATA              STATUS         SERVER
08159d0b-d876-4f8a-85ee-a6b17c0a8a95   75.00G   name=elasticsearch1    detached    1dded19e (

After migration

$ user@localhost:-> flockerctl list
DATASET                                SIZE     METADATA              STATUS         SERVER
08159d0b-d876-4f8a-85ee-a6b17c0a8a95   75.00G   name=elasticsearch1   attached ✅   1dded19e (

We can verify that our ELK stack is back up and running on our second node, “node2”:

$ user@node2:~/elk-flocker-compose# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                            NAMES
55e7c808bde4        kibana                 "/docker-entrypoint.s"   4 minutes ago       Up 4 minutes>5601/tcp                           elkflockercompose_kibana_1
e445c65d44bd        logstash:latest        "/docker-entrypoint.s"   4 minutes ago       Up 4 minutes>5000/tcp                           elkflockercompose_logstash_1
f0a21b9c2aa3        elasticsearch:latest   "/docker-entrypoint.s"   4 minutes ago       Up 4 minutes>9200/tcp,>9300/tcp   elkflockercompose_elasticsearch_1

If we visit the IP address of our new node on the correct port, we can see that all of our log data is still there, despite us stopping, removing and starting new containers! This is because our Elasticsearch data was saved in the Flocker volume and moved to the new node when we restarted the stack.
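Another quick way to confirm the data survived the move is to ask Elasticsearch directly on node2. Index names will vary; `logstash-*` is Logstash’s default index pattern:

```shell
# List indices; the logstash-YYYY.MM.DD indices should still be present.
curl http://localhost:9200/_cat/indices?v
```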

Flocker ELK data 2

Ready to see how to deploy and migrate ELK across multiple nodes using Docker Swarm? Read on to Part 2.


We’d love to hear your feedback!
