SwarmWeek Part 3: Deploying a NodeJS and MongoDB Microservice with Docker Swarm


SwarmWeek - Flocker Edition Part 3

Welcome to part 3 in this #SwarmWeek series. If you missed parts 1 and 2, go back and check out part 1 of the series to learn how to install the environment we're working with, and take a look at part 2 while you're at it to learn about Docker Swarm's experimental rescheduling feature.

    Deploying a NodeJS and MongoDB Microservice with Docker Swarm

    In this last part of the series we're going to show you how to deploy an app made of three microservices: two stateless services and one stateful service.

    First, a bit on stateless vs. stateful. Statefulness, and persistence in general, can be described as "the continuance of an effect after its cause is removed": we care about what has happened and want to access it later. Stateless, on the other hand, means that the "continuance of an effect" no longer matters; a stateless service doesn't need to worry about retaining that information.

    In the example we will show, two services do not need to retain state when requests to them are handled. The third, stateful service is a database that stores state from every request that saves information. The lifecycle of the stateless containers doesn't matter, but the lifecycle of the stateful service does, insofar as the data (the "continuance of effect") must remain accessible.

    The stateless services are:

    • Tweet Streamer - this NodeJS container streams tweets from the Twitter API based on a hashtag filter.
    • Tweets Web - this NodeJS container requests and serves the most recent filtered tweet from the database.

    The stateful service is:

    • MongoDB - this database stores tweets that are streamed in from the Tweet Streamer service.

    The architecture looks like this.

    TweetStreamer (architecture diagram)

    Stateless Services

    The nice part about microservices is the ability to mix and match languages across services, because the interaction points are typically exposed over something like HTTP(S). The stateless services in this case are written using the MEAN stack with NodeJS, Express, and MongoDB (minus the Angular).

    In reality, these services could have been written in Python, PHP, or Ruby; it really doesn't matter, because they all interact with each other through common interfaces. Those interfaces are not only REST APIs such as the Twitter API in this case, but also other common interfaces such as a standard MongoDB database connection. The flexibility and ease of this approach should be comforting to developers. Another key concept for this style of application is keeping your microservices small enough to easily "fit in your head". Both of these services are around ~50 lines of JavaScript as formatted here; they could most likely be shortened and improved, and they don't do too much.

    The point here is that we could have written the web service and the streaming service as one container, but then we couldn't scale these services individually if needed, nor could we assign them to different developers or small teams to own, fix bugs in, and roll out patches for individually. Even though this is a simple example of microservices with Docker Swarm and Flocker, the same idea applies to bigger and more complex use cases as well. To learn more, check out this Microservices article by Martin Fowler, which does a really good job of explaining these concepts and more.
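
    As a concrete illustration of scaling one service independently: once the stack is up with Docker Compose (we get there below), a single command can scale just the streamer. This is only a sketch; it assumes the overlay-based Compose file used later in this post, and it works cleanly here because the stream service publishes no fixed host ports.

    # Sketch: scale only the tweet streamer; web and mongo are untouched.
    $ docker-compose -f tweets-compose-overlay.yml scale stream=3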

    Tweet Streamer

    // Stream tweets matching the TWITTER_TRACK filter and store each one in MongoDB.
    var Twitter = require('twitter');
    var MongoClient = require('mongodb').MongoClient;
    var Server = require('mongodb').Server;
    
    var client = new Twitter({
      consumer_key: process.env.CONSUMER_KEY,
      consumer_secret: process.env.CONSUMER_SECRET,
      access_token_key: process.env.ACCESS_TOKEN_KEY,
      access_token_secret: process.env.ACCESS_TOKEN_SECRET
    });
    
    client.stream('statuses/filter', {
      track: process.env.TWITTER_TRACK
    }, function(stream) {
      stream.on('data', function(tweet) {
        console.log(tweet.text);
        // Open a MongoDB connection for each incoming tweet.
        var mongoClient = new MongoClient(new Server(process.env.MONGODB_SERVICE_SERVICE_HOST, 27017, {
          auto_reconnect: true
        }, {
          numberOfRetries: 10,
          retryMilliseconds: 500
        }));
        mongoClient.open(function(err, mongoClient) {
          if (err) {
            console.log(err);
          } else {
            var db = mongoClient.db("test");
            db.collection('records', function(err, collection) {
              collection.insert({
                'tweet': tweet.text
              }, {
                safe: true
              }, function(err, result) {
                if (!err) {
                  console.log(result);
                } else {
                  console.log(err);
                }
                // Close the connection once the insert has completed.
                mongoClient.close();
              });
            });
          }
        });
      });
      stream.on('error', function(error) {
        console.log("error getting tweets");
        console.log(error);
      });
    });
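
    If you want to sanity-check the streamer on its own, outside of Compose, you can run the image directly and pass the same environment variables the code reads. This is a sketch; the MongoDB host value is a placeholder for wherever a reachable MongoDB instance is running in your environment.

    $ docker run -d \
        -e TWITTER_TRACK="#DonaldTrump" \
        -e CONSUMER_KEY="<YOURCONSUMERKEY>" \
        -e CONSUMER_SECRET="<YOURCONSUMERSECRET>" \
        -e ACCESS_TOKEN_KEY="<YOURACCESSTOKENKEY>" \
        -e ACCESS_TOKEN_SECRET="<YOURACCESSTOKENSECRET>" \
        -e MONGODB_SERVICE_SERVICE_HOST="<your-mongodb-host>" \
        wallnerryan/tweets-stream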

    Tweets Web

    // Express web service: serves the most recently stored tweet from MongoDB.
    var express = require('express');
    var MongoClient = require('mongodb').MongoClient;
    var app = express();
    var PORT = 8080;
    var fs = require('fs');
    
    // Minimal template engine: inject the tweet text into the 'ntl' view.
    app.engine('ntl', function(filePath, options, callback) {
      fs.readFile(filePath, function(err, content) {
        if (err) return callback(new Error(err));
        var rendered = content.toString().replace('#tweet#', '<style type="text/css"> body { background-color: #f3f3f3; }</style><div id="twt" style="display: inline-block; position: fixed; top: 0; bottom: 0; left: 0; right: 0; width: 500px; height: 200px; margin: auto; font-size:22pt; font-weight:bold; font-family: Helvetica Neue, sans-serif; letter-spacing: -1px; line-height: 1; background-color: #f3f3f3;"><p>' + options.tweet + '</p></div>');
        return callback(null, rendered);
      })
    });
    app.set('views', './views');
    app.set('view engine', 'ntl');
    
    // On each request, fetch the stored tweets and render the latest one.
    app.get('/', function(req, res) {
      console.log('Contacting MongoDB');
      MongoClient.connect("mongodb://" + process.env.MONGODB_SERVICE_SERVICE_HOST + ":27017/test", function(err, db) {
        if (!err) {
          console.log("We are connected to MongoDB");
          db.collection('records', function(err, collection) {
            if (!err) {
              collection.find().toArray(function(err, docs) {
                if (!err) {
                  db.close();
                  // Render the most recently inserted tweet.
                  var len = docs.length - 1;
                  res.render('index', {
                    tweet: docs[len].tweet
                  });
                }
              });
            }
          });
        } else {
          res.send('Cannot connect to MongoDB\n');
          console.log("Cannot connect to MongoDB");
          console.log(err);
        }
      });
    });
    app.listen(PORT);
    console.log('Running on http://localhost:' + PORT);
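
    Similarly, the web service can be tried on its own. It listens on port 8080 inside the container, so map a host port and point it at your MongoDB host; the host values below are placeholders.

    $ docker run -d -p 80:8080 \
        -e MONGODB_SERVICE_SERVICE_HOST="<your-mongodb-host>" \
        wallnerryan/tweets-web
    
    # Then browse (or curl) port 80 on whichever host the container landed on.
    $ curl http://<docker-host-ip>/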

    To deploy these services we can use Docker Compose against Docker Swarm. Part 1 goes through installing and configuring the environment used for these examples, so if you're following along you can go back and review what's installed.

    As a quick overview, we have a Swarm cluster with overlay networking enabled, using Flocker for volume management.
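
    All of the docker and docker-compose commands below are run against the Swarm manager rather than an individual engine. A common way to do that is to export DOCKER_HOST; this is only a sketch, and the address and port are placeholders that depend on how you set up the manager in part 1.

    # Point the Docker CLI (and Docker Compose) at the Swarm manager.
    $ export DOCKER_HOST=tcp://<swarm-manager-ip>:<swarm-manager-port>
    
    # Confirm you are talking to Swarm and can see all of the nodes.
    $ docker info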

    One of the nice parts of overlay networking in Docker is the ability to access containers by name across the entire Swarm cluster. To show how much easier this makes managing your containers, we'll walk through the example both with and without overlay networking and explain the differences along the way.

    Deploying without Overlay Networking

    Here is our Docker Compose file. There are a few items to note.

    Note: the IP address for your constraint and MONGODB_SERVICE_SERVICE_HOST will be different depending on your environment.

    • constraint:node==ip-10-0-57-22 - We are using constraints to place our Mongo database on a specific host, because we care where we put it so our stateless services know where to point.

    • MONGODB_SERVICE_SERVICE_HOST: "10.0.57.22" - We are pointing the services that need the database at a specific IP address for MongoDB, which is the address of the constraint node for MongoDB.

    • We are using an external volume for MongoDB that’s managed by Flocker.

    • We are using bridge (default) networking for the application.

    version: '2'
    services:
      web:
        image: wallnerryan/tweets-web
        ports:
          - 80:8080
        environment:
          MONGODB_SERVICE_SERVICE_HOST: "10.0.57.22"
        depends_on:
          - mongo
      stream:
        image: wallnerryan/tweets-stream
        environment:
          TWITTER_TRACK: "#DonaldTrump"
          CONSUMER_KEY: "<YOURCONSUMERKEY>"
          CONSUMER_SECRET: "<YOURCONSUMERSECRET>"
          ACCESS_TOKEN_KEY: "<YOURACCESSTOKENKEY>"
          ACCESS_TOKEN_SECRET: "<YOURACCESSTOKENSECRET>"
          MONGODB_SERVICE_SERVICE_HOST: "10.0.57.22"
        depends_on:
          - mongo
      mongo:
        image: clusterhq/mongodb
        ports:
          - 27017:27017
        environment:
          - "constraint:node==ip-10-0-57-22"
        volumes:
          - "mongodb:/data/db"
    
    volumes:
      mongodb:
        external:
          name: testvol
    
    networks:
      default:
        driver: bridge

    The volume that we need hasn’t been created, so let’s create it.

    $ docker volume create -d flocker --name=testvol -o size=10G
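
    Before bringing up the stack, you can optionally confirm the volume exists and was created with the Flocker driver (the exact output varies by Docker and Swarm version):

    $ docker volume ls
    $ docker volume inspect testvol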

    Now we can bring up our services with Docker Compose.

    $ docker-compose -f tweets-compose.yml up -d
    Creating network "root_default" with driver "bridge"
    Pulling mongo (clusterhq/mongodb:latest)...
    ip-10-0-195-84: Pulling clusterhq/mongodb:latest... : downloaded
    ip-10-0-57-22: Pulling clusterhq/mongodb:latest... : downloaded
    Creating root_mongo_1
    Pulling web (wallnerryan/tweets-web:latest)...
    ip-10-0-57-22: Pulling wallnerryan/tweets-web:latest... : downloaded
    ip-10-0-195-84: Pulling wallnerryan/tweets-web:latest... : downloaded
    Creating root_web_1
    Pulling stream (wallnerryan/tweets-stream:latest)...
    ip-10-0-57-22: Pulling wallnerryan/tweets-stream:latest... : downloaded
    ip-10-0-195-84: Pulling wallnerryan/tweets-stream:latest... : downloaded
    Creating root_stream_1
    
    $ docker-compose -f tweets-compose.yml ps
        Name                   Command               State              Ports
    ------------------------------------------------------------------------------------
    root_mongo_1    /bin/sh -c /home/mongodb/m ...   Up      10.0.57.22:27017->27017/tcp
    root_stream_1   sh /src/run_stream.sh            Up      #(on node 10.0.195.84)
    root_web_1      sh /src/run_web.sh               Up      10.0.57.22:80->8080/tcp

    Now that our application is running, what if the host that MongoDB is running on fails? What if it needs to be upgraded? First, we can't tell Swarm to place it somewhere else without changing the constraint, and second, if we change the constraint we also need to change the IP address that the other stateless containers are configured to use.

    Yes, we could use linking and have them deployed to the same host so they all move around together and act as a "group" or "pod", but both of these approaches still seem limiting and inflexible in many ways.

    This is where Docker overlay networking shines.

    Go ahead and delete the app.

    $ docker-compose -f tweets-compose.yml stop
    $ docker-compose -f tweets-compose.yml rm -f
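
    Because the mongodb volume is declared as external, docker-compose rm does not remove it; the Flocker volume, and the tweets already stored in it, survive the teardown. You can verify it is still listed:

    $ docker volume ls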

    Deploying with Overlay Networking

    To deploy this application with overlay networking taken into account, we can change our Compose file to the following.

    version: '2'
    services:
      web:
        image: wallnerryan/tweets-web
        ports:
          - 80:8080
        environment:
          MONGODB_SERVICE_SERVICE_HOST: "mongodatabase1"
        depends_on:
          - mongo
      stream:
        image: wallnerryan/tweets-stream
        environment:
          TWITTER_TRACK: "#DonaldTrump"
          CONSUMER_KEY: "<YOURCONSUMERKEY>"
          CONSUMER_SECRET: "<YOURCONSUMERSECRET>"
          ACCESS_TOKEN_KEY: "<YOURACCESSTOKENKEY>"
          ACCESS_TOKEN_SECRET: "<YOURACCESSTOKENSECRET>"
          MONGODB_SERVICE_SERVICE_HOST: "mongodatabase1"
        depends_on:
          - mongo
      mongo:
        image: clusterhq/mongodb
        container_name: "mongodatabase1"
        ports:
          - 27017
        volumes:
          - "mongodb:/data/db"
    
    volumes:
      mongodb:
        external:
          name: testvol
    
    networks:
      default:
        external:
          name: blue-net

    A few things to note.

    • We gave our MongoDB service a name: container_name: "mongodatabase1"

    • We replaced our IP addresses with the MongoDB name mongodatabase1

    • We removed the constraint from MongoDB. Now it can be deployed on any node.

    • We added the overlay network blue-net

    Next, create the network and storage resources needed for this.

    $ docker network create --driver overlay --subnet=192.168.0.0/24 blue-net
    
    #(You may not need to run this, it's likely already been created from the previous example.)
    $ docker volume create -d flocker --name=testvol -o size=10G
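
    You can confirm the overlay network exists and spans the cluster before deploying (again, exact output depends on your Docker version):

    $ docker network ls
    $ docker network inspect blue-net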

    Start up the services again with Docker Compose.

    docker-compose -f tweets-compose-overlay.yml up -d
    Creating mongodatabase1
    Creating root_web_1
    Creating root_stream_1

    Notice that mongodatabase1 is now accessed by its name, no matter which Docker host (and therefore which IP address) the other services land on.

    docker-compose -f tweets-compose-overlay.yml ps
        Name                   Command               State              Ports
    ------------------------------------------------------------------------------------
    mongodatabase1    /bin/sh -c /home/mongodb/m ...   Up      10.0.195.84:27017->27017/tcp
    root_stream_1   sh /src/run_stream.sh            Up
    root_web_1      sh /src/run_web.sh               Up      10.0.57.22:80->8080/tcp

    Again, ask yourself: what if the MongoDB container moves? What if the web service gets rescheduled? The answer is that they can, without any issues; as long as they are part of the blue-net network they will be able to access each other by name. This gives us much more flexibility in our environment!
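
    A quick way to convince yourself of the name resolution is to start a throwaway container on blue-net and ping the database by name. The busybox image here is just an arbitrary small image used for the check.

    $ docker run --net=blue-net --rm busybox ping -c 1 mongodatabase1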

    We can double check to see that our MongoDB container is using our Flocker volume.

    $ docker inspect mongodatabase1 | grep flocker
                    "Source": "/flocker/40948462-8d21-4165-b5d5-9c7d148016f3",
                    "Driver": "flocker",

    We can also log into our mongodatabase1 to view the records.

    $ docker run --net=blue-net -it --rm mongo sh -c 'exec mongo "mongodatabase1:27017/test"'
    MongoDB shell version: 3.2.4
    connecting to: mongodatabase1:27017/test
    Welcome to the MongoDB shell.
    For interactive help, type "help".
    For more comprehensive documentation, see
      http://docs.mongodb.org/
    Questions? Try the support group
      http://groups.google.com/group/mongodb-user
    >
    > db.records.find()
    Cannot use 'commands' readMode, degrading to 'legacy' mode
    { "_id" : ObjectId("56e07fea4c91a908000b0925"), "tweet" : "RT @infowars: Marxist prof defends financial elite attack on #DonaldTrump. https://t.co/LOPTDfHW7V" }
    { "_id" : ObjectId("56e07ff34c91a908000b0926"), "tweet" : "#Media should boycott #DonaldTrump events if he won't mic reporters and/or dismiss questions. That's not a presser &amp; doesn't help public." }
    { "_id" : ObjectId("56e07ff44c91a908000b0927"), "tweet" : "MEXICO WILL PAY THE WALL WHEN AMERICAN MINING COMPANIES PAY US BACK THE GOLD AND SILVER THEY TAKE @realDonaldTrump #DonaldTrump #Usa2016" }
    { "_id" : ObjectId("56e07ffa4c91a908000b0928"), "tweet" : "The Truth About #DonaldTrump’s Populism - In These Times: https://t.co/bRtD6vMtGu\n\n#NeverTrump #DumpTrump #NoGOP #UniteBlue #Bernie2016" }
    .
    .

    For those wondering: yes, this configuration of the app does filter tweets with the hashtag #DonaldTrump and presents them in the browser like so. :)

    TweetStreamerWeb (browser screenshot)

    Happy Swarming!


    We’d love to hear your feedback!
