Tech Talk: DockerCon 2016: Everything You Need to Know About Docker and Storage with Ryan Wallner

In case you missed DockerCon, or could not choose among all of the amazing sessions, we wanted to share a couple of session videos and transcripts with the community. In the talk featured below, Ryan Wallner of ClusterHQ covers key concepts of stateful containers, then demos deploying a database container on a single node with UCP, Flocker, and Volume Hub, and shows Swarm automatically rescheduling the stateful service while Flocker moves its volume when the node fails.

We hope you enjoy the presentation!

Everything You Need to Know About Docker and Storage

Presented at DockerCon 2016

Video

Slides

Transcript

Introduction

Okay, thanks for coming. My name is Ryan Wallner. I work for ClusterHQ. Many of you might know the project Flocker. I’m a technical evangelist there, and today I will be talking about a number of different things in relation to storage and Docker volumes. Just a show of hands, who was at Brian Goff’s presentation on storage? Awesome, so this is going to be even easier. For those of you who weren’t at it, I’m going to go into what it means to have a stateful application versus a stateless one, what volumes do in terms of stateful containers, talk a little bit about plugins, show UCP with volumes and something we work on called Volume Hub, and hopefully show a demo on AWS, which Brandon’s talk is a perfect lead-in for, so thank you.

Stateful vs. Stateless

A little bit about key concepts here. Docker and storage, right? The difference between stateful and stateless. When I think about stateful, I’m thinking about anything that you care about and don’t want to lose, right? This could be data within a database. It could be secrets. It could be log files that you want to shuffle off into a volume and then move somewhere else. It could be repository data or test data, test fixtures that you have in a Jenkins environment. When I’m talking about stateless, right, it’s something that’s really ephemeral. You don’t care how many times you bring up a container and spin it down. If that container writes any bit of state, it’s okay to lose it because you don’t actually need to hold onto it. You don’t really care about what’s happened. You can have many of them and scale out horizontally.

I like to use HTTP as the example. Yes, HTTP has sticky sessions and things like that, but in reality, HTTP doesn’t really need to know a lot about the request. It can still service it without those things. You can improve performance by adding stateful aspects to it, but again, it doesn’t need to be stateful.

Docker Volumes

Docker volumes, quick review. They’re a logical concept inside Docker for writing data outside of the container file system. What I mean by that is that if you run a Docker command with -v and you give it a name or path, this is managed within /var/lib/docker, or if you’re using a host mount, it’s something that exists on the host. The problem with this is the data is pretty inflexible, meaning anything you write to that location, yes, it’ll stay there after the container’s lifecycle, but only on that host. If you lose that host, or you lose a rack of servers that are doing this, you’re going to lose your data. Very prone to data loss. Within Docker, it looks a little bit like the bottom example within /var/lib/docker. What I’m going to focus on today is managing external storage. This could be anything from NFS to distributed file systems to block storage. The use cases I’m going to show today are block storage. This is really maintaining Docker volumes that are backed by iSCSI or Fibre Channel or some type of shared storage that allows you to manage your storage separately from the container.
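As a rough sketch of the two local flavors described above (not from the talk itself), a named volume lives under /var/lib/docker, while a host mount binds an existing host path:

```
# Named volume: Docker manages the data under /var/lib/docker/volumes/
docker run -d --name web-named -v webdata:/var/www/html nginx

# Host mount: the data lives at a path you choose, on that host only
docker run -d --name web-bind -v /srv/webdata:/var/www/html nginx
```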

What this means is that you can have your container engine host die, right? It doesn’t need to be there. Your data is still safe. You’d have to restart the container, obviously, to get access back to it, unless you’re running a distributed file system or a distributed database with multiple shards, but it enables a whole bunch more use cases in terms of maintaining your data outside the container. Much more flexible. Your data is available when things move around. With some of the new Swarm features like rescheduling, this is really important. If Swarm decides to reschedule your container, you want your data to follow it. Volume plugins and shared storage enable you to do this.

Just a quick example of what this looks like in a Docker command. I have a container here named database, generically database, but it’s running a Redis image. I want Flocker, say, to manage my volume. I would use the Flocker driver and give the volume a name. In this case, the container gets created, but what’s happening under the covers is that Flocker actually goes out and provisions the volume on the storage that it’s configured with. This could be EBS, for instance. That’s what I’ll show today. It could be Ceph. It could be Cinder, all sorts of different providers.
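A minimal sketch of that command, using the Docker 1.11-era --volume-driver flag (the volume name flockervol is just an illustration):

```
# Flocker provisions and attaches the volume (e.g. an EBS volume)
# before the container starts
docker run -d --name database \
  --volume-driver=flocker \
  -v flockervol:/data \
  redis
```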

Notice that, unlike the first example I showed, the mounts inside the container actually show that it’s a Flocker-based mount point. This is mostly invisible to the container and the user. What you see is actually the mount point inside, which in this case is /data. That’s what the container cares about.
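One way to see this yourself (a sketch, assuming the container above is named database; output abbreviated and illustrative):

```
# Show how the volume is mounted into the container
docker inspect --format '{{json .Mounts}}' database
# => [{"Name":"flockervol","Driver":"flocker","Destination":"/data",...}]
```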

Container Movement

Just to give a visual sense of what this might enable: you start a container. That container might move. That container starts on an engine host, and you want that data to persist. You want the location of the data to follow it. In shared storage, that could be a connection to an iSCSI volume like I’ve been talking about. This is really depicting how that LUN follows the container around. If you’re running a database and things fall over, you don’t have to get up in the middle of the night and fix things. Things should just recover in a good world, right?

Volumes in UCP

I just want to intro how volumes look in UCP. Brandon actually showed the dashboards. Inside the dashboard, I’ll show it in a little bit, there’s a volumes tab. In the volumes tab, you can create volumes via the UI. This is just a volume name, again the driver you want to use, and then any volume options you want, such as size or profile. Now, inside of UCP, they give you a view of the mount point that that volume has and the name, which is nice, right? You know it’s mounted. You know that the name you created is still there, but it’s not much. Something that ClusterHQ has worked on is a UI that gives you a little bit more about the volume, such as the status, whether it’s in a pending state, whether it’s trying to attach, whether it’s attached or detached, how big that volume is, and what container is using that volume. This is just something a little extra that we can use with UCP, and it’s really a vision of ours to make those things work together. As the open APIs grow, we’d love to have this information get into an inspect command, and then into the UI and things like that.
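Creating the same kind of volume from the CLI instead of the UCP UI might look like this (a sketch; the size and profile options depend on what the Flocker driver is configured to accept):

```
# Docker 1.11-era syntax: the name is passed as a flag
docker volume create --name demovol \
  --driver flocker \
  --opt size=10GB \
  --opt profile=gold
```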

Just to give a visual sense of what I’m talking about in terms of Flocker and the container stack: UCP and Swarm and Mesos and Kubernetes are the orchestration piece in our view, right? They sit above us, and they talk down to Flocker. They use the driver, they use the integration of their choice. Underneath Flocker, as a unified controller of storage, you can use a number of different storage platforms.

Demo

[7:45] Now, let’s move to a demo. What I will show you is I have a UCP cluster running Docker 1.11, and UCP 1.1. This looks just like what Brandon was showing before. It’s four different nodes, three active workers. I have some Compose files here. I’m going to show you an example of what this looks like to run a Redis database.

Here, I’m using Redis and version 2 of the Compose file format. I have a volume named redisresch, for Redis reschedule, with the flocker driver. We define the volume, and then we define the service, which is db_redis, and we give it the command and the image. There’s a constraint that says don’t run it on a certain node, because this is a node I really don’t want you to use. The important piece here is reschedule on node failure, which is saying if this node goes down, Swarm, please do your thing and reschedule my container, because I do not want to get up in the middle of the night and do that myself.
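A minimal sketch of such a Compose file, assuming the Swarm-standalone conventions of the time (constraints and reschedule policies passed as environment entries; the excluded node name demo3 and the command flags are illustrative):

```yaml
version: '2'

volumes:
  redisresch:
    driver: flocker

services:
  db_redis:
    image: redis
    command: redis-server --appendonly yes
    volumes:
      - redisresch:/data
    environment:
      - "constraint:node!=demo3"        # keep it off a particular node
      - "reschedule:on-node-failure"    # let Swarm move it if the node dies
```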

What we can do is run an up on that Compose file. At this point, that volume called redisresch doesn’t exist. What’s actually happening now is Docker has noticed that, so it’s calling the Flocker driver here to create the volume. The application doesn’t just come up. There’s a prerequisite that says, “Flocker, go create that volume,” which, in this case, is an EBS volume. Then it can start my application. If I go into my volumes view here, I can look at and search for Flocker. There are a few volumes here, one I created earlier and one that was created during that Compose up. Apps is just the folder I was in. It groups them that way. Notice it doesn’t have a mount point, because Flocker, at this time, is provisioning and attaching, waiting for that attachment to take place. Then it gets a mount point.
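The equivalent CLI steps might look like this (a sketch; run from the apps directory mentioned above):

```
# Bring the service up; Docker asks the Flocker driver to create
# the volume (an EBS volume here) before starting the container
docker-compose up -d

# List volumes and look for the flocker-backed one
docker volume ls | grep flocker
```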

I see this has completed, so if I just do a quick refresh, you should see that the volume we have here now has a mount point, as it does here. It doesn’t give us what node it’s on, those kinds of things, but we know that because you can do a docker ps and see it’s running on this demo2 node, right? On the demo2 node, we’re going to do a docker exec, and we’re going to use this application here. We’re going to open it up. Actually, I’m going to do this over here. We’re going to go in and add some data. Obviously, we want to use this container, and we’re going to jump into Redis. We’re going to set dockercon to seattle. There, we have some data. Not much data, but some data. Now, if we get out of this, this container, as we know, is running on demo2, the second node.
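Those steps, sketched out (the container name apps_db_redis_1 follows Compose’s project_service_index convention and is an assumption):

```
# Find which node the container landed on
docker ps --filter name=db_redis

# Jump into Redis and write a key
docker exec -it apps_db_redis_1 redis-cli
127.0.0.1:6379> SET dockercon seattle
OK
```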

Just to be fair and showcase a little bit of how Swarm reschedules containers and how ClusterHQ and the Flocker project can react to that, let me just make sure I have the right node again. We’ll go to demo2, and we will just say some intern came in and said, “I didn’t like demo2, so I’m going to get rid of it.” Probably not something that’s going to happen in production, but for the purposes of the demo, this is useful. Here, I can look at my Docker Machine nodes. What we should actually start to see is some state change. Here, on demo2, you should see it says stopping. At this point, we know our Redis server is running on demo2. What happens to it?
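Killing the node with Docker Machine might look like this (the machine name mha-aws-demo2 is a guess, patterned after the mha-aws-consul machine mentioned later):

```
# Simulate the node failure
docker-machine stop mha-aws-demo2

# Watch the machine state change to Stopping/Stopped
docker-machine ls
```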

We can go into our nodes, get a view of the world, and everything seems to be okay. At this point, the Swarm manager has some heartbeats, some timeouts, where it’s saying, “I’ll check with my nodes that they’re healthy every so often.” At a certain point, the Swarm manager will get back some news that there’s a not-so-healthy node. Here, we should be able to see that our demo2 node seems to already be gone; there are three nodes now, so demo2 is gone. We stopped it. Sometimes it takes a little bit longer for the manager to realize it’s gone. This is based on the heartbeats and how often it checks. At this point, we’ve told Docker that we really want this Redis node to stay up.

If we do a docker ps, eventually this should come back up. It’s up on demo0. If we check with our Docker Machine, I want to run “docker-machine ssh mha-aws-consul sudo docker logs” against the Swarm manager container. Just saying it as I type it in case you can’t read it. The reason I’m showing the logs from the Swarm manager is because we can actually see the event triggers that tell us the Swarm manager realized, “Hey, this node is down and I need to reschedule it.” Here, you’ll see removed 2, because it’s no longer healthy. Rescheduled, from 2 to 0. It started. That’s the Swarm manager doing that. With the same command as we used before, we can do an exec into our Redis server, into redis-cli, and get dockercon. Notice, moment of truth: seattle. Our data is still there.
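Checking the rescheduled container (same hedge as before on the container name):

```
# The container is now running on demo0; read the key back
docker exec -it apps_db_redis_1 redis-cli
127.0.0.1:6379> GET dockercon
"seattle"
```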

That’s the Flocker project working with UCP and Swarm, with some of the new features like rescheduling. You don’t have to worry about unmounting that storage. You don’t have to worry about how that fails over. If something dies, the container or the host, it’s going to fail over. You might have a little bit of downtime unless you’re running a distributed database. You might have a degraded system if it’s just part of your one shard or one slave of that database. Nonetheless, this auto-rebalancing and letting the storage, the data, follow that container enables you to be a little more flexible about how you recover and how nodes recover.

Questions

I don’t think I have any more slides. The demo was the end. I have a little bit of time before the general session, so I think I have a good five minutes or so. I’d love to take questions. Sure. The question was, where do you configure Flocker to use EBS? What I can do here is use Docker Machine to SSH into a node. We’ll go to demo0 since it’s running there. The answer, while I’m getting there, is that inside of each node there’s a small agent file. That agent file is configured to use a certain backend, and in this case, it lives in /etc/flocker/agent.yml. We’ll just have to sudo. This is where you would configure your storage backend to use storage pools, protection domains, things like that. In the case of AWS, you have some credentials, I’m using keys, which region, which zone, that kind of thing. There are a bunch of users that will configure multiple Flocker clusters in different [inaudible], label their Swarm nodes appropriately. That way, they can provision storage to different availability zones and have applications across them.
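For reference, a minimal sketch of what an agent.yml for the EBS backend looks like (hostname, region, zone, and keys are placeholders):

```yaml
# /etc/flocker/agent.yml
version: 1
control-service:
  hostname: "control.example.com"
  port: 4524
dataset:
  backend: "aws"
  region: "us-west-2"
  zone: "us-west-2a"
  access_key_id: "<AWS access key>"
  secret_access_key: "<AWS secret key>"
```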

Sure. The question was, can you use an IAM role instead? Right now, the answer is no, but we’ve been asked for it plenty of times, and it’s one of the things we’re going to work on. Sure. The filesystem is chosen by Flocker. I believe it’s an ext4 file system. Any other questions? Great, well, thank you for coming, and have a good DockerCon.

About the speaker

Ryan is a Technical Evangelist for ClusterHQ focused on the developer community, integration and frameworks around containers and persistence. Previously, Ryan was a software engineer in advanced development for EMC’s office of the CTO. He has contributed to various open-source projects including Flocker, Amazon ECS Agent, BigSwitch Floodlight, Kubernetes, and Docker-py.
