KubeCon Opening Keynote: Kubernetes Update with Kelsey Hightower

Posted by Carissa Karcher


Given the breadth and quality of presentations published from KubeCon EU this week, we decided to take a brief departure from our weekly microservices spotlight and instead focus on a single event. We will share our favorite talks from KubeCon throughout the week, complete with video, slides, and a transcript to read at your leisure! To kick off this series, we open with the keynote delivered by Kelsey Hightower, Developer Advocate for Google Cloud Platform at Google.

We hope you enjoy the presentation as much as we did!

Opening Keynote: Kubernetes Update

presented at KubeCon EU


Transcript

Introduction

When I say Kube, you say Con. Kube. Con. Kube. Con. You guys got to do better. You've got to be loud. When I say Kube, you say Con. Ah, you're paying attention. Someone wasn't. Good job. We're gonna get started. Last year, a lot of people attended KubeCon to learn about Kubernetes, so what we wanted to do was open up with an overview of Kubernetes and some live demos, so people know what the rest of the talks are even talking about. We're going to give a high-level overview of Kubernetes and talk about a lot of the 1.2 features. If you've been a Kubernetes user, you may learn a few new things here. If you're new to Kubernetes, this should give you a nice overview of what we mean when we talk about Kubernetes and this new way of doing things. I'm going to do a couple of questions and answers. These are real questions that I get from people. We're going to try to answer them through the talk, and there will be enough live demos to actually drive this home.

What is Kubernetes?

[1:08] This is the way I like to think about it. Kubernetes is a framework for building distributed systems. Kubernetes is not a PaaS. You don't install it and end up with Heroku. If you want a PaaS, there are people in the community that are building PaaS's on top. You have Deis, you have OpenShift from Red Hat. You've seen the work at Red Hat, or maybe you just like that hat.

I read on the internet that Kubernetes is hard to use.

[1:40] The next statement I get a lot is, “I read on the internet that Kubernetes is hard to use.” This is a fair statement. How many people feel like Kubernetes is hard to use? All right. There’s a few. It’s all right, raise your hand. I think Kubernetes does a few things that make it a little bit hard to install, because there’s a lot of things that we have to address at the lower levels. If you start Kubernetes by trying to install it from scratch, I don’t think you’re going to have the best time in the world, but I think using Kubernetes can be super easy. This is the way I like to show people how to use Kubernetes for the first time.

Easy Demo

[2:21] We're going to show, hopefully, an easy demo. There's a lot of beta stuff going on here, so if things don't work, that's the reason. It has nothing to do with me. Let's make sure we have the newest version. The Kubernetes 1.2 beta was released about a week ago, so we're going to QA it today. Too often, we show new users YAML files to get started with Kubernetes. That's doing a disservice to new users in the Kubernetes community. I think we can show them some of the tooling that makes it a lot easier to get started with Kubernetes. One of those is the kubectl run command. We wanted to … All container demos must show nginx, right? That's the standard thing you do. We're going to use the nginx image, 1.9.12. Make sure you use version numbers with your images. "Latest" is not a version, right? That's a mistake. You're going to get caught in production. We're going to run this image, and what should happen is we should get the happy path of the kubectl run command creating what we call a deployment. Last year, this would have created a replication controller. We now have a higher level of abstraction called a deployment, and we'll see what that looks like later. When we run this command, not only do we get a container, but we get a pod, and we'll talk about what that means later. If we were to look at this, we'd see that we have nginx running.
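
For readers following along, this part of the demo looks roughly like the sketch below (as of 1.2, kubectl run created a deployment by default):

```sh
# Run a pinned nginx image; in 1.2 this creates a deployment
# (not a replication controller), which in turn creates a pod.
kubectl run nginx --image=nginx:1.9.12

# Confirm the deployment and its pod are up.
kubectl get deployments
kubectl get pods
```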

The next step is, how do we get access to that particular container, now that it's running in our cluster? You can do all kinds of tricks with proxies. Writing more YAML would have created a service, but we also have a kubectl expose command, so if you want to show people how to do this, you can say "kubectl expose deployment nginx," give it a port, and also say we want the type to be "NodePort." What's that? Do you need that? Do you think you need an equals sign after port? No, you don't. Here we're going to expose that deployment to the outside world. What does that mean? We're going to create a service that automatically figures out where that container's running and what port to talk to, by specifying that on the command line. If we list our services now, we'll see that we have this cluster IP that can reach n number of those nginx pods running behind the scenes. We can describe it to figure out what high port was allocated to us. We grab this high port. What I can do is pop over to my web browser, pop it in, and there you go, there's nginx. Kubernetes can be simple to use by using the high-level commands to abstract the YAML, right? You don't have to deal with the Kubernetes API on day one. There's an easy on-ramp. When you're showing people Kubernetes out of the box, I think the best way to do that is by using kubectl and the high-level commands, or, as we'll see later on today, the web UI that mimics a lot of those things.
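
The expose step, as described, would look something like this sketch:

```sh
# Create a NodePort service fronting the nginx deployment.
# No equals signs required; space-separated flag values work too.
kubectl expose deployment nginx --port 80 --type NodePort

# List services to see the cluster IP, then describe the service
# to find the high port (NodePort) that was allocated.
kubectl get services
kubectl describe service nginx
```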

Can I just use Docker and call it a day?

[5:35] Kubernetes can be easy to use. That's a question I get: "Can I just use Docker and call it a day?" This requires a complex explanation: "No." This is not a ding on Docker. I think a lot of people forget that Docker is just one component in the stack. There's more to it than creating an image and running that image. Lots of things need to happen. I'm going to try to explain that with an example. Throughout this presentation, I'll be dealing with an application called Ghost. It's blogging software. Everyone shows WordPress. I figured I'd go hipster and do more modern blogging software, so this one's called Ghost. What's that? You like Ghost? It's written in Node.js, so that caused me, like, ten hours of pain trying to put it in a container. Here's a hint: Do not run npm install on the plane. There was no bandwidth left. Everyone else was, like, "F-! I know someone's using Node on this plane." This Ghost app basically has these three layers. Most people look at this and say, "Oh, that's simple. There's three containers there. You have nginx, Ghost, and MySQL." It requires a database backend. Here's where things get confusing for people. When we look at this, that application actually has a hard dependency on nginx. It isn't optional. Reading their readme, it requires nginx to proxy to the underlying Node server. There are many reasons to do that. Maybe you want to do SSL termination there, maybe complex rules for HTTP serving, but ideally, these are hard dependencies. What does it look like when you get started with Docker alone? The reason why we have them decoupled is so we can scale them independently, right? We don't want to put all of these in the same container, because then we can't scale them independently. With these two types of dependencies, we can scale them this way.

Here’s what it looks like if you’re doing this alone with Docker just pointing at various hosts. You can say, “Docker run mysql,” “Docker run nginx,” and “Docker run ghost.” What’s the first problem here? They don’t know anything about each other. The whole startup economy was built around this: Container networking. Have you heard of it? Hundreds of them got funded. Three of them remain. People thought they could fix it, right? Just tell that container about this other container. Demos were beautiful. You took that to production, and you run into a problem. What happens when there’s multiple nginx’s? Which one should be paired with which? What happens when you do this? Nginx, nginx, more nginx. Now you’re at a capacity. Where are you going to put the Ghost application? Don’t know. This causes a problem. This is where you need some orchestration. Some people try to be clever with this. They try to fix this problem. You guys heard of those? You know what those are? Those are init scripts for hipsters. If you haven’t seen a Docker entry point, brace yourself and we’ll get one. See that? You guys hear that ring? That was the 90s calling for their tooling back. You do not need to do this! This is ridiculous. I’ve seen some of these that are bigger than the actual application being deployed. The reason why people are doing this, is they’re trying to work around this problem of startup order for containers, forking things into the background. You lose so much context when you do that. There’s got to be a better way.

Pods

[9:47] Kubernetes introduces another primitive we can use, called a pod. One simple way to think about a pod is to think about all the things you do with a single virtual machine. You can put all the tightly-coupled applications there, and you have the ability to share networking and storage. We give you the same primitives so you don't have to resort to some of those hacks. In this case, we could have multiple containers inside of a single pod, and the things that go in a single pod should be those tight dependencies that are required to be there. In this case, nginx and the Ghost application. If there's any storage we need, we can also bind-mount that into the pod. Once we do that, then it's easy. Now nginx will just refer to the Ghost application on localhost. We don't have to go and discover the world anymore, and you can get rid of those init scripts.
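
A minimal sketch of such a pod (names, versions, and ports are illustrative, not from the talk):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ghost
spec:
  containers:
    - name: nginx              # proxies to the Ghost app over localhost
      image: nginx:1.9.12
      ports:
        - containerPort: 80
    - name: ghost              # shares the pod's network namespace with nginx
      image: ghost:0.7.7
      ports:
        - containerPort: 2368  # Ghost's default port
```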

Deployments

[10:37] The next thing is deployments. We needed a declarative way to say what goes where. A lot of people like to point and fire and throw containers at individual hosts, but in this case we have a new concept called a deployment. A deployment in Kubernetes abstracts a lot of things. This is pretty new, and as of 1.1, 1.2 you'll see it being used a lot more. Here's what we're doing up front. We're using a deployment, which is a control loop that enforces the state given by the end user. This is a high-level example. In this case, we're going to say we want one of those Ghost pods. We want one copy. The scheduler goes out and finds the best system, or node, on the network to run the application. How does it do this? By looking at a few predicates: memory, CPU. We try to find the best fit in the cluster. If we update the replica count from one to three, what should happen? We don't run a bunch of imperative commands to get to that state. We just declare what we want, and we allow the deployment to push things out. This is what we end up with. The benefit of doing this is that we store this state on the server side. This is why Kubernetes has multiple pieces. We store the state in a backend data store. Once you have that state somewhere and a node goes away, our state is preserved, so we know to automatically do this. How many people have a scheduler that they use in their workplace? That's like five or six people. That means the rest of you either are human schedulers or have one: you usually ask someone in operations, "Hey, deploy my app," and they go through this routine. They figure out the best server, and maybe they pin it there. When a server goes down, sleep deprivation kicks in. We give them pagers, they wake up in the middle of the night, and they do this mad scramble to put things back in the desired state. Kubernetes automates that for us.
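
Declaratively, that might look like the following sketch (using the extensions/v1beta1 API of the 1.2 era; bump replicas from 1 to 3 and the control loop converges on the new state):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ghost
spec:
  replicas: 1          # change to 3 and re-apply; the deployment does the rest
  template:
    metadata:
      labels:
        app: ghost
    spec:
      containers:
        - name: ghost
          image: ghost:0.7.7
```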

What about config files and SSL certificates?

[12:36] Here's the situation where people punt the most. You've got these containers. What do you do with config files? How many people bake their config files into their containers? You are going to be hacked one day, I promise. I've seen people baking in configs and pushing them to the public Docker Hub. Then people say, "Oh, they're public? Yeah, I have all your passwords!" What do you do about this? Some people say we should put configuration management tools into the pod. You can. You shouldn't. There's a better way. In Kubernetes, we introduce config maps as of 1.2. We already had secrets. Config maps are basically just like secrets, but they come with looser restrictions on what should be backing them. They can hold things that are not sensitive data, but they also give you some other primitives, like being able to use config maps as environment variables, and being able to tell the pod downstream that its configuration has changed and maybe it should restart itself. What happens with that? We can take and run this command, and say we want to create one of these secrets from files.
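
As an illustration of the environment-variable primitive mentioned here, a pod might consume a config map key like this (the config map name and key are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ghost
spec:
  containers:
    - name: ghost
      image: ghost:0.7.7
      env:
        - name: NODE_ENV               # surfaced to the app as an env var
          valueFrom:
            configMapKeyRef:
              name: ghost-config       # assumed ConfigMap name
              key: node-env            # assumed key within it
```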

This is new in 1.2 as well: the ability to create a secret by just pointing to a file. What you had to do before was take the file content, base64-encode it, and create a YAML description. I'm seeing this guy shaking his head, like, "That was some bullbip!" It's much easier now. We push this secret to the Kubernetes API. We can reference that secret in the pod, and once we have that, we can start to build up our pod. How does this work? How does Kubernetes ensure that our config file is there before the app? One way of doing that is to mount the secret as a volume. Once the secret is attached to the pod as a volume, we can set this up first. We take the contents of that particular volume and export them onto the file system, wherever we were told to mount them. We do this before starting any of the containers. We get the pod IP address that all of the containers will share, and then we start the application. This is automatically done for you. All you have to do is declare that you want this to happen, and now you remove another set of tooling that you need for your applications. This streamlines things big-time.
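
To make the before/after concrete, a sketch (file names are illustrative):

```sh
# Before 1.2: base64-encode the file by hand and paste the output
# into the 'data:' section of a hand-written Secret manifest.
base64 config.js

# As of 1.2: point at the file and let kubectl handle the encoding.
kubectl create secret generic ghost --from-file=config.js
```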

Secrets Demo

[14:43] Let's look at that in real life. We have a pod, and here's our configuration file, right? Node.js, of course: you write your configs in JavaScript. JavaScript all the things. What we're going to do now is kubectl. We're going to create a secret, a generic one. We're going to name it "ghost-test," pass --from-file, and then give it the name of the file. This creates the secret from that configuration file. kubectl describe secret ghost-test, and we see our secret that's in place there. All the contents are there. In order to reference that, we take a pod, and the thing I want you to pay attention to is this section down here. Notice here, we're referencing the secret as a volume, giving it the secret name, and exposing it to the container. Once we expose it to the container, we tell it where we want to mount the contents of that secret. In this case, the contents of that secret will be here. We just tell our application where to find it. We do it declaratively. kubectl: we use the create command to create these things, and once that's in place, kubectl get pods. We see that we now have our Ghost container running. We can run the exec command to actually see if we have the right things in there. Ghost, and we'll do a /bin/cat /etc/ghost/config.js. We go into the pod, really quick, and we find the container. We execute this command in it, and we actually see the mount points in place. How are these secrets stored? We do not write these to the file system. We don't want someone to log into the machine and see our secrets splattered all over the place. We want the secrets tied to the lifetime of the running application. In order to do that, we just use a temporary file system (tmpfs), and you'll see it in this noise. Here we go. Not that one. Can someone find it? Be quick. You guys were not very helpful. There we go. There's our temporary file system. We kill the pod, it goes away.
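
The pod section he's pointing at would look roughly like this (a sketch using the names from the demo; the mount path matches the cat command above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ghost
spec:
  containers:
    - name: ghost
      image: ghost:0.7.7
      volumeMounts:
        - name: config              # the secret's contents land here
          mountPath: /etc/ghost
          readOnly: true
  volumes:
    - name: config
      secret:
        secretName: ghost-test      # the secret created above
```

Because secret volumes are backed by tmpfs, the config never touches the node's disk and disappears with the pod, which is exactly the behavior shown at the end of the demo.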

How do I deploy a real application?

[17:17] Those are demos. Those are some of the components we use. Now we're going to deploy this Ghost app. When you start with Kubernetes, do not try to convert the world to Kubernetes. That's called an initiative, and initiatives always fail because you cannot convert the world. Kubernetes is not magic sauce. You can't sprinkle it on your broken applications and make them better. It doesn't solve that problem. It does give you the right set of primitives. When people say, "Hey, I want to take my database and put it in Kubernetes," I say, "How do you manage that database today?" "Oh, we've got like 88 DBAs." It's like, "No. Leave the database on the outside, because you haven't learned how to automate your database deployment anyway." Kubernetes can be used to deploy a database, but the problem is the complexity: the cost of adoption is too high. If you don't have a cluster-aware database service, Kubernetes does not make it cluster-aware. I just have to say that, because I watch people deploy MySQL and say, "Why isn't it replicating?" You need to configure it! Seriously? If you have no users or customers, do not start talking about scale on day one. You have no users. Just get the damn thing deployed. Usually a single instance of your application works for testing. Start there. Make sure it works. Connect to the actual database. You have to do that. Create a service. When you're ready to expose it to the world…

A new thing in 1.2 is this concept of ingress. Ingress is nice because it integrates with upstream cloud providers, so we can automatically configure load balancers. The nice thing about ingress is we can also manage TLS for the first time. This is one of those undocumented 1.2 beta features. Chatting with the engineer who worked on this primarily, his last words to me were, "There's a less than 10% chance that it won't work, and if it doesn't, it's my fault. Let me know." We'll see. I say you should start there. Let's look at what it takes to do that. We won't dive deep into all of these things, but we'll go through the flow.

We want this system to be secure, so we need some certs. Let's look at the certs that we need for this particular application. Here, we have a key and a ca.crt, a private key, and this is for our web server. I also have this database certificate that I'll use when connecting to the database. What we want to do now is kubectl. We want to create... what do we want to create for this? Someone just say it out loud. I'm going to create a secret. We're going to call this one "ghost-tls," and then we're going to say "from file," and now we're just going to give it this tls directory. It's going to be smart enough to go into that directory and create an entry for every single file found in that directory. That's a thing people asked for from just using it. It makes things a little bit easier. We also need a secret for our actual config file, so we're going to make another secret called "ghost," and we're going to bring in our config file. I'm not going to show this one to you, because it actually has database passwords in it, and that would be bad. There we go. Now we have two secrets in place. I also need that nginx config. How does it know to proxy to my backend server? If we look at that configuration really quick, it's very simple: take traffic on port 80 and proxy it to localhost. We have to have a way to push that config to nginx at runtime. Again we'll use, this time, a config map. We don't have any secret data in there. We'll create a config map named "nginx-ghost." Here what we'll do is from-file. We have those three things in place now. All our configs are in place. The next thing we need to do is create a service. Here's the Ghost service. I have to expose the NodePort here, because this will be used for health checks by our edge load balancer on GCE. We're going to assign port 3200 to that, and this will automatically find all the pods created with the label set "app=ghost." We'll do that really quick. kubectl create -f services/ghost. This is warning me, "Look, dude, you're using a fixed port. We're going to have port collisions if another app tries to bind to that port on one of your servers." Kubernetes manages this port assignment for us. Here I've chosen one so that it's easy to configure my health checks later on.
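
The sequence of commands described here would look roughly like this sketch (directory and file paths are illustrative):

```sh
# One secret holding every file in the tls/ directory (key, cert, CA).
kubectl create secret generic ghost-tls --from-file=tls/

# The Ghost config contains database passwords, so it's a secret too.
kubectl create secret generic ghost --from-file=ghost/config.js

# The nginx proxy config has nothing sensitive, so a config map is fine.
kubectl create configmap nginx-ghost --from-file=nginx/ghost.conf

# Create the service that selects pods labeled app=ghost.
kubectl create -f services/ghost.yaml
```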

The next thing we need is that ingress piece we showed earlier. Ingress is really simple. I use it in a simple way. I want this thing to create a load balancer for me. I'm using the GCE backend. Here, I want it to use the SSL certs from this particular secret. If I name my secrets the right way, it will automatically find the secrets by name, pull those certs, and push them to my load balancer automatically for me. The nice thing about ingress is that you can have any implementation you want. There's one for nginx already. There's one for HAProxy. The idea here is that you're not necessarily stuck with what the cloud provider gives you.
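
A minimal sketch of that ingress (extensions/v1beta1, the API of that era; the service port is illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ghost
spec:
  tls:
    - secretName: ghost-tls   # certs are pulled from here and pushed to the LB
  backend:
    serviceName: ghost        # default backend: the Ghost service
    servicePort: 80
```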

Let’s create one of those really quick. Once that’s in place, kubectl describe ingress ghost. I’ll actually just do this. Here, we don’t have any backends yet because we haven’t deployed any applications. It’s constructing all the objects behind the scene needed to do this. We need some forwarding rolls, we need a load balancer, and we need to upload our certificates. It’s doing all of this in the background.

While that’s happening, let’s go ahead and look at our application. In Kubernetes 1.2, the new thing is deployment. Replication controllers got renamed to replica set, so this is so we can have some consistency across naming so there’d be pet set, daemon set, job sets I believe, and we have replica sets. The goal with this, with deployments, is where we group a bunch of things. We’re going to talk about how to deploy an application properly in Kubernetes. This is some of the best practices that you should be doing, if you want things like rolling updates to actually work well. The first thing we’re going to do is specify the number of replicas. This deployment can be used to scale things in and out. Some people use the kubectl scale command line tool. I don’t like that method, because it’s doing things imperatively. In my mind, there’s no state stored anywhere. A lot of people like to do these in files, so they can check them in, and have a record of that and apply.

In later versions of Kubernetes, there's an apply command. You can update these configs in place, just run apply, and update the state in the cluster. Here we go. We're going to create this particular set of pods using these labels. The main label that's important is "app=ghost," so it can be picked up by our load balancer. For nginx, here are some things we need to do. The thing fronting our traffic should handle graceful shutdowns. You can do this in nginx, so we specify this lifecycle hook so that, on shutdown, that command actually executes. Nginx will do the right thing: try to handle all the remaining traffic, and shut down cleanly.

Then we have liveness probes and readiness probes. They're pretty much the same thing. The key here is that the readiness probe will signal to the upstream service that it's safe to add our particular pod. How many of you have seen or cared about the speed of containers starting? You want those containers to go fast, get that web app you like running, and have it up fast, but unless you're serving 404s or 500 errors as a service, it doesn't make sense to start containers that fast. You're just going to come up and do nothing. The app needs to actually be ready, right? Here we need to make sure we actually probe the application to say that it's ready, and then we add it to the load balancer. This is key. Do not go around without these. This is the best practice here. Then we're going to mount our secrets and configuration files in place. Let's create this deployment, and then we're about to wrap up with our rolling update. Create, -f, deployment. Now we have this deployment in place. What's happening now? The deployment will manage the replica sets for us. Whenever we do updates, it's the deployment's job to manage that. Whenever we want to do a rolling update, it's the deployment's job to manage that.
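
Inside the deployment's pod template, those pieces look roughly like this sketch (paths and timings are assumptions, though a five-second readiness delay is mentioned later in the talk):

```yaml
containers:
  - name: nginx
    image: nginx:1.9.12
    lifecycle:
      preStop:
        exec:
          # Ask nginx to drain in-flight requests and exit cleanly.
          command: ["/usr/sbin/nginx", "-s", "quit"]
    readinessProbe:            # gate load balancer traffic until nginx responds
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
    livenessProbe:             # restart the container if it stops responding
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
```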

Network load balancing

[26:05] While that's spinning up, and it should be done fairly quickly, let's look at all the networking components that had to be created here. Here's our load balancer setup. We'll click on this. We'll start to see that we have our health checks in place. It looks like things spun up pretty fast, because now it's reporting that all my instances are actually healthy. Everything's being proxied to these backends. You'll notice that my certificates have been put in place. Here's my self-signed certificate for *.example.com. All my forwarding rules are in place. If this works, I should be able to hit … I don't even know what IP address this thing has. Let's find out. kubectl describe deployments, and what is this thing saying? We don't care about the deployment, we care about the ingress. Let's see. We have this IP address. Let's check our /etc/hosts really quick. This is the poor man's DNS. Good, looks like it matches. If this works … How many people think this is going to work? There's no confidence. This is the failure mode. 100%. Woo, I'm doing a little dance! In my head, at least. Yes, it works! Thank you, demo gods. Now we're going through our load balancer, serving up our certificate. Make sure this thing is valid. There's a green lock. It took me a lot of work to get this lock to be green on the screen. I'm happy about this. The next thing we want to do is look at the admin portal so we can see what version we have running. My profile. About Ghost. This is a beautiful thing. Did you see that? That's called good design. You zoom, ah. I love it. We're running version 0.7.7 of our Ghost application. You start small. It's deployed. Everything works. Now let's move on. As we wrap up, let's talk about other things you're going to have to do.

How do I scale my application?

[28:16] Now you're ready to scale. Your parents are hitting this thing now. What you need to do, remember, it's not just about speed. When you scale these things, you want to store the state somewhere, so we know how many you want and keep it that way. You need to actually connect to the database; don't add yourself to the load balancer before you're ready. Then we add you to the service. That's the flow. What does it take to do that in Kubernetes? This is the benefit of having these files. So, deployments: we look at this, and now we're just going to bump the replica count in the file. We're changing the state. We talked about workflows. If you're using something like git, you can check this out, update the state, push it back, and have automated tools that just run apply over those files. Let's save this. kubectl apply -f deployments/ghost.yaml. I'm going to describe the deployment, and we'll see that we've changed the number of replicas that we need. We did, right? I saw five somewhere. There we go. It's scaling to five. kubectl get pods: there are our five. Each one goes through all of its health checks and then gets added. That's all you have to do in Kubernetes. Everything is declarative.
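
The whole flow is an edit followed by an apply, something like this sketch:

```sh
# In deployments/ghost.yaml, change 'replicas: 1' to 'replicas: 5',
# commit the change, then hand the new desired state to the cluster.
kubectl apply -f deployments/ghost.yaml

# Watch the new pods pass their readiness checks and join the service.
kubectl describe deployment ghost
kubectl get pods
```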

How do I update all those containers?

[29:44] The last question is: Hacker News comes out. You're looking at Hacker News, and you're like, "Damn it, that's me." What do you do? How do you update the containers? The flow you want is something like this, right? This pattern is the canary pattern. We add a new version of the application. We hook it up to the load balancer. It gets traffic. As that works, we want to propagate that through our stack. This is, ideally, what you want to do. Have you noticed that the whole time, we're not talking about hosts? The application is completely decoupled from the local machine. Let's do the last step of this, and then we'll be good to go. Let's try it. The way we do this is, we need a canary deployment, so we can test this out really quick. It looks pretty much the same. We're not going to go through it. The big difference here is the version of the container that I'm using. We're going to throw this into the network. One thing you need to pay attention to is the labels. The overlapping label is "app=ghost." When we created the service, what pods did we send traffic to? You need that label to match, and you get all the traffic. This is how we're going to send traffic to both versions at the same time. kubectl create -f deployments/ghost-canary.
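
A sketch of what that canary deployment might contain: the same app=ghost label so the service sends it traffic, a newer image, and an extra track label to tell the two versions apart (that label is my assumption, not shown in the talk):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ghost-canary
spec:
  replicas: 1                  # one canary alongside the stable pods
  template:
    metadata:
      labels:
        app: ghost             # matches the service selector, so it gets traffic
        track: canary          # assumed label for distinguishing the versions
    spec:
      containers:
        - name: ghost
          image: ghost:0.7.8   # the new version under test
```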

Once that’s in place, let’s look at this. You’ll see this saying one of two because one of our containers isn’t ready. I have a five-second delay before we actually start checking if nginx is up. This will not be added to the load balance until that completes. Looks like it’s running now. Both are running. How many times do you think I’m going to have to refresh before that says 0.7.8? Guess. Five times? 2.5? You must work at Google. Are you guys counting? This is pretty sad. Woo! Yes! Great! We’re happy now. Your developers didn’t break anything. Seems like the block still works. What do you do? That’s enough for me. It worked in test. What do you do? You blindly update everything else, right? You’re on the way to coffee time. In order to do that, we’re just going to do that. You said it was good, right? All right, we agree. It’s called Devops. Kubectl create -f. We want to do an apply here. -f, and then we’ll do deployments, and we’re going to use the same name here, the same file. What’s going to happen in the background is, we have a policy here telling us what to do. You can’t see that policy, because by default, we get basically, “Keep no more than one down at a time.” Let’s look at it really quick. Let’s do get deployments. What is this thing named? Ghost -output=yaml. All right. We actually have our rollingUpdate policy towards the top. Here we go. Here’s our strategy. We can have at least one plus, so we want five, we can have up to six on at one time, and we can go down to at least one unavailable. This thing will slowly roll things out for us automatically here. The nice thing about this is, before this all happened client-side, Kubectl rollingUpdate, you close your laptop halfway, that’s it. It stopped, and then you embarrassed yourself, because you thought your demo was killer, but it actually wasn’t, because it was all running client-side. Now this all runs server-side. If we get our pods, we’ll see. Did it go that fast? There’s no way. Let’s see. Did we get 0.7.8 each time? Oh man, I’m happy. That’s killer. That’s it. The rolling update is complete, and with that, I’d like to end the presentation. Thank you.

About the speaker


Kelsey has worn every hat possible throughout his career in tech and enjoys leadership roles focused on making things happen and shipping software. Kelsey is a strong open source advocate focused on building simple tools that make people smile. When he is not slinging Go code, you can catch him giving technical workshops covering everything from programming to system administration.

About Flocker

Flocker is an open-source container data volume manager for your Dockerized application. It gives ops teams the tools they need to run containerized stateful services like databases in production.
