Introduction to Kubernetes

Joshua Hall

In this article you’re going to get a basic understanding of the next step in the evolution of Docker: Kubernetes.

Installation

Since adding installation instructions for each OS would make this article extremely long, I'm going to point you in the right direction instead.

There are three main things you need: kubectl (read as "cube-c-t-l"), Minikube, and a hypervisor. Depending on your OS and version, that last one means enabling Hyper-V on some versions of Windows 10 or installing VirtualBox on most other systems.

Minikube is what will handle our Kubernetes cluster, and kubectl will let us interact with it at the command line. Hyper-V and VirtualBox create the virtual machine our Kubernetes cluster will run in.

If everything went well, you should be able to open a terminal and run kubectl get pods without an error (a fresh cluster will simply report that no resources exist yet).
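If you'd like to sanity-check the install, a quick session might look like this (exact output varies by version, and `minikube start` assumes your hypervisor is already set up):

```shell
# Start a local single-node cluster (downloads the VM image on first run)
minikube start

# Confirm kubectl can talk to the cluster
kubectl cluster-info

# List pods in every namespace; a fresh cluster only has system pods
kubectl get pods --all-namespaces
```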

What is Kubernetes?

Kubernetes, which you’ll often see shortened to k8s, is a more advanced system for managing multi-container apps, designed to create or destroy instances of a container to match the workload.

Imagine you have an app that takes an image, performs some operation on it like compression or converting to greyscale, and returns it to the user. The app is split into four separate containers: NGINX, server, client, and the image processor.

Now suppose 10 people are using the app. A service hosting our containers, like AWS, might handle this by duplicating our entire set of containers. Doesn’t that seem a bit wasteful? Why create a new NGINX container per visitor? A single NGINX container would be far more resource-efficient, but we would still need multiple instances of the image processor, since having 100 or 1,000 users waiting on that one container to finish would be horrible.

Schema illustrating the multiple containers running in parallel

Kubernetes allows us to take a much more efficient approach: each containerized part of our app can generate as many containers as the workload needs. If 50 users start uploading images, 50 image processor and client containers are generated inside their pods, while a single server container and a single NGINX container remain. When the job is done and the users have left, any unnecessary containers are destroyed. This is a vast oversimplification, but our goal is to have something that looks a bit more like this.
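That scale-up-and-back-down behavior can even be automated. As a sketch (the `image-processor` deployment name is hypothetical), a HorizontalPodAutoscaler tells Kubernetes to add pods for one part of the app when load rises and remove them when it drops:

```yaml
# Hypothetical autoscaler for the image processor part of the app.
# Kubernetes adds pods when average CPU usage passes 70% and removes
# them again when the load drops, down to a single replica.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: image-processor
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: image-processor
  minReplicas: 1
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```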

Schema of how Kubernetes allows to split the workload into pods

As requests come in, a master node tells the pods what to do and distributes the load appropriately.

Kubernetes Objects

While there are quite a few objects you’ll have available to you, there are three main ones you have to worry about:

  • Pods are our container generators; each containerized part of our application that may need multiple instances will be put inside its own pod.
  • Services are how we take our cluster of pods, manage the networking between them, and make them accessible to the outside world.
  • Deployments allow us to handle groups of pods dynamically. We give some configuration, like how many pods we want and the desired state of each, and Kubernetes constantly works to fix any inconsistencies between a pod and the desired state we declared. In any production app you will normally be working with deployments instead of managing pods directly.
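To make the first of those concrete: a pod's manifest wraps one or more containers together with some metadata. A minimal sketch, where the image name is a placeholder rather than a real Docker Hub image, might look like this:

```yaml
# A bare pod running a single container.
# The labels block is the metadata other objects use to find this pod.
apiVersion: v1
kind: Pod
metadata:
  name: image-processor
  labels:
    app: image-processor
spec:
  containers:
    - name: image-processor
      image: example/image-processor:latest  # placeholder image name
```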

How We Work with Kubernetes

A lot of what you’re going to be doing with Kubernetes is setting up configuration. We’re going to be creating YAML files for each object we want to work with. The goal of each of these is to pull your existing images from Docker Hub, add any necessary metadata, and set up its relationships to other objects. Compared to the simplicity of Docker Compose, the networking can be a bit more involved.
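As a sketch of what those YAML files tend to look like (the image name and port here are placeholders), here is a Deployment that keeps three copies of a container running, plus a Service that exposes them:

```yaml
# Deployment: declare the desired state; Kubernetes keeps 3 pods running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
spec:
  replicas: 3
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: example/client:latest  # placeholder Docker Hub image
          ports:
            - containerPort: 3000
---
# Service: route traffic to any pod carrying the app=client label
apiVersion: v1
kind: Service
metadata:
  name: client
spec:
  type: NodePort
  selector:
    app: client
  ports:
    - port: 3000
      targetPort: 3000
```

Both objects would go to the cluster with `kubectl apply -f client.yaml`; the `selector` is what ties the Service to the Deployment's pods.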

Why Bother?

  • By only creating the containers our workload actually needs, we can save a lot on server cost and usage, especially if you’re using a third-party service like AWS or Google Cloud.
  • Kubernetes, and many third-party services, have a lot of very helpful tools for monitoring the state and performance of your clusters.
  • When working with a moderate or large number of servers, being able to declare the desired state for each object and let Kubernetes handle everything makes scaling up significantly easier.

Conclusion

Amid the swarm of verbose and vague descriptions of Kubernetes floating around out there, I hope this was a gentle-enough introduction to something that, at least for me, can quickly become overwhelming, especially when you’re new to DevOps.
