Container deployment is not a new idea. It's something every DevOps, back-end, or cloud engineer should be familiar with, as it enables faster, more reliable, and more repeatable software deployments.
Containers are like a box that holds everything an application needs to run in a target environment (a cloud server or an on-premises server). That means the application does not need to care about the environment it is running in, or figure out which specific versions of its dependencies will work there.
Containers give you a level of abstraction over the environment the application runs in, letting you focus on building the app. Not only can containers be used to automate deployment; you can also use them to automate scaling and management of applications.
Over the last few years, many container technologies have popped up, and two of the best-known names in the space are Docker and Kubernetes. Docker was one of the first widespread commercial and open-source container platforms, while Kubernetes is an open-source container management and orchestration tool originally built by Google.
I've been interested in Kubernetes for some time and have done my fair share of learning its basics and operations. I would say it has the edge over plain Docker for production workloads, as it can efficiently schedule containers across multiple nodes, whereas Docker on its own runs containers on a single node.
With Kubernetes being open source, you can also run your workloads on managed Kubernetes platforms like Google Kubernetes Engine (GKE) and Azure Kubernetes Service (AKS).
The idea of containers is to run our app in isolation from the machine it happens to be on. Kubernetes is built on that idea, and further expands it by abstracting a machine's storage and networking. It also serves as an interface for deploying multiple containers to different environments, from cloud servers to virtual machines.
Kubernetes has a client/server architecture. The server side runs on the cluster you will be using to deploy your application, while the client interacts with that cluster. An example of such a client is the kubectl CLI.
In this section, let's take a look at some basic concepts of Kubernetes:
Nodes are the physical or virtual machines that Kubernetes runs on. You can set them up manually, or provision them through a cloud compute service like Amazon EC2.
A pod is the basic unit of deployment in Kubernetes. It is a group of one or more containers that run on a node as a logical unit. Containers in a pod share the same context: the same IP address and storage, and they can communicate with each other via localhost. More than one pod can run on a single node, and an application's pods can span multiple machines.
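As a sketch of what a pod looks like in practice, here is a minimal single-container pod created from an inline manifest; the pod name hello-pod and the nginx image are illustrative choices, not anything this article requires:

```shell
# Create a single-container pod from an inline manifest.
# The name "hello-pod" and the nginx image are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
EOF
```

Running `kubectl get pods` afterwards will show the pod moving through Pending to Running as the node pulls the image.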
A label is simply a way to identify your resources using key/value pairs. For instance, we could label all pods serving production traffic with "role=production."
Selectors let you search/filter resources by labels. With selectors, you can easily get all production pods using the role=production label.
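Putting labels and selectors together, a quick sketch (this assumes a pod named hello-pod already exists in your cluster):

```shell
# Attach a label to an existing pod (the pod name is an assumption).
kubectl label pod hello-pod role=production

# Use a selector to filter resources by that label.
kubectl get pods -l role=production

# Selectors also support set-based matching:
kubectl get pods -l 'role in (production, staging)'
```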
A service specifies a set or group of pods using a selector, and adds a stable way to access them, either by IP address or DNS name. It is responsible for enabling network access to that set of pods.
A deployment manages a set of replicated pods. It ensures that the desired number of pods are running simultaneously to power your app, replacing pods that fail and removing pods that are no longer needed.
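To make the deployment concept concrete, here is a small sketch; the deployment name web and the nginx image are placeholders:

```shell
# Create a deployment and ask for three replicas; Kubernetes will
# keep three pods running, replacing any that fail.
kubectl create deployment web --image=nginx:1.25
kubectl scale deployment web --replicas=3

# Watch the deployment converge on the desired replica count.
kubectl get deployment web

# kubectl create deployment labels its pods app=<name>,
# so a selector finds them:
kubectl get pods -l app=web
```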
Creating Your Deployment On Kubernetes Using Minikube
minikube is a local Kubernetes you can install and run on your computer, which helps you learn how to set up, use, deploy and develop for Kubernetes. minikube helps set up a local Kubernetes cluster on macOS, Linux, and Windows quickly and easily.
Before we proceed, you will need to ensure that your computer meets the requirements:
Two or more CPUs, 2GB of available memory, and 20GB of available disk space
A virtual machine or container like Docker or VMWare
Active internet connection
Note: I am currently on a Mac, so I will be showing the macOS installation process. The setup differs very little between operating systems. You can read more about it here.
The first thing we'll need to do is install kubectl, then minikube. With Homebrew:
$ brew install kubectl
$ brew install minikube
Alternatively, you can install minikube from the release binary instead of Homebrew:
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
$ sudo install minikube-darwin-amd64 /usr/local/bin/minikube
To start your local cluster (which also confirms minikube installed correctly), run this:
$ minikube start
If that’s successful, you can then access your cluster using kubectl:
$ kubectl get po -A
Kubernetes has a shiny and useful dashboard that gives you insights and metrics on everything going on in your pods, clusters, storage, networks, and so on.
minikube comes bundled with the dashboard, which can be accessed using:
$ minikube dashboard
Now to create your first deployment:
$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
This might take a while as it's trying to download the image file for the deployment.
After creating a deployment, we'll need to expose it outside the Kubernetes virtual network before we can access it, since by default it is only reachable via its internal IP.
$ kubectl expose deployment hello-minikube --type=NodePort --port=8080
If successful, the service is exposed on a NodePort on the cluster's IP rather than directly on localhost. You can print the URL to use with:
$ minikube service hello-minikube --url
You can also simply run the command below to see the hello-minikube service:
$ kubectl get services hello-minikube
You can also have minikube open the service directly in your browser:
$ minikube service hello-minikube
If you want to use a LoadBalancer service instead, you need to change the type when running the expose command, and use the minikube tunnel command to access it.
Let’s do this again below:
$ kubectl create deployment demo-balancer --image=k8s.gcr.io/echoserver:1.4
$ kubectl expose deployment demo-balancer --type=LoadBalancer --port=8080
Then create a new terminal and run this:
$ minikube tunnel
This will create a routable IP for demo-balancer deployment, which you can use to access the deployment.
You can get the IP with the below command:
$ kubectl get services demo-balancer
Running this will print several columns. The one we need is the EXTERNAL-IP column, which holds our routed IP. You should see something like <EXTERNAL_IP>:8080, pointing to the port your service is running on. You can now access your deployment using that IP.
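When you're done experimenting, you can tear everything down; these cleanup commands assume the resource names used in this walkthrough:

```shell
# Remove the services and deployments created above.
kubectl delete service hello-minikube demo-balancer
kubectl delete deployment hello-minikube demo-balancer

# Stop the local cluster (it can be started again later).
minikube stop
# minikube delete   # removes the cluster entirely
```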
Learning about Kubernetes and container deployments has been a great experience for me as it helps me understand more about DevOps and scaling applications. The learning curve can be a little steep, but after a while, you should be able to get the hang of it.