Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services. It automates deploying, scaling, and operating containerized applications, and it facilitates both declarative configuration and automation.
It has a vast, rapidly growing ecosystem, and Kubernetes support, services, and tools are widely available. However, before we dive deep, let’s define containers first.
What is a Container?
A container is a collection of software processes unified by one namespace, with access to an operating system kernel that it shares with other containers, and little to no access to those other containers.
You create your Docker image and push it to a registry before referring to it in a Kubernetes pod.
Kubernetes doesn’t replace Docker; it augments it.
However, Kubernetes does replace some of the higher-level technologies that have emerged around Docker.
A runtime instance of a Docker image contains the following three things:
- A Docker Image
- An execution environment
- The standard set of instructions
Core elements of the Docker System
The Docker ecosystem has the following two core elements.
- Docker Engine
- Docker Store
Docker Engine
The Docker Engine comprises the runtime and packaging tools.
It must be installed on the hosts that run Docker.
Docker Store
Docker Store is an online cloud service where users can store and share images. It is now part of Docker Hub.
Difference between Container and Virtual Machine
A virtual machine (VM) contains one or more applications with the necessary binaries and libraries, plus the entire guest operating system required to interact with those applications.
On the other hand, containers include the application and all of its dependencies and share the kernel with other containers.
A container is not tied to any infrastructure. It only needs a Docker engine installed on a host.
The container runs isolated processes in the user space on the host OS.
Advantages of Containers
- Container applications are portable.
- Container applications are packaged in a standard way.
- Deployment is easy and repeatable. DevOps engineers like containers because they are straightforward to manage on different hosts.
- Testing, packaging, and integration can be automated more easily.
- Containers support newer microservice architectures.
- Containers remove platform compatibility issues.
- The DevOps team can isolate and debug issues at the container level.
What is Container Orchestration?
Container orchestration is the process of deploying and running containers on a cluster of multiple nodes. In a containerized architecture, the different services that constitute an application are packaged into separate containers and deployed across a cluster of physical or virtual machines.
Containers support VM (virtual machine)-like separation of concerns, but with far less overhead and greater flexibility. As a result, containers have reshaped how people think about developing, maintaining, and deploying software.
But this approach gives rise to the need for container orchestration: a tool that automates the deployment, management, scaling, networking, and availability of container-based applications.
What is Kubernetes?
Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. The main goal of Kubernetes is to foster an ecosystem of components and tools that relieve the burden of running applications in the public and private cloud.
The Kubernetes platform started at Google. It is a successor to (though not a direct descendant of) Google’s Borg project.
Kubernetes is the platform to schedule and run containers on clusters of virtual machines. It runs on bare metal, virtual machines, private data centers, and the public cloud.
Kubernetes is a container platform. You can use it with Docker containers to develop and build applications and then use Kubernetes to run these applications on your infrastructure.
You are not bound only to use Docker containers. You can use other containers as well.
Kubernetes is an open-source project that enables software teams of all sizes, from small startups to Fortune 500 companies, to automate deploying, scaling, and managing applications on a group or cluster of server machines.
Kubernetes is a distributed system that introduces its own abstractions, so understanding its architecture is crucial.
Kubernetes “clusters” are composed of “nodes.”
The term “cluster” refers to the nodes in the aggregate: an entire running system.
A “node” is the worker machine within Kubernetes (previously known as a “minion”). The “node” may be a VM or a physical machine.
Each node has the software configured to run containers managed by the Kubernetes control plane.
The control plane is the set of APIs and software (such as kubectl) that Kubernetes users interact with.
The control plane services run on the master nodes.
Clusters may have multiple masters for high-availability scenarios.
What can Kubernetes Do?
Kubernetes’ features provide everything we need to deploy containerized applications. Here are some key points.
- Container Deployments and Rollout Control. Describe your containers and how many you want with a “Deployment.” Kubernetes keeps those containers running and handles deploying changes, such as updating an image or changing environment variables, with a “rollout.” You can pause, resume, and roll back changes as you wish.
- Resource Bin Packing. You can declare minimum and maximum compute resources (CPU and memory) for all your containers. Kubernetes will slot the containers in wherever they fit. This increases computational efficiency and ultimately reduces costs.
- Built-in Service Discovery and Autoscaling. Kubernetes can automatically expose your containers to the internet or to other containers in the cluster, and it automatically load-balances traffic across the matching containers. Kubernetes supports service discovery via environment variables and DNS out of the box. You can also configure CPU-based autoscaling of containers for better resource utilization.
- Heterogeneous Clusters. Kubernetes runs anywhere. You can build your Kubernetes cluster from a mix of virtual machines (VMs) running in the cloud, on-premises servers, or bare metal in your data center. Choose the composition according to your requirements.
- Persistent Storage. Kubernetes includes support for persistent storage connected to stateless application containers. There is support for Amazon Web Services EBS, Google Cloud Platform persistent disks, and many more cloud services.
- High Availability Features. Kubernetes operates at planet scale, so it pays special attention to high-availability features such as multi-master setups and cluster federation. Cluster federation lets you link clusters together so that if one cluster goes down, containers can automatically move to another.
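Several of the features above come together in a single Deployment manifest. The following is a minimal sketch; the image name, labels, and resource figures are hypothetical:

```yaml
# Hypothetical Deployment illustrating rollout control and
# per-container resource requests/limits (resource bin packing).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                        # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0 # changing this tag triggers a rollout
          resources:
            requests:                # minimum resources used for scheduling
              cpu: 250m
              memory: 128Mi
            limits:                  # maximum the container may consume
              cpu: 500m
              memory: 256Mi
```

Applying a manifest like this with `kubectl apply -f` asks Kubernetes to keep three replicas running and to schedule each one onto a node with at least the requested CPU and memory free.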
These key features make Kubernetes well-suited for running different application architectures, from monolithic web applications to highly distributed microservice and batch-driven applications.
Kubernetes Architecture
Kubernetes containers are grouped into “pods.” Pods may include one or more containers. All the containers in the pod run on the same node.
The “pod” is the lowest building block in the Kubernetes architecture. More complex abstractions come on top of “pods.”
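A minimal single-container pod can be described in a manifest like the following sketch; the names and image are placeholders:

```yaml
# Hypothetical single-container pod manifest.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello          # label used by services to select this pod
spec:
  containers:
    - name: hello
      image: example.com/hello:1.0   # placeholder image
      ports:
        - containerPort: 8080        # port the container listens on
```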
“Services” define networking rules for exposing the pods to other pods or the public internet.
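A service selects pods by label and routes traffic to them. A rough sketch, assuming pods labeled `app: hello` exist in the cluster:

```yaml
# Hypothetical Service routing cluster-internal traffic to matching pods.
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello          # forwards to pods carrying this label
  ports:
    - port: 80          # port exposed by the service
      targetPort: 8080  # port the pod's container listens on
  type: ClusterIP       # internal-only; LoadBalancer would expose it publicly
```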
Kubernetes uses “deployments” to manage configuration changes to running pods and to handle horizontal scaling.
A “deployment” is the template for creating the pods. “Deployments” are scaled horizontally by creating more “replica” pods from a template.
Changes to a “deployment” template trigger the rollout. Kubernetes uses rolling deploys to apply the changes to all running pods in the deployment.
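The rolling-update behavior can be tuned inside the deployment spec. A fragment illustrating the relevant fields (the values here are illustrative, not recommendations):

```yaml
# Fragment of a Deployment spec tuning the rolling update.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the rollout
      maxSurge: 1         # at most one extra pod above the replica count
```

With settings like these, Kubernetes replaces pods a few at a time rather than all at once, and `kubectl rollout undo` rolls a deployment back to its previous template.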
Kubernetes provides two ways to interact with the control plane.
The primary way is the kubectl command-line tool, which can do almost anything with Kubernetes.
The second way is the web UI, which offers basic functionality.
Kubernetes Terminologies
Let’s see some more common terms to help you understand Kubernetes.
Master: The machine that controls the Kubernetes nodes. This is where all the task assignments originate.
Node: These machines perform the requested and assigned tasks. The Kubernetes master controls the nodes.
Pod: A group of one or more containers deployed to a single node. All the containers in the pod share an IP address, IPC, hostname, and other resources. Pods abstract the network and storage away from an underlying container. It lets you move the containers around the cluster more easily.
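Because the containers in a pod share one network namespace, a sidecar container can reach its neighbor over localhost. A sketch (both images are hypothetical):

```yaml
# Hypothetical two-container pod: the sidecar shares the pod's IP,
# so it can reach the app container at localhost:8080.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: example.com/app:1.0        # serves on port 8080
    - name: log-forwarder               # sidecar container
      image: example.com/forwarder:1.0  # reads the app via localhost:8080
```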
Replication controller: It ensures that a specified number of identical copies of a pod are running somewhere on the cluster.
Service: It decouples work definitions from the pods. Kubernetes service proxies automatically route service requests to the right pod, no matter where it moves in the cluster or whether it has been replaced.
Kubelet: The Kubelet service runs on nodes, reads the container manifests and ensures the defined containers are started and running.
kubectl: The kubectl is the command line configuration tool for Kubernetes.
Kubernetes clusters
The highest-level Kubernetes abstraction, the cluster, refers to a group of machines running Kubernetes (itself, the clustered application) and the containers managed by it.
A Kubernetes cluster must have a master, the system that commands and controls all the other Kubernetes machines.
Kubernetes nodes and pods
Each cluster contains the Kubernetes nodes. Nodes might be physical machines or VMs. But, again, the idea is abstraction: Whatever an app runs on, Kubernetes handles the deployment on that substrate.
Kubernetes even makes it possible to ensure that specific containers run only on VMs or only on bare metal. Nodes run pods, the most basic Kubernetes objects that can be created or managed. Each pod represents a single instance of the application or running process in Kubernetes and consists of one or more containers.
Kubernetes starts, stops, and replicates all the containers in the pod as a group. Pods focus the user’s attention on an application rather than the containers themselves.
Why do you use Kubernetes?
Real production apps span multiple containers, and those containers must be deployed across multiple server hosts.
Security for containers is multilayered and can be complicated. That’s where Kubernetes can help.
Kubernetes gives us the orchestration and management capabilities required to deploy containers at scale for these workloads.
Kubernetes orchestration allows us to build the application services that span over multiple containers, schedule those containers across the cluster, scale those containers, and manage the health of these containers over time.
With Kubernetes, you can take real steps towards better IT security.
Kubernetes also needs to integrate with networking, storage, security, telemetry, and other services to provide a comprehensive container infrastructure.
Kubernetes can also work with any container system that conforms to the Open Container Initiative (OCI) standards for container image formats and runtimes.
That’s it.