## What is Kubernetes?
Kubernetes, originally developed by Google, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
Imagine that you run a restaurant chain.
You don’t manually assign every cook, waiter, and cashier to every location, do you? No, you have a manager who figures out staffing, handles sick days, opens new locations when demand spikes, and closes them when it’s slow.
Kubernetes is that manager, but for your software.
## Why use Kubernetes in the first place?
Consider the following scenario:
You are running a set of containerized applications with Docker in a production environment. Suddenly, one of your containers goes down, your application is unavailable, and you have downtime in production. What do you do?
In such a scenario, Kubernetes could restart the container as soon as it detects that it is down; it could be configured to run multiple replicas of the same container simultaneously, load-balancing incoming traffic across them; or it could simply roll back to a previously deployed version.
Here are some of the common challenges that Kubernetes addresses:
- Automatic recovery: Detects failed containers and restarts or replaces them to minimize downtime.
- Replicas and load balancing: Runs multiple replicas of a service and distributes traffic across them to maintain availability under load or when instances fail.
- Declarative deployments and rollbacks: Applies updates declaratively and can perform rolling updates or roll back to a previous version if problems are detected.
- Scaling: Scales applications horizontally (automatic or manual) to match demand and supports cluster autoscaling for infrastructure capacity.
- Service discovery and stable networking: Provides stable network identities for services and internal service discovery so components can communicate reliably.
- Resource-aware scheduling: Places workloads onto nodes based on resource requests, constraints, affinities, taints/tolerations, and QoS requirements for efficient utilization.
- Configuration and secret management: Separates configuration and secrets from images and injects them securely into running containers.
- Stateful workload support and persistent storage: Manages persistent volumes and stateful sets for databases and other stateful services.
- Self-healing and declarative reconciliation: Continuously reconciles actual cluster state to the declared desired state, maintaining consistency and reducing manual intervention.
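Several of these behaviors flow from a single declarative object. As an illustrative sketch (the name and image below are placeholders, not from any particular setup), a minimal Deployment manifest that asks Kubernetes to keep three replicas of a web server running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative name
spec:
  replicas: 3               # Kubernetes keeps exactly 3 pods running
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate     # updates replace pods gradually, not all at once
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27 # illustrative image and tag
          ports:
            - containerPort: 80
```

If a pod in this Deployment dies, the cluster notices the divergence from `replicas: 3` and starts a replacement; that is the declarative reconciliation described above.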
## How is Kubernetes built?
Kubernetes has a master-worker architecture: a control plane that makes decisions, and worker nodes that do the actual work.
Together, the control plane and worker nodes form what is commonly known as a Kubernetes cluster.
Both the control plane and worker nodes are made up of distinct components, each with a specific responsibility.
Rather than a single monolithic process, Kubernetes is a collection of small, focused processes that work together, each in charge of a specific task.
This design has a key benefit: if one component fails, the others keep running. The cluster can often self-recover without any manual intervention.
### Control Plane
The control plane is the brain of the cluster. It makes global decisions about the cluster’s desired state and continuously works to achieve it.
It is made up of the following components:
- API Server: The “front door” of the cluster. Every command you run goes through here. It’s a REST API that all other components talk to.
- etcd: A fast, lightweight key-value database that stores the entire state of the cluster. If Kubernetes were a person, etcd would be its long-term memory.
- Scheduler: Responsible for distributing work across nodes. It watches for newly created pods and picks the best worker node to run them on (based on available CPU, memory, rules, etc.).
- Controller Manager: A collection of controllers running in loops, each responsible for making sure the right number of pods are running, handling node failures, managing rolling updates, and so on.
### Worker Nodes
Worker nodes are the muscle of the cluster: they run your applications. Each node is managed by the control plane and is made up of the following components:
- kubelet: An agent that runs on every node. It receives instructions from the control plane and ensures the assigned pods are running and healthy. If a container crashes, the kubelet restarts it locally without involving the control plane.
- kube-proxy: Manages networking rules on the node, ensuring traffic is correctly routed to the right pods regardless of which node they are running on.
- Container Runtime: The actual engine that runs containers. Kubernetes doesn’t care which one, as long as it implements the Container Runtime Interface (CRI); containerd and CRI-O are common choices.
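To tie these components together: the scheduler reads a pod’s resource requests when picking a node, and the kubelet acts on the pod’s probes once it is running. A sketch of a pod spec exercising both (all names and values here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api                  # illustrative name
spec:
  containers:
    - name: api
      image: nginx:1.27      # illustrative image
      resources:
        requests:
          cpu: 250m          # the scheduler uses these requests to pick a node
          memory: 128Mi
      livenessProbe:         # the kubelet restarts the container if this check fails
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
```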
## Issuing commands to the cluster
To interact with the cluster, you must first install kubectl (“kube control”), the Kubernetes CLI tool.
### Installing kubectl
Install it using brew and verify the client version:

```shell
brew install kubectl
kubectl version --client
```

For command autocompletion, add the following to your `.zshrc`:

```shell
if [[ -z "$_compinit_done" ]]; then
  autoload -Uz compinit
  compinit
  _compinit_done=1
fi

if command -v kubectl >/dev/null 2>&1; then
  # Initialize the kubectl completion script
  source <(kubectl completion zsh)
fi
```

Then, assuming you have a Kubernetes cluster up and running, you must configure a context so that kubectl knows which cluster to talk to and how to authenticate with it.
What is a kubectl context? A context is a named combination of a cluster, a user, and optionally a namespace, stored in `~/.kube/config`. Switching contexts switches which cluster kubectl talks to.
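As a rough sketch (all names and values below are placeholders), `~/.kube/config` ties these three pieces together like this:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://<cluster-ip>:6443
      certificate-authority-data: <base64-encoded CA certificate>
users:
  - name: my-user
    user:
      client-certificate-data: <base64-encoded client certificate>
      client-key-data: <base64-encoded private key>
contexts:
  - name: my-context
    context:
      cluster: my-cluster
      user: my-user
      namespace: default   # optional
current-context: my-context
```

The `kubectl config` commands below simply write entries into this file so you don’t have to edit it by hand.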
### Configuring a new kubectl context
First, obtain the following values from your cluster’s kubeconfig:
- certificate-authority-data — the cluster’s CA certificate
- client-certificate-data — your client certificate
- client-key-data — your private key
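These values are base64-encoded PEM data. A quick round-trip (with a stand-in string, not a real certificate) shows what `base64 -d` recovers:

```shell
# a kubeconfig stores PEM data base64-encoded; round-trip a stand-in string
encoded=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
echo "$encoded"                       # the encoded form, as it appears in the kubeconfig
printf '%s\n' "$encoded" | base64 -d  # prints: -----BEGIN CERTIFICATE-----
```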
Decode each value and save it to a temporary file:
```shell
echo <certificate-authority-data> | base64 -d > /tmp/k8s-ca.crt
echo <client-certificate-data> | base64 -d > /tmp/k8s-client.crt
echo <client-key-data> | base64 -d > /tmp/k8s-client.key
```

Then register the cluster, credentials, and context with kubectl:
```shell
# Register the cluster
kubectl config set-cluster <cluster-name> \
  --server=https://<cluster-ip>:6443 \
  --embed-certs=true \
  --certificate-authority=/tmp/k8s-ca.crt

# Register your credentials
kubectl config set-credentials <credentials-name> \
  --embed-certs=true \
  --client-certificate=/tmp/k8s-client.crt \
  --client-key=/tmp/k8s-client.key

# Create a context linking the cluster and credentials
kubectl config set-context <context-name> \
  --cluster=<cluster-name> \
  --user=<credentials-name>
```

Clean up the temporary files:

```shell
rm /tmp/k8s-ca.crt /tmp/k8s-client.crt /tmp/k8s-client.key
```

Finally, activate the context and verify the connection:
```shell
kubectl config use-context <context-name>
kubectl get nodes
```

If configured correctly, `kubectl get nodes` returns the list of nodes in your cluster:
```
NAME         STATUS   ROLES           AGE   VERSION
my-cluster   Ready    control-plane   5m    v1.35.4+k3s1
```