By going through this article, you should understand the basic architecture and core of Kubernetes (K8s), along with a brief overview of the backbone components and terminology that will help you start a new journey.
K8s Architecture Diagram
What is a Kubernetes Cluster?
A Kubernetes cluster is a group of node machines for running containerized applications. If you’re running Kubernetes, you’re running a cluster.
A cluster contains a Control Plane, also known as the K8s Master (the heart of K8s), and one or more computing machines/VMs. Worker nodes, earlier known as Minions, run the actual applications/containers inside pods. By default, the cluster will not schedule pods on the control-plane node. We can override some parameters to deploy applications on the control plane as well, but this is not recommended for security reasons.
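For example, on a kubeadm-based cluster the control-plane node typically carries a taint in its node spec, which is what keeps ordinary pods off it. A minimal sketch of that fragment (the exact taint key can vary by K8s version):

```yaml
# Fragment of a control-plane node's spec (e.g. from `kubectl get node <name> -o yaml`).
# The NoSchedule taint below is what prevents ordinary pods from being scheduled here;
# removing it allows workloads on the control plane, but that is not recommended.
spec:
  taints:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```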
What makes Kubernetes so powerful is that it can run anywhere: it can schedule and run containers across worker nodes, whether physical or virtual, on-premises or in the cloud. It's easy to manage your cluster to match the desired state. Kubernetes containers aren't tied to individual machines; rather, they're abstracted across the cluster. Through this abstraction, we achieve high availability and fault tolerance for applications deployed as pods spread across different nodes.
The key disadvantage is that setting up the whole K8s cluster, or each component, ourselves is a bit hard. But I don't say it's unachievable.
Kubernetes Control Plane Components?
In the control plane, we find the Kubernetes components that control the cluster, along with data about the cluster's state and configuration. These core Kubernetes components handle the important work of making sure your containers are running in the numbers you defined and with the necessary resources.
Main Components of Control Plane/Master Cluster
- Kube API Server
- Kube Scheduler
- Kube Controller Manager
- Etcd
Kube API Server
We need to interact with our Kubernetes cluster to perform any activity, like creating Deployment or Service objects. The Kubernetes API is the front end of the Kubernetes control plane, handling internal and external REST requests on K8s objects. The API server determines whether a request is valid with the help of a config file that holds token and auth-related data, and processes it if it is valid. We can access the API through REST calls or through the kubectl/kubeadm command-line interface.
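The "config file that holds token and auth-related data" is the kubeconfig (by default `~/.kube/config`), which kubectl reads to reach and authenticate to the API server. A minimal sketch with hypothetical names and placeholder credentials:

```yaml
# Sketch of a kubeconfig file; names, endpoint, and credentials are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: demo-cluster                   # hypothetical cluster name
  cluster:
    server: https://10.0.0.1:6443      # API server endpoint (placeholder)
    certificate-authority-data: <base64-ca-cert>
users:
- name: demo-user
  user:
    token: <bearer-token>              # token the API server validates
contexts:
- name: demo
  context:
    cluster: demo-cluster
    user: demo-user
current-context: demo
```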
Kube Controller Manager
In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the API server and makes changes attempting to move the current state towards the desired state. Ex: it monitors the pod count specified in a deployment; if a pod goes down due to some error, the controller manager notices and asks the scheduler to pick a node, and the pod is deployed again to restore the desired state.
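The desired state the controller watches is declared in the manifest itself. In the sketch below (hypothetical app name, example image), `replicas: 3` is the desired state: if one of the three pods dies, the controller manager detects the mismatch and a replacement pod is created:

```yaml
# Minimal Deployment sketch; "web" and the image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired state: three pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # example image
```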
Kube Scheduler
The scheduler makes decisions about where to launch pods when a deployment is triggered. It listens for events and decides where to provision, since it knows the state and resource details of the worker nodes and pods. Ex: when a deployment is created with CPU and memory request details, the scheduler works out which nodes can host the pods, then finds the optimal node on which to create them. If no node matches the criteria, the deployment is not processed; once the criteria match, the request is processed.
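The CPU and memory request details the scheduler plans around are declared per container. For the sketch below (hypothetical name, example values), any node without 250m of CPU and 64Mi of memory to spare would be filtered out:

```yaml
# Pod sketch showing resource requests; name, image, and values are examples.
apiVersion: v1
kind: Pod
metadata:
  name: request-demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "250m"      # scheduler only considers nodes with this much spare CPU
        memory: "64Mi"   # and this much spare memory
```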
Etcd
Configuration data and information about the state of the cluster live in etcd, a key-value store database. Fault-tolerant and distributed, etcd is designed to be the ultimate source of truth about your cluster and to avoid single points of failure in case of hardware or network issues. Etcd's job within Kubernetes is to safely store critical data for distributed systems. It is Kubernetes' primary datastore, used to store its configuration data, state, and metadata. Since Kubernetes usually runs on a cluster of several machines, it is a distributed system that requires a distributed datastore like etcd, a highly available key-value store that can be distributed among multiple nodes.
Etcd’s watch functionality is used by Kubernetes to monitor changes to either the actual or the desired state of its system. If they are different, Kubernetes makes changes to reconcile the two states. Any change made (kubectl apply) will create or update entries in Etcd, and every crash will trigger value changes in etcd. This is the main source of truth for your cluster.
So these components constitute the Master Cluster of K8s. We will continue with the next agenda soon.
Kubernetes worker node components?
What is a worker node in K8s?
In the worker node, we run the containerized applications, and it continuously reports to the control plane’s api-server about its health.
So the worker node's health is evaluated, and if the node is unhealthy, the K8s objects running on it are moved to other nodes.
A worker node has the following components:
- Kubelet
- Kube-proxy
- Container runtime
The kubelet is the primary “node agent” that runs on each node. It can register the node with the api-server using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
It also ensures that the containers inside the pods in worker nodes are running and healthy. Furthermore, it continually talks with the Kubernetes API to relay the health information of the pods.
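One way the kubelet checks that containers are "running and healthy" is through probes declared in the pod spec. In the sketch below (hypothetical name and image, assuming the app serves a `/healthz` endpoint on port 8080), the kubelet restarts the container when the HTTP check fails:

```yaml
# Pod sketch with a liveness probe; name, image, and endpoint are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: my-app:1.0      # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz     # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10    # kubelet probes every 10s; on failure it restarts the container
```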
The kube-proxy is a network proxy that runs on each node in a Kubernetes cluster. It maintains network rules on all nodes so that there is smooth communication with pods from both inside and outside the cluster.
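Those network rules are what make a Service address actually reach pods. In the minimal Service sketch below (hypothetical names and ports), traffic arriving at the Service's port 80 is forwarded by kube-proxy's rules to port 8080 on any pod labeled `app: web`:

```yaml
# Service sketch; names, labels, and ports are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web             # pods backing this Service
  ports:
  - port: 80             # port clients use to reach the Service
    targetPort: 8080     # container port kube-proxy routes traffic to
```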
The container runtime is the software responsible for running containers on the cluster nodes. Examples include CRI-O, containerd, Docker, etc. It is the soul of the container environment: without it, no image can become a running container.