How is Kubernetes used in industry, and what use cases does it solve?

Prince Raj
7 min read · Dec 26, 2020

What is Kubernetes?

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

OK, but why all the buzz? Why is Kubernetes so popular?

As more and more organizations move to microservice and cloud-native architectures that make use of containers, they’re looking for strong, proven platforms. Practitioners are moving to Kubernetes for four main reasons:

1. Kubernetes helps you move faster. Kubernetes lets you deliver a self-service Platform-as-a-Service (PaaS) that abstracts the hardware layer away from your development teams. Those teams can quickly and efficiently request the resources they need, and if they need more to handle additional load, they can get it just as quickly, since resources come from an infrastructure shared across all your teams.

No more filling out forms to request new machines to run your application. Just provision and go, and take advantage of the tooling developed around Kubernetes for automating packaging, deployment, and testing, such as Helm.

2. Kubernetes is cost-efficient. Kubernetes and containers allow for much better resource utilization than hypervisors and VMs do; because containers are so lightweight, they require less CPU and memory to run.

3. Kubernetes is cloud-agnostic. Kubernetes runs on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and you can also run it on-premises. You can move workloads without having to redesign your applications or completely rethink your infrastructure, which lets you standardize on a platform and avoid vendor lock-in.

In fact, companies like Kublr, Cloud Foundry, and Rancher provide tooling to help you deploy and manage your Kubernetes cluster on-premises or on whatever cloud provider you want.

4. Cloud providers will manage Kubernetes for you. As noted earlier, Kubernetes is currently the clear standard among container orchestration tools. It should come as no surprise, then, that major cloud providers offer plenty of Kubernetes-as-a-Service products. Amazon EKS, Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Red Hat OpenShift, and IBM Cloud Kubernetes Service all provide fully managed Kubernetes platforms, so you can focus on what matters most to you: shipping applications that delight your users.

So, how does Kubernetes work?

The central component of Kubernetes is the cluster. A cluster is made up of many virtual or physical machines that each serve a specialized function, either as a master or as a node. Each node hosts groups of one or more containers (which contain your applications), and the master communicates with nodes about when to create or destroy containers. At the same time, it tells nodes how to re-route traffic as container placements change.

The Kubernetes master

The Kubernetes master is the access point (or the control plane) from which administrators and other users interact with the cluster to manage the scheduling and deployment of containers. A cluster will always have at least one master, but may have more depending on the cluster’s replication pattern.

The master stores the state and configuration data for the entire cluster in etcd, a persistent and distributed key-value data store. The rest of the cluster reads that data through the API server, and that is how nodes learn how to maintain the configurations of the containers they're running. You can run etcd on the Kubernetes master or in a standalone configuration.

Masters communicate with the rest of the cluster through the kube-apiserver, the main access point to the control plane. For example, the kube-apiserver makes sure that configurations in etcd match with configurations of containers deployed in the cluster.

The kube-controller-manager handles the control loops that manage the state of the cluster via the Kubernetes API server; the controllers for deployments, replicas, and nodes all run inside this service. For example, the node controller is responsible for registering a node and monitoring its health throughout its lifecycle.

Node workloads in the cluster are tracked and managed by the kube-scheduler. This service keeps track of the capacity and resources of nodes and assigns work to nodes based on their availability.
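For instance, the CPU and memory requests declared in a pod spec are exactly what the kube-scheduler weighs against each node's remaining capacity. A minimal sketch (the name, image, and numbers are all illustrative):

```yaml
# The scheduler reserves the requested resources on whichever node it picks;
# the limits cap what the container may actually consume.
apiVersion: v1
kind: Pod
metadata:
  name: resource-aware-pod
spec:
  containers:
    - name: app
      image: nginx:1.19
      resources:
        requests:
          cpu: "250m"       # a quarter of a CPU core
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```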

The cloud-controller-manager is a service running in Kubernetes that helps keep it “cloud-agnostic.” The cloud-controller-manager serves as an abstraction layer between the APIs and tools of a cloud provider (for example, storage volumes or load balancers) and their representational counterparts in Kubernetes.

Nodes

All nodes in a Kubernetes cluster must be configured with a container runtime, which is typically Docker. The container runtime starts and manages the containers as they’re deployed to nodes in the cluster by Kubernetes. Your applications (web servers, databases, API servers, etc.) run inside the containers.

Each Kubernetes node runs an agent process called a kubelet that is responsible for managing the state of the node: starting, stopping, and maintaining application containers based on instructions from the control plane. The kubelet collects performance and health information from the node and from the pods and containers it runs, and shares that information with the control plane to help it make scheduling decisions.

The kube-proxy is a network proxy that runs on each node in the cluster. It maintains the network rules that let traffic reach pods, and it load-balances requests for a service across the pods behind it.

The basic scheduling unit is a pod, which consists of one or more containers that are guaranteed to be co-located on the same host machine and can share resources. Each pod is assigned a unique IP address within the cluster, allowing the application to use ports without conflict.

You describe the desired state of the containers in a pod through a YAML or JSON object called a Pod Spec. These objects are passed to the kubelet through the API server.
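A minimal Pod Spec might look like the following sketch; the pod name, label, and image are hypothetical placeholders:

```yaml
# A minimal Pod Spec: one container running a stock nginx image.
apiVersion: v1
kind: Pod
metadata:
  name: my-web-pod
  labels:
    app: my-web
spec:
  containers:
    - name: web
      image: nginx:1.19       # any image your application ships in
      ports:
        - containerPort: 80   # the port the container listens on
```

Saving this as pod.yaml and running kubectl apply -f pod.yaml submits the object to the API server, which schedules the pod onto a node whose kubelet then runs it.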

A pod can define one or more volumes, such as a local disk or network disk, and expose them to the containers in the pod, which allows different containers to share storage space. For example, volumes can be used when one container downloads content and another container uploads that content somewhere else.
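Here is a sketch of that download-and-share pattern using a shared emptyDir volume; the images and commands are placeholders:

```yaml
# Two containers in one pod share the emptyDir volume mounted at /data;
# emptyDir is scratch space that lives exactly as long as the pod does.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
    - name: workdir
      emptyDir: {}
  containers:
    - name: downloader
      image: busybox:1.32
      command: ["sh", "-c", "wget -O /data/page.html http://example.com && sleep 3600"]
      volumeMounts:
        - name: workdir
          mountPath: /data
    - name: uploader
      image: busybox:1.32
      command: ["sh", "-c", "sleep 3600"]   # in practice this would read /data and upload it
      volumeMounts:
        - name: workdir
          mountPath: /data
```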

Since containers inside pods are often ephemeral, Kubernetes offers a type of load balancer, called a service, to simplify sending requests to a group of pods. A service targets a logical set of pods selected based on labels (explained below). By default, services can be accessed only from within the cluster, but you can enable public access to them as well if you want them to receive requests from outside the cluster.
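A minimal service sketch that targets every pod labeled app: my-web (matching the hypothetical pod above). Because no type is set, it defaults to ClusterIP and is reachable only inside the cluster; type: NodePort or type: LoadBalancer would expose it externally:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web-service
spec:
  selector:
    app: my-web        # targets pods carrying this label
  ports:
    - port: 80         # port the service exposes
      targetPort: 80   # port on the selected pods
```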

Deployments and replicas

A deployment is a YAML object that defines a pod template and the number of pod instances, called replicas, that should run. You define the number of replicas you want running in the cluster via a ReplicaSet, which is part of the deployment object. So, for example, if a node running a pod dies, the ReplicaSet ensures that a replacement pod is scheduled on another available node.
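A sketch of a deployment asking for three replicas of the hypothetical web pod used above:

```yaml
# The Deployment's ReplicaSet keeps three pods matching this template
# running at all times, rescheduling them if a node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web
  template:
    metadata:
      labels:
        app: my-web
    spec:
      containers:
        - name: web
          image: nginx:1.19
          ports:
            - containerPort: 80
```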

A DaemonSet deploys and runs a specific daemon (in a pod) on the nodes you specify. DaemonSets are most often used to provide services or maintenance to pods. A DaemonSet, for example, is how New Relic Infrastructure gets the Infrastructure agent deployed across all nodes in a cluster.
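A stripped-down DaemonSet sketch; the agent name and image are placeholders, not New Relic's actual manifest:

```yaml
# One copy of this agent pod runs on every node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
        - name: agent
          image: example/node-agent:1.0   # hypothetical monitoring agent image
```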

Namespaces

Namespaces allow you to create virtual clusters on top of a physical cluster. Namespaces are intended for use in environments with many users spread across multiple teams or projects. They assign resource quotas and logically isolate cluster resources.
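For example, you might give each team its own namespace and cap what it can consume with a ResourceQuota; the names and limits below are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# Pods in team-a may not collectively exceed these totals.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
```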

Labels

Labels are key/value pairs that you can assign to pods and other objects in Kubernetes. Labels allow Kubernetes operators to organize and select a subset of objects. For example, when monitoring Kubernetes objects, labels let you quickly drill down to the information you’re most interested in.
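Labels live under an object's metadata and can be any key/value pairs you choose. The pod below carries three hypothetical labels; an operator could then select it with a label query such as kubectl get pods -l environment=production,tier=frontend:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labeled-pod
  labels:
    app: my-web
    environment: production
    tier: frontend
spec:
  containers:
    - name: web
      image: nginx:1.19
```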

Stateful sets and persistent storage volumes

StatefulSets give pods stable, unique identities that survive rescheduling, which matters when you need to move pods to other nodes, maintain networking between pods, or persist data between them. Similarly, persistent storage volumes provide storage resources for a cluster, and pods can request access to them as they're deployed.
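A minimal sketch of a PersistentVolumeClaim (the size and access mode are illustrative); a pod references the claim by name in its volumes section, and a StatefulSet can generate one such claim per pod through its volumeClaimTemplates field:

```yaml
# Requests 5Gi of durable storage, mountable read-write by one node at a time.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```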

What are containers, where does Kubernetes fit in, and what tools do you need for a successful implementation?

Container use is exploding right now. Developers love them and enterprises are embracing them at an unprecedented rate.

If your IT department is looking for a faster and simpler way to develop applications, then you should be considering container technology. But what are containers and what problems do they address? Where does Kubernetes fit into the container and cluster management space? Why is it presenting enterprises with implementation challenges? And, what considerations should you bear in mind as you explore whether containers and cluster management tools are right for your application development needs?

Here are some essentials that every enterprise needs to know about containers, container cluster management, the pros and cons of Kubernetes, and how to get the most out of your Kubernetes deployment.

What Enterprise Container Cluster Management Solutions Are Available?

There are many options for container cluster management. Kubernetes, however, is winning the container war and is now the most widely used open source solution. Built on 15 years of Google's experience running containers at scale, and backed by an enviable open source community (including Red Hat, Canonical, CoreOS, and Microsoft), Kubernetes has matured faster than any other product on the market.

Kubernetes hits the sweet spot for container cluster management because it gives developers the tools they need to quickly and efficiently respond to customer demands while relieving the burden of running applications in the cloud. It does this by eliminating many of the manual tasks associated with deploying and scaling containerized applications, so that your software runs more reliably when it moves from one environment to another. For example, you can schedule and deploy any number of containers onto a node cluster (across public, private, or hybrid clouds), and Kubernetes then does the job of managing those workloads so they do what you intend.

Thanks to Kubernetes, container tasks are simplified, including deployment operations (horizontal auto-scaling, rolling updates, canary deployments) and management (monitoring resources, application health checks, debugging applications, and more).
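As one concrete example, horizontal auto-scaling can be declared with a HorizontalPodAutoscaler. This sketch targets the hypothetical my-web deployment from earlier; clusters of this article's vintage use the autoscaling/v2beta2 API group, while newer clusters use autoscaling/v2:

```yaml
# Scales the my-web Deployment between 2 and 10 replicas,
# aiming for 80% average CPU utilization across its pods.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```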
