Demystifying Kubernetes — 4 Reasons Why It’s So Successful



This post is the first in a multi-part series. In this series, I want to dig into Kubernetes, but I am not planning to talk about technical details like Deployments, Pods, the kubelet, or kube-proxy. What I want to discuss is why Kubernetes matters, how it relates to modern applications and to developers, and why it succeeded in the first place.


Let me first start by defining what Kubernetes is (yes I gotta do that, don’t laugh).

Kubernetes is a container orchestrator, or at least that is how we generally think of it. Container orchestration was the reason Kubernetes existed in the first place (there is a great documentary in the references section; watch it if you haven't yet).

It was designed as a container orchestrator and was announced at a Docker conference alongside many other container orchestrators. We could even call it a Docker orchestrator in those days, considering how dominant Docker was.

What made Kubernetes survive among other container orchestrators like Docker Swarm, Mesos, Nomad, and many others? It did not only survive; it dominated the Cloud Native era. The masses even moved away from many Docker products, and Kubernetes has become the biggest flagship project of the Cloud Native universe.

From my point of view, these four reasons are the main parts of the Kubernetes success story: being open sourced and supported by a large community (again, watch the documentary for how it was open sourced), a microservice-esque distributed architecture, extensible functionalities and components, and an optimal design for the problems in its target domain. Kubernetes embodies what cloud native means, and it is no surprise that it reads like the definition of a modern application.

Nowadays, it is not only a container orchestrator. It is also a virtual machine orchestrator (see KubeVirt), a WebAssembly orchestrator (see crun, Krustlet), a cloud infrastructure orchestrator (see Google Config Connector, Amazon Controllers, Azure Service Operator, Crossplane, Cluster API), a continuous delivery orchestrator (see Flux, Argo CD), a continuous integration orchestrator (see Tekton, Argo Workflows), a serverless application orchestrator (see Knative, Azure Container Apps, Cloud Run for Anthos), and many more.

I will try to explain the four reasons why Kubernetes is a success story.

Being Open Sourced and Supported By The Large Community

The emergence of containers gave us the flexibility to deploy our applications to any environment that has a container runtime. With containers, we gain many capabilities, including portability, immutability, and more.

There are many articles explaining their benefits, and that is a whole other story. After the big bang of containers, when Docker dominated the ecosystem, there was a need for container orchestration, and many container orchestrators appeared. Kubernetes was one of them.

Kubernetes was invented by Joe Beda, Brendan Burns, and Craig McLuckie. Its implementation was initially accelerated by Google's experience running containers at scale. After this bootstrapping phase, Google decided to make Kubernetes an open-source project.

The decision by Google to make Kubernetes an open-source project paid off enormously. This decision is one of the reasons Kubernetes stood out among the other orchestrators. The CNCF was also founded as a Linux Foundation project alongside Kubernetes 1.0 in 2015 to support container technology. Kubernetes was donated to the CNCF, and the CNCF became the decision-maker on Kubernetes.

Nowadays the CNCF is home to many popular open-source projects in addition to Kubernetes, which remains one of the most active projects on GitHub. The community gathered around Kubernetes solved lots of problems quickly. The projects mentioned above (like Keda, Knative, Argo, Flux, etc.) are all community-driven, and most of them were started by a company and later donated to the CNCF. The Kubernetes donation process has become a great role model for new projects, and many have followed the same path.

Microservice-esque Distributed Architecture

Kubernetes doesn’t have a monolithic architecture; it is composed of multiple components. The control plane has the Kube controller manager, Kube scheduler, Kube API server, and cloud controller manager, with etcd as the state store. The worker nodes run the Kube proxy, a container runtime, and the kubelet. There is great harmony between these parts.

The Kube API server is the main component of Kubernetes. All desired-state read and write operations are processed by this server.

When you create a new resource through kubectl, the request is accepted by the Kube API server and the resource is stored in etcd. However, the Kube API server is not the one that applies the desired state to the worker nodes. That is the responsibility of the Kube controllers.

If the resource is a Deployment, the deployment controller is responsible; it periodically checks whether the pod count matches the desired state. If the resource is a StatefulSet, the statefulset controller is responsible.
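The reconcile loop at the heart of each controller can be sketched in a few lines of Python. This is a conceptual sketch, not real controller code; the function and action names are illustrative:

```python
def reconcile(desired_replicas, observed_pods):
    """One pass of a controller's reconcile loop: compare the desired
    state with the observed state and return corrective actions."""
    diff = desired_replicas - len(observed_pods)
    if diff > 0:
        # Too few pods: ask the API server to create the missing ones.
        return [("create-pod", i) for i in range(diff)]
    if diff < 0:
        # Too many pods: delete the surplus.
        return [("delete-pod", pod) for pod in observed_pods[:-diff]]
    return []  # Observed state already matches; nothing to do.

# A real controller runs this periodically and on every relevant event.
actions = reconcile(desired_replicas=3, observed_pods=["pod-a"])
```

The key idea is that the controller never issues imperative one-off commands; it repeatedly converges the observed state toward the desired state, so missed events or crashes are self-healing.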

These controllers are in charge of their resources, but they do not send pod-creation commands directly to the worker nodes (to the kubelet, specifically) either. They send their commands to the Kube API server, and the Kube scheduler decides which worker node is a suitable place for the pod. This cooperation and distribution of work keep each component responsive and prevent a single point of failure. Even if the Kube API server is unavailable, your existing workloads continue to run.

Because of this distributed architecture, many of these components can easily be replaced with different ones depending on the requirements. For example, the Kube scheduler can be replaced with another scheduler that uses a different scheduling algorithm (there are links to sample schedulers in the references). The Kube API server cannot be replaced with another API server, but it is extensible with custom resources and the aggregation layer (which I will talk about next).
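To illustrate what a replacement scheduler might do differently, here is a minimal Python sketch that places a pod on the node with the most free CPU. This is purely illustrative; real schedulers filter and score nodes through the scheduler framework, considering far more than CPU:

```python
def schedule(pod_cpu, nodes):
    """Pick a node for a pod: filter out nodes without enough free CPU,
    then score the rest by free capacity and take the best one."""
    feasible = {name: free for name, free in nodes.items() if free >= pod_cpu}
    if not feasible:
        return None  # No node fits: the pod stays Pending.
    return max(feasible, key=feasible.get)

# Free CPU per node, in millicores (invented sample data).
nodes = {"node-a": 500, "node-b": 2000, "node-c": 100}
```

A custom scheduler with a different policy (bin-packing instead of spreading, say) only has to change the filter and score steps; the rest of the cluster keeps working unchanged, which is exactly why swapping the scheduler is feasible.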

Extensible Functionalities and Components

Kubernetes is not a highly opinionated tool; it is designed to be the opposite. Being unopinionated is one of the reasons why Kubernetes succeeded. (By the way, being opinionated is neither good nor bad in itself; it is a choice, and it should be evaluated by its consequences.) Kubernetes can be used to build higher-level platforms and opinionated products.

The most common way to extend Kubernetes functionality is by extending the Kube API server and the Kube controllers.

There are two ways to extend the Kubernetes API: the aggregation layer and custom resource definitions (for details, see the links in the references below).

The more popular of the two is CRDs, because of their easier implementation and maintenance. CRDs, together with custom controllers, form the Operator pattern. Custom controllers typically run in the cluster like any other workload; they subscribe to Kubernetes events and wait for events about their resources. When an event is published, they do their job. In fact, many of the tools mentioned above (like Argo CD, Flux, Keda, Knative, Crossplane, and more) are operators.
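The skeleton of such a custom controller is a watch loop: subscribe to events about your custom resource and reconcile on each one. A hedged Python sketch follows; the event shape and handler names are invented for illustration, not taken from any client library:

```python
def run_operator(event_stream, reconcile):
    """Core loop of an operator: react to every event about its
    custom resources by reconciling toward the desired state."""
    results = []
    for event in event_stream:
        # Each event carries a type (ADDED/MODIFIED/DELETED) and the object.
        results.append(reconcile(event["type"], event["object"]))
    return results

def reconcile(event_type, obj):
    if event_type in ("ADDED", "MODIFIED"):
        return f"ensure resources for {obj['name']}"
    return f"clean up resources for {obj['name']}"

events = [
    {"type": "ADDED", "object": {"name": "db-1"}},
    {"type": "DELETED", "object": {"name": "db-1"}},
]
```

Real operators get this loop (plus caching, retries, and re-listing) from frameworks like controller-runtime, so the author only writes the reconcile function.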

There are many other ways to extend Kubernetes functionality. For example, to orchestrate workloads beyond containers, other node agents can be used. Krustlet is a great example: it manages WebAssembly workloads and can run on a subset of worker nodes while the kubelet runs on another subset.

etcd is the most challenging component to replace with a similar technology. But even etcd can be replaced, as the K3s team has shown by using SQLite for their lightweight Kubernetes distribution. Replacing etcd is not a straightforward process, by the way, but a big community can solve big problems (see Kine).

If you need extra functionality, you can also deploy addons like the Kubernetes Dashboard, network policy providers, and CoreDNS. You can further extend your clusters by adding other components using Kubernetes Deployments, DaemonSets, and StatefulSets. Log shippers, metrics agents, telemetry agents, API gateways, service meshes, and many other components can be deployed alongside your workloads according to your needs.

Dynamic admission control is another way to extend Kubernetes functionality. Kubernetes policies, including Pod Security Policies, RBAC policies, and network policies, are also used to extend Kubernetes functionality and to security-harden your clusters.
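Conceptually, a validating admission webhook is just a function that inspects an incoming object and admits or rejects it before it is persisted. A simplified Python sketch of such a check follows; the required "team" label is an invented policy for illustration, not a Kubernetes default:

```python
def validate_pod(pod):
    """Admit or reject a pod, mimicking what a validating admission
    webhook might enforce: require a 'team' label and forbid
    privileged containers."""
    labels = pod.get("metadata", {}).get("labels", {})
    if "team" not in labels:
        return (False, "missing required label: team")
    for container in pod.get("spec", {}).get("containers", []):
        if container.get("securityContext", {}).get("privileged"):
            return (False, f"privileged container not allowed: {container['name']}")
    return (True, "allowed")

compliant_pod = {
    "metadata": {"labels": {"team": "payments"}},
    "spec": {"containers": [{"name": "app", "securityContext": {}}]},
}
```

In a real cluster this logic would sit behind an HTTPS endpoint registered via a ValidatingWebhookConfiguration, and the API server would call it on every matching create or update request.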

There is a great document in the references on Kubernetes extension patterns.

Optimal Design To Handle The Problems In Its Target Domain

Kubernetes is, in a way, an abstraction tool. It hides container management operations and infrastructure details. Container orchestration includes managing the lifecycle of containers, providing access to containers from both inside and outside the cluster, provisioning persistent volumes when needed, supplying necessary configuration data, providing a deployment procedure, and so on. Kubernetes defines the problems in this area very well, uses existing best practices, and comes with the right solutions.

Kubernetes doesn’t overengineer or push the limits. It leaves many areas to adopters and the community. To give one example, Kubernetes doesn’t come with an observability stack; because of this, each cloud vendor can use its own solution, and there are also many open-source alternatives for observability, even under the CNCF umbrella, that can be used with Kubernetes. The same approach applies to message-streaming products and the like.


I have talked about the Kubernetes success story, but as we all know, there is no one-size-fits-all solution in the software industry.

The Kubernetes success story is real, but where do we stand in our own Kubernetes adoption stories? Kubernetes is a complex tool that is hard to learn for many of us, despite abstracting away many complex infrastructure details. Is it necessary for all our IT teams to master it to perfection? Is it necessary for all of them to use it in their daily work?

My next post will be about tackling complexity in the Cloud Native space, where I will talk about opinionated platforms, the platform engineering paradigm, the infrastructure-as-product concept, application development platforms, and such. As Kelsey Hightower put it nicely in a tweet: “Kubernetes is a platform for building platforms. It’s a better place to start; not the endgame.”


Demystifying Kubernetes — 4 Reasons Why It’s So Successful was originally published in Better Programming on Medium, where people are continuing the conversation by highlighting and responding to this story.
