Partitioning a system into smaller units with boundaries has been done for decades to meet security requirements, reduce administration overheads and control resource utilisation. Clustering resources is an effective mechanism for providing larger shared environments and resilience.
While containers provide a coherent experience and alleviate the tight coupling of software installations (.deb and .rpm packages, for example), it is the systems that orchestrate these containers that provide the cohesion for a production-grade containerised application.
There have been various products in this space, such as Mesos, Docker Compose and Swarm, as well as proprietary ones like AWS ECS. However, when Kubernetes was released in 2015, it soon became the de facto standard for container orchestration.
History and adoption
Kubernetes was conceived and designed by Google, which is unsurprising given their history with containerisation: from contributing cgroups to the Linux kernel to running Borg (Google's internal container management system) in production. Decades of learnings have helped create this platform for container orchestration, with the aim of improving programmer productivity and system management.
Being configuration driven, open source and very pluggable (I will discuss this later on), Kubernetes remains one of the most loved and sought-after platforms in the community. I like to believe that the success of Go (a language open sourced by Google) has contributed greatly to the adoption of Kubernetes, which is written in Go.
We have many clients conducting business in various industries. There are startups, growth companies, service providers and large enterprises, all operating under different models. In my colleague Erik Danielsen's earlier blog post, he explored this technology in terms of the perceived value proposition from the perspective of some of these clients, but here I'd like to share some of our own learnings and insights from being in the trenches, delivering value on Kubernetes.
Where we have seen the biggest impacts
Shift in team dynamics – organisational structure
With an evolutionary design, incremental software delivery and a devops approach, we tend to prescribe leaner teams comprising mostly people who can traverse both application and infrastructure concerns. Kubernetes tends to pull in the opposite direction: specialist expertise is required. Dedicated maintenance, discovery of deployment patterns and integration with hosting providers become an additional stream of work.
Specialist roles with significant Kubernetes expertise are now required, either to bootstrap the platform or to act as cluster administrators. This has a direct impact when establishing a team with a client or providing recommendations for a product. An investment in these specialist roles is required while managed platforms emerge and mature. For larger enterprises this typically results in forming dedicated teams (platform services, for example) that build a PaaS for the organisation. This may not be suitable for startups and businesses with specific technology needs.
Building shared understanding
The terminology and concepts are new to most teams, and much of the platform is declarative in nature. While we prefer to achieve more by doing less, with the focus on business goals, Kubernetes introduces a steep learning curve. Understanding the platform, its numerous components and the controllers with external dependencies is not a skill set product teams tend to possess.
There is a longer lead time before an operating environment exists for delivering software applications. Provisioning a Kubernetes cluster with compute resources and bare-minimum controls (role-based access control, some networking) is a significant effort in itself. It is also critical to note that, contrary to common perception, Kubernetes is not an all-inclusive PaaS. It provides only the building blocks, and some features, as the base layer for developer platforms.
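To make "bare-minimum controls" a little more concrete, here is a sketch of the kind of RBAC configuration involved, with hypothetical namespace, role and user names:

```yaml
# Hypothetical namespace-scoped Role granting read-only access to pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]            # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# Binds the Role to a (hypothetical) user within the same namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Multiply this by every team, namespace and resource type and the provisioning effort adds up quickly.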
Explosion of tools – pluggability in nature
Kubernetes is not monolithic, and custom objects can be built for customised features. Additionally, everything in the system is driven by YAML configuration. This has opened the way for developers and companies to discover and create patterns that are now considered standard tools for the functionality they provide. While they all have a purpose, there are currently too many for teams to grasp and incorporate. This carries the risk of diverting focus from the original business problem.
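To illustrate what "custom objects" means in practice, here is a sketch of a CustomResourceDefinition (all names hypothetical) that teaches the API server a new resource type:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com   # must match <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string
```

Once applied, `kubectl get backups` works like any built-in resource; a controller watching these objects supplies the actual behaviour. This is the mechanism behind much of the tool explosion.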
For example, there are installers to provision/configure kubernetes as well as software components that can be installed in your cluster to manage authentication, networking, monitoring, logging and security. Many of them come with their own tools and web components to set up and manage.
CI/CD
The CI/CD workflows chosen by different organisations have resulted in their own explosion of tools. Maintaining a working understanding of these tools and their evolution is yet another thing to stay on top of.
An example of this is Helm, a tool that helps you manage Kubernetes applications via charts. Charts help you define, install and upgrade applications. Until recently Helm included a server-side component called Tiller, which brought its own state management and security overheads; it was retired with Helm 3. Alternatives include Draft, Gitkube, Skaffold and various others.
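For orientation, a chart is mostly just more YAML. A minimal, hypothetical Chart.yaml at the root of a Helm 3 chart might look like:

```yaml
# Chart.yaml — apiVersion: v2 denotes a Helm 3 chart (no Tiller involved)
apiVersion: v2
name: example-app
description: A hypothetical chart packaging a single web service
type: application
version: 0.1.0        # version of the chart itself
appVersion: "1.2.3"   # version of the application it deploys
```

`helm install my-release ./example-app` would then render the chart's templates and apply the resulting manifests to the cluster.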
Plugins
After three Kubernetes engagements, I only recently discovered kubectl plugins.
Plugins are typically built on top of core kubectl to achieve more complex behaviour. Any executable on your PATH whose name begins with `kubectl-` is considered a valid kubectl plugin. This has given rise to various tools for support and SRE roles. Krew is a centralised index of such plugins.
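The discovery mechanism really is that simple. A minimal sketch (the plugin name and directory here are made up): any executable named `kubectl-<something>` on PATH becomes invocable as `kubectl <something>`.

```shell
# Create a directory for plugins and put it on PATH
mkdir -p "$HOME/.kubectl-plugins"
export PATH="$HOME/.kubectl-plugins:$PATH"

# A trivial plugin: the kubectl- prefix is what makes it discoverable
cat > "$HOME/.kubectl-plugins/kubectl-hello" <<'EOF'
#!/bin/sh
echo "hello from a kubectl plugin"
EOF
chmod +x "$HOME/.kubectl-plugins/kubectl-hello"

# kubectl would now dispatch `kubectl hello` to this executable;
# invoking the binary directly shows the same output
kubectl-hello
```

No registration step, no manifest: the naming convention is the whole contract.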
In conjunction with the cloud
Typical modern applications include stateless microservices and the workloads that support them. Databases, caching systems, queues and log management are components that developers would rather not maintain and prefer to have managed by a provider. This naturally creates an affinity towards certain providers. Kubernetes, by contrast, abstracts away the infrastructure and the provider, even while a large part of the solution remains vendor oriented. It can orchestrate your workloads in a vendor-agnostic manner, on premise or at the edge, but the workloads are just one part of the solution. Expertise in both the cloud providers and Kubernetes is now required.
What has worked really well
Immutable and experimental infrastructure
With Kubernetes, immutable infrastructure has shifted from virtual compute resources to declarative objects like pods, services and ingresses, enabling stable upgrades and rollbacks at a much faster pace and thereby reducing downtime. A deployment is merely the application of YAML files.
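For instance, a rolling upgrade is driven entirely by a declaration like the following (image and names hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.4.2   # changing this tag triggers a rolling update
        ports:
        - containerPort: 8080
```

`kubectl apply -f deployment.yaml` rolls a change out incrementally, and `kubectl rollout undo deployment/web` reverts it, with no mutation of the underlying machines.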
The platform natively provides service discovery and load balancing. There are no port-management hassles, and applications are accessed via services. Configuration of applications and inter-service communication has benefited immensely from this.
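Concretely, a Service gives a set of pods a stable name and virtual IP, and in-cluster clients simply resolve a DNS name (names here hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # routes to any pod carrying this label
  ports:
  - port: 80          # stable port clients connect to
    targetPort: 8080  # container port behind it
```

Inside the cluster this service is reachable as `web.<namespace>.svc.cluster.local` (or just `web` from within the same namespace), regardless of which nodes or ports the pods happen to land on.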
Once past the initial investment in setup and client upskilling, our teams have achieved on-demand deployments with a high degree of confidence. We have had rapid release cycles, releasing software safely multiple times an hour. By hosting multiple environments in a single cluster, with the capability to spawn environments on demand, the infrastructure can be leveraged to its full potential.
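One way to realise multiple environments per cluster is a namespace per environment; a minimal sketch, with hypothetical names and labels:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: preview-pr-123   # hypothetical on-demand environment for a pull request
  labels:
    env: preview
```

Applying the same application manifests into different namespaces yields isolated copies of the stack, and deleting a namespace tears its whole environment down in one step.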
For when you need complete control
By abstracting away the infrastructure concerns, one of the biggest benefits we have observed is the ability to deploy anywhere. The same workloads can be deployed to infrastructure on premise, in the cloud, in a hybrid model or even at the edge. It also works well when you need to be able to configure anything: for example, plugged-in private networks, software with legacy integration needs, and corporate proxies.
Summary
There are quite a few moving parts, and some components in the ecosystem are inevitably 'beta', or at least tagged so. This is less desirable for software that needs to adhere to assurance standards, and it has caused hindrances in my experience.
Having expertise in the platform along with all the tools, their nuances and keeping up with their evolution can be a massive upskilling challenge. We feel that this vibrant, yet fragmented aspect of the ecosystem is an indication of its still growing maturity.
A good reason to consider Kubernetes is when a critical mass of workloads is developed or maintained by numerous teams, such that the benefits outweigh the cumulative cost of productionising software. The ideal companions are an experimental mindset and the buy-in to evolve constantly with the ecosystem.