Kubernetes, it’s everywhere!

Over the last two years, I’ve worked for four different clients (across the retail, education and finance industries) and three of those are either already using Kubernetes (k8s) or considering it. A large percentage of the current or prospective clients we talk to are in the same situation.

If we go back three years, we had hardly heard about k8s. So what has happened? 

First of all, it’s a relatively new platform: version 1 was only released five years ago, and since then it has made its way into all the major cloud platforms (AWS, GCP, Azure, Aliyun). It’s also available to run and manage yourself, e.g. in your own data centre.

While everybody seems to be talking about k8s, they have different reasons for doing so. In this blog I’ll look at the various reasons we have come across for businesses wanting to use k8s, and whether and when it makes sense.

Please note, I’m not intending to explain what Kubernetes actually is. There are many resources available for this, e.g. the official documentation.

Why are people looking at Kubernetes?

The promise of ‘managed’ infrastructure

What does ‘managed’ Kubernetes actually mean? It means that someone else manages the master node (or cluster of master nodes), which is painful to do yourself, so it saves you time and money. The master node is responsible for managing the state of the cluster (e.g. making sure that your applications are running). It does this by looking at the desired cluster state (e.g. which services should be running and how many instances of each service) and making it so. If a service dies, it will automatically start a replacement. Pretty cool!
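As a concrete illustration, the desired state is typically expressed in a YAML manifest like the sketch below (the names and image are hypothetical): a Deployment asking for three replicas of a service, which the master node will then create and keep running.

```yaml
# Illustrative Deployment manifest: this is the 'desired state' the
# master node works to maintain. Names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 3            # keep exactly 3 pods running; if one dies, start a replacement
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: example.com/orders-api:1.0.0
          ports:
            - containerPort: 8080
```

Apply it with `kubectl apply` and the master node continuously reconciles the cluster towards it; delete a pod by hand and a replacement appears.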

Before you get to a managed cluster, you will need to configure it and there are many decisions to make. Granted, there are defaults and recommendations available from the cloud provider, but these may not necessarily work for you. They are also not the same across cloud providers because they all have their own distinct flavour of k8s.

You will need to understand k8s, which is an abstraction on top of cloud infrastructure. So, unless you have people in your organisation who know and have experience with it, there is a lot to learn before you can use it effectively. Even if you have very experienced ops and devops people who understand infrastructure, k8s is a different beast. Instead of knowing how to do whatever it is you want to do, you now need to know how to do that with k8s (e.g. deploy an SSL certificate so your public API can be accessed over https).

There is value in managed k8s, but it’s not going to be realised immediately unless you already have some expertise with Kubernetes itself.

Cost savings

In my experience there are typically two main reasons businesses have for jumping on the cloud bandwagon:

  1. Time savings
  2. Cost savings

K8s feels like the next wave of this, where cloud infrastructure for many customers has become so complex and costly that they’re looking for the next level of time and cost savings. It may help here, but it’s not a quick win for the reasons I’ve laid out above. 

Over time you’ll be able to run your applications on cheaper infrastructure, as k8s optimises the use of the virtual machines that run your containers. It may also shorten your time to market once both application and infrastructure developers know how to use it effectively, as the k8s tooling is quite good (for example, better than what currently exists for AWS Fargate or ECS, which are commonly used alternatives).

Moving between cloud providers

Kubernetes is available on all the major cloud platforms (AWS, GCP, Azure and Aliyun). Even Oracle cloud has it. This makes it seem like migrating between cloud providers should be easier if our applications are running in k8s. There may be some truth in that, but it’s not a trivial task.

Applications deployed to k8s have deployment descriptors, written in YAML, that describe the various resources required, e.g.:

  • the number of containers required & container/application configuration
  • how and if the application should be exposed (e.g. to the public internet for a public API or a website)
  • how different services are allowed to communicate with each other, and
  • what services outside of kubernetes the application can access.
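As an illustrative sketch (the names are hypothetical), the ‘exposure’ and ‘communication’ parts of that list map to resources like a Service and a NetworkPolicy:

```yaml
# Expose the (hypothetical) orders-api via a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  type: LoadBalancer        # asks the cloud provider for a load balancer
  selector:
    app: orders-api
  ports:
    - port: 443
      targetPort: 8080
---
# Only allow traffic into orders-api from pods labelled app=web.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: orders-api-ingress
spec:
  podSelector:
    matchLabels:
      app: orders-api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web
```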

Some of this changes between providers because the underlying features of the cloud platforms are different; e.g. load balancers work differently between AWS and GCP, so the configuration options you can provide depend on the cloud provider. Providing invalid options doesn’t necessarily fail the deployment; the application may just behave unexpectedly, so be ready for this. You will need to revisit your deployment descriptors carefully if you move k8s applications between cloud providers.
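Load balancer behaviour, for instance, is often tuned through provider-specific annotations on the Service. A sketch (the annotation keys are real; the service name is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-api
  annotations:
    # On AWS (EKS): request a Network Load Balancer instead of the default.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # On GKE the equivalent knob is a different annotation entirely, e.g.:
    # networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: orders-api
  ports:
    - port: 443
      targetPort: 8080
```

Annotations a provider doesn’t recognise are silently ignored rather than rejected, which is exactly the ‘behaves unexpectedly’ failure mode described above.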

Cluster configuration options are also different between cloud providers. This is true even if you use Terraform, which supports creation and configuration of k8s clusters on all of the major cloud platforms: the type of Terraform resource you create is different for each platform, so moving from GCP to AWS, for example, means rewriting your cluster configuration code.
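A sketch of what that looks like in Terraform (the resource types are real; the names and referenced roles/subnets are hypothetical):

```hcl
# The same 'create a cluster' intent needs a different resource type,
# with a different schema, on each provider.

# GCP (google provider)
resource "google_container_cluster" "primary" {
  name               = "demo-cluster"
  location           = "australia-southeast1"
  initial_node_count = 3
}

# AWS (aws provider) -- not a rename of the above; the arguments differ
resource "aws_eks_cluster" "primary" {
  name     = "demo-cluster"
  role_arn = aws_iam_role.eks.arn     # EKS requires an IAM role (hypothetical reference)
  vpc_config {
    subnet_ids = aws_subnet.eks[*].id # networking must be spelled out on AWS
  }
}
```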

Finally, if your applications make use of cloud-native services outside of Kubernetes (e.g. Lambda or DynamoDB in AWS, or Bigtable and Dataflow in GCP), you need to take this into consideration, as it may require a rewrite of parts of an application. Data stored in NoSQL databases may also need to be transformed in order to be migrated to a different platform where the available data stores are different.

There is portability to an extent, but it isn’t a seamless process.

Productivity boost

Going back to a time before the cloud, there used to be a clear separation between application developers and operations/network engineers. One group of people would build applications and another would be responsible for keeping them running, making sure they could be accessed, and keeping them secure. Applications would often run on application servers (JBoss, WebLogic, WebSphere, Tomcat, etc.). Some of these application servers were so complicated that we needed specialists who knew them inside and out (which, for me at least, is a big warning sign not to use them).

The cloud changed this. Developers started learning more about infrastructure as cloud providers offered more and more services that made us toss up between solving a problem with application code or infrastructure/pre-built services. The complexity in our applications moved from code to infrastructure and orchestration of services. 

And then along came Kubernetes… Application developers are back to writing applications, as k8s abstracts away more of the underlying infrastructure (which is itself an abstraction over the hardware it runs on), and infrastructure engineers are back to being in charge of the infrastructure. Why? Because k8s is just as complicated as the application servers we used before the cloud, and specialists are needed to manage the cluster. Developers need to learn how to deploy and debug applications, but they don’t need to understand how the cluster is configured.

This brings back a clearer separation of responsibilities between developers and infrastructure specialists, which may lead to bottlenecks and delays. On the other hand, if the software engineers are well supported and the k8s clusters they deploy to are working, then it is likely to provide a productivity boost, as the tooling that comes with k8s is a time saver once you get used to it.

Don’t expect k8s to give you an immediate productivity boost unless you have people who already have experience with it (but it might over time).

Because everyone else is

Nobody says this of course, but it certainly feels like it at times. The hype factor is big right now, and we think it’s responsible for some of the interest. If you ask techies whether you should use it, they will probably say yes, because they get to learn something new. But with my consultant hat on, I’d say you need to weigh the up-front cost of people being less productive while they learn (and the resulting delay to releasing your product) against the possible long-term benefits of lower infrastructure operational costs and faster ongoing releases.

Should we be looking at Kubernetes?

We do think Kubernetes is an impressive platform, but using it in production takes a serious commitment. The tooling that comes with k8s is good, and once you have learned to use it, we find that it does speed up your time from code change to an application running in production. However, there is a lot of tooling available and a lot of new concepts to learn. Some of this tooling is also in a ‘beta’ state, where it seems to have been stuck for a long time. This doesn’t mean that it can’t be used in production, but don’t be surprised if an upgrade brings breaking changes.

We’d recommend you start small: use k8s on something non-critical and build up some experience before using it on anything important, unless you feel that you have a critical mass of expertise already… in which case, go for it!

Just keep in mind that for some teams, a more ‘managed’ alternative such as AWS Fargate (or even a non-containerised solution) will be a better option, even if it has some limitations and is a little less flashy.


© 2024 DiUS®. All rights reserved.
