I’m a Kubernetes contributor. You might expect, given that background, that I’m going to recommend it in every consultancy engagement. Truth be told, though, even for firms that are using containers, Kubernetes is only one of several options.
In one sense, Kubernetes’ architecture is built around a really extensible REST-like API that provides basic CRUD (create, read, update, delete) operations. Developers are likely to value Kubernetes for its thoughtfully designed built-in APIs that work together to make simple deployments straightforward and complex ones possible.
To deliver that API experience, you (ideally, with help from a cloud provider) need to put the time in to deploy and manage a working Kubernetes cluster. As a technical lead for Kubernetes' documentation SIG, I’m confident I can explain what’s involved and how much effort it takes.
Here are five points that give me a strong hint that a Kubernetes platform suits your infrastructure. Whether you’re a big business or just starting up, these details might be a sign that Kubernetes is right for you.
5. Topology-aware task placement
Out of the box, Kubernetes gives you sensible defaults for scheduling. Every time your workload scales out, or recovers from infrastructure-level failure, the control plane makes a new Pod. The cluster-wide scheduler is responsible for finding the best placement for that Pod, weighing up dozens of different factors.
For workloads with specialist requirements, Kubernetes has answers. For example, if you need to run groups of containers inside AWS and latency is a key factor, you can deploy worker nodes within placement groups and implement custom scheduling rules to put the tasks close to each other. Your custom scheduling rules can work as plugins into the existing scheduler. You can even write your own scheduler component and mark specific workloads to use that, rather than the default.
If resilience is a top priority then you’ll want sets of containers that share as little infrastructure as possible. The latest releases of Kubernetes include zone-aware spreading; again, if you need more customisation, there are hooks you can use.
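As a sketch of what that zone-aware spreading looks like in practice, here’s how a Deployment can ask the scheduler to balance its Pods across availability zones. The workload name, labels, and image below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web          # hypothetical workload
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      # Spread replicas evenly across zones; if a whole zone fails,
      # most of the replicas are still running elsewhere.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: registry.example/web:1.2.3   # placeholder image
```

Setting whenUnsatisfiable to ScheduleAnyway instead turns the rule into a soft preference rather than a hard requirement.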
4. Two-way scaling
If you’ve got a lot of different workloads and you’re struggling to find the right resource allocations for them, Kubernetes can help.
It’s 2021. Horizontal scaling feels like a solved problem, whether that’s for physical kit, virtual machines running in the cloud, or sets of containers.
What’s less straightforward is getting the vertical scaling right. At scale, this matters: a task that’s assigned 200 MiB more memory than it needs sounds fine for a small app, but it represents a juicy potential saving if your peak demand means thousands or millions of replicas.
With Kubernetes you have options for this. A common pattern is to have a HorizontalPodAutoscaler directly hooked up to match compute to demand. At the same time you can use a VerticalPodAutoscaler in recommender mode to notify you when a Pod template is assigning too much (or too little) resource for a workload. You can filter on the strength of these recommendations so that, for example, you only wake up the developers if there’s a sizing problem that could actually impact customer experience.
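Here’s a minimal sketch of that pattern, assuming a Deployment named web. Depending on your cluster version you may need the older autoscaling/v2beta2 API for the HorizontalPodAutoscaler, and the VerticalPodAutoscaler comes from the separate Kubernetes autoscaler project rather than the core APIs:

```yaml
# Horizontal scaling: match replica count to CPU demand.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
---
# Vertical sizing, advice only: updateMode "Off" means the VPA publishes
# recommendations but never evicts Pods to apply them.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  updatePolicy:
    updateMode: "Off"
```

Keeping the VerticalPodAutoscaler in recommender mode also sidesteps the known conflict between an active VPA and an HPA that are both reacting to the same CPU metric.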
Over in Google land, there’s an experimental MultidimPodAutoscaler. You can guess what it does: optimising resource use over different dimensions. For example, you can automate vertical scaling for memory whilst using horizontal scaling to match CPU use (and Pod count) to the level of traffic you’re handling.
Right now, MultidimPodAutoscaler has no good equivalent on EKS (and I don’t recommend writing your own) so the best option is to use the recommender mode for vertical scaling, then make some manual tweaks. Even that is quite a lot simpler than trying to do the same optimisation on, say, AWS’ Elastic Container Service.
3. Operators
The Operator pattern is, for me, Kubernetes’ not-so-secret sauce. It’s a big innovation that looks deceptively straightforward. Essentially, you get to take your own workload settings and your own business outcomes and define those via the same API machinery that makes Kubernetes work.
For the Kubernetes documentation, I made up an example operator called “SampleDB” that’ll give you a taste of what you could automate.
Whilst you can build that kind of automation with any tooling (such as AWS Step Functions), there’s likely to be friction between your own API and your cloud provider’s. Kubernetes Operators let you offer a common interaction style that cuts down cognitive load for developers and engineers.
Operators are a really powerful pattern that other orchestration technologies struggle to match. You can take almost any kind of operational toil and use this pattern to simplify it, perhaps even eliminate it. Sometimes that payoff is so strong it’s worth switching to Kubernetes just to get this one benefit.
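To make that concrete, here’s a hypothetical custom resource in the spirit of that SampleDB example. The schema is invented for illustration; a real operator would define it via a CustomResourceDefinition. The developer declares the outcome they want, and the operator does the toil:

```yaml
apiVersion: example.com/v1alpha1   # hypothetical API group and version
kind: SampleDB
metadata:
  name: orders-db
spec:
  replicas: 3          # the operator keeps this many database members running
  storage: 100Gi       # ...each with this much persistent storage
  backups:
    schedule: "0 3 * * *"   # nightly backups, declared as an outcome
```

The operator watches for SampleDB objects and continually reconciles real infrastructure (StatefulSets, volumes, backup jobs) to match what each one declares.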
2. You’ve outgrown a simpler option
Kubernetes is complex. Yep, it is. That complexity supports flexible and extensible ways to deploy containerised workloads.
If you’re already running containers using a different orchestration solution, and the limits are showing, it could be time to think about switching. The great news is: you can keep your app images. Of course you can. Using containers lets you change the system that you use to run them with about as little friction as there could be.
If you’re using AWS ECS with Fargate then you can only scale up so far. ECS tasks on Fargate can be chunky – up to 32 GiB and 4 virtual CPUs at the time this article was published – enough to meet the needs of a typical containerised app. With EC2 you can run tasks with lots more local CPU and / or memory, and Kubernetes can use all of that.
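For illustration, here’s a sketch of a hypothetical Pod that wouldn’t fit within that Fargate ceiling, but schedules happily onto a large EC2 worker node:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: big-batch-job        # hypothetical workload
spec:
  containers:
    - name: worker
      image: registry.example/worker:2.0   # placeholder image
      resources:
        requests:
          cpu: "16"          # well beyond Fargate's 4 vCPU maximum
          memory: 96Gi
        limits:
          memory: 96Gi
```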
You could switch from using Fargate launch to ECS with self-managed EC2 instances, but if that’s on the cards then Kubernetes with EKS should be as well.
Consider whether you want to switch over service-by-service, as a wholesale swap, or something else. The biggest influence on this will be the organisational design you want to have at the end of the migration, and how big a shift that is from where you are today.
1. You’re already cloud native
Buying into the cloud native paradigm means your workload takes for granted that it’s being deployed into a dynamic environment. Infrastructure doesn’t need to change once deployed, because you favour graceful replacement – but you do use automation for rolling out updates as needed. Your own APIs are declarative where possible, using modern authentication, and sharing common implementations.
Maybe you didn’t need me to tell you all that. If your cloud adoption story is this mature, Kubernetes is likely to be a nice fit for your workload and your organisation, and you might feel like you’re missing out if you pick something simpler.
I really like both the promise and the reality of Kubernetes. It’s still improving at an impressive velocity. Whilst there are always improvements just round the corner, for many workloads it’s a good choice, today.
Other options
Kubernetes isn’t the only game in town. You don’t even have to pick a solution that uses containers to be able to call your platform “cloud native”. Some of the more prominent alternatives that do use containers are:
- Amazon Elastic Container Service – especially with Fargate launch
- Appfleet Cloud Docker Hosting for a compute-focused, no-frills cloud offering
- Azure Container Instances
- Docker Engine in swarm mode (on your own kit)
- Google Cloud Run
- StackPath Containers; like Appfleet, their selling point is keeping latency low from the client to the compute
You can also choose between vanilla Kubernetes or a platform that builds round it. As both of those involve picking Kubernetes, I won’t cover that any further here.
What to pick
With at least 17 ways to run containers on AWS, let alone the other vendors in the market, you’re spoilt for choice.
Even if none of those five reasons I listed above rang true, Kubernetes and the features it offers could still be right for you. It’s hard to think of an infrastructure challenge where you would actually 100% rule out Kubernetes – in my view, it’s that flexible.
Do you need expert advice on Kubernetes? We are a Kubernetes Certified Service Provider and have a wealth of experience with Kubernetes, EKS, and containers. Book a Kubernetes review today.