The Kubernetes Operator pattern has a lot of appeal, and I’ve led a team that has written and maintained several operators over the past three years. We’ve learned a few things in the process, and I wanted to write up some thoughts on when you shouldn’t be writing an operator.

In general, Kubernetes Operators encode operational knowledge about running an application on Kube. For (typically stateful) applications of a certain complexity, it can help to have this knowledge automated, and the Kubernetes API makes this quite easy to do programmatically. Operators typically end up being written in Go, using controller-runtime and Custom Resources for their API and data storage. They’re generally used to install, upgrade, migrate, and maintain applications running in the cluster.
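To make the pattern concrete, here is a minimal sketch of a controller-runtime reconciler. It watches the built-in Deployment type as a stand-in for a custom resource; a real operator would define its own CRD and put its operational knowledge in the body of Reconcile.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// DeploymentReconciler illustrates the declarative reconcile loop: observe
// the current state of a resource and drive it toward the desired state.
type DeploymentReconciler struct {
	client.Client
}

func (r *DeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var dep appsv1.Deployment
	if err := r.Get(ctx, req.NamespacedName, &dep); err != nil {
		// The resource may have been deleted since the event fired.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// The operator's operational knowledge lives here: compare observed
	// state to desired state and create/update/delete to converge the two.
	log.FromContext(ctx).Info("reconciling", "deployment", req.NamespacedName)
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	err = ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.Deployment{}). // watch Deployments, re-queueing on change
		Complete(&DeploymentReconciler{Client: mgr.GetClient()})
	if err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```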

This is a very appealing framework and my team has loved working with it. It’s not always easy; new contributors take some time to come to grips with the nuances of working with the Kube API, controllers, CRDs, and etcd. But in exchange you get a free API, data storage, auth, RBAC, and the well-established watch-and-reconcile declarative pattern that has helped make Kube so successful.
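That free API and storage is a large part of the appeal. A custom resource type like the hypothetical one below, plus its generated CRD manifest and deepcopy code (via controller-gen), gets you a versioned REST endpoint, etcd persistence, and RBAC-enforced access without writing any server code. The Backup type and its fields are illustrative, not from a real project.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// BackupSpec is the desired state, written by users.
type BackupSpec struct {
	// Schedule is a cron expression describing when backups should run.
	Schedule string `json:"schedule"`
}

// BackupStatus is the observed state, written back by the operator.
type BackupStatus struct {
	LastRunTime *metav1.Time `json:"lastRunTime,omitempty"`
}

// +kubebuilder:object:root=true

// Backup is a hypothetical custom resource; the marker above tells
// controller-gen to treat it as a root API object when generating the
// deepcopy methods and CRD manifest.
type Backup struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   BackupSpec   `json:"spec,omitempty"`
	Status BackupStatus `json:"status,omitempty"`
}
```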

This, however, is where the trouble can begin. It’s very easy to fall into the mindset that every problem is easy to solve with this pattern: you’ve got a hammer and everything begins to look like a nail. Operators are not an application framework, and their underlying technology is not well suited to general applications.

When Not To Write An Operator

(IMHO, YMMV, etc.)

Almost All The Time

First and foremost, the default position for running an application on Kubernetes should be that you do not need an operator. Operators should be reserved for more complex applications that require a noteworthy amount of manual intervention to keep running, upgrade, or back up. This should be relatively rare; Kube itself provides what most applications need. Operators can be complex, and they add overhead to develop and maintain. You may reach that point someday, but never assume it’s a requirement at the start.

When Your Application Isn’t Relevant To Your Cluster

Operators are best for storing and acting on resources related to the Kubernetes cluster where your operator is running. Working with data that has nothing to do with your cluster will lead to continuity problems.

Kube custom resources are stored in etcd, so you cannot easily move that data to another cluster. A cluster’s etcd contains a large amount of data specific to that cluster (URLs and certificates, node info, etc.), which means that if your cluster is broken or destroyed, you cannot just take an etcd backup and spin it up in a new cluster.

You would not want to use an operator for your next web application just because it gives you a free API, storage, and RBAC.

However, the line gets blurry in the realm of operators that operate on other Kubernetes clusters. My team has taken this path, and a number of very prominent open source projects make use of the pattern: an operator in a hub cluster using custom resources to provision, configure, and manage other spoke clusters. There is some sense to this, and definitely some benefits. However, you now have a hub cluster containing fairly critical information about other clusters. With this data in etcd, it becomes much more difficult to back up, restore, or migrate, as you cannot just take an etcd backup to another cluster. You would need to write custom solutions to extract and restore only the resources relevant to your application, a non-trivial amount of work where mistakes are easy to make. This can be somewhat mitigated by tools like ArgoCD for GitOps and Velero for backup/restore, but both come with complications that require careful operator design to work effectively, and both limit what the operator can automate for you.
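For a sense of what that custom extraction involves, here is a rough sketch using the client-go dynamic client: it dumps one custom resource type to JSON with the cluster-specific metadata stripped so the objects can be re-applied to a replacement hub. The spokeclusters group/version/resource is hypothetical, and a real solution would need to handle many types, ordering, and status far more carefully.

```go
package main

import (
	"context"
	"encoding/json"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical CR describing a managed spoke cluster.
	gvr := schema.GroupVersionResource{
		Group: "example.com", Version: "v1alpha1", Resource: "spokeclusters",
	}
	list, err := dyn.Resource(gvr).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	enc := json.NewEncoder(os.Stdout)
	for _, item := range list.Items {
		// Strip fields that are only meaningful in the source cluster's
		// etcd; what remains can be recreated in a new cluster.
		unstructured.RemoveNestedField(item.Object, "metadata", "uid")
		unstructured.RemoveNestedField(item.Object, "metadata", "resourceVersion")
		unstructured.RemoveNestedField(item.Object, "metadata", "creationTimestamp")
		unstructured.RemoveNestedField(item.Object, "metadata", "managedFields")
		unstructured.RemoveNestedField(item.Object, "status")
		if err := enc.Encode(item.Object); err != nil {
			panic(err)
		}
	}
}
```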

Operators make sense to me when they’re managing an application on the same cluster, or even cloud infrastructure for that cluster. My conclusion from personal experience is that once you cross the boundary to other Kubernetes clusters, the operator pattern should not be used, and a traditional application is probably a better fit.

When You Need Scale

If you’re going to need to even think about scale, an operator is not for you. In this context, let’s assume scale means O(1000) resources. I’ve been told that with great care O(10000) might be possible, but in general I would recommend not even trying if you’re going to push to or beyond this limit.

This is mostly due to the nature of Kube custom resources being backed by etcd. You do not want to use etcd as you would a traditional database. You should assume that everything you’ve stored in etcd will need to be in memory all at once, and possibly often. List operations by label do not use indices; they literally iterate every resource, checking the labels on each. Many clients will also cache every resource you’re working with in memory.
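A small illustration of why this bites at scale: a label-filtered list looks cheap from the client side, but the apiserver satisfies it by reading every object of that kind and checking the labels on each, not by consulting an index. The ConfigMap kind and label used here are arbitrary placeholders.

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The selector below is evaluated by scanning every ConfigMap in the
	// namespace on the server side; with O(10000) objects each such call
	// is a full scan, and informer-based clients will additionally keep
	// every matching object cached in memory for the life of the process.
	cms, err := cs.CoreV1().ConfigMaps("default").List(context.Background(),
		metav1.ListOptions{LabelSelector: "app=my-operator"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("matched %d ConfigMaps\n", len(cms.Items))
}
```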

Worse still, if you start running up against the scale limits of etcd on your hardware, your Kube cluster itself, the thing that is meant to help keep your app running, is likely going down with you. You’re also competing with the Custom Resource usage of every other operator in the cluster, so etcd can be overloaded by the combination of everything running there rather than by your specific operator’s needs alone.

We’ve explored using an aggregated Kubernetes API server backed by a more suitable database, but we’ve been cautioned that this is fraught with peril, that many teams have failed in past attempts, and that it would add significant overhead to maintain going forward even if successful.

The etcd description on their website says it best:

“A distributed, reliable key-value store for the most critical data of a distributed system” (https://etcd.io/)

Not a database.

Acknowledgements

Thanks to Stefan Schimanski (buy his Programming Kubernetes book, it’s amazing!) for helping me understand these limitations. Thanks also to my team at OpenShift, as well as the SRE teams we work with, for helping me formulate these thoughts through our combined experience running real-world operators at scale.