Top Ten Challenges - Part 4: Namespace isolation
(This is part 4 of a series about challenges and considerations that I have come across with customers when deploying software into Kubernetes. You can find the intro post here.)
This post is at least somewhat related to the previous one (about privileges and admin access), and it is also related to the next one (about tenancy). But it still deserves a separate discussion, in my opinion.
"In Kubernetes, namespaces provides a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc)."
It's all about isolating things, so that disparate workloads can run on a cluster without interfering with each other. As I mentioned in the previous post, I see many companies running large, shared clusters, and achieving as much isolation as possible is critically important for the high availability and resilience of the environment. This leads directly to the rule that a workload must live in one namespace, and one namespace only, and cannot have any authorization to touch anything outside of that namespace. In Kubernetes, it is relatively easy to find out whether an application violates that rule, simply by looking at the resources that are being created and their scope. Again, a key question here is whether an application uses any cluster roles, or only namespace-scoped roles.
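To make that concrete, here is a minimal sketch (all names are hypothetical) of a namespace-scoped Role next to a ClusterRole. The presence of the latter in an application's install artifacts is usually the first thing to look for:

```yaml
# Namespace-scoped Role: grants access only within "my-app" (hypothetical names)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: my-app
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps"]
    verbs: ["get", "list", "watch"]
---
# ClusterRole: note the missing namespace field; it can grant access cluster-wide
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: app-cluster-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```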
One interesting challenge we had to find a solution for is that Cloud Paks all share a set of foundational services, and these services run in a separate namespace. So we needed a way to allow access to resources across these namespaces, but without using any cluster roles. In other words, there is a "target" namespace and a "source" namespace: the target namespace is the one in which we want to make resources accessible to workloads living in the source namespace. The trick we use to support this is to create a role in the target namespace, and to create a role binding for it that associates it with a service account in the source namespace. Workloads in the source namespace that use that service account then have access to resources in the target namespace because of that binding.
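A minimal sketch of that pattern, with made-up namespace and account names: the Role and the RoleBinding both live in the target namespace, but the binding's subject points at a service account in the source namespace.

```yaml
# Role in the target namespace, defining what may be accessed there
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: shared-services-access
  namespace: foundational-services   # the "target" namespace
rules:
  - apiGroups: [""]
    resources: ["services", "configmaps"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding in the same target namespace; its subject lives elsewhere
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: shared-services-access-binding
  namespace: foundational-services
subjects:
  - kind: ServiceAccount
    name: cloud-pak-sa               # service account in the "source" namespace
    namespace: cloud-pak-app
roleRef:
  kind: Role
  name: shared-services-access
  apiGroup: rbac.authorization.k8s.io
```

Note that no ClusterRole or ClusterRoleBinding is involved anywhere: the grant is bounded by what the Role in the target namespace allows, and nothing more.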
My colleague Mike Kaczmarski has written an excellent blog about this, which is unfortunately only accessible inside IBM, but I'll try to get him to make it public and will then share it here. Going forward, I think it will be important to have a first-class way of allowing controlled access between resources in specific namespaces, as opposed to how it works today, where you have either namespace scope or cluster scope.
Besides the security aspects, namespaces are generally useful for giving a cluster structure, so to speak. As described in the opening definition above, many resources in Kubernetes are scoped to a namespace. In OpenShift, namespaces are also called "projects", by the way. The OpenShift console lets you view resources by individual namespace, and almost all of its screens have a namespace filter at the top. There is a filter for "all projects", but I hardly ever use it.
Another interesting aspect of namespaces is the ability to apply resource quotas to them. Effectively, this lets you control how much of the cluster's capacity can be used by the resources in a namespace, which prevents an individual application from monopolizing the cluster. Our software defines resource requests and limits at the pod level, but we have often discussed removing those and using namespace quotas instead. We'll see where it lands.
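As a quick sketch (the namespace name and the numbers are made up), a quota like the following caps the aggregate requests and limits of everything running in a namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: my-app            # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"          # total CPU requests across all pods in the namespace
    requests.memory: 8Gi
    limits.cpu: "8"            # total CPU limits across all pods
    limits.memory: 16Gi
    pods: "20"                 # cap on the number of pods
```

One wrinkle worth knowing: once a quota covers compute resources, Kubernetes rejects pods in that namespace that don't declare requests and limits, unless a LimitRange fills in defaults for them.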
However, there are cases where namespaces fall short in providing control over resources in a narrowly scoped way, namely for those resources that are simply not namespace-scoped to begin with. The most important example of this, one which we come across all the time, is the CustomResourceDefinition (CRD). We use CRDs extensively through our use of operators, and they represent abstractions of components within our workloads. When installing an operator, you can choose whether to install it for all namespaces or for just a single namespace. However, the CRD that comes with the operator will always be at cluster level. That makes it especially difficult to run multiple, incompatible versions of an operator on the same cluster. For example, we have come across cases where different applications use PostgreSQL as their database, but at different versions, which creates a problem of incompatible CRDs between them. I believe allowing CRDs to be namespace-scoped is on the to-do list for the Kubernetes community.
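To illustrate (with a made-up group and kind, not any actual operator's CRD): even when the custom resources themselves are namespaced, the definition that describes them is a cluster-wide singleton, which is exactly where the version conflicts arise.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: postgresclusters.example.com   # no namespace field: CRDs are cluster-scoped
spec:
  group: example.com
  scope: Namespaced                    # instances are namespaced; the definition is not
  names:
    plural: postgresclusters
    singular: postgrescluster
    kind: PostgresCluster
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

Two operators that each expect their own schema for postgresclusters.example.com cannot coexist, because there is only one such object in the cluster.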