Underestimating Complexity at the Helm

As a package manager for Kubernetes (K8s), Helm is often described as the RPM or Homebrew of container orchestration. It has gained popularity recently after being accepted as an incubating project by the Cloud Native Computing Foundation (CNCF). For a new K8s engineer, Helm Charts make it easy to stand up simple instances of K8s running specific application or database workloads.

Like many tools designed to simplify complex technologies, Helm has a simple two-part architecture: the Helm Client and the Tiller Server. A Helm Chart is a collection of files organized in a specific directory structure, whose configuration is managed together with the running instance. The Tiller Server is installed inside the cluster itself, where it interacts with the K8s APIs to install, upgrade, query or remove Kubernetes resources.

Role-based access control (RBAC) policies are assigned at the Tiller pod level, not per user, so any user with access to Tiller has access to everything Tiller can manage. Some teams work around this with separate Helm installations per role or team, but that only adds more complexity. Anything that needs privileged access rights should be treated as a potential security vulnerability: if an attacker gains the ability to run Tiller with cluster-admin permissions, they could create or delete namespaces, delete service accounts, or bring your whole cluster down.

Helm 3 will drop Tiller altogether, but Helm still won't work with key K8s components such as kube-dns, CNI providers or cluster autoscalers, to name a few. Time invested in understanding the core Kubernetes technology stack itself will always pay higher dividends than going very deep into ancillary tool technologies that come and go.
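To make the quick-start claim concrete, a typical Helm 2 session might look like the following sketch. The repository URL, chart name and release name here are illustrative assumptions, not prescriptions:

```shell
# Add a chart repository and refresh the local index
# (the stable repo URL shown here is an example for the Helm 2 era)
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update

# Install a MySQL chart as a named release (Helm 2 syntax uses --name)
helm install stable/mysql --name my-db

# Inspect the release, and eventually remove it
helm status my-db
helm delete --purge my-db
```

Each `helm install` asks Tiller, inside the cluster, to render the chart's templates and create the resulting Kubernetes resources.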
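The "specific directory structure" of a chart conventionally looks roughly like this (a sketch of the standard layout; the chart name is a placeholder):

```shell
mychart/
  Chart.yaml          # chart name, version and description
  values.yaml         # default configuration values for the templates
  charts/             # dependent (sub)charts, if any
  templates/          # Kubernetes manifest templates rendered at install time
    deployment.yaml
    service.yaml
```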
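The per-team workaround mentioned above usually means deploying one Tiller per namespace, each running as a namespace-scoped service account. A hedged sketch, with placeholder names, might look like this:

```shell
# Create a namespace and a dedicated service account for this team's Tiller
kubectl create namespace team-a
kubectl create serviceaccount tiller --namespace team-a

# Bind the account to the built-in, namespace-scoped 'admin' role
# (a production setup would likely define a narrower custom Role)
kubectl create rolebinding tiller-team-a \
  --clusterrole=admin \
  --serviceaccount=team-a:tiller \
  --namespace team-a

# Install Tiller into that namespace, running as that service account
helm init --service-account tiller --tiller-namespace team-a

# Clients must then target this Tiller explicitly on every command
helm install stable/mysql --tiller-namespace team-a
```

This limits the blast radius of any one Tiller, but every additional Tiller is another component to install, upgrade and secure, which is exactly the complexity cost noted above.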