Virtual Clusters

A virtual cluster is a fully functional Kubernetes cluster that runs inside a namespace of another Kubernetes cluster. The cluster that a virtual cluster runs in is typically referred to as the "host" or "parent" cluster.

Virtual clusters, being fully functional Kubernetes clusters in their own right, can be a very useful tool if you are running into the limitations of traditional Kubernetes namespaces. Often, administrators do not want to, or cannot, make special exceptions to the multi-tenancy configuration of the underlying parent cluster to accommodate user requests. For example, some users may need to create their own Custom Resource Definitions (CRDs), which could potentially impact every other user in the cluster. Another user may need pods from two separate namespaces to communicate with each other, despite the cluster's NetworkPolicy configuration not permitting this. In both of these scenarios (and many more!), a virtual cluster may be a perfect solution.

The diagram below briefly outlines the attributes of virtual clusters as compared to using namespaces or physical clusters for isolation and multi-tenancy.

vcluster - Comparison

The virtual cluster functionality of vCluster Platform comes from the popular open-source project vcluster. vCluster Platform provides a centralized management layer for virtual clusters, allowing users to provision virtual clusters in any vCluster Platform managed cluster (or virtual cluster!). vCluster Platform can also import existing virtual clusters so that they can be managed from the central vCluster Platform instance.
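As a brief sketch of what this looks like in practice, the open-source vcluster CLI can create a virtual cluster inside a single host namespace. The cluster and namespace names below are illustrative:

```bash
# Create a virtual cluster named "team-x" inside the host namespace "team-x"
vcluster create team-x --namespace team-x

# By default the CLI switches your kubeconfig to the new virtual cluster,
# so kubectl now talks to the virtual API server rather than the host's
kubectl get namespaces

# Switch back to the host cluster context when finished
vcluster disconnect
```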

Why use Virtual Kubernetes Clusters?

Virtual clusters can be used to partition a single physical cluster into multiple logical, virtual clusters. This partitioning process still allows for leveraging the benefits of Kubernetes itself, such as optimal resource distribution and workload management.

While partitioning via Kubernetes namespaces has always been an option, traditional namespaces are limited in terms of cluster-scoped resources and control-plane usage.

  • Cluster-Scoped Resources: Certain resources live globally in the cluster, and you can't isolate them using namespaces. For example, installing two different versions of the same operator at the same time is not possible within a single cluster.

  • Shared Kubernetes Control Plane: The API server, etcd, scheduler, and controller-manager are shared across all namespaces in a single Kubernetes cluster. Per-namespace rate limiting of requests or storage is very hard to enforce, and a faulty configuration might bring down the whole cluster.

Virtual clusters handily address these challenges by providing a dedicated Kubernetes API server per virtual cluster, thus avoiding the problems of shared cluster-scoped resources and a shared control plane.

Virtual clusters also provide more stability than namespaces in many situations. The virtual cluster creates its own Kubernetes resource objects, which are stored in its own data store. The host cluster has no knowledge of these resources.
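A minimal sketch of that separation, assuming a virtual cluster named team-x created with the vcluster CLI as above (the CRD manifest is a hypothetical placeholder):

```bash
# Inside the virtual cluster: install a CRD into the virtual cluster's
# own data store
vcluster connect team-x --namespace team-x
kubectl apply -f my-crd.yaml   # hypothetical CRD manifest
kubectl get crds               # the new CRD is listed here

# Back on the host cluster: the CRD was never written to the host's
# data store, so the host API server has no record of it
vcluster disconnect
kubectl get crds               # the new CRD does not appear
```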

Isolation like this is excellent for resiliency. Engineers who use namespace-based isolation often still need access to cluster-scoped resources like cluster roles, shared CRDs, or persistent volumes. If an engineer breaks something in one of these shared resources, it will likely break for every team that relies on it.

Virtual clusters are not only highly functional; they can also be a huge cost saver for organizations. Many teams scale past the problems outlined here by simply adding physical Kubernetes clusters. With virtual clusters, administrators can run many virtual clusters within a single physical cluster. This yields big savings, since there is no need to pay for multiple control planes, and it more often than not vastly simplifies cluster maintenance. This makes virtual clusters ideal for running experiments, continuous integration, and setting up sandbox environments.

Finally, virtual clusters can be configured independently of the physical cluster. This is great for multi-tenancy use cases, such as giving your customers the ability to spin up a new environment or quickly setting up demo applications for your sales team.
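For instance, spinning up and tearing down a disposable demo environment can be as simple as the following sketch (the names are illustrative):

```bash
# Create a throwaway demo environment for a customer
vcluster create demo-customer-a --namespace demo-customer-a

# ...deploy the demo application and run the session...

# Tear the entire environment down again in one step
vcluster delete demo-customer-a --namespace demo-customer-a
```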

Benefits of vCluster

Virtual clusters provide immense benefits for large-scale Kubernetes deployments and multi-tenancy.

  • Full Admin Access:
    • Deploy operators with CRDs, create namespaces and other cluster-scoped resources that you normally can't create inside a namespace.
    • Taint and label nodes without influencing the host cluster.
    • Reuse and share services across multiple virtual clusters with ease.
  • Cost Savings:
    • Create lightweight vCluster instances that share the underlying host cluster instead of creating separate "real" clusters.
    • Auto-scale, purge, snapshot, and move your vCluster instances, since they are Kubernetes deployments.
  • Low Overhead:
    • vCluster instances are super lightweight and only reside in a single namespace.
    • vCluster instances run with K3s, a super low-footprint K8s distribution, by default. You can use other supported distributions such as K0s, vanilla Kubernetes, and AWS EKS; a configuration sketch follows this list.
    • The vCluster control plane runs inside a single pod. Open source vCluster also uses a CoreDNS pod for vCluster-internal DNS capabilities. With vCluster Platform, however, you can enable the integrated CoreDNS so you don't need the additional pod.
  • No Network Degradation:
    • Since the pods and services inside a vCluster are actually being synchronized down to the host cluster, they are effectively using the underlying cluster's pod and service networking. The vCluster pods are as fast as other pods in the underlying host cluster.
  • API Server Compatibility:
    • vCluster instances run with the API server from the Kubernetes distribution that you choose to use. This ensures 100% Kubernetes API server compliance.
    • vCluster manages its API server, controller-manager, and a separate, isolated data store. Use the embedded SQLite or a full-blown etcd if that's what you need.
  • Security:
    • vCluster users need fewer permissions in the underlying host cluster / host namespace.
    • vCluster users can manage their own CRDs independently and can even modify RBAC inside their own vCluster instances.
    • vCluster instances provide an extra layer of isolation. Each vCluster manages its own API server and control plane, which means that fewer requests to the underlying cluster need to be secured.
  • Scalability:
    • Less pressure / fewer requests on the K8s API server in a large-scale cluster.
    • Higher scalability of clusters via cluster sharding / API server sharding into smaller vCluster instances.
    • No need for cluster admins to worry about conflicting CRDs or CRD versions with a growing number of users and deployments.
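
As a rough illustration of the distribution and data store choices mentioned above, a vcluster.yaml along these lines selects vanilla Kubernetes and a dedicated etcd instead of the defaults. The key names follow the vcluster v0.20+ configuration schema and may differ in other versions, so treat this as a sketch rather than a definitive reference:

```yaml
# vcluster.yaml -- sketch only; keys assume the vcluster v0.20+ schema
controlPlane:
  distro:
    k8s:
      enabled: true    # vanilla Kubernetes instead of the K3s default
  backingStore:
    etcd:
      deploy:
        enabled: true  # dedicated etcd instead of the embedded SQLite default
```

Passing the file at creation time applies the configuration to the new virtual cluster:

```bash
vcluster create my-vcluster --namespace my-vcluster -f vcluster.yaml
```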