Virtual Kubernetes clusters that run inside regular namespaces.
Stop creating expensive, under-utilized clusters and cut costs with vclusters.
Solve Kubernetes multi-tenancy by provisioning a vcluster for each tenant.
Create vclusters as easily as namespaces but benefit from much better isolation.

Why Virtual Clusters?

Virtual clusters have their own API server, which makes them much more powerful and better isolated than namespaces, but they are also much cheaper than creating separate "real" Kubernetes clusters.

Comparison: Separate Namespace vs. vcluster vs. Separate Cluster

Isolation
- Separate namespace: very weak. Namespaces run in the same cluster, share the same API server and data store, and share all cluster-wide resources, such as controllers and CRDs.
- vcluster: strong. Virtual clusters run inside the same cluster, but each has its own API server and its own data store, which effectively makes them separate, logically isolated clusters.
- Separate cluster: very strong. Besides having a separate API server and data store, separate clusters don't even share the underlying compute nodes and may even run in different data centers.

Access For Tenants
- Separate namespace: very restricted. If tenants are restricted to their own namespaces using RBAC and other controls, this restricted access requires a lot of compromises, e.g. regarding CRD versions or shared controllers.
- vcluster: vcluster admin. If tenants get their own virtual clusters, they have full admin access to these virtual clusters, even without having any admin privileges in the underlying host cluster (see the sketch after this comparison).
- Separate cluster: cluster admin. If tenants have their own clusters, they have the same flexibility as with virtual clusters, and they may even be able to change the cluster's underlying nodes, for example.

Cost
- Separate namespace: very cheap. Namespaces are entirely virtual and only require an entry in etcd, so they essentially don't cost anything.
- vcluster: cheap. vclusters require one very lightweight pod containing the vcluster control plane (API server, syncer, data store), which costs a couple of cents per day to run.
- Separate cluster: expensive. Separate clusters come at a very high price, since they use separate control planes and separate node pools without any option to share these resources.

Resource Sharing
- Separate namespace: easy. Namespaces share the same underlying cluster, so it is relatively easy for tenants to use shared resources such as DNS, ingress controllers, or other cluster-wide services.
- vcluster: easy. vclusters that run on top of the same cluster can easily share resources if an admin of the underlying cluster allows this access.
- Separate cluster: very hard. Sharing resources across the boundaries of Kubernetes clusters is really hard, and constructs such as cluster federation are very complex to operate at scale.

Overhead
- Separate namespace: very low. Namespaces have zero computing and management overhead, but they also don't provide any tenant isolation.
- vcluster: very low. vclusters require a single pod for the control plane, and the management overhead for them is no higher than managing namespaces in a large cluster.
- Separate cluster: very high. Apart from the rising computing costs of operating separate clusters, there is a lot of overhead in managing a large number of Kubernetes clusters.
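
To see this difference in access for yourself, compare what RBAC lets you do inside a vcluster versus in the host cluster. A minimal sketch, assuming your kube-context currently points at a vcluster (see Get Started below) and that host-cluster is the name of your host cluster's kube-context:

# Inside the vcluster, the tenant is a full cluster admin:
kubectl auth can-i "*" "*"                            # yes

# In the host cluster, the same tenant usually is not:
kubectl auth can-i "*" "*" --context host-cluster     # no, assuming restricted tenant RBAC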

Get Started

Creating a vcluster automatically creates a kube-context on your local machine, so you can use kubectl with your virtual cluster right away.

1. Install the vcluster CLI

# amd64 (intel mac)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-amd64" && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;

# arm64 (silicon mac)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-arm64" && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;

# Windows (PowerShell)
md -Force "$Env:APPDATA\vcluster";
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -URI "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-windows-amd64.exe" -o $Env:APPDATA\vcluster\vcluster.exe;
$env:Path += ";" + $Env:APPDATA + "\vcluster";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);

# amd64 (linux)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;

# arm64 (linux)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-arm64" && chmod +x vcluster;
sudo mv vcluster /usr/local/bin;

2. Create a vcluster

vcluster create vcluster-1
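
The CLI picks or creates a host namespace for the vcluster. To make the walkthrough below easier to follow, you can also place the vcluster in an explicitly named host namespace using the CLI's -n/--namespace flag (host-namespace-1 is just the example name used below):

vcluster create vcluster-1 -n host-namespace-1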

3. Use the vcluster

kubectl create namespace ns-inside-vcluster
helm install my-release ./chart   # helm 3 requires a release name; ./chart is a placeholder for your chart
kubectl get pods --all-namespaces

How does it work?

Virtual clusters run inside namespaces of other clusters. They have a separate API server and a separate data store, so every Kubernetes object you create in the vcluster only exists inside the vcluster.

First, let's create a namespace inside our new vcluster:

kubectl create namespace ns-1

Now, we can deploy something into this namespace:

kubectl create deployment nginx --image=nginx -n ns-1

The controller manager of our vcluster will create the pods for this deployment.

kubectl get pods -n ns-1

We can see pods being scheduled inside the vcluster, although the vcluster has neither a scheduler nor any real nodes.
But where do these pods actually get scheduled?
If we check the underlying host namespace where our vcluster is running ...

kubectl get pods -n host-namespace-1

... then we can see that the pods are actually running inside the underlying cluster, while every other high-level Kubernetes resource, such as deployments or CRDs, exists only inside the vcluster.
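
To double-check that high-level objects stay virtual, look for the deployment in the host namespace. A quick sketch, assuming your kube-context points at the host cluster and the vcluster lives in host-namespace-1:

# The nginx deployment is not listed here: it exists only inside the vcluster,
# while its pods are synced into the host namespace by the vcluster syncer
kubectl get deployments -n host-namespace-1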

No Admin Privileges Required

As long as you can create a deployment inside a single namespace, you can create a virtual cluster and become the admin of that virtual cluster.
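
To illustrate how little a tenant needs, the required access could be granted with a namespace-scoped Role roughly like the following. This is a hypothetical sketch, not the exact permission list, which depends on your vcluster version; check the official docs for the real requirements.

# Hypothetical minimal tenant permissions, all scoped to a single namespace
kubectl create role vcluster-tenant \
  --verb='*' \
  --resource=deployments,statefulsets,services,pods,secrets,configmaps,serviceaccounts,persistentvolumeclaims \
  -n host-namespace-1
kubectl create rolebinding vcluster-tenant \
  --role=vcluster-tenant \
  --user=tenant-1 \
  -n host-namespace-1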

vcluster supports a variety of use cases

Secure Multi-Tenancy

Whether you need to isolate CI/CD or dev environments for developers, or you need to host isolated instances of your managed product, vcluster provides a great level of isolation.

Cluster Scaling

If you are hitting the scalability limits of Kubernetes because you are running a large-scale multi-tenant cluster, you can now split up your cluster and effectively shard it into vclusters.

Cluster Simulations

Want to test a new ingress controller or enable a Kubernetes alpha flag without impacting your cluster operations? vcluster lets you simulate such scenarios in an isolated virtual cluster.

vcluster uses k3s as its API server to make virtual clusters super lightweight & cost-efficient

100% API compliant

vclusters use the API server of k3s, a certified Kubernetes distribution, so when you are working with a vcluster, it acts the same as a regular cluster.

Single Namespace

The vcluster and all of its workloads are hosted in a single underlying host namespace. Delete the namespace and everything is cleanly gone.
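
Cleanup is accordingly simple. A short sketch, assuming the vcluster from the walkthrough above:

# Delete the vcluster via the CLI ...
vcluster delete vcluster-1

# ... or remove the whole host namespace, which takes the vcluster and all synced resources with it
kubectl delete namespace host-namespace-1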

Lightweight Architecture With Very Low Overhead

vclusters are super lightweight (1 pod), consume very few resources and run on any Kubernetes cluster without requiring privileged access to the underlying cluster.

Highly Configurable

vcluster exposes all k8s control plane options, and you can even run different k8s versions in your vclusters or enable alpha and beta flags.
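
For example, running a different Kubernetes version comes down to overriding the k3s image through Helm values. A sketch, assuming a k3s-based vcluster; the values flag and the exact image tag may vary between CLI releases:

cat > values.yaml <<EOF
vcluster:
  image: rancher/k3s:v1.23.5-k3s1   # example tag, use any published k3s release
EOF
vcluster create vcluster-2 -f values.yaml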

Free, Open-Source & Community Driven

Star the project on GitHub, open issues and pull requests. Any contribution is welcome.

Support on GitHub

Join the conversation about vclusters on Slack and get help from the project maintainers.

Join Slack