Virtual Kubernetes Clusters that run inside regular namespaces.

Get Started

Stop creating expensive, under-utilized clusters and save costs with vclusters.


Solve k8s multi-tenancy by provisioning vclusters for each tenant.


Create vclusters as easily as namespaces, but benefit from much better isolation.


Why Virtual Clusters?

Virtual clusters have their own API server which makes them much more powerful and better isolated than namespaces, but they are also much cheaper than creating separate "real" Kubernetes clusters.

Comparison: a separate namespace for each tenant vs. a vcluster for each tenant vs. a separate cluster for each tenant.

Isolation

very weak (separate namespace)

Namespaces run in the same cluster, share the same API server and data store, and share all cluster-wide resources, such as controllers and CRDs.

strong (vcluster)

Virtual clusters run inside the same cluster, but each one has a separate API server and a separate data store, which effectively makes them logically isolated clusters.

very strong (separate cluster)

Besides having a separate API server and data store, separate clusters don't even share the underlying compute nodes and may even run in different data centers.

Access For Tenants

very restricted (separate namespace)

If tenants are restricted to their own namespaces using RBAC and other controls, this restricted access requires a lot of compromises, e.g. regarding CRD versions and controllers.

vcluster admin (vcluster)

If tenants get their own virtual clusters, they have full admin access to these virtual clusters without needing any admin privileges in the underlying host cluster.

cluster admin (separate cluster)

If tenants have their own clusters, they have the same flexibility as with virtual clusters, and they may even be able to change the cluster's underlying nodes, for example.

Cost

very cheap (separate namespace)

Namespaces are entirely virtual and only require an entry in etcd, so they essentially don't cost anything.

pretty cheap (vcluster)

vclusters require one very lightweight pod containing the vcluster control plane (API server, syncer, data store), which costs a couple of cents per day to run.

very expensive (separate cluster)

Separate clusters come at a very high price, since they use separate control planes and separate node pools without any option to share these resources.

Resource Sharing

easy (separate namespace)

Namespaces share the same underlying cluster, so it is relatively easy for tenants to use shared resources such as DNS, ingress controllers, or other cluster-wide services.

easy (vcluster)

vclusters that run on top of the same cluster can easily share resources if an admin of the underlying cluster allows this access.

very hard (separate cluster)

Sharing resources across the borders of Kubernetes clusters is really hard, and constructs such as cluster federation are very complex to operate at scale.

Overhead

very low (separate namespace)

Namespaces have zero computing and management overhead, but they also don't provide any tenant isolation.

very low (vcluster)

vclusters require a single pod for the control plane, and the management overhead is no higher than managing namespaces in a large cluster.

very high (separate cluster)

Apart from the rising computing costs of operating separate clusters, there is a lot of overhead in managing a large number of Kubernetes clusters.

Get Started

vcluster automatically creates a kube-context on your local machine, so you can use kubectl with your virtual cluster right away.

# amd64 (intel mac)

curl -s -L "" | sed -nE 's!.*"([^"]*vcluster-darwin-amd64)".*!\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;

sudo mv vcluster /usr/local/bin;

# arm64 (silicon mac)

curl -s -L "" | sed -nE 's!.*"([^"]*vcluster-darwin-arm64)".*!\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;

sudo mv vcluster /usr/local/bin;

md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';

Invoke-WebRequest -UseBasicParsing ((Invoke-WebRequest -URI "" -UseBasicParsing).Content -replace "(?ms).*`"([^`"]*vcluster-windows-amd64.exe)`".*","`$1") -o $Env:APPDATA\vcluster\vcluster.exe;

$env:Path += ";" + $Env:APPDATA + "\vcluster";

[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);

# amd64

curl -s -L "" | sed -nE 's!.*"([^"]*vcluster-linux-amd64)".*!\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;

sudo mv vcluster /usr/local/bin;

# arm64

curl -s -L "" | sed -nE 's!.*"([^"]*vcluster-linux-arm64)".*!\1!p' | xargs -n 1 curl -L -o vcluster && chmod +x vcluster;

sudo mv vcluster /usr/local/bin;

Install vcluster CLI

vcluster create vcluster-1 -n host-namespace-1

Create vcluster

vcluster connect vcluster-1 -n host-namespace-1
export KUBECONFIG=./kubeconfig.yaml

Retrieve kube-context

kubectl create namespace ns-inside-vcluster

helm install my-release ./chart

kubectl get pods --all-namespaces

Use the vcluster

No Admin Privileges Required

As long as you can create a deployment inside a single namespace, you will be able to create a virtual cluster and become admin of this virtual cluster.
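This prerequisite can be checked with kubectl's built-in authorization query; the namespace name here is just an example:

```shell
# Check whether the current user is allowed to create deployments
# in the target namespace ("host-namespace-1" is an example name).
kubectl auth can-i create deployments -n host-namespace-1

# If the command prints "yes", you have everything you need to run
# a vcluster in that namespace.
```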

How does it work?

Virtual clusters run inside namespaces of other clusters. They have a separate API server and a separate data store, so every Kubernetes object you create in the vcluster only exists inside the vcluster.

First, let's create a namespace
inside our new vcluster:

kubectl create namespace ns-1

Now, we can deploy something into one of the namespaces of our vcluster:

kubectl create deployment nginx --image=nginx -n ns-1

The controller manager of our vcluster will create the pods for this deployment.

kubectl get pods -n ns-1

We can see pods being scheduled inside the vcluster although the vcluster does not have a scheduler and does not have any real nodes.

BUT, where do these pods get scheduled to?

If we are checking the underlying host namespace where our vcluster is running ...

kubectl get pods -n host-namespace-1

... then we can see that the pods are actually running inside the underlying cluster while every other high-level Kubernetes resource such as deployments or CRDs exist only inside the vcluster.
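Running that command, the listing might look roughly like the following; the exact pod naming scheme is version-dependent, so treat these names as hypothetical:

```shell
# Pods synced from the vcluster appear in the host namespace alongside
# the vcluster control plane pod, under rewritten names that encode the
# vcluster namespace and name (hypothetical output below).
kubectl get pods -n host-namespace-1
# NAME                                          READY   STATUS
# vcluster-1-0                                  2/2     Running
# nginx-6799fc88d8-abcde-x-ns-1-x-vcluster-1    1/1     Running
```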
Try it yourself

vcluster supports a variety of use cases

Secure Multi-Tenancy

Whether you need to isolate CI/CD or dev environments for developers, or you need to host isolated instances of your managed product, vclusters provide a great level of isolation.

Cluster Scaling

If you are hitting the scalability limits of k8s because you are running a large-scale multi-tenant cluster, you can now split up your clusters into vclusters and effectively shard them.

Cluster Simulations

Want to test a new ingress controller or enable a Kubernetes alpha flag without impacting your cluster operations? vcluster lets you simulate such scenarios virtually.

vcluster uses k3s as its control plane to make
virtual clusters super lightweight & cost-efficient

100% API compliant

vclusters use the k3s API server, a certified Kubernetes distribution, so a vcluster will act the same as a regular cluster.

Lightweight Architecture With Very Low Overhead

vclusters are super lightweight (1 pod), consume very few resources and run on any Kubernetes cluster without requiring privileged access to the underlying cluster.

Single Namespace

The vcluster and all of its workloads will be hosted in a single underlying host namespace. Delete the namespace and everything will cleanly be gone.
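Cleanup is therefore a single command; assuming the example names used in the quickstart above:

```shell
# Delete the vcluster and everything synced into its host namespace
# by removing that namespace.
kubectl delete namespace host-namespace-1

# Alternatively, the vcluster CLI can delete just the vcluster:
vcluster delete vcluster-1 -n host-namespace-1
```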

Highly Configurable

vcluster exposes all k8s control plane options, and you can even run different k8s versions in your vclusters or enable alpha and beta flags.
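As an illustration, the Kubernetes version of a vcluster can be pinned through its Helm chart values; the value name vcluster.image and the -f flag below are assumptions based on the vcluster chart and may differ between versions:

```shell
# Write chart values that pin the control plane to a specific k3s release.
# NOTE: "vcluster.image" is an assumed value name -- check the chart docs
# for your vcluster version.
cat > values.yaml <<EOF
vcluster:
  image: rancher/k3s:v1.23.5-k3s1
EOF

# Create a vcluster with these custom values (the -f flag is assumed here).
vcluster create vcluster-2 -n host-namespace-2 -f values.yaml
```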

Free, Open-Source & Community Driven

Star the project on GitHub, open issues and pull requests. Any contribution is welcome.

Support on GitHub

Join the conversation about vclusters on Slack and get help from the project maintainers.

Join Slack

Open-Source at Loft Labs

At Loft Labs, we are committed to building open-source tools such as DevSpace, vcluster and kiosk alongside our commercial offering Loft. We want to give back to the community and we believe open-source projects are the best way to accelerate the speed of innovation in the cloud-native space.

Get The Latest News About Our Projects