Why Virtual Clusters?

Virtual clusters have their own API server, which makes them much more powerful and better isolated than plain namespaces, yet they are also much cheaper than creating separate "real" Kubernetes clusters.

                     Isolation     Access for Tenants   Cost         Resource Sharing   Overhead
Separate Namespace   very weak     very restricted      very cheap   easy               very low
vcluster             strong        vcluster admin       cheap        easy               very low
Separate Cluster     very strong   cluster admin        expensive    very hard          very high

Virtual Kubernetes Clusters

For Secure Multi-Tenancy, Reduced Cost & Simplified Operations

Secure Multi-Tenancy

vCluster unlocks a great developer experience for anyone who has to develop against Kubernetes as a deployment target.

Cluster Scaling

If you are hitting the scalability limits of Kubernetes because you are running a large-scale multi-tenant cluster, you can now split up your clusters into vclusters and share them effectively.

Cluster Simulations

Want to test a new ingress controller or enable a Kubernetes alpha feature without impacting your cluster operations? vCluster lets you simulate such scenarios virtually.

Get Started

Creating a vcluster automatically creates a kube-context on your local machine, so you can use kubectl with your virtual cluster right away.

1. Install vCluster CLI

# amd64 (intel mac)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-amd64" && chmod +x vcluster && sudo mv vcluster /usr/local/bin

# arm64 (silicon mac)
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-arm64" && chmod +x vcluster && sudo mv vcluster /usr/local/bin

2. Create vCluster

vcluster create vcluster-1

3. Use the vCluster

kubectl create namespace ns-inside-vcluster
helm install --generate-name ./chart
kubectl get pods --all-namespaces
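
To confirm that kubectl is now talking to the virtual cluster, here is a quick check (the vcluster CLI creates the kube-context for you as described above; the exact context name varies by CLI version, so treat it as illustrative):

# show the kube-context the vcluster CLI switched you to
kubectl config current-context

# inside the vcluster you should only see its own namespaces, not the host cluster's
kubectl get namespaces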

How does it work?

Virtual clusters run inside namespaces of other clusters. They have a separate API server and a separate data store, so every Kubernetes object you create in the vcluster only exists inside the vcluster.
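
For example, an object created through the vcluster's API server lives only in the vcluster's own data store. A minimal sketch of how to see that, assuming you can switch between the vcluster's kube-context and the host cluster's kube-context (the namespace name below is just an example):

# while your kube-context points at the vcluster
kubectl create namespace only-in-vcluster
kubectl get namespace only-in-vcluster   # exists inside the vcluster

# after switching back to the host cluster's kube-context
kubectl get namespace only-in-vcluster   # NotFound: this namespace exists only inside the vcluster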

First, let's create a namespace inside our new vcluster:

kubectl create namespace ns-1

Now, we can deploy something into one of the namespaces of our vcluster:

kubectl create deployment nginx --image=nginx -n ns-1
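
If you want to wait for the rollout to finish before looking at the pods, plain kubectl works as usual inside the vcluster:

kubectl rollout status deployment/nginx -n ns-1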

The controller manager of our vcluster will create the pods for this deployment.

kubectl get pods -n ns-1

We can see the pods getting scheduled inside the vcluster, even though the vcluster has neither its own scheduler nor any real nodes.
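
A quick way to see this from inside the vcluster (the exact behavior depends on the sync settings; by default the vcluster presents synced pseudo node objects for the host nodes its pods land on):

kubectl get nodes   # node objects synced in from the host, not nodes the vcluster owns itself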

BUT, where do these pods get scheduled to?

If we check the underlying host namespace where our vcluster is running ...

kubectl get pods -n <host-namespace-of-the-vcluster>   # run with the host cluster's kube-context

... then we can see that the pods are actually running inside the underlying cluster, while every other high-level Kubernetes resource, such as deployments or CRDs, exists only inside the vcluster.
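
You can verify this split directly. A minimal check, run once against the vcluster and once with the host cluster's kube-context (substitute the host namespace your vcluster was installed into):

# inside the vcluster: the Deployment and its Pods are both visible
kubectl get deployment,pods -n ns-1

# on the host cluster: only the synced Pods appear in the vcluster's host namespace,
# and there is no Deployment object backing them
kubectl get deployment,pods -n <host-namespace-of-the-vcluster>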

Star the project on GitHub, open issues and pull requests. Any contribution is welcome.

Support on GitHub

Join the conversation about vclusters on Slack and get help from the project maintainers.

Join Slack

Optimized for Performance. Ready for Enterprise Scale.

Explore vCluster.Pro