vCluster Standalone turns your VM or bare-metal host into a full Kubernetes control plane, with worker nodes you can join, and no upstream host cluster needed.
In this post, we’ll explore:
Why vCluster Standalone
How it works under the hood
How to install a standalone cluster and join worker nodes
A full demo script
Additional considerations
Introduction: Why vCluster Standalone?
The Problem Space
vCluster's support for Kubernetes multitenancy has expanded greatly in recent releases, adding options such as Private Nodes and Auto Nodes and creating a “tenancy spectrum”. However, one recurring bootstrap requirement remained:
To run virtual clusters with vCluster on Kubernetes, you need a “host cluster.”
vCluster traditionally runs on top of an existing Kubernetes cluster (called the host cluster) on EKS, AKS, GKE, OpenShift, bare metal, etc. The host cluster manages the control plane and shared worker nodes, or allows private nodes to be joined to the host. This adds operational complexity and a dependency on other vendors to provision and maintain the host Kubernetes cluster.
What vCluster Standalone Changes
With the release of vCluster Standalone (v0.29+), the host cluster is no longer required, enabling additional use cases:
The vCluster Standalone Kubernetes cluster is the host cluster - no external host cluster is required.
This Standalone cluster bootstraps the control plane directly on the VM or bare-metal host.
From there, you can join additional worker nodes (private nodes) and run workloads as usual, or you can use the Auto Nodes feature to make this autonomous.
In effect, vCluster Standalone collapses the host-cluster + virtual-cluster layers into one bootstrap layer.
Note: Standalone uses the Private Nodes model (i.e. joined worker nodes), not the “Shared Nodes” model.
Standalone becomes a compelling option when running a bare-metal Kubernetes setup and using the vCluster Auto Nodes feature (built on Karpenter) to automatically scale your nodes, independent of any hosting vendor. This setup enables true multi-tenancy through various models, allowing you to build a production-ready environment for both CPU and GPU workloads.
How It Works Under the Hood
To understand how the commands tie together, it's good to know the architecture.
The install script sets up a control plane (etcd, API server, scheduler, controller manager) directly on the host machine.
It also launches the vCluster binary and sets up networking (Flannel by default, plus CoreDNS, kube-proxy).
The control plane exposes a node bootstrap API endpoint (e.g. via https://<control-plane-host>:port/node/join?token=...)
To join a worker, you run a script (via curl | sh) which registers kubelet and connects that node to the control plane.
The cluster must have networking configured so the worker can reach the control-plane endpoint (or use tunnel/konnectivity).
The token-based approach ensures only authorized nodes join.
Because vCluster Standalone uses private nodes, there’s no syncing with a host cluster — workloads run natively on the joined nodes as if they were part of the same cluster. You can also enable vCluster Auto Nodes (built on Karpenter) to have autoscaling for automatic joining and removal of nodes in your cluster.
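Tying these pieces together, the end-to-end flow looks roughly like this (a sketch using the commands covered later in this post; the host address, port, and token are placeholders):

# On the control plane host: generate a join token
# (prints a ready-to-run curl command)
vcluster token create --expires=1h

# On each worker: run the printed command, which downloads
# the Kubernetes binaries and registers kubelet
curl -fsSLk "https://<control-plane-host>:8443/node/join?token=<token>" | sh -

# Back on the control plane: confirm the worker registered
kubectl get nodes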
Demo Setup: Machines & Assumptions
Before we dive into commands, here are assumptions:
You have at least two Ubuntu (or other Linux) VMs or machines reachable over the network.
VM A: will be control-plane + optionally worker
VM B (and more): will be worker nodes
They can communicate over the network (VM B can talk to VM A on the join endpoint port).
You have sudo or root on both machines / VMs.
You have external connectivity to download the install script (or you host it internally).
This demo runs on the Iximiuz Labs miniLAN playground, where you get four Ubuntu VMs connected into a single network.
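Before bootstrapping anything, a quick connectivity pre-flight from a would-be worker can save debugging time later (a sketch; 172.16.0.2 and port 8443 match the join endpoint used later in this demo, so adjust them to your network):

# From node-02: confirm the control-plane host is reachable
ping -c 3 172.16.0.2

# Confirm the join endpoint port is open (requires netcat;
# this only succeeds after step 2 has started the control plane)
nc -zv 172.16.0.2 8443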
Step-by-Step: Installing Standalone + Joining Workers
1. Start the miniLAN playground:

Once you start the miniLAN playground, you get access to four connected Ubuntu nodes, which lets us experiment and build a four-node cluster.

The config we pass in the next step ensures the cluster is created with private node support and join capability.
2. Bootstrap the standalone cluster (on node-01)
Run the command below to create vCluster Standalone. You can also pass a config file to enable the Auto Nodes feature.
curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.29.1/install-standalone.sh | \
sh -s -- --vcluster-name standalone --config vcluster.yaml

After installation, the script:

deploys the control plane components
writes a kubeconfig for kubectl to your machine
starts the cluster services
installs the vcluster binary

For example, here is the output of a run (this one without --config):
curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.29.1/install-standalone.sh | \
sh -s -- --vcluster-name standalone
🔄 Downloading vcluster binary...
vcluster CLI installed in /var/lib/vcluster/bin/vcluster-cli
🔄 Setting up persistent logging...
✅ systemd-journald restarted.
🔄 Creating vCluster systemd service...
🔄 Creating systemd service file /etc/systemd/system/vcluster.service
✅ vcluster.service created.
🔄 Starting vcluster.service...
Created symlink /etc/systemd/system/multi-user.target.wants/vcluster.service → /etc/systemd/system/vcluster.service.
✅ Successfully started vcluster.service
🔄 vCluster is initializing kubernetes control plane...
🔗 Linking vcluster kubeconfig for kubectl...
✅ vCluster is ready. Use 'kubectl get pods' to access the vCluster.
To check vCluster logs, use 'journalctl -u vcluster.service --no-pager'

If you don't pass --config, the installer falls back to defaults; but for worker nodes to join automatically, you should enable Auto Nodes along with the Private Nodes feature. To learn more about Auto Nodes and how it works, you can read the end-to-end blog here.
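Because the installer runs the control plane as a systemd unit, standard systemd tooling works for health checks (vcluster.service is the unit name from the log above):

# Check that the control plane service is running
systemctl status vcluster.service --no-pager

# Tail recent control plane logs
journalctl -u vcluster.service --no-pager -n 50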
Below is the config you can pass to the command with the --config flag:
controlPlane:
  # Enable standalone
  standalone:
    enabled: true
    # Optional: Control Plane node will also be considered a worker node
    joinNode:
      enabled: true
# Required for adding additional worker nodes
privateNodes:
  enabled: true
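If you are following along, one simple way to get this config onto node-01 is a heredoc before running the installer (just a convenience; any editor works):

cat > vcluster.yaml <<'EOF'
controlPlane:
  standalone:
    enabled: true
    joinNode:
      enabled: true
privateNodes:
  enabled: true
EOF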
Verify:

kubectl get nodes

You'll see one node (the control-plane node):
NAME      STATUS   ROLES                  AGE   VERSION
node-01   Ready    control-plane,master   40m   v1.33.4

3. Create a join token for worker nodes (on node-01)
Use the vCluster CLI (installed by the script) to generate a node token:
NOTE: the vcluster binary is installed by default at /var/lib/vcluster/bin/vcluster-cli, so you can move it to /usr/local/bin:

mv /var/lib/vcluster/bin/vcluster-cli /usr/local/bin/vcluster

Now you can create the join token using the command below:
vcluster token create --expires=1h

This outputs something like:
curl -fsSLk "https://172.16.0.2:8443/node/join?token=2jdyog.4ik0o4c3gosqjk4x" | sh -

Save that join command.
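If you have several workers, you don't have to paste the command by hand on each one; assuming SSH access to the other playground nodes, a small loop works (a sketch; the node names and token are placeholders for your own values):

# Push the join command to each worker over SSH
for node in node-02 node-03 node-04; do
  ssh "$node" "curl -fsSLk 'https://172.16.0.2:8443/node/join?token=<token>' | sudo sh -"
done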
4. Run the join command on each worker VM
On VM node-02 (and any additional worker nodes), run the printed join command. E.g.:
curl -fsSLk "https://172.16.0.2:8443/node/join?token=2jdyog.4ik0o4c3gosqjk4x" | sh -

Output:

This script will:

download the necessary Kubernetes binaries
install and configure kubelet
register this node with the control plane
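If a worker doesn't show up afterwards, checking kubelet on that node is the usual first step (a sketch, assuming the join script sets kubelet up as a systemd unit, which is the typical pattern):

# On the worker: check that kubelet is running
systemctl status kubelet --no-pager

# Recent kubelet logs, useful if the node stays NotReady
journalctl -u kubelet --no-pager -n 50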
5. Verify nodes joined
Back on VM node-01 (or your kubectl context):
kubectl get nodes
You should see all nodes where you ran the join command:

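Two optional follow-ups at this point, both plain kubectl (the worker role label is purely cosmetic):

# Show IPs, OS image, and container runtime for each joined node
kubectl get nodes -o wide

# Label a worker so its ROLES column shows "worker" instead of <none>
kubectl label node node-02 node-role.kubernetes.io/worker=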
6. Create virtual cluster
Now create virtual clusters just as you would on any other host cluster.
Command:
vcluster create demo
Once you open another terminal tab on this node, you will be able to communicate with the virtual cluster.
root@node-01:laborant# kubectl get po -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-744d9bd8bf-pm5tj   1/1     Running   0          3m5s
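From here, the standard vCluster CLI workflow applies (demo is the name we chose above):

# List virtual clusters running on this standalone cluster
vcluster list

# Point kubectl at the virtual cluster, then switch back when done
vcluster connect demo
vcluster disconnect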
Why This Approach is Powerful (and What It Enables)

Zero dependency on a host cluster: with vCluster Standalone, you bootstrap the host cluster from scratch and then create virtual clusters on top of it.
Flexibility: you can join as many worker nodes as needed, when needed, or simply enable the Auto Nodes feature and run on autopilot.
Multitenancy models: you choose which multitenancy model to run your virtual clusters with (shared, dedicated, private, or auto nodes) while the standalone cluster serves as your host cluster.
Great for bare metal, demos, PoCs, and labs: minimal friction, maximal control, and a streamlined developer experience.
Bridges to vCluster’s multitenancy future: once you have a working cluster, you can deploy virtual clusters on top of this standalone cluster.
With vCluster Standalone and the recent releases of Private Nodes and Auto Nodes, vCluster now enables a broad selection of options across the multitenancy spectrum.

Conclusion & Summary
vCluster Standalone is a powerful new offering that expands virtual cluster support across an array of multitenancy options: instead of needing a separate Kubernetes host cluster to run virtual clusters, the first cluster itself is the host cluster for vCluster. By enabling Private Nodes and Auto Nodes, you can set up a control plane and flexibly attach worker nodes. Let us know what you think about this release on our community Slack.
And if you are a visual learner, you can watch the video here.