What is vCluster?
vCluster provisions fully isolated Kubernetes environments, called tenant clusters, on your existing infrastructure or directly on bare metal. Each tenant cluster gets a dedicated API server and controller manager, its own CRDs and RBAC, and a cluster experience indistinguishable from a dedicated Kubernetes cluster — from the tenant's perspective it behaves exactly like standard Kubernetes.
The control plane is completely invisible to tenants. There are no shared control plane nodes, no in-cluster agent pods, and no lateral path between environments. vCluster suits any environment where isolation is a hard requirement, from developer platforms and CI/CD pipelines to GPU cloud infrastructure serving paying tenants.
Ready to deploy? See the Quick Start to choose the path that fits your environment.
How it works
vCluster works across a range of infrastructure configurations. Each tenant cluster is backed by a dedicated virtualized control plane. Tenant clusters are certified Kubernetes distributions. Any conformant tool works against them without modification: kubectl, Helm, Argo, Crossplane, and others.
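As a sketch of what that conformance means in practice, the commands below create a tenant cluster with the vcluster CLI and then drive it with unmodified kubectl; the cluster name and namespace are illustrative:

```shell
# Create a tenant cluster whose control plane runs as a pod
# in the host cluster's "team-a" namespace (names are examples).
vcluster create my-tenant --namespace team-a

# Connect: the CLI writes a kubeconfig context that points at
# the tenant's own API server, not the host cluster's.
vcluster connect my-tenant --namespace team-a

# From here, standard tooling works against the tenant cluster unchanged.
kubectl get namespaces
kubectl create deployment web --image=nginx
```

Because the tenant cluster is a certified distribution, the same pattern holds for Helm, Argo, Crossplane, or any other conformant tool: point its kubeconfig at the tenant API server and use it as-is.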
Where the control plane runs
- On an existing Kubernetes cluster — the control plane runs as a pod in a dedicated namespace. Your existing cluster becomes the Control Plane Cluster and no additional infrastructure is required.
- Standalone — a complete, zero-dependency Kubernetes distribution that runs as a self-contained binary on bare metal or VMs. Use it as a full Kubernetes environment for any workload. When connected to vCluster Platform, it can act as a Control Plane Cluster for tenant isolation.
- vind — the full stack runs in Docker containers with no Kubernetes dependency. Suited for local development and CI/CD pipelines.
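For the first option, the control plane can also be deployed like any other workload on the existing cluster via Helm, which fits GitOps pipelines; a minimal sketch, assuming the publicly documented chart repo at charts.loft.sh and an illustrative release name and namespace:

```shell
# Install a tenant control plane as a pod in its own namespace
# on the existing (host) cluster. The optional vcluster.yaml
# values file configures the tenant cluster's behavior.
helm upgrade --install my-tenant vcluster \
  --repo https://charts.loft.sh \
  --namespace team-a --create-namespace \
  -f vcluster.yaml
```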
How tenant workloads run
On cluster-based deployments, you choose how tenant workloads land on compute:
- Shared nodes — tenants share the existing node pool. Multiple tenant clusters run on the same physical nodes with full API-level isolation. Suited for developer platforms, CI environments, and internal tooling where compute density matters.
- Private nodes — each tenant cluster gets dedicated nodes enrolled through a token-based process. Network, storage, and compute are fully isolated per tenant, with no cross-tenant visibility at the infrastructure level. This is the isolation model for GPU workloads, regulated industries, and AI cloud platforms serving paying tenants. Nodes can come from bare metal (vMetal), cloud VMs, or any Linux machine.
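In vcluster.yaml, the choice between shared and private nodes is a configuration concern rather than an infrastructure rebuild. The fragment below is a hedged sketch only; the exact field names (in particular `privateNodes`) are assumptions and should be verified against the vcluster.yaml reference linked below:

```yaml
# Illustrative vcluster.yaml fragment: dedicated nodes per tenant.
# Field names are assumptions — check the vcluster.yaml reference.
privateNodes:
  enabled: true   # tenant workloads run only on nodes enrolled to this tenant
controlPlane:
  distro:
    k8s:
      enabled: true
```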
Features and plans
vCluster is available as open source and as an enterprise-grade platform that adds features across paid tiers.
Next steps
- Architecture — control plane internals, syncer behavior, and networking
- Private Nodes — dedicated infrastructure for GPU tenants and regulated workloads
- vCluster Standalone — zero-dependency Kubernetes for bare metal and edge
- Building a GPU cloud platform — deployment models for AI cloud providers
- vcluster.yaml reference — full configuration reference