Deploy vCluster on EKS
This guide provides step-by-step instructions for deploying vCluster on Amazon EKS.
Prerequisites
Before starting, ensure you have the following tools installed:
- kubectl: Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.
- vCluster CLI, installed using one of the following options:

Homebrew

brew install loft-sh/tap/vcluster

The binaries in the tap are signed using the Sigstore framework for enhanced security.

Mac (Intel/AMD)

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Mac (Silicon/ARM)

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-darwin-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Linux (AMD)

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Linux (ARM)

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

Download Binary

Download the binary for your platform from the GitHub Releases page and add this binary to your $PATH.

Windows Powershell

md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -URI "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-windows-amd64.exe" -o $Env:APPDATA\vcluster\vcluster.exe;
$env:Path += ";" + $Env:APPDATA + "\vcluster";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);

Reboot Required
You may need to reboot your computer to use the CLI due to changes to the PATH variable (see below).

Check Environment Variable $PATH
Line 4 of this install script adds the install directory %APPDATA%\vcluster to the $PATH environment variable. This is only effective for the current Powershell session, i.e. when opening a new terminal window, vcluster may not be found. Make sure to add the folder %APPDATA%\vcluster to the PATH environment variable after installing the vCluster CLI via Powershell. Afterward, a reboot might be necessary.

Confirm that you've installed the correct version of the vCluster CLI:

vcluster --version

- AWS CLI version 1.16.156 or greater
note
AWS IAM permissions to create roles and policies are required.

- eksctl installed for cluster management

note
Upgrade eksctl to the latest version to ensure the latest Kubernetes version is deployed.
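As a quick sanity check before continuing, you can verify that the required tools are installed and on your PATH (the exact version output varies; these commands only confirm availability):

kubectl version --client
aws --version
eksctl version
vcluster --version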
Create EKS cluster
Start by creating an EKS cluster using eksctl. The following command creates a file named cluster.yaml with the required settings. Adjust the cluster name, region, and instance type as needed.
# This will create a file with your custom values
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: vcluster-demo
  region: eu-central-1
iam:
  withOIDC: true
nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        ebs: true
    volumeSize: 80
addons:
  - name: aws-ebs-csi-driver
    version: latest
    attachPolicyARNs:
      - arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
EOF
The file defines a cluster with two t3.medium instances located in the eu-central-1 region. The configuration includes:
- OIDC provider enabled for IAM roles for service accounts
- Node group with EBS addon policy for volume management
- EBS CSI driver addon with the official AWS managed IAM policy
Create the cluster by running:
eksctl create cluster -f cluster.yaml
This command automatically updates your kubeconfig file with the new
cluster configuration.
This process typically takes about 15-20 minutes.
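When the cluster is ready, a quick check (not part of the eksctl output itself) confirms that your kubeconfig points at the new cluster and that the worker nodes have registered:

kubectl config current-context
kubectl get nodes

Both nodes should report a Ready status before you continue.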
Verify the host cluster creation
Verify the installation by checking if the CSI driver pods are running:
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
Expected output should look similar to:
NAME READY STATUS RESTARTS AGE
ebs-csi-controller-794b4448b-fhjxr 6/6 Running 0 2m14s
ebs-csi-controller-794b4448b-j94g5 6/6 Running 0 2m14s
ebs-csi-node-crz7p 3/3 Running 0 2m14s
ebs-csi-node-jg8n8 3/3 Running 0 2m14s
Configure storage class
vCluster requires a default StorageClass for its persistent volumes. EKS provides the gp2 StorageClass by default, but gp3 is required. Create a new StorageClass:
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
Remove the default status from the gp2 StorageClass:
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
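To confirm the change, list the storage classes and check that gp3 now carries the default marker while gp2 does not:

kubectl get storageclass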
Predeployment configuration options
Before deploying, review the configuration options that cannot be updated after deployment. Changing these options requires deploying a new vCluster instead of upgrading an existing one.
Control Plane Options
Decide how you want your control plane deployed:
- High availability - Run multiple copies of vCluster components.
- Rootless mode - Deploy the vCluster pod without root access to the host cluster.
- Backing Store - Decide how the data of your cluster is stored.
Backing store options
vCluster supports etcd or a relational database (using KINE) as the backend. This feature provides flexibility to vCluster operators: the available data store options allow you to select a data store that fits your use case.
vCluster supports the following datastore options:
- Embedded SQLite (default, with a PersistentVolume (PV))
- PostgreSQL
- MySQL
- MariaDB
- etcd
warning
After deploying your vCluster, there are limited migration paths to change your backing store. Review the backing store migration options before deploying.
Backing store options
- Embedded SQLite (Default)
- Embedded SQLite (No PV)
- Embedded etcd
- Deployed etcd
- MySQL / MariaDB
- PostgreSQL
Embedded SQLite (Default)

This is the default, so you don't need to configure anything. If you want to explicitly set this option, you can use:

controlPlane:
  backingStore:
    database:
      embedded:
        enabled: true

Embedded SQLite (No PV)

By default, vCluster stores its data in a PersistentVolumeClaim (PVC). Alternatively, you can use an emptyDir volume to store virtual cluster data.

To use an emptyDir to store the data instead of a PersistentVolume, create a values.yaml with the following contents:

controlPlane:
  statefulSet:
    persistence:
      volumeClaim:
        enabled: false

Then upgrade or recreate the vCluster with:

vcluster create my-vcluster -n my-vcluster --upgrade -f values.yaml

Potential data loss
This method should only be used for testing purposes, as data is lost upon pod recreation.
Embedded etcd

This is an enterprise feature that allows you to deploy etcd within each vCluster to enable high availability (HA), which isn't supported with embedded SQLite:

controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true

Deployed etcd

This deploys an etcd instance outside of the vCluster control plane pod that is used as a backing store:

controlPlane:
  backingStore:
    etcd:
      deploy:
        enabled: true

MySQL / MariaDB

The option for MySQL and MariaDB typically has the following format:

controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        dataSource: mysql://username:password@tcp(hostname:3306)/database-name

If you specify a database name and it does not exist, the server attempts to create it.

PostgreSQL

The option for PostgreSQL typically has the following format:

controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        dataSource: postgres://username:password@hostname:port/database-name

More advanced configuration parameters are available. For more information, see https://godoc.org/github.com/lib/pq.

If you specify a database name and it does not exist, the server attempts to create it.
Worker Nodes
Decide where you want your worker nodes to come from:
- Nodes from the host cluster - (Default) All worker nodes of the shared host cluster are used by the virtual cluster and all resources are synced to the single namespace that the vCluster is deployed on.
- Syncing Namespaces - Resources are synced to mapped namespaces on the host cluster.
- Isolated workloads - Different options to isolate a workload in a vCluster.
- Private Nodes - Enable adding individual nodes to the virtual cluster.
Deploy vCluster on EKS
If you're not sure which options to configure, you can update most settings later by upgrading your vCluster with an updated vcluster.yaml.
However, some settings, such as the type of worker nodes or the backing store, can only be set during the initial deployment and cannot be changed during an upgrade.
All of the deployment options below have the following assumptions:
- A vcluster.yaml is provided. Refer to the vcluster.yaml reference docs to explore all configuration options. This file is optional and can be removed from the examples.
- The vCluster is called my-vcluster.
- The vCluster is deployed into the team-x namespace.
- vCluster CLI
- Helm
- Terraform
- Argo CD
- Cluster API
The vCluster CLI provides the most straightforward way to deploy and manage virtual clusters.
Install the vCluster CLI if you haven't already (see the Prerequisites section for platform-specific installation instructions), and confirm that you've installed the correct version of the vCluster CLI:
vcluster --version

Deploy vCluster:

vcluster create my-vcluster --namespace team-x --values vcluster.yaml

note
After installation, vCluster automatically switches your Kubernetes context to the new virtual cluster. You can now run kubectl commands against the virtual cluster.
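For example, while the vCluster context is active you can run a quick check against the virtual cluster and then switch back to the host EKS cluster (both commands are optional):

kubectl get namespaces
vcluster disconnect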
Helm provides fine-grained control over the deployment process and integrates well with existing Helm-based workflows.
Deploy vCluster using the helm upgrade command:

helm upgrade --install my-vcluster vcluster \
--values vcluster.yaml \
--repo https://charts.loft.sh \
--namespace team-x \
--repository-config='' \
--create-namespace
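Helm does not switch your kubeconfig context the way the vCluster CLI does. If you have the vCluster CLI installed, one way to access the new virtual cluster once its pod is running is:

vcluster connect my-vcluster --namespace team-x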
You can use Terraform to deploy vCluster as code with version control and state management.
Create a main.tf file to define your vCluster deployment using the Terraform Helm provider:

provider "helm" {
  kubernetes = {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "my_vcluster" {
  name             = "my-vcluster"
  namespace        = "team-x"
  create_namespace = true

  repository = "https://charts.loft.sh"
  chart      = "vcluster"

  # If you didn't create a vcluster.yaml, remove the values section.
  values = [
    file("${path.module}/vcluster.yaml")
  ]
}

Helm Provider Version
This configuration uses the Terraform Helm provider v3.x syntax, where kubernetes is defined as an argument (kubernetes = {). If you're using Helm provider v2.x, use the block syntax instead (kubernetes {). To use v3.x, ensure your provider version is at least v3.0.0.

Install the required Helm provider and initialize Terraform:

terraform init

Generate a plan to preview the changes:

terraform plan

Review the plan output to verify connectivity and proposed changes.
Deploy vCluster:
terraform apply
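After the apply completes, you can verify that the vCluster control plane pod started in the target namespace (a plain kubectl check, independent of Terraform):

kubectl get pods -n team-x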
ArgoCD deployment enables GitOps workflows for vCluster management, and provides automated deployment, drift detection, and declarative configuration management through Git repositories.
To deploy vCluster using ArgoCD, you need the following files:
- vcluster.yaml for your vCluster configuration options.
- <CLUSTER_NAME>-app.yaml for your ArgoCD Application definition. Replace <CLUSTER_NAME> with your actual cluster name.
Create the ArgoCD Application file <CLUSTER_NAME>-app.yaml, which references the vCluster Helm chart:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-vcluster
  namespace: argocd
spec:
  project: default
  source:
    chart: vcluster
    repoURL: https://charts.loft.sh
    helm:
      releaseName: my-vcluster
      valueFiles:
        - vcluster.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: team-x

Commit and push these files to your configured ArgoCD repository.
Sync your ArgoCD repository with your configured cluster:
argocd app sync my-vcluster
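Optionally, if the argocd CLI is logged in to your ArgoCD server, you can wait for the application to become synced and healthy:

argocd app wait my-vcluster --sync --health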
Cluster API (CAPI) provides lifecycle management for Kubernetes clusters. The vCluster CAPI provider enables you to manage virtual clusters using the same declarative APIs and tooling used for physical clusters. For more details, see the Cluster API Provider for vCluster documentation.
Install the clusterctl CLI.

Install the vCluster provider:

clusterctl init --infrastructure vcluster:v0.2.0

Export environment variables for the Cluster API provider to create the manifest. The manifest is applied to your Kubernetes cluster, which deploys a vCluster.

export CLUSTER_NAME=my-vcluster
export CLUSTER_NAMESPACE=team-x
export VCLUSTER_YAML=$(awk '{printf "%s\n", $0}' vcluster.yaml)

Create the namespace for the vCluster using the exported variable:

kubectl create namespace team-x

Generate the required manifests and apply them using the exported variables:

clusterctl generate cluster my-vcluster \
  --infrastructure vcluster \
  --target-namespace team-x \
  | kubectl apply -f -

Kubernetes version
The Kubernetes version for the vCluster is not set at the CAPI provider command. Configure it in the vcluster.yaml file based on your Kubernetes distribution.

Wait for vCluster to become ready by monitoring the vCluster custom resource status:

kubectl wait --for=condition=ready vcluster -n team-x my-vcluster --timeout=300s
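The deployment options above reference a vcluster.yaml but do not show its contents. As a minimal sketch only (field names assume the vCluster v0.20+ configuration schema; verify them against the vcluster.yaml reference before using this), an EKS-oriented configuration might look like:

cat << EOF > vcluster.yaml
# Sketch only: check these fields against the vcluster.yaml reference docs.
sync:
  toHost:
    serviceAccounts:
      enabled: true
    persistentVolumeClaims:
      enabled: true
controlPlane:
  statefulSet:
    persistence:
      volumeClaim:
        storageClass: gp3
EOF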
This configuration ensures that:
- Service accounts are properly synced between virtual and host clusters
- Persistent volume claims are handled correctly
- The gp3 storage class created earlier is used
Allow internal DNS resolution
By default, vCluster runs a CoreDNS component inside the virtual cluster. This component listens on port 1053 instead of the standard DNS port 53 to avoid conflicts with the host cluster DNS.
On EKS, if the CoreDNS pod and other virtual cluster pods are scheduled on different nodes, DNS resolution may fail. This happens because AWS creates separate security groups for the EKS control plane and worker nodes, and the default node security group does not allow inbound traffic on port 1053.
To resolve this, manually update the EKS node security group to allow inbound TCP and UDP traffic on port 1053 between nodes.
This step is especially important for EKS clusters created using Terraform or other automation tools that apply restrictive network settings by default.
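As a sketch of that manual step (assuming the AWS CLI is configured, and using a placeholder security group ID that you must replace with the security group attached to your worker nodes), the rules can be added with:

# Placeholder: replace with your cluster's node security group ID.
NODE_SG=sg-0123456789abcdef0

# Allow node-to-node traffic on port 1053 for the vCluster CoreDNS (TCP and UDP).
aws ec2 authorize-security-group-ingress --group-id "$NODE_SG" --protocol tcp --port 1053 --source-group "$NODE_SG"
aws ec2 authorize-security-group-ingress --group-id "$NODE_SG" --protocol udp --port 1053 --source-group "$NODE_SG"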
Next steps
Now that you have vCluster running on EKS, consider:
- Setting up the platform UI to manage your virtual clusters.
- Integrating with Karpenter for autoscaling.
Pod identity
This feature is an Enterprise feature. See our pricing plans or contact our sales team for more information.
When using the platform, you can easily enable Pod Identity.
Cleanup
If you deployed the EKS cluster with this tutorial, and want to clean up the resources, run the following command:
eksctl delete cluster -f cluster.yaml --disable-nodegroup-eviction