
Deploy vCluster on EKS

This guide provides step-by-step instructions for deploying vCluster on Amazon EKS.

Prerequisites

Before starting, ensure you have the following tools installed:

  • kubectl: the Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.
  • vCluster CLI
    brew install loft-sh/tap/vcluster

    The binaries in the tap are signed using the Sigstore framework for enhanced security.

    Confirm that you've installed the correct version of the vCluster CLI.

    vcluster --version
  • AWS CLI version 1.16.156 or greater
    note

    AWS IAM permissions to create roles and policies

  • eksctl installed for cluster management
    note

    Upgrade eksctl to the latest version to ensure the latest Kubernetes version is deployed.
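
    You can check which version you have installed with:

    eksctl version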

Create EKS cluster

Start by creating an EKS cluster using eksctl. The following command creates a file named cluster.yaml with the required settings. Adjust the cluster name, region, and instance type as needed.

# This will create a file with your custom values
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: vcluster-demo
  region: eu-central-1
iam:
  withOIDC: true
nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    iam:
      withAddonPolicies:
        ebs: true
    volumeSize: 80

addons:
  - name: aws-ebs-csi-driver
    version: latest
    attachPolicyARNs:
      - arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
EOF

The file defines a cluster with two t3.medium instances located in the eu-central-1 region. The configuration includes:

  • OIDC provider enabled for IAM roles for service accounts
  • Node group with EBS addon policy for volume management
  • EBS CSI driver addon with the official AWS managed IAM policy

Create the cluster by running:

Create EKS cluster
eksctl create cluster -f cluster.yaml

note

This command automatically updates your kubeconfig file with the new cluster configuration.
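
If you later need to regenerate the kubeconfig manually, the AWS CLI can do it. This assumes the cluster name and region used in cluster.yaml above:

Update kubeconfig manually
aws eks update-kubeconfig --region eu-central-1 --name vcluster-demo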

This process typically takes about 15-20 minutes.
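
Once creation finishes, confirm that the worker nodes joined the cluster:

Check worker nodes
kubectl get nodes

Both nodes should report a STATUS of Ready.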

Verify the host cluster creation

Verify the installation by checking if the CSI driver pods are running:

Verify CSI driver installation
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver

Expected output should look similar to:

NAME                                 READY   STATUS    RESTARTS   AGE
ebs-csi-controller-794b4448b-fhjxr   6/6     Running   0          2m14s
ebs-csi-controller-794b4448b-j94g5   6/6     Running   0          2m14s
ebs-csi-node-crz7p                   3/3     Running   0          2m14s
ebs-csi-node-jg8n8                   3/3     Running   0          2m14s

Configure storage class

vCluster requires a default StorageClass for its persistent volumes. EKS provides the gp2 StorageClass by default, but gp3 is required. Create a new StorageClass:

gp3 StorageClass configuration
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF

Remove the default status from the gp2 StorageClass:

Remove default status from gp2 StorageClass
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
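
Confirm that gp3 is now the default:

List storage classes
kubectl get storageclass

The gp3 entry should be marked (default), and gp2 should no longer be.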

Predeployment configuration options

Before deploying, review the configuration options that cannot be changed after deployment. Changing any of these requires deploying a new vCluster rather than upgrading the existing one.

Control Plane Options

Decide how you want your control plane deployed:

  • High availability - Run multiple copies of vCluster components.
  • Rootless mode - Deploy the vCluster pod without root access to the host cluster.
  • Backing Store - Decide how the data of your cluster is stored.
    Backing store options

    vCluster supports etcd or a relational database (via KINE) as the backing store. This gives vCluster operators the flexibility to select a data store that fits their use case.

    warning

    After deploying your vCluster, there are limited migration paths to change your backing store. Review the backing store migration options before deploying.

    The embedded SQLite database is the default, so you don't need to configure anything. If you want to set this option explicitly, use:

    controlPlane:
      backingStore:
        database:
          embedded:
            enabled: true
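
    As a sketch of a non-default choice, you could deploy etcd as the backing store instead. This is only an illustration; check the vcluster.yaml reference for the exact schema in your vCluster version:

    controlPlane:
      backingStore:
        etcd:
          deploy:
            enabled: true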

Worker Nodes

Decide where you want your worker nodes to come from:

  • Nodes from the host cluster - (Default) All worker nodes of the shared host cluster are used by the virtual cluster, and all resources are synced to the single namespace that the vCluster is deployed in.
  • Private Nodes - Enable adding individual nodes to the virtual cluster (see the sketch after this list).
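
As a rough sketch, private nodes are toggled in vcluster.yaml. Treat the exact keys as an assumption and confirm them in the vcluster.yaml reference for your version:

privateNodes:
  enabled: true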

Deploy vCluster on EKS

YAML configuration

If you're not sure which options to configure, you can update most settings later by upgrading your vCluster with an updated vcluster.yaml. However, some settings, such as the type of worker nodes or the backing store, can only be set during the initial deployment and cannot be changed during an upgrade.

All of the deployment options below have the following assumptions:

  • A vcluster.yaml is provided. Refer to the vcluster.yaml reference docs to explore all configuration options. This file is optional and can be removed from the examples.
  • The vCluster is called my-vcluster.
  • The vCluster is deployed into the team-x namespace.

The vCluster CLI provides the most straightforward way to deploy and manage virtual clusters.

  1. Install the vCluster CLI:

     brew install loft-sh/tap/vcluster-experimental

    If you installed the CLI using brew install vcluster, you should brew uninstall vcluster and then install the experimental version. The binaries in the tap are signed using the Sigstore framework for enhanced security.

    Confirm that you've installed the correct version of the vCluster CLI.

    vcluster --version
  2. Deploy vCluster:

    Replace the vCluster name, namespace, and values file with your own:
    vcluster create my-vcluster --namespace team-x --values vcluster.yaml
    note

    After installation, vCluster automatically switches your Kubernetes context to the new virtual cluster. You can now run kubectl commands against the virtual cluster.
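
    For example, list the virtual cluster's namespaces, then switch your context back to the host cluster when you're done:

    kubectl get namespaces
    vcluster disconnect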

This configuration ensures that:

  • Service accounts are properly synced between virtual and host clusters
  • Persistent volume claims are handled correctly
  • The gp3 storage class created earlier is used

Allow internal DNS resolution

By default, vCluster runs a CoreDNS component inside the virtual cluster. This component listens on port 1053 instead of the standard DNS port 53 to avoid conflicts with the host cluster DNS.

On EKS, if the CoreDNS pod and other virtual cluster pods are scheduled on different nodes, DNS resolution may fail. This happens because AWS creates separate security groups for the EKS control plane and worker nodes, and the default node security group does not allow inbound traffic on port 1053.

To resolve this, manually update the EKS node security group to allow inbound TCP and UDP traffic on port 1053 between nodes.
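
A minimal sketch with the AWS CLI, assuming NODE_SG holds your cluster's node security group ID (a hypothetical placeholder; look it up in the EC2 console or the eksctl output):

Allow DNS traffic on port 1053
# Hypothetical placeholder: replace with your node security group ID
NODE_SG=sg-0123456789abcdef0

# Allow node-to-node TCP and UDP traffic on port 1053
aws ec2 authorize-security-group-ingress --group-id "$NODE_SG" --protocol tcp --port 1053 --source-group "$NODE_SG"
aws ec2 authorize-security-group-ingress --group-id "$NODE_SG" --protocol udp --port 1053 --source-group "$NODE_SG"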

tip

This step is especially important for EKS clusters created using Terraform or other automation tools that apply restrictive network settings by default.

Next steps

Now that you have vCluster running on EKS, consider:

  • Setting up the platform UI to manage your virtual clusters.
  • Integrating with Karpenter for autoscaling.

Pod identity

Enterprise-Only Feature

This is an Enterprise feature. See our pricing plans or contact our sales team for more information.

When using the platform, you can easily enable Pod Identity.

Cleanup

If you deployed the EKS cluster with this tutorial and want to clean up the resources, run the following command:

Clean up resources
eksctl delete cluster -f cluster.yaml --disable-nodegroup-eviction