
Deploy vCluster Platform with an External Database

info
This feature is available from Platform version v4.8.0 and later.
note

These instructions are tested on AWS. An external database deployment can run on other cloud providers with comparable capabilities (managed Kubernetes, a managed database, and a load balancer).

These steps walk you through setting up a high-availability vCluster Platform in a single AWS region using EKS, an external Amazon Relational Database Service (RDS) database as a Kine backend, Application Load Balancer (ALB) ingress, and multiple platform replicas.

Prerequisites​

Before you begin, ensure you have:

  • An AWS account with sufficient IAM permissions
  • A registered domain (example: platform.example.com)
  • A public Route 53 hosted zone (or equivalent DNS)
  • The following tools installed:
    • eksctl
    • kubectl
    • awscli
    • helm
    • vcluster

Step 1 - Create an EKS cluster​

Create an EKS cluster with at least three nodes to spread platform replicas across failure domains.

eksctl create cluster \
--name platform-ha \
--region us-east-1 \
--nodes 3 \
--managed \
--with-oidc

Install the Amazon EBS CSI driver​

The EBS CSI driver is required for dynamic provisioning of persistent volumes on EKS. Without it, virtual cluster StatefulSet pods remain in Pending state because the gp2 storage class cannot provision volumes.

eksctl create addon --name aws-ebs-csi-driver \
--cluster platform-ha \
--region us-east-1
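To confirm the driver registered with the cluster before deploying workloads (a quick check; the CSIDriver object is created by the add-on):

```shell
# The ebs.csi.aws.com CSIDriver object appears once the add-on is installed
kubectl get csidriver ebs.csi.aws.com
```

If the command returns NotFound, wait for the add-on installation to finish and retry.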

For more details on EKS prerequisites for running virtual clusters, see the EKS environment setup guide.

Note the EKS VPC ID and CIDR​

eksctl creates a VPC automatically. Note the VPC ID and CIDR — you need these when creating the database security group (Step 3) and VPC peering (Step 4).

aws eks describe-cluster \
--name platform-ha \
--region us-east-1 \
--query 'cluster.resourcesVpcConfig.vpcId' \
--output text

To find the CIDR range:

aws ec2 describe-vpcs \
--vpc-ids vpc-yyyyyyyyy \
--region us-east-1 \
--query 'Vpcs[0].CidrBlock' \
--output text

The default eksctl CIDR is 192.168.0.0/16. Use the actual value from the output above in subsequent steps.
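Later steps reference the VPC ID and CIDR repeatedly; capturing them in shell variables avoids copy-paste mistakes (a convenience sketch using the same queries as above):

```shell
# Store the EKS VPC ID and CIDR for reuse in the database and peering steps
EKS_VPC_ID=$(aws eks describe-cluster \
  --name platform-ha \
  --region us-east-1 \
  --query 'cluster.resourcesVpcConfig.vpcId' \
  --output text)
EKS_VPC_CIDR=$(aws ec2 describe-vpcs \
  --vpc-ids "${EKS_VPC_ID}" \
  --region us-east-1 \
  --query 'Vpcs[0].CidrBlock' \
  --output text)
echo "${EKS_VPC_ID} ${EKS_VPC_CIDR}"
```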

Step 2 - Install AWS load balancer controller​

Create IAM policy​

curl -o iam_policy.json \
https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json

Create IAM service account​

Create an IAM Roles for Service Accounts (IRSA) association to allow the load balancer controller pods to assume the IAM role.

eksctl create iamserviceaccount \
--cluster platform-ha \
--namespace kube-system \
--name aws-load-balancer-controller \
--attach-policy-arn arn:aws:iam::123456789012:policy/AWSLoadBalancerControllerIAMPolicy \
--approve

Install with Helm​

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=platform-ha \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
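Before creating any Ingress resources, verify the controller is up (the Helm release above names the deployment aws-load-balancer-controller):

```shell
kubectl -n kube-system rollout status deployment/aws-load-balancer-controller
```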

Step 3 - Create the database (Kine backend)​

Create an RDS instance (MariaDB) in its own isolated VPC, separate from the EKS cluster VPC. This keeps the database network boundary independent from the cluster workloads and simplifies security group rules.

Create the database VPC​

Create a VPC with a CIDR that does not overlap with the EKS cluster VPC. Enable DNS support and DNS hostnames so the RDS endpoint resolves correctly across the VPC peering connection.

aws ec2 create-vpc \
--cidr-block 10.1.0.0/16 \
--region us-east-1 \
--tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=ha-platform-db-vpc}]'

Note the VpcId from the output, then enable DNS settings:

aws ec2 modify-vpc-attribute \
--vpc-id vpc-xxxxxxxxx \
--enable-dns-support '{"Value":true}' \
--region us-east-1

aws ec2 modify-vpc-attribute \
--vpc-id vpc-xxxxxxxxx \
--enable-dns-hostnames '{"Value":true}' \
--region us-east-1

Create subnets and DB subnet group​

Create subnets in at least two availability zones (required by RDS), then create a DB subnet group.

aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxx \
--cidr-block 10.1.1.0/24 \
--availability-zone us-east-1a \
--region us-east-1 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=ha-platform-db-subnet-1a}]'

aws ec2 create-subnet \
--vpc-id vpc-xxxxxxxxx \
--cidr-block 10.1.2.0/24 \
--availability-zone us-east-1b \
--region us-east-1 \
--tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=ha-platform-db-subnet-1b}]'

aws rds create-db-subnet-group \
--db-subnet-group-name ha-platform-db-subnet \
--db-subnet-group-description "Isolated VPC subnet group for platform RDS" \
--subnet-ids subnet-xxxxxxxxx subnet-yyyyyyyyy \
--region us-east-1

Create a security group for the database​

Create a security group that allows inbound MariaDB traffic (port 3306) from the EKS cluster VPC CIDR range.

aws ec2 create-security-group \
--group-name ha-platform-db-sg \
--description "Allow MariaDB from EKS VPC via peering" \
--vpc-id vpc-xxxxxxxxx \
--region us-east-1

aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxxxxxx \
--protocol tcp --port 3306 \
--cidr 10.0.0.0/16 \
--region us-east-1

Create the RDS instance​

tip

Enable automated backups by adding --backup-retention-period 7 (or your preferred retention in days) to the command below. Without this, RDS does not retain automated snapshots and you must create manual snapshots for disaster recovery.

aws rds create-db-instance \
--engine mariadb \
--db-instance-identifier mariadb-ha-platform \
--allocated-storage 20 \
--region us-east-1 \
--db-instance-class db.t3.medium \
--master-username admin \
--master-user-password your-password \
--db-subnet-group-name ha-platform-db-subnet \
--vpc-security-group-ids sg-xxxxxxxxx \
--no-publicly-accessible \
--enable-iam-database-authentication

Wait for the instance to become available:

aws rds wait db-instance-available \
--db-instance-identifier mariadb-ha-platform \
--region us-east-1
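Once the instance is available, retrieve its endpoint hostname; it is used in the Kine data source, the connectivity test, and the database setup steps:

```shell
aws rds describe-db-instances \
  --db-instance-identifier mariadb-ha-platform \
  --region us-east-1 \
  --query 'DBInstances[0].Endpoint.Address' \
  --output text
```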

Create an IAM policy for RDS access​

Create an IAM policy that grants the rds-db:connect permission for the kine database user. Replace the db-XXXXXXXXXXXXXXXXXXXXXXXXXX placeholder in the Resource ARN with the RDS instance resource ID, which you can retrieve with aws rds describe-db-instances --db-instance-identifier mariadb-ha-platform --query 'DBInstances[0].DbiResourceId' --output text.

aws iam create-policy \
--policy-name RDSIAMAuthKine \
--policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-XXXXXXXXXXXXXXXXXXXXXXXXXX/kine"
    }
  ]
}'

Create an IAM role for Pod Identity​

Create an IAM role with a trust policy that allows the EKS Pod Identity agent to assume it.

aws iam create-role \
--role-name PlatformKineRDSRole \
--assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}'

aws iam attach-role-policy \
--role-name PlatformKineRDSRole \
--policy-arn arn:aws:iam::123456789012:policy/RDSIAMAuthKine

Install the EKS Pod Identity Agent​

Install the Pod Identity Agent add-on. Without this add-on, the Pod Identity association has no effect and pods cannot obtain IAM credentials.

aws eks create-addon \
--cluster-name platform-ha \
--addon-name eks-pod-identity-agent \
--region us-east-1

Verify the add-on is ACTIVE before proceeding:

aws eks describe-addon \
--cluster-name platform-ha \
--addon-name eks-pod-identity-agent \
--region us-east-1 \
--query 'addon.status'

Create Pod Identity association​

Associate the IAM role with the loft service account in the vcluster-platform namespace. This allows the platform pods to generate temporary IAM authentication tokens for the database connection.

aws eks create-pod-identity-association \
--cluster-name platform-ha \
--role-arn arn:aws:iam::123456789012:role/PlatformKineRDSRole \
--namespace vcluster-platform \
--service-account loft \
--region us-east-1
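To confirm the association was created for the expected namespace and service account:

```shell
aws eks list-pod-identity-associations \
  --cluster-name platform-ha \
  --region us-east-1
```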

For more details on IAM database authentication with EKS Pod Identity, see IAM database authentication.

Step 4 - Create VPC peering to the database​

Create a VPC peering connection between the EKS cluster VPC and the database VPC. This gives the cluster network access to the RDS instance.

Create peering connection​

aws ec2 create-vpc-peering-connection \
--vpc-id vpc-xxxxxxxxx \
--peer-vpc-id vpc-yyyyyyyyy \
--region us-east-1 \
--tag-specifications 'ResourceType=vpc-peering-connection,Tags=[{Key=Name,Value=db-vpc-to-eks-ha-platform}]'

Accept the peering connection​

aws ec2 accept-vpc-peering-connection \
--vpc-peering-connection-id pcx-xxxxxxxxx \
--region us-east-1
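The peering connection must reach the active state before traffic can flow. You can check its status with (same placeholder pcx ID as above):

```shell
aws ec2 describe-vpc-peering-connections \
  --vpc-peering-connection-ids pcx-xxxxxxxxx \
  --region us-east-1 \
  --query 'VpcPeeringConnections[0].Status.Code' \
  --output text
```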

Enable DNS resolution​

Enable DNS resolution on the peering connection so the RDS endpoint hostname resolves to its private IP from the EKS VPC.

aws ec2 modify-vpc-peering-connection-options \
--vpc-peering-connection-id pcx-xxxxxxxxx \
--requester-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
--accepter-peering-connection-options AllowDnsResolutionFromRemoteVpc=true \
--region us-east-1

Add routes​

Add routes so traffic can flow between the EKS VPC and the database VPC.

warning

Add routes to every route table whose subnets host EKS nodes, including public route tables. If your EKS nodes run in public subnets (the eksctl default), missing routes on the public route table cause database connection timeouts even though VPC peering is active.

Database VPC route table — route to the EKS VPC CIDR:

aws ec2 create-route \
--route-table-id rtb-xxxxxxxxx \
--destination-cidr-block 10.0.0.0/16 \
--vpc-peering-connection-id pcx-xxxxxxxxx \
--region us-east-1

EKS VPC route tables — route to the database VPC CIDR. Repeat for every route table associated with subnets that host EKS nodes:

aws ec2 create-route \
--route-table-id rtb-yyyyyyyyy \
--destination-cidr-block 10.1.0.0/16 \
--vpc-peering-connection-id pcx-xxxxxxxxx \
--region us-east-1

To list all route tables for a VPC and identify which subnets they serve:

aws ec2 describe-route-tables \
--filters "Name=vpc-id,Values=vpc-xxxxxxxxx" \
--region us-east-1 \
--query 'RouteTables[].{RouteTableId:RouteTableId,Name:Tags[?Key==`Name`].Value|[0],Subnets:Associations[].SubnetId}'

Verify connectivity​

Launch a test pod in the EKS cluster and verify it can reach the RDS endpoint:

kubectl run dbtest --image=busybox --restart=Never --rm -it -- \
nc -zv mariadb-ha-platform.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com 3306

The connection should report open. If it times out, verify that routes exist on the correct route tables and that the database security group allows ingress from the cluster's VPC CIDR.

Create the Kine database and IAM user​

Now that VPC peering is active, connect to the RDS instance through a temporary pod in the EKS cluster to create the Kine database and IAM-authenticated user.

kubectl run mariadb-client --image=mariadb:latest --restart=Never --rm -i -- \
mariadb -h mariadb-ha-platform.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com \
-u admin -pyour-password -e "
CREATE DATABASE IF NOT EXISTS kine;
CREATE USER IF NOT EXISTS 'kine'@'%' IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
GRANT ALL PRIVILEGES ON kine.* TO 'kine'@'%';
FLUSH PRIVILEGES;
"

Step 5 - Deploy vCluster platform​

Deploy the platform with multiple replicas and the external database configuration. Unlike a multi-region deployment, this setup needs no multiRegion block; only config.database and replicaCount are required.

Create a values file (platform-ha-values.yaml):

admin:
  email: admin@example.com

replicaCount: 3

config:
  loftHost: platform.example.com
  database:
    enabled: true
    dataSource: "mysql://kine@tcp(mariadb-ha-platform.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com:3306)/kine"
    identityProvider: "aws"
    extraArgs:
      - --datastore-max-open-connections=20
      # Set to 0 because IAM auth tokens expire, making idle connections stale.
      - --datastore-max-idle-connections=0

  # Cost control requires the embedded single-region database and is not
  # compatible with the external Kine backend.
  costControl:
    enabled: false

# Run multiple agent replicas for resilience on the connected host cluster.
agentValues:
  replicaCount: 3
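The dataSource string embeds no password: with identityProvider set to "aws", the platform obtains IAM auth tokens at connect time. A quick sanity check of the string format, using the placeholder endpoint from this guide:

```shell
RDS_ENDPOINT="mariadb-ha-platform.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"
DATA_SOURCE="mysql://kine@tcp(${RDS_ENDPOINT}:3306)/kine"
echo "${DATA_SOURCE}"
```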

Setting config.database.enabled=true with replicaCount > 1 automatically configures:

  • Embedded Kubernetes (Kine) with the external database
  • Leader election across replicas
  • RollingUpdate deployment strategy

Install the platform:

vcluster platform start \
--namespace vcluster-platform \
--kube-context arn:aws:eks:us-east-1:123456789012:cluster/platform-ha \
--values platform-ha-values.yaml \
--no-tunnel

Wait for all replicas to become ready:

kubectl --context arn:aws:eks:us-east-1:123456789012:cluster/platform-ha \
rollout status deployment/loft -n vcluster-platform

Step 6 - Configure HTTPS and Ingress​

Request an ACM certificate​

Request an AWS Certificate Manager (ACM) certificate for your domain.

aws acm request-certificate \
--region us-east-1 \
--domain-name "platform.example.com" \
--validation-method DNS

Get the certificate ARN from the output and describe the certificate to obtain DNS validation records:

aws acm describe-certificate \
--region us-east-1 \
--certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Add DNS validation CNAMEs in Route 53 and wait until certificate status is ISSUED.
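Instead of polling the certificate status manually, you can block until validation completes:

```shell
aws acm wait certificate-validated \
  --region us-east-1 \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```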

note

Instead of using AWS ACM, you can set up cert-manager in the cluster to issue the certificate.

Create the Ingress (ALB)​

Apply the Ingress manifest to provision an AWS ALB:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: loft
  namespace: vcluster-platform
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80},{"HTTPS":443}]'
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-1:123456789012:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/idle-timeout: "3600"
    alb.ingress.kubernetes.io/healthcheck-path: /healthz
    alb.ingress.kubernetes.io/success-codes: "200"
spec:
  ingressClassName: alb
  rules:
    - host: platform.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: loft
                port:
                  number: 80
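Provisioning the ALB can take a minute or two. Once it exists, read its DNS name from the Ingress status; you need this value for the Route 53 alias record in the next step:

```shell
kubectl get ingress loft -n vcluster-platform \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```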

Create a DNS record​

Create an A (Alias) record pointing your domain to the ALB:

aws route53 change-resource-record-sets \
--hosted-zone-id Z1234567890ABC \
--change-batch '{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "platform.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZXXXXXXXXXX",
          "DNSName": "k8s-vcluster-loft-xxxxxxxxxx-xxxxxxxxx.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}'

Result​

You now have a fully operational vCluster Platform external database deployment with:

  • One EKS cluster with three nodes.
  • Three platform replicas with leader election.
  • External Kine database (RDS MariaDB) in an isolated VPC with IAM-based authentication (Pod Identity).
  • VPC peering between the cluster VPC and the database VPC.
  • ALB with extended idle timeout and ACM certificate.
  • RollingUpdate deployment strategy for zero-downtime upgrades.
  • PodDisruptionBudget (created automatically by the Helm chart) ensuring at least one replica is always available.