Enterprise-Only Feature

This is an Enterprise feature. See our pricing plans or contact our sales team for more information.

Integrate EKS Pod Identity with vCluster

This tutorial guides you through integrating EKS Pod Identity with workloads running in your vCluster.

Setting up Pod Identity requires linking an AWS IAM role to the Kubernetes service account (KSA) used by your workloads. This KSA needs to be available in the host cluster in which your vCluster instance runs.

To achieve this setup, use the sync.toHost feature to expose the KSA in the host cluster, and use the platform API to retrieve the updated name of the KSA in the host cluster.

Prerequisites

This guide assumes you have the following prerequisites:

  • kubectl installed
  • aws CLI installed and configured
  • jq installed (used later to parse the platform API response)
  • An existing EKS cluster with the CSI driver set up, an IAM OIDC provider, and the Pod Identity agent deployed
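Before starting, a quick pre-flight check can save debugging time. The following helper is a hypothetical convenience, not part of the official guide; it only verifies that the required CLIs are on your PATH:

```shell
# Hypothetical pre-flight check: report any required CLI that is not installed.
require_tools() {
  missing=0
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Usage: abort early if anything is missing.
#   require_tools kubectl aws jq || exit 1
```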

Step-by-step guide

1. Start the platform and create an access key

In order to integrate your workloads with EKS Pod Identity, you'll need a platform instance running. If you don't have one already, follow the platform installation guide.

Once you're done, you'll need to create a new access key. This allows you to use the platform API. Follow this guide to create a new access key.

2. Set up variables

Define the necessary environment variables for your EKS cluster, service accounts, and authentication details.

#!/bin/bash

# Set up environment variables
export AWS_REGION="eu-central-1" # Replace with your AWS region
export CLUSTER_NAME="pod-identity-1" # Replace with your EKS cluster name
export SERVICE_ACCOUNT_NAME="demo-sa"
export SERVICE_ACCOUNT_NAMESPACE="default"
export VCLUSTER_NAME="my-vcluster"
export HOST=https://your.loft.host # Replace with your host
export ACCESS_KEY=abcd1234 # Replace with your access key
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

3. Create vCluster configuration

Create the vcluster.yaml file with the following content:

sync:
  toHost:
    serviceAccounts:
      enabled: true

4. Deploy vCluster

The vCluster CLI provides the most straightforward way to deploy and manage virtual clusters.

  1. Install the vCluster CLI:

     brew install loft-sh/tap/vcluster-experimental

    If you installed the CLI using brew install vcluster, you should brew uninstall vcluster and then install the experimental version. The binaries in the tap are signed using the Sigstore framework for enhanced security.

    Confirm that you've installed the correct version of the vCluster CLI.

    vcluster --version
  2. Deploy vCluster:

    Replace the values in the following command with your own:
    vcluster create my-vcluster --namespace team-x --values vcluster.yaml
    note

    After installation, vCluster automatically switches your Kubernetes context to the new virtual cluster. You can now run kubectl commands against the virtual cluster.

5. Connect to vCluster

Establish a connection to your vCluster instance:

vcluster connect ${VCLUSTER_NAME}

6. Create example workload

Create an example workload to list S3 buckets.

# Create example-workload.yaml content dynamically
cat <<EOF > example-workload.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3-list-buckets
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: s3-list-buckets
  template:
    metadata:
      labels:
        app: s3-list-buckets
    spec:
      serviceAccountName: demo-sa
      containers:
        - image: public.ecr.aws/aws-cli/aws-cli
          command:
            - "aws"
            - "s3"
            - "ls"
          name: aws-pod
EOF

kubectl apply -f example-workload.yaml
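kubectl apply returns before the pod is actually scheduled, so the workload may take a moment to start. A small poll loop can wait for the pod to come up; the retry helper below is a hypothetical convenience, not part of the official tooling:

```shell
# Hypothetical helper: rerun a command until it succeeds or attempts run out,
# sleeping 2 seconds between attempts.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Usage (after kubectl apply): wait up to ~60s for the pod to reach Running:
#   retry 30 sh -c 'kubectl get pods -l app=s3-list-buckets -n default | grep -q Running'
```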

7. Read updated name from platform API

Define a function to fetch the KSA name using curl, and use it to export the KSA_NAME environment variable.

# Define the function to get the KSA name using curl
get_ksa_name() {
  local vcluster_ksa_name=$1
  local vcluster_ksa_namespace=$2
  local vcluster_name=$3
  local host=$4
  local access_key=$5

  local resource_path="/kubernetes/management/apis/management.loft.sh/v1/translatevclusterresourcenames"
  local host_with_scheme=$([[ $host =~ ^(http|https):// ]] && echo "$host" || echo "https://$host")
  local sanitized_host="${host_with_scheme%/}"
  local full_url="${sanitized_host}${resource_path}"

  local response=$(curl -s -k -X POST "$full_url" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${access_key}" \
    -d @- <<EOF
{
  "spec": {
    "name": "${vcluster_ksa_name}",
    "namespace": "${vcluster_ksa_namespace}",
    "vclusterName": "${vcluster_name}"
  }
}
EOF
  )

  local status_name=$(echo "$response" | jq -r '.status.name')
  if [[ -z "$status_name" || "$status_name" == "null" ]]; then
    # Print the error to stderr so it is not captured into KSA_NAME
    echo "Error: Unable to fetch KSA name from response: $response" >&2
    exit 1
  fi
  echo "$status_name"
}

# Get the KSA name
export KSA_NAME=$(get_ksa_name "$SERVICE_ACCOUNT_NAME" "$SERVICE_ACCOUNT_NAMESPACE" "$VCLUSTER_NAME" "$HOST" "$ACCESS_KEY")
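As a sanity check, the name returned by the API normally follows vCluster's default translation pattern <name>-x-<namespace>-x-<vcluster-name>. The helper below only illustrates that shape (an assumption about the default sync configuration); the platform API response remains the authoritative source:

```shell
# Illustration only: vCluster's default host-cluster name translation,
# assumed to be <name>-x-<namespace>-x-<vcluster-name>.
expected_ksa_name() {
  echo "$1-x-$2-x-$3"
}

echo "Expected: $(expected_ksa_name demo-sa default my-vcluster)"
# → Expected: demo-sa-x-default-x-my-vcluster
```

If KSA_NAME differs significantly from this shape, double-check the vcluster.yaml sync settings.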

8. Create IAM policy and role for Pod Identity

Create an IAM policy granting the permissions your workload needs, and an IAM role that EKS Pod Identity can assume. Because the example workload runs aws s3 ls, the policy must include s3:ListAllMyBuckets.

cat >my-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    }
  ]
}
EOF

aws iam create-policy --policy-name my-policy --policy-document file://my-policy.json

cat >trust-relationship.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEksAuthToAssumeRoleForPodIdentity",
      "Effect": "Allow",
      "Principal": {
        "Service": "pods.eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
EOF

aws iam create-role --role-name my-role --assume-role-policy-document file://trust-relationship.json --description "my-role-description"

aws iam attach-role-policy --role-name my-role --policy-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:policy/my-policy

Create the pod identity association.

Namespace configuration

The namespace parameter depends on your vCluster deployment type:

  • Standalone vCluster (not using the platform): Use the namespace where vCluster is deployed
  • Platform-managed vCluster: The namespace follows the pattern loft-<project-name>-v-<vcluster-name>

For standalone vCluster deployments (deployed with vcluster create or Helm without the platform):

# Set the namespace where vCluster is deployed
export VCLUSTER_NAMESPACE="team-x" # Replace with your actual namespace

aws eks create-pod-identity-association \
  --cluster-name ${CLUSTER_NAME} \
  --role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/my-role \
  --namespace ${VCLUSTER_NAMESPACE} \
  --service-account ${KSA_NAME}
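For a platform-managed vCluster, the namespace can instead be derived from the pattern described in the note above. A minimal sketch, assuming a hypothetical project named my-project:

```shell
# Derive the host namespace for a platform-managed vCluster
# (pattern loft-<project-name>-v-<vcluster-name> from the note above).
platform_namespace() {
  echo "loft-$1-v-$2"
}

# Hypothetical example: vCluster "my-vcluster" in project "my-project"
export VCLUSTER_NAMESPACE="$(platform_namespace my-project my-vcluster)"
echo "$VCLUSTER_NAMESPACE"
# → loft-my-project-v-my-vcluster
```

Then run the same aws eks create-pod-identity-association command with this VCLUSTER_NAMESPACE.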

9. Verify the setup

Verify the setup by checking the workload logs. A successful run prints the S3 buckets in your account.

kubectl logs -l app=s3-list-buckets -n default

If the pod started before you created the Pod Identity association, restart the deployment (kubectl rollout restart deployment/s3-list-buckets -n default) so the pod is recreated with the injected credentials.