
Register clusters with Terraform

Automate host cluster registration to vCluster Platform using Terraform. This guide shows you how to programmatically register clusters during infrastructure provisioning, eliminating manual registration steps.

Overview

When provisioning infrastructure with Terraform, you can automate the complete cluster registration workflow. This approach registers the cluster in vCluster Platform and installs the agent, making the cluster immediately available for virtual cluster deployment.

Terraform provider deprecation

The vCluster Platform Terraform provider is deprecated. This guide provides an alternative approach using the Kubernetes provider and vCluster Platform API to achieve the same automation goals.

Why automate cluster registration

Manual cluster registration requires running vcluster platform add cluster after each cluster deployment, which creates friction in automated workflows. By automating registration through Terraform, you can:

  • Provision and register clusters in a single Terraform apply
  • Maintain infrastructure as code for the complete cluster lifecycle
  • Eliminate manual steps in CI/CD pipelines
  • Ensure consistent cluster configuration across environments

Prerequisites

Before you begin, ensure you have:

  • Terraform installed: Version 1.0 or higher
  • vCluster Platform instance: Running and accessible
  • Admin access key: Required to create cluster-specific access keys
  • kubectl access: To the vCluster Platform management cluster
  • Target cluster: The cluster you want to register (or Terraform configuration to create it)

Admin access key

The automation requires an admin-level access key to create cluster-specific access keys through the API. You can create an access key through the vCluster Platform UI or CLI.
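
For example, a minimal CLI flow; the key name admin-automation-key is a placeholder, and vcluster platform create accesskey is the same command shown in the bootstrap section later in this guide:

Create an admin access key
vcluster platform login https://platform.example.com
vcluster platform create accesskey admin-automation-key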

Secure credential storage

Store the admin access key securely using a secrets management solution. Never commit access keys to version control. Consider using:

  • HashiCorp Vault for centralized secret management
  • Kubernetes Secrets with encryption at rest
  • Provider secret managers (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager)
  • Environment variables in CI/CD systems with secret masking

How it works

The registration process follows three steps:

  1. Create cluster resource: Register the cluster in vCluster Platform's management cluster
  2. Retrieve cluster-specific access key: Get the unique key for this cluster via API
  3. Install platform agent: Deploy the agent using the cluster-specific key

The cluster-specific access key authenticates the agent to vCluster Platform with the correct cluster identity. Using an admin key for the agent fails because the agent expects a cluster-scoped credential.

Step-by-step guide

Platform hostname format

The platform_host variable (exported as TF_VAR_platform_host) should contain only the hostname, without the https:// protocol prefix. For example: platform.example.com, not https://platform.example.com. The Terraform configuration adds the protocol automatically.

  1. Set your environment variables for the values used throughout this guide:

    Set environment variables
    # Set variables
    export TF_VAR_platform_host="platform.example.com"
    export TF_VAR_admin_access_key="your-admin-key"
    export TF_VAR_platform_context="platform-context"
    export TF_VAR_target_cluster_context="target-cluster-context"
    export TF_VAR_cluster_name="prod-cluster-1"
    export TF_VAR_cluster_display_name="Production Cluster 1"
  2. Create the Cluster resource configuration.

    This registers the cluster with Platform and prepares it for agent connection.

    cluster-registration.tf
    # Configure Kubernetes provider for vCluster Platform management cluster
    provider "kubernetes" {
      alias          = "platform"
      config_path    = "~/.kube/config"
      config_context = var.platform_context
    }

    # Variables
    variable "platform_context" {
      description = "kubectl context for the vCluster Platform management cluster"
      type        = string
    }

    # Create the cluster resource
    resource "kubernetes_manifest" "platform_cluster" {
      provider = kubernetes.platform

      manifest = {
        apiVersion = "management.loft.sh/v1"
        kind       = "Cluster"
        metadata = {
          name = var.cluster_name
        }
        spec = {
          displayName         = var.cluster_display_name
          networkPeer         = true
          managementNamespace = "vcluster-platform"
        }
      }
    }

    # Variables
    variable "cluster_name" {
      description = "Internal cluster identifier"
      type        = string
    }

    variable "cluster_display_name" {
      description = "Display name shown in vCluster Platform UI"
      type        = string
    }

    Configuration options:

    • name: Internal identifier for the cluster (must be DNS-compatible; see the validation sketch after this list)
    • displayName: Human-readable name shown in the UI
    • networkPeer: Enable network peering between clusters
    • managementNamespace: Namespace where the agent will be installed
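
    If you want Terraform to reject non-DNS-compatible names at plan time, one option is to extend the cluster_name variable declared earlier with a validation block. This is a minimal sketch; the regex approximates RFC 1123 label rules and is an assumption, not a constraint published by the platform:

    Optional name validation
    variable "cluster_name" {
      description = "Internal cluster identifier"
      type        = string

      # Assumed approximation of DNS label rules (RFC 1123)
      validation {
        condition     = can(regex("^[a-z0-9]([-a-z0-9]*[a-z0-9])?$", var.cluster_name))
        error_message = "cluster_name must be a lowercase DNS-compatible label (alphanumerics and hyphens)."
      }
    }
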
  3. Configure the HTTP data source to retrieve the cluster-specific access key.

    After creating the cluster resource, retrieve the cluster-specific access key using the vCluster Platform API.

    cluster-access-key.tf
    # Data source to retrieve cluster access key
    data "http" "cluster_access_key" {
      url = "https://${var.platform_host}/kubernetes/management/apis/management.loft.sh/v1/clusters/${var.cluster_name}/accesskey"

      request_headers = {
        Authorization = "bearer ${var.admin_access_key}"
      }

      depends_on = [kubernetes_manifest.platform_cluster]
    }

    # Parse the response
    locals {
      access_key_response = jsondecode(data.http.cluster_access_key.response_body)
      cluster_access_key  = local.access_key_response.accessKey
      loft_host           = local.access_key_response.loftHost
    }

    # Variables
    variable "platform_host" {
      description = "vCluster Platform hostname without protocol (e.g., 'platform.example.com' not 'https://platform.example.com')"
      type        = string
    }

    variable "admin_access_key" {
      description = "Admin access key for API authentication"
      type        = string
      sensitive   = true
    }

    The API endpoint returns a JSON response containing:

    • accessKey: Cluster-specific token for agent authentication
    • loftHost: vCluster Platform host URL
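
    For reference, the response body looks roughly like this, trimmed to the fields used here (values are illustrative placeholders, and the actual response may include additional fields):

    Example response
    {
      "accessKey": "a1b2c3d4e5f6...",
      "loftHost": "https://platform.example.com"
    }
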
  4. Configure the Helm release to install the Platform agent.

    Install the vCluster Platform agent on your target cluster using the cluster-specific access key.

    agent-installation.tf
    # Configure Kubernetes provider for target cluster
    provider "kubernetes" {
      alias          = "target"
      config_path    = "~/.kube/config"
      config_context = var.target_cluster_context
    }

    provider "helm" {
      alias = "target"
      kubernetes {
        config_path    = "~/.kube/config"
        config_context = var.target_cluster_context
      }
    }

    # Get platform version for agent
    data "http" "platform_version" {
      url = "https://${var.platform_host}/version"
    }

    locals {
      platform_version = jsondecode(data.http.platform_version.response_body).version
    }

    # Install vCluster Platform agent
    resource "helm_release" "platform_agent" {
      provider         = helm.target
      name             = "vcluster-platform"
      repository       = "https://charts.loft.sh/"
      chart            = "vcluster-platform"
      version          = local.platform_version
      namespace        = "vcluster-platform"
      create_namespace = true

      set {
        name  = "agentOnly"
        value = "true"
      }

      set {
        name  = "url"
        value = "https://${var.platform_host}"
      }

      set {
        name  = "token"
        value = local.cluster_access_key
      }

      depends_on = [data.http.cluster_access_key]
    }

    # Variables
    variable "target_cluster_context" {
      description = "kubectl context for the target cluster"
      type        = string
    }

    What happens: the agent installs in the vcluster-platform namespace, connects to the platform using the cluster-specific key, and establishes a secure tunnel. The cluster becomes available in the vCluster Platform UI within moments.
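
    To confirm the agent is running before moving on, you can check the pods on the target cluster, using the same context name you exported earlier:

    Check agent pods
    kubectl get pods -n vcluster-platform --context target-cluster-context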

  5. Initialize Terraform.

    Initialize Terraform
    terraform init

    This downloads the required provider plugins (Kubernetes, Helm, HTTP).

  6. Review the execution plan.

    Preview changes
    terraform plan

    Verify that Terraform will:

    • Create the Cluster resource in Platform
    • Retrieve the cluster-specific access key via API
    • Install the Platform agent via Helm
  7. Apply the configuration.

    Apply configuration
    terraform apply

    Type yes when prompted. This executes all three steps: creating the cluster resource, retrieving the access key, and installing the agent.
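
    In CI/CD pipelines, where no interactive prompt is available, the standard non-interactive form is:

    Non-interactive apply
    terraform apply -auto-approve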

  8. Verify the cluster in Platform UI.

    Navigate to Clusters in the Platform interface. Your cluster should appear with "Connected" status within moments.
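
    You can also check from the command line; assuming your CLI session is logged in to the platform, the new cluster should appear in the output of:

    List registered clusters
    vcluster platform list clusters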

Configuration files reference

The following sections show the detailed Terraform configuration for each component.

Quick reference - Complete Terraform configuration

Copy and paste this complete configuration; it reads its values from the TF_VAR_ environment variables you exported in step 1:

main.tf
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.23"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.11"
    }
    http = {
      source  = "hashicorp/http"
      version = "~> 3.4"
    }
  }
}

# Provider for vCluster Platform management cluster
provider "kubernetes" {
  alias          = "platform"
  config_path    = "~/.kube/config"
  config_context = var.platform_context
}

# Provider for target cluster
provider "kubernetes" {
  alias          = "target"
  config_path    = "~/.kube/config"
  config_context = var.target_cluster_context
}

provider "helm" {
  alias = "target"
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = var.target_cluster_context
  }
}

# Step 1: Create cluster resource
resource "kubernetes_manifest" "cluster" {
  provider = kubernetes.platform

  manifest = {
    apiVersion = "management.loft.sh/v1"
    kind       = "Cluster"
    metadata = {
      name = var.cluster_name
    }
    spec = {
      displayName         = var.cluster_display_name
      networkPeer         = true
      managementNamespace = "vcluster-platform"
    }
  }
}

# Step 2: Get cluster access key
data "http" "cluster_key" {
  url = "https://${var.platform_host}/kubernetes/management/apis/management.loft.sh/v1/clusters/${var.cluster_name}/accesskey"

  request_headers = {
    Authorization = "bearer ${var.admin_access_key}"
  }

  depends_on = [kubernetes_manifest.cluster]
}

locals {
  key_response = jsondecode(data.http.cluster_key.response_body)
}

# Get platform version
data "http" "version" {
  url = "https://${var.platform_host}/version"
}

locals {
  version = jsondecode(data.http.version.response_body).version
}

# Step 3: Install agent
resource "helm_release" "agent" {
  provider         = helm.target
  name             = "vcluster-platform"
  repository       = "https://charts.loft.sh/"
  chart            = "vcluster-platform"
  version          = local.version
  namespace        = "vcluster-platform"
  create_namespace = true

  set {
    name  = "agentOnly"
    value = "true"
  }

  set {
    name  = "url"
    value = "https://${var.platform_host}"
  }

  set {
    name  = "token"
    value = local.key_response.accessKey
  }

  depends_on = [data.http.cluster_key]
}

# Variables
variable "platform_host" {
  description = "vCluster Platform hostname without protocol (e.g., platform.example.com)"
  type        = string
}

variable "platform_context" {
  description = "kubectl context for platform management cluster"
  type        = string
}

variable "target_cluster_context" {
  description = "kubectl context for target cluster"
  type        = string
}

variable "cluster_name" {
  description = "Cluster identifier"
  type        = string
}

variable "cluster_display_name" {
  description = "Display name in UI"
  type        = string
}

variable "admin_access_key" {
  description = "Admin access key"
  type        = string
  sensitive   = true
}

# Outputs
output "cluster_registered" {
  value = "Cluster ${var.cluster_name} registered successfully"
}

Security considerations

Access key management

The admin access key provides full platform access. Protect it using these practices:

Store in secrets management - Use a dedicated secrets solution rather than environment variables or files (see the Vault sketch after this list):

  • HashiCorp Vault with dynamic credentials
  • Provider secret managers with automatic rotation
  • Kubernetes Secrets with encryption at rest and RBAC controls
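
For example, here is a minimal sketch that reads the bootstrap key from HashiCorp Vault's KV v2 engine instead of a TF_VAR_ environment variable. The Vault address, mount, secret path, and key name are assumptions for illustration:

Read the admin key from Vault
provider "vault" {
  address = "https://vault.example.com" # assumed Vault endpoint
}

data "vault_kv_secret_v2" "platform_admin" {
  mount = "secret"                  # assumed KV v2 mount
  name  = "vcluster-platform/admin" # assumed secret path
}

locals {
  # Reference this in place of var.admin_access_key
  admin_access_key = data.vault_kv_secret_v2.platform_admin.data["access_key"]
}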

Use scoped keys when possible - While this workflow requires an admin key for the initial setup, consider using scoped access keys for other automation tasks.

Rotate regularly - Establish a rotation schedule for access keys:

  • Set expiration dates on keys
  • Automate rotation through your secrets management system
  • Revoke keys immediately when team members leave

Audit access - Monitor access key usage:

  • Enable audit logging for API calls
  • Review access key usage patterns
  • Alert on unusual activity

Network security

The agent establishes an outbound connection to vCluster Platform. Ensure:

  • Platform endpoint uses TLS (never set insecure = true in production)
  • Network policies allow outbound HTTPS from the agent namespace
  • Firewall rules permit the agent to reach the platform endpoint
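
If the target cluster enforces default-deny egress, you can manage the exception in the same Terraform configuration. This is an illustrative sketch, assuming your CNI enforces NetworkPolicy; it allows outbound HTTPS and DNS from the whole agent namespace, so tighten the selectors for production:

Allow agent egress
resource "kubernetes_network_policy" "agent_egress" {
  provider = kubernetes.target

  metadata {
    name      = "allow-agent-egress"
    namespace = "vcluster-platform"
  }

  spec {
    # Applies to all pods in the agent namespace
    pod_selector {}
    policy_types = ["Egress"]

    # HTTPS to the platform endpoint
    egress {
      ports {
        port     = 443
        protocol = "TCP"
      }
    }

    # DNS resolution
    egress {
      ports {
        port     = 53
        protocol = "UDP"
      }
    }
  }
}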

Known limitations

Bootstrap credential requirement

This automation requires an admin access key to bootstrap the process. This creates a "chicken and egg" problem: you need credentials to create cluster-specific credentials.

Solutions:

  1. Pre-provision keys: Create the admin access key manually before running Terraform:

    # Create key through UI or CLI
    vcluster platform create accesskey bootstrap-key
    # Store in secrets management
  2. Use secrets management: Store the bootstrap key in your secrets management system:

    • HashiCorp Vault: use the Vault provider to retrieve keys
    • Provider platforms: use native Terraform data sources for secret managers
    • External Secrets Operator: sync keys from external systems to Kubernetes
  3. Manual initial registration: For the very first cluster or platform instance, register it manually, then use this automation for subsequent clusters.

Provider contexts

The configuration requires two kubernetes provider contexts: one for the platform management cluster and one for the target cluster. Ensure both contexts are configured in your kubeconfig before running Terraform.
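
A quick way to confirm both contexts are present in your kubeconfig:

List kubeconfig contexts
kubectl config get-contexts platform-context target-cluster-context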

Troubleshoot common issues

Agent fails to connect

Symptom: agent pod shows authentication errors in logs.

Cause: using admin access key instead of cluster-specific key.

Solution: verify you're using the key from the API endpoint (/clusters/<name>/accesskey), not a user access key.

Cluster resource creation fails

Symptom: kubernetes_manifest resource fails with permission denied.

Cause: kubectl context doesn't have permissions to create Cluster resources.

Solution: ensure your platform context has admin permissions or appropriate RBAC for Cluster resources.
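
A quick way to test this is kubectl auth can-i; the resource group comes from the manifest's apiVersion (management.loft.sh):

Check RBAC permissions
kubectl auth can-i create clusters.management.loft.sh --context platform-context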

Cannot retrieve access key

Symptom: http data source fails with 401 or 403 status code.

Cause: admin access key invalid or lacks permissions.

Solution: verify the access key works with curl:

Verify access key with curl
curl -s "https://platform.example.com/kubernetes/management/apis/management.loft.sh/v1/clusters/prod-cluster-1/accesskey" \
-H "Authorization: bearer your-admin-key"

Next steps

After registering your cluster, you can: