# Deploy auto nodes with quick start templates
Quick start templates provide a ready-to-go infrastructure-as-code deployment for Kubernetes clusters across the three major public clouds: AWS, GCP, and Azure.
They are built to:
- Provision essential cloud resources such as VPCs, networks, subnets, NAT gateways, and node instances.
- Enable full cloud provider integration by deploying the Cloud Controller Manager (CCM) and Container Storage Interface (CSI), ensuring control-plane, storage, and networking features work seamlessly.
- Speed up onboarding by getting clusters running with minimal manual effort.
## Get started immediately

To get started, deploy a vCluster with the following `config.yaml`, which provisions dynamic AWS nodes:
```yaml
privateNodes:
  enabled: true
  autoNodes:
    - provider: aws
      dynamic:
        - name: aws-1
          nodeTypeSelector:
            - property: instance-type
              operator: In
              values:
                - t3.medium
controlPlane:
  service:
    spec:
      type: LoadBalancer
networking:
  podCIDR: 10.64.0.0/16
  serviceCIDR: 10.128.0.0/16
```
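The `nodeTypeSelector` works like a Kubernetes label selector over the node types a provider defines: each term names a property, an operator, and a set of allowed values. The following is an illustrative Python sketch of that matching logic (the dict structures mirror the YAML fields; this is not vCluster's actual implementation):

```python
# Node types as a provider might define them (subset of the AWS example below).
node_types = [
    {"name": "t3.small", "properties": {"instance-type": "t3.small"}},
    {"name": "t3.medium", "properties": {"instance-type": "t3.medium"}},
    {"name": "t3.large", "properties": {"instance-type": "t3.large"}},
]

# Selector as written in the config.yaml above.
selector = [{"property": "instance-type", "operator": "In", "values": ["t3.medium"]}]

def matches(node_type, selector):
    """Return True if every selector term matches the node type's properties."""
    for term in selector:
        value = node_type["properties"].get(term["property"])
        if term["operator"] == "In" and value not in term["values"]:
            return False
    return True

eligible = [nt["name"] for nt in node_types if matches(nt, selector)]
print(eligible)  # ['t3.medium']
```

Only node types whose properties satisfy every selector term are eligible for the dynamic pool, which is why the selector values must come from the provider's `nodeTypes` list.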
## Prerequisites
Before deploying a vCluster with auto nodes, you need to prepare two things:
- A cloud-specific service account or role that Terraform can use.
- A NodeProvider configuration that sources one of the quick start templates.
### Terraform service account / role
Terraform requires cloud credentials with sufficient permissions to provision infrastructure.
The required permissions for each provider are documented in the corresponding quick start template repository.
### NodeProvider
It is recommended to use the predefined NodeProviders, which already reference the quick start templates and are ready to use out of the box. They can be selected directly from the vCluster Platform UI.
If you prefer to create your own NodeProvider, you can source one of the quick start templates directly:
See the following examples for AWS, Azure, and GCP:
**AWS**

```yaml
apiVersion: management.loft.sh/v1
kind: NodeProvider
metadata:
  name: auto-nodes-aws
spec:
  properties:
    vcluster.com/ccm-enabled: "true"
    region: "us-east-1"
  terraform:
    nodeEnvironmentTemplate:
      infrastructure:
        git:
          repository: https://github.com/loft-sh/vcluster-auto-nodes-aws.git
          tag: v0.1.0
          subPath: environment/infrastructure
      kubernetes:
        git:
          repository: https://github.com/loft-sh/vcluster-auto-nodes-aws.git
          tag: v0.1.0
          subPath: environment/kubernetes
    nodeTemplate:
      git:
        repository: https://github.com/loft-sh/vcluster-auto-nodes-aws.git
        tag: v0.1.0
        subPath: node
    nodeTypes:
    - name: t3.small
      resources:
        cpu: "2"
        memory: 2Gi
      properties:
        instance-type: t3.small
    - name: t3.medium
      resources:
        cpu: "2"
        memory: 4Gi
      properties:
        instance-type: t3.medium
    - name: t3.large
      resources:
        cpu: "2"
        memory: 8Gi
      properties:
        instance-type: t3.large
    - name: t3.xlarge
      resources:
        cpu: "4"
        memory: 16Gi
      properties:
        instance-type: t3.xlarge
    - name: t3.2xlarge
      resources:
        cpu: "8"
        memory: 32Gi
      properties:
        instance-type: t3.2xlarge
```
**Azure**

```yaml
apiVersion: management.loft.sh/v1
kind: NodeProvider
metadata:
  name: auto-nodes-azure
spec:
  properties:
    vcluster.com/ccm-enabled: "true"
    location: "eastus"
    resource-group: "dev"
    subscription-id: "f1b3f662-ec84-4c23-b590-681997a0f751"
  terraform:
    nodeEnvironmentTemplate:
      infrastructure:
        git:
          repository: https://github.com/loft-sh/vcluster-auto-nodes-azure.git
          tag: v0.1.0
          subPath: environment/infrastructure
      kubernetes:
        git:
          repository: https://github.com/loft-sh/vcluster-auto-nodes-azure.git
          tag: v0.1.0
          subPath: environment/kubernetes
    nodeTemplate:
      git:
        repository: https://github.com/loft-sh/vcluster-auto-nodes-azure.git
        tag: v0.1.0
        subPath: node
    nodeTypes:
    - name: standard-d2s-v5
      resources:
        cpu: "2"
        memory: 8Gi
      properties:
        instance-type: Standard_D2s_v5
    - name: standard-d4s-v5
      resources:
        cpu: "4"
        memory: 16Gi
      properties:
        instance-type: Standard_D4s_v5
    - name: standard-d8s-v5
      resources:
        cpu: "8"
        memory: 32Gi
      properties:
        instance-type: Standard_D8s_v5
    - name: standard-d16s-v5
      resources:
        cpu: "16"
        memory: 64Gi
      properties:
        instance-type: Standard_D16s_v5
```
**GCP**

```yaml
apiVersion: management.loft.sh/v1
kind: NodeProvider
metadata:
  name: auto-nodes-gcp
spec:
  properties:
    vcluster.com/ccm-enabled: "true"
    region: "europe-west1"
    project: "my-project"
  terraform:
    nodeEnvironmentTemplate:
      infrastructure:
        git:
          repository: https://github.com/loft-sh/vcluster-auto-nodes-gcp.git
          tag: v0.1.0
          subPath: environment/infrastructure
      kubernetes:
        git:
          repository: https://github.com/loft-sh/vcluster-auto-nodes-gcp.git
          tag: v0.1.0
          subPath: environment/kubernetes
    nodeTemplate:
      git:
        repository: https://github.com/loft-sh/vcluster-auto-nodes-gcp.git
        tag: v0.1.0
        subPath: node
    nodeTypes:
    - name: e2-small
      resources:
        cpu: "2"
        memory: 2Gi
      properties:
        instance-type: e2-small
    - name: e2-medium
      resources:
        cpu: "2"
        memory: 4Gi
      properties:
        instance-type: e2-medium
    - name: e2-standard-2
      resources:
        cpu: "2"
        memory: 8Gi
      properties:
        instance-type: e2-standard-2
    - name: e2-standard-4
      resources:
        cpu: "4"
        memory: 16Gi
      properties:
        instance-type: e2-standard-4
    - name: e2-standard-8
      resources:
        cpu: "8"
        memory: 32Gi
      properties:
        instance-type: e2-standard-8
    - name: e2-standard-16
      resources:
        cpu: "16"
        memory: 64Gi
      properties:
        instance-type: e2-standard-16
```
## Configure
You can configure the NodeProvider with the following options:
| Option | Default value | Description |
|---|---|---|
| `vcluster.com/ccm-enabled` | `true` | Enables deployment of the Cloud Controller Manager (CCM). |
| `vcluster.com/ccm-lb-enabled` | `true` | Enables the CCM service controller. If disabled, the CCM does not create LoadBalancer services. |
| `vcluster.com/csi-enabled` | `true` | Deploys the CSI driver and configures a default storage class (AWS EBS, GCP PD, Azure Disk). |
| `vcluster.com/vpc-cidr` | AWS: `10.0.0.0/16`, Azure: `10.5.0.0/16`, GCP: `10.10.0.0/16` | Sets the VPC CIDR range. Useful in multi-cloud scenarios to avoid CIDR conflicts. |
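When overriding `vcluster.com/vpc-cidr` in multi-cloud scenarios, the VPC ranges must not overlap with each other or with the cluster's `podCIDR` and `serviceCIDR`. The Python standard library's `ipaddress` module offers a quick sanity check (the names and ranges below are just the defaults from this page, used as an example):

```python
import ipaddress

# Ranges to check: provider VPC CIDRs plus the vCluster networking CIDRs.
cidrs = {
    "aws-vpc": "10.0.0.0/16",
    "gcp-vpc": "10.10.0.0/16",
    "podCIDR": "10.64.0.0/16",
    "serviceCIDR": "10.128.0.0/16",
}

networks = {name: ipaddress.ip_network(cidr) for name, cidr in cidrs.items()}

# Compare every pair once and collect any overlapping ranges.
conflicts = [
    (a, b)
    for a in networks
    for b in networks
    if a < b and networks[a].overlaps(networks[b])
]
print(conflicts)  # [] -> no overlapping ranges
```

An empty result means the ranges are disjoint; any tuple in `conflicts` names a pair of CIDRs that must be changed before deployment.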
Example vCluster `config.yaml`:
```yaml
privateNodes:
  enabled: true
  autoNodes:
    - provider: aws
      properties:
        vcluster.com/ccm-enabled: "false"
        vcluster.com/csi-enabled: "false"
        vcluster.com/vpc-cidr: "10.30.0.0/16"
      dynamic:
        - name: aws-1
          nodeTypeSelector:
            - property: instance-type
              operator: In
              values:
                - t3.medium
controlPlane:
  service:
    spec:
      type: LoadBalancer
networking:
  podCIDR: 10.64.0.0/16
  serviceCIDR: 10.128.0.0/16
```
## Deploy
**Cloud** – Deploy on a cloud provider:
```yaml
privateNodes:
  enabled: true
  autoNodes:
    - provider: aws
      dynamic:
        - name: aws-1
          nodeTypeSelector:
            - property: instance-type
              operator: In
              values:
                - t3.medium
controlPlane:
  service:
    spec:
      type: LoadBalancer
networking:
  podCIDR: 10.64.0.0/16
  serviceCIDR: 10.128.0.0/16
```
**Hybrid cloud** – Integrate your on-premises data center with the cloud:
```yaml
privateNodes:
  enabled: true
  vpn:
    enabled: true
    nodeToNode:
      enabled: true
  autoNodes:
    - provider: aws
      properties:
        vcluster.com/csi-enabled: "false"
      dynamic:
        - name: aws-1
          nodeTypeSelector:
            - property: instance-type
              operator: In
              values:
                - t3.medium
    - provider: bare-metal # your own bare-metal provider
      static:
        - name: bare-metal
          nodeTypeSelector:
            - property: instance-type
              operator: In
              values:
                - server-1
networking:
  podCIDR: 10.64.0.0/16
  serviceCIDR: 10.128.0.0/16
```
**Multi-cloud** – Combine multiple cloud providers within one vCluster:
```yaml
privateNodes:
  enabled: true
  vpn:
    enabled: true
    nodeToNode:
      enabled: true
  autoNodes:
    - provider: aws
      properties:
        vcluster.com/csi-enabled: "false"
      dynamic:
        - name: aws-1
          nodeLabels:
            topology.ebs.csi.aws.com/zone: us-east-1a
          nodeTypeSelector:
            - property: instance-type
              operator: In
              values:
                - t3.medium
    - provider: gcp
      properties:
        vcluster.com/ccm-lb-enabled: "false"
        vcluster.com/csi-enabled: "false"
      dynamic:
        - name: gcp-1
          nodeTypeSelector:
            - property: instance-type
              operator: In
              values:
                - e2-medium
networking:
  podCIDR: 10.64.0.0/16
  serviceCIDR: 10.128.0.0/16
```
## Security considerations
Currently, when deploying Cloud Controller Manager (CCM) and Container Storage Interface (CSI) with Auto Nodes, permissions are granted through instance profiles (AWS, Azure) or service accounts/roles (GCP). This means all worker nodes inherit the same permissions as CCM and CSI. As a result, any pod running in the cluster could potentially access the same cloud permissions. Refer to the full list of permissions below for details.
Cluster administrators should be aware of the following:
- Shared permissions – All pods may gain the same access level as CCM and CSI.
- Mitigation – To avoid this, cluster admins can disable CCM and CSI deployment. In that case, instance profiles will not receive additional permissions. However, responsibility for deploying and configuring CCM and CSI securely then falls to the cluster admin.
- AWS-specific note – On AWS, pods run with host networking disabled by default. This prevents them from directly requesting the same credentials as CCM and CSI.
- Review granted permissions – The full list of granted permissions can be found in the Terraform IAM definitions in each provider's quick start template repository.
Security-sensitive environments should carefully review which permissions are granted to clusters and consider whether CCM/CSI should be disabled and managed manually.
## Limitations
### Hybrid-cloud and multi-cloud
When running a vCluster across multiple providers, some additional configuration is required:
- CSI drivers – Install and configure the appropriate CSI driver for each cloud provider.
- StorageClasses – Use `allowedTopologies` to restrict provisioning to valid zones/regions.
- NodePools – Add topology-specific labels (e.g., `topology.ebs.csi.aws.com/zone` on AWS or `topology.gke.io/zone` on GCP) so workloads are scheduled on nodes with matching storage availability.
The following examples show a NodePool configuration with topology labels, plus matching AWS EBS and GCP PD StorageClasses:
**NodePool**

```yaml
privateNodes:
  enabled: true
  vpn:
    enabled: true
    nodeToNode:
      enabled: true
  autoNodes:
    - provider: aws
      properties:
        vcluster.com/csi-enabled: "false"
      dynamic:
        - name: aws-1
          nodeLabels:
            topology.ebs.csi.aws.com/zone: us-east-1a
          nodeTypeSelector:
            - property: instance-type
              operator: In
              values:
                - t3.medium
    - provider: gcp
      properties:
        vcluster.com/ccm-lb-enabled: "false"
        vcluster.com/csi-enabled: "false"
      dynamic:
        - name: gcp-1
          nodeLabels:
            topology.gke.io/zone: us-central1-a
          nodeTypeSelector:
            - property: instance-type
              operator: In
              values:
                - e2-medium
```
**AWS EBS StorageClass**

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.ebs.csi.aws.com/zone
        values:
          - us-east-1a
```
**GCP PD StorageClass**

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcp-standard
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: pd-standard
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.gke.io/zone
        values:
          - us-central1-a
```
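The `allowedTopologies` field restricts provisioning to nodes whose topology labels match the StorageClass. Conceptually, the check looks like the following Python sketch (illustrative only, not the CSI provisioner's code; node names are hypothetical):

```python
# matchLabelExpressions from the AWS EBS StorageClass above.
storage_class_topologies = [
    {"key": "topology.ebs.csi.aws.com/zone", "values": ["us-east-1a"]}
]

# Hypothetical nodes in a multi-cloud vCluster, keyed by their topology labels.
nodes = {
    "aws-node": {"topology.ebs.csi.aws.com/zone": "us-east-1a"},
    "gcp-node": {"topology.gke.io/zone": "us-central1-a"},
}

def allowed(node_labels, topologies):
    """A node satisfies the StorageClass if every expression matches its labels."""
    return all(node_labels.get(t["key"]) in t["values"] for t in topologies)

eligible = [name for name, labels in nodes.items() if allowed(labels, storage_class_topologies)]
print(eligible)  # ['aws-node']
```

Because the GCP node lacks the AWS zone label, volumes from the `aws-gp3` class bind only to pods scheduled on AWS nodes, which is exactly why the NodePool example adds matching `nodeLabels`.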
### Region changes
Changing the region of an existing node pool is not supported.
To switch regions, create a new vCluster and migrate your workloads.
### Dynamic nodes limit
When you lower the `limits` property of dynamic nodes, any nodes that already exceed the new limit are not removed automatically.
Administrators are responsible for manually scaling down or deleting the excess nodes.
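Identifying how many nodes must go is simple arithmetic: the excess is the current node count minus the new limit. A minimal sketch (node names and the newest-first removal order are hypothetical; choose your own drain order):

```python
# Current nodes in the hypothetical "aws-1" dynamic pool, oldest first.
existing_nodes = ["aws-1-node-1", "aws-1-node-2", "aws-1-node-3", "aws-1-node-4"]
new_limit = 2

# Number of nodes above the new limit that must be removed manually.
excess = max(0, len(existing_nodes) - new_limit)

# One possible policy: remove the newest nodes first.
to_remove = existing_nodes[-excess:] if excess else []
print(to_remove)  # ['aws-1-node-3', 'aws-1-node-4']
```

Drain each listed node before deleting it so workloads are rescheduled onto the remaining nodes.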