Version: v4.3

Add existing virtual clusters

Any running virtual cluster can be added to the Platform at one of two integration levels, depending on how much control you want to hand over.

Related documentation

For a conceptual understanding of externally deployed vs. Platform-deployed virtual clusters, see Externally deployed virtual clusters.

Integration levels

When adding an existing vCluster to the Platform, you can choose between two integration levels based on your needs:

Management modes

  • Visibility and access only - Platform provides monitoring and access features while you continue managing the vCluster with your existing tools
  • Full Platform management - Platform takes over complete lifecycle management, enabling all Platform features like templates, sleep mode, and auto-delete
Technical details

The Platform uses a VirtualClusterInstance resource to track virtual clusters. The management level is controlled automatically based on how you add the vCluster. For more details, see the VirtualClusterInstance API reference.
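
For example, you can read an instance's spec.external field to see which mode it is in (a minimal sketch; the instance name my-vcluster and the default project namespace p-default are assumptions):

# "true" means externally managed (visibility and access only);
# empty or "false" means the Platform manages the lifecycle
kubectl get virtualclusterinstances.storage.loft.sh my-vcluster \
  -n p-default \
  -o jsonpath='{.spec.external}'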

Add for visibility and access

Use this approach when you want to keep managing the vCluster with your existing tools (Helm, Argo CD, etc.) while gaining visibility and access through the Platform, including SSO integration.

  1. Connect to the Platform: to use any vcluster platform command, you must be connected to the Platform. See the authentication documentation for more details.

    If you started the Platform with vcluster platform start, you were connected automatically.

    If you have a running Platform but aren't sure whether you are connected, you can log in again. Before running this command, make sure your kube-context is set to the cluster running the Platform.

    Log in to the Platform:
    vcluster platform login
  2. Add Virtual Cluster to the Platform: before running this command, be sure that your kube-context is set to the host cluster running the vCluster that you want to add.

    Set the following variables to your values, then run the command:
    VCLUSTER_NAME=my-vcluster
    PROJECT=default
    IMPORT_NAME="My vCluster"
    vcluster platform add vcluster $VCLUSTER_NAME \
    --project $PROJECT \
    --import-name "$IMPORT_NAME"

    This creates a VirtualClusterInstance with external: true, meaning the Platform will not manage the lifecycle. You can confirm this with the check shown after these steps.

  3. Add Host Cluster to the Platform (Optional): for additional visibility of the host cluster in the Platform UI, you can also add the host cluster itself. Before running this command, be sure that your kube-context is set to the host cluster.

    Set the following variables to your values, then run the command:
    CLUSTER=my-host-cluster
    DISPLAY_NAME="My Host Cluster"
    vcluster platform add cluster $CLUSTER --display-name "$DISPLAY_NAME"

    This step is optional and provides additional management capabilities for the host cluster infrastructure.
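
To confirm the vCluster was added in visibility and access mode, check the new instance's external field (a sketch assuming the default project; substitute the instance name the listing shows):

# List instances in the project namespace to find the new resource
kubectl get virtualclusterinstances.storage.loft.sh -n p-default

# For visibility and access mode, the external field should be true
kubectl get virtualclusterinstances.storage.loft.sh <instance-name> \
  -n p-default \
  -o jsonpath='{.spec.external}'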

With visibility and access mode:

  • ✅ View vCluster status in Platform UI
  • ✅ Access through Platform with SSO integration
  • ✅ Basic monitoring and metrics
  • ✅ Continue managing with original deployment tool
  • ❌ No Platform lifecycle management
  • ❌ No templates, sleep mode, or auto-delete

Enable full Platform management

Use this approach when you want the Platform to take over complete management of your externally deployed vCluster, unlocking all Platform features.

Why enable platform management?

Organizations often have existing deployment pipelines using tools like Helm, Argo CD, or Terraform. This option allows you to:

  • Maintain existing CI/CD workflows for initial deployment
  • Transition to Platform management for ongoing operations
  • Enable Platform features like sleep mode, SSO, and centralized management
  • Integrate existing vCluster deployments into Platform-managed projects with quotas and policies

Post-deployment platform registration

After deploying your vCluster using your existing tools, add it to the Platform and enable full management:

Replace the placeholder names (vc-name, vc-namespace) with your values, then run the commands:
# Assuming you have already deployed a vCluster using Helm, Argo CD, or another tool

# Add to Platform (creates VirtualClusterInstance with external: true)
vcluster platform add vcluster vc-name \
--namespace vc-namespace \
--project default

# Enable full Platform management
kubectl patch virtualclusterinstances.storage.loft.sh vc-name \
-n p-default \
--type='json' \
-p='[{"op": "replace", "path": "/spec/external", "value": false}]'
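
Because the management mode is carried entirely by the spec.external field, the handover can in principle be reversed with the inverse patch. This is a hedged sketch; verify the behavior on your Platform version before relying on it:

# Hand lifecycle management back to your external tooling
# (assumption: reverting is supported by your Platform version)
kubectl patch virtualclusterinstances.storage.loft.sh vc-name \
  -n p-default \
  --type='json' \
  -p='[{"op": "replace", "path": "/spec/external", "value": true}]'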

Helm wrapper for VirtualClusterInstance

Create a Helm chart that deploys the VirtualClusterInstance directly, allowing the Platform to manage the vCluster from creation. This approach is ideal when you want Helm to deploy the custom resource while the Platform handles the actual vCluster.

templates/virtualclusterinstance.yaml
apiVersion: storage.loft.sh/v1
kind: VirtualClusterInstance
metadata:
  name: {{ .Values.name }}
  namespace: p-{{ .Values.project | default "default" }} # Project namespace
spec:
  clusterRef:
    cluster: {{ .Values.cluster | default "loft-cluster" }}
    namespace: {{ .Values.namespace }}
    virtualCluster: {{ .Values.vclusterName }}
  external: false # Platform manages lifecycle
  owner:
    user: {{ .Values.owner }}
  template:
    helmRelease:
      chart:
        version: {{ .Values.vclusterVersion }}
      values: |
        {{- .Values.vclusterValues | nindent 8 }}

Namespace requirements

The Platform must create and manage the namespace when external: false. Pre-existing namespaces will cause the error: "namespace exists and is not managed"
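
To use the wrapper chart, a minimal values file and install command might look like this (a sketch; the chart directory name vcluster-instance-chart and all concrete values are placeholders that mirror the keys consumed by the template above):

values.yaml
name: my-vcluster
project: default
cluster: loft-cluster
namespace: my-vcluster-ns # must not pre-exist; the Platform creates it
vclusterName: my-vcluster
owner: admin
vclusterVersion: "0.27.0"
vclusterValues: |
  controlPlane:
    backingStore:
      etcd:
        deploy:
          enabled: true

Install the chart with the values file:

helm install my-vcluster ./vcluster-instance-chart -f values.yaml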

GitOps with VirtualClusterInstance

For GitOps deployments, create the VirtualClusterInstance directly in your Git repository. Tools like Argo CD or Flux apply the custom resource, and the Platform manages the virtual cluster:

gitops/vcluster-instance.yaml
apiVersion: storage.loft.sh/v1
kind: VirtualClusterInstance
metadata:
  name: gitops-vcluster
  namespace: p-default # Platform project namespace
spec:
  clusterRef:
    cluster: loft-cluster
    namespace: gitops-vcluster-ns
    virtualCluster: gitops-cluster
  external: false # Platform manages from creation
  owner:
    user: admin
  template:
    helmRelease:
      chart:
        version: "0.27.0"
      values: |
        controlPlane:
          backingStore:
            etcd:
              deploy:
                enabled: true
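
If you use Argo CD, a minimal Application pointing at this manifest might look like the following (a sketch; the repository URL, path, and Argo CD namespace are placeholders):

argocd/application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-vcluster
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo.git # placeholder repository
    targetRevision: main
    path: gitops
  destination:
    server: https://kubernetes.default.svc
    namespace: p-default # Platform project namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
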
GitOps best practices
  • Store VirtualClusterInstance manifests in your Git repository
  • Use Kustomize or Helm for environment-specific configurations
  • Let Argo CD or Flux handle the manifest lifecycle while the Platform manages the vCluster
  • See Platform GitOps installation guide for complete setup

Implementation example

Prerequisites

  • Administrator access to a Kubernetes cluster: See Accessing Clusters with kubectl for more information. Your current kube-context must have administrative privileges, which you can verify with kubectl auth can-i create clusterrole -A

    info

    To obtain a kube-context with admin access, ensure you have the necessary credentials and permissions for your Kubernetes cluster. This typically involves using kubectl config commands or authenticating through your cloud provider's CLI tools.

  • helm installed: Helm v3.10 is required for deploying the platform. Refer to the Helm Installation Guide if you need to install it.

  • kubectl installed: Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.


Implementation steps

  1. Deploy vCluster with Helm

    Deploy your vCluster using Helm or your preferred deployment tool; a hedged helm install sketch follows these steps. For detailed Platform installation instructions, see the Platform Helm installation guide.

    info

    For complete vCluster configuration options, see the vCluster configuration reference.

  2. Add the vCluster to Platform

    Replace the placeholder values (vc-name, vc-namespace) with your own:
    vcluster platform add vcluster vc-name \
    --namespace vc-namespace \
    --project default \
    --import-name "imported-vc"

    This creates a VirtualClusterInstance with external: true.

  3. Enable full Platform management

    Replace the instance name and project namespace with your values:
    kubectl patch virtualclusterinstances.storage.loft.sh \
    imported-vc -n p-default \
    --type='json' -p='[{"op": "replace", "path": "/spec/external", "value": false}]'

    The Platform now manages the vCluster lifecycle.

  4. Verify Platform features are active

    Run the following to verify, substituting your instance name:
    # Check VirtualClusterInstance status
    kubectl get virtualclusterinstances.storage.loft.sh \
    -n p-default imported-vc -o yaml

    # Verify in Platform UI
    vcluster platform list vcluster
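
For step 1, a typical externally managed vCluster deployment with Helm might look like the following (a sketch using the public loft.sh chart repository; the release name, namespace, and version are placeholders):

helm upgrade --install vc-name vcluster \
  --repo https://charts.loft.sh \
  --namespace vc-namespace \
  --create-namespace \
  --version 0.27.0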

Verification

Verify that the virtual cluster is correctly managed by the Platform. You can check this using the CLI:

Substitute your instance name and project namespace, then run:
# List all VirtualClusterInstances
kubectl get virtualclusterinstances.storage.loft.sh -A

# Verify external field is false (empty means false)
kubectl get virtualclusterinstances.storage.loft.sh vc-name \
-n p-default \
-o jsonpath='{.spec.external}'

# Check Platform-managed virtual clusters
vcluster list --driver=platform

# Access Platform UI
vcluster platform ui

Platform features available with full management

Once external: false is set, the following Platform features become available:

  • Full lifecycle management: Platform handles upgrades and configuration changes
  • Templates: Use and enforce vCluster templates for consistency
  • Auto-delete: Automatic cleanup after inactivity periods
  • Sleep mode: Automatic sleep/wake based on activity to save resources
  • Project assignment: Organize virtual clusters into projects with quotas
  • SSO integration: User access through Platform SSO providers
  • Audit logging: Centralized audit trails for compliance
  • Resource quotas: Enforce resource limits via project settings
  • UI management: Full control through Platform UI
  • Backup/restore: Platform-managed backup policies

Limitations and considerations

Architectural considerations

Single source of truth principle: when external: false, the Platform becomes the sole manager of the virtual cluster. This means:

  • Avoid making changes through the original deployment tool (Helm, Argo CD)
  • Configuration updates should be made through the Platform UI or by modifying the VirtualClusterInstance (see the sketch after this list)
  • Competing reconciliation loops can cause conflicts and unpredictable behavior
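
For example, a chart version bump for a Platform-managed vCluster would be applied to the VirtualClusterInstance rather than to the original Helm release (a sketch; the field path follows the template structure shown earlier, and the names are placeholders):

# Update the vCluster chart version through the Platform-owned resource
kubectl patch virtualclusterinstances.storage.loft.sh vc-name \
  -n p-default \
  --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/helmRelease/chart/version", "value": "0.27.0"}]'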

Technical requirements

  • Namespace management: The Platform must create and manage the namespace when external: false. Pre-existing namespaces will cause deployment failures with error: "namespace exists and is not managed"
  • Reconciliation conflicts: If external tools continue to manage the virtual cluster after setting external: false, both systems may fight to enforce their desired state
  • Version compatibility: Requires vCluster v0.20.0 or later for full Platform integration

Alternative approaches

For specific scenarios where full Platform management isn't suitable:

  1. Hybrid management: Keep external: true to maintain external lifecycle management while still gaining Platform features like SSO and monitoring

  2. GitOps compatibility: Use VirtualClusterInstance resources in your GitOps repository, letting Argo CD deploy them while the Platform manages the resulting virtual clusters

  3. Gradual migration: Start with external management, evaluate Platform features, then transition to full management when ready

Next steps