Version: v0.29 Stable

Deploy in high availability

Limited vCluster Tenancy Configuration Support

This feature is only available for the following:

Running the control plane as a binary for vCluster Standalone, which uses private nodes.

Overview

By default, vCluster Standalone is installed on a single initial control plane node. This deployment method works well for ephemeral use cases (e.g. dev environments, CI/CD), but for production it's recommended to run vCluster with more redundancy. Deploy vCluster Standalone with multiple control plane nodes (i.e. in high availability (HA)) to make the virtual cluster more resilient to failures.

Control plane nodes are added to the cluster one at a time, starting with an initial control plane node.

When deploying vCluster Standalone, the assets required to install the control plane are located in the GitHub releases of vCluster.

Predeployment configuration options

Backing store must be embedded etcd

When running vCluster Standalone in HA, the only option for the backing store is embedded etcd, which needs to be specifically enabled from the initial node.
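In vcluster.yaml terms, enabling embedded etcd looks like the following minimal fragment (it is a subset of the full HA configuration shown in the installation steps below):

```yaml
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true
```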

Control plane node roles

Decide whether each control plane node will also act as a worker node.
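As a sketch, making a control plane node also schedule workloads is controlled by the standalone joinNode setting; this mirrors the commented-out joinNode block in the installation example below:

```yaml
controlPlane:
  standalone:
    enabled: true
    joinNode:
      enabled: true # control plane node also acts as a worker node
```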

Worker nodes

With vCluster Standalone, worker nodes can only be private nodes. Since there is no host cluster, there is no concept of host nodes.

Prerequisites

Install the initial control plane node

Control Plane Node

All steps are performed on the initial control plane node.

  1. Save a vcluster.yaml configuration file for vCluster Standalone on the control plane node.

    Create a vcluster.yaml to enable HA for vCluster Standalone
    cat <<EOF > /etc/vcluster/vcluster.yaml
    controlPlane:
      standalone:
        enabled: true
        # Optional: the control plane node will also be considered a worker node
        # joinNode:
        #   enabled: true
      backingStore:
        etcd:
          embedded:
            enabled: true # Required for HA
    privateNodes:
      enabled: true
    EOF
    warning

    Adding additional control plane nodes is not supported unless you follow these high availability configuration steps.

  2. Run the installation script on the control plane node:

    Install vCluster Standalone on control plane node
    export VCLUSTER_VERSION="v0.29.0"

    sudo su -
    curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh | sh -s -- --vcluster-name standalone
  3. Check that the control plane node is ready.

    After installation, the kubeconfig is automatically configured on the control plane node. The kubectl context is set to interact with your new vCluster Standalone instance.

    Run these commands on the control plane node:

    Check node status
    kubectl get nodes

    Expected output:

    NAME               STATUS   ROLES                  AGE   VERSION
    ip-192-168-3-131   Ready    control-plane,master   11m   v1.32.1

    Verify cluster components are running
    kubectl get pods -A

    Pods should include:

    • Flannel: CNI for container networking
    • CoreDNS: DNS service for the cluster
    • KubeProxy: Network traffic routing and load balancing
    • Konnectivity: Secure control plane to worker node communication
    • Local Path Provisioner: Dynamic storage provisioning

Available flags to use in the install script

There are several flags available that can be added to the script.

Flag                       Description
--vcluster-name            Name of the vCluster instance
--vcluster-version         Specific vCluster version to install
--config                   Path to the vcluster.yaml configuration file
--skip-download            Skip downloading the vCluster binary (use an existing one)
--skip-wait                Exit without waiting for vCluster to be ready
--extra-env                Additional environment variables for vCluster
--platform-access-key      Access key for vCluster Platform integration
--platform-host            vCluster Platform host URL
--platform-insecure        Skip TLS verification for the Platform connection
--platform-instance-name   Instance name in vCluster Platform
--platform-project         Project name in vCluster Platform
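As a sketch, an invocation might combine several of these flags. The command below is composed and printed rather than executed, and the --config path shown is illustrative:

```shell
# Hypothetical sketch: compose an installer invocation with extra flags.
# The command is echoed, not run, so nothing is downloaded here.
VCLUSTER_VERSION="v0.29.0"
INSTALL_URL="https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh"
INSTALL_ARGS="--vcluster-name standalone --config /etc/vcluster/vcluster.yaml --skip-wait"
echo "curl -sfL ${INSTALL_URL} | sh -s -- ${INSTALL_ARGS}"
```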

Add additional control plane nodes

After installing the initial control plane node, vCluster Standalone is already running and new nodes only need to join the cluster.

Create token for control plane nodes

To join control plane nodes, create a token from the vCluster to grant access and permissions. A single token can be used to join any number of nodes, or you can create a separate token for each node.

By default, the token expires after 1 hour. The token is stored as a secret prefixed with bootstrap-token- in the kube-system namespace, and the expiry timestamp is stored under the expiration key in that secret.
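Because the expiration timestamp lives in secret data, it is stored base64-encoded like any other secret value. A minimal sketch of decoding it (the sample value below is illustrative, not a real token expiry):

```shell
# Hypothetical sketch: decode the base64-encoded expiration value from the
# bootstrap-token secret. The value here is illustrative.
EXPIRATION_B64="MjAyNS0wMS0wMVQxMjowMDowMFo="
echo "$EXPIRATION_B64" | base64 -d   # prints: 2025-01-01T12:00:00Z
```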

Create a token for control plane nodes
# Create a token
/var/lib/vcluster/bin/vcluster-cli token create --control-plane --expires=1h

The output provides a command to run on your control plane node:

Example output from creating a token
curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -

Join each control plane node

For each control plane node that you want to join to the cluster, run the command on that node.

The new node will automatically download the necessary binaries and configuration, and join the cluster as an additional control plane node.

Kubeconfig

After installation, the kubeconfig is automatically configured on the control plane node. The kubectl context is set to interact with your new vCluster Standalone instance.

To access the cluster from other machines, copy the kubeconfig from /var/lib/vcluster/kubeconfig.yaml on the control plane node or use the vCluster CLI to generate access credentials.
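A minimal sketch of accessing the cluster from another machine, assuming you copy the kubeconfig to a path of your choice (the host and destination path here are illustrative):

```shell
# Hypothetical sketch: copy the kubeconfig to a workstation and point kubectl
# at it. The host and destination path are illustrative.
# scp root@<control-plane-host>:/var/lib/vcluster/kubeconfig.yaml ~/.kube/vcluster-standalone.yaml
export KUBECONFIG="$HOME/.kube/vcluster-standalone.yaml"
echo "$KUBECONFIG"
```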

Add worker nodes

After the vCluster control plane is up and running, you can add dedicated worker nodes.

The API server endpoint must be reachable from the worker nodes. You can additionally configure controlPlane.endpoint and controlPlane.proxy.extraSANs in your vCluster configuration to expose the API server.
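A hedged sketch of what that configuration might look like, assuming a reachable DNS name for the control plane (the address below is illustrative):

```yaml
controlPlane:
  endpoint: vcluster.example.com:443 # illustrative; use your reachable address
  proxy:
    extraSANs:
      - vcluster.example.com
```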