Deploy in high availability
This feature is only available for the following:

- Running the control plane as a binary for vCluster Standalone, which uses private nodes.

Overview
By default, vCluster Standalone is installed on a single initial control plane node. This deployment method is recommended for ephemeral use cases (e.g. dev environments, CI/CD), but for production it's recommended to run vCluster with more redundancy. Deploy vCluster Standalone with multiple control plane nodes (i.e. in high availability (HA)) to make the virtual cluster more resilient to failures.
Control plane nodes are added to the cluster one at a time, starting with an initial control plane node.
When deploying vCluster Standalone, the assets required to install the control plane are located in the GitHub releases of vCluster.
Predeployment configuration options
Backing store must be embedded etcd
When running vCluster Standalone in HA, the only supported backing store is embedded etcd, which must be explicitly enabled on the initial node.
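In vcluster.yaml, this corresponds to the following setting, shown here in isolation (it also appears in the full example further below):

```yaml
controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true # Required for HA
```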
Control Plane Node Roles
Decide whether the control plane node should also act as a worker node.
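If the control plane node should also run workloads, enable the standalone joinNode option (it appears commented out in the full example below):

```yaml
controlPlane:
  standalone:
    enabled: true
    joinNode:
      enabled: true # This control plane node also joins the cluster as a worker
```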
Worker Nodes
With vCluster Standalone, worker nodes can only be private nodes. Since there is no host cluster, there is no concept of host nodes.
Prerequisites
- Access to nodes that satisfy the node requirements
Install Initial Control Plane Node
All steps are performed on the initial control plane node.
Save a vcluster.yaml configuration file for vCluster Standalone on the control plane node.

Create a vcluster.yaml to enable HA for vCluster Standalone:

```bash
cat <<EOF > /etc/vcluster/vcluster.yaml
controlPlane:
  standalone:
    enabled: true
    # Optional: the control plane node is also considered a worker node
    # joinNode:
    #   enabled: true
  backingStore:
    etcd:
      embedded:
        enabled: true # Required for HA
privateNodes:
  enabled: true
EOF
```

Warning: Adding additional control plane nodes is not supported unless you follow these high availability configuration steps.
Run the installation script on the control plane node:
Install vCluster Standalone on the control plane node:

```bash
# Become root first so the exported variable is available in the shell that runs the install
sudo su -
export VCLUSTER_VERSION="v0.29.0"
curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh | sh -s -- --vcluster-name standalone
```

Check that the control plane node is ready.
After installation, the kubeconfig is automatically configured on the control plane node. The kubectl context is set to interact with your new vCluster Standalone instance.
Run these commands on the control plane node:
Check node status:

```bash
kubectl get nodes
```
Expected output:
```
NAME               STATUS   ROLES                  AGE   VERSION
ip-192-168-3-131   Ready    control-plane,master   11m   v1.32.1
```

Verify cluster components are running:

```bash
kubectl get pods -A
```
Pods should include:
- Flannel: CNI for container networking
- CoreDNS: DNS service for the cluster
- KubeProxy: Network traffic routing and load balancing
- Konnectivity: Secure control plane to worker node communication
- Local Path Provisioner: Dynamic storage provisioning
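Rather than eyeballing the pod list, you can block until the node and system pods report Ready; a minimal sketch using kubectl wait (the timeout value is arbitrary):

```bash
# Block until the node reports Ready (up to 5 minutes)
kubectl wait --for=condition=Ready node --all --timeout=300s

# Block until all pods in all namespaces report Ready
kubectl wait --for=condition=Ready pods --all -A --timeout=300s
```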
Available flags to use in the install script
Several optional flags can be passed to the install script.
| Flag | Description |
|---|---|
| `--vcluster-name` | Name of the vCluster instance |
| `--vcluster-version` | Specific vCluster version to install |
| `--config` | Path to the vcluster.yaml configuration file |
| `--skip-download` | Skip downloading the vCluster binary (use existing) |
| `--skip-wait` | Exit without waiting for vCluster to be ready |
| `--extra-env` | Additional environment variables for vCluster |
| `--platform-access-key` | Access key for vCluster Platform integration |
| `--platform-host` | vCluster Platform host URL |
| `--platform-insecure` | Skip TLS verification for the Platform connection |
| `--platform-instance-name` | Instance name in vCluster Platform |
| `--platform-project` | Project name in vCluster Platform |
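As an illustration, the flags from the table can be combined to pin a version and point at the config file written earlier; this reuses the `VCLUSTER_VERSION` variable from the install step, and the values are placeholders:

```bash
curl -sfL https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/install-standalone.sh \
  | sh -s -- \
      --vcluster-name standalone \
      --vcluster-version "${VCLUSTER_VERSION}" \
      --config /etc/vcluster/vcluster.yaml
```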
Add additional control plane nodes
After installing the initial control plane node, vCluster Standalone is already running and new nodes only need to join the cluster.
Create token for control plane nodes
To join control plane nodes, create a token from the vCluster to provide access and permissions. A single token can be used to join any number of nodes, or you can create a separate token for each node.
By default, the token expires after 1 hour. The token is stored as a secret prefixed with bootstrap-token- in the kube-system namespace. The expiry timestamp is stored under the expiration key in the secret.
```bash
# Create a token
/var/lib/vcluster/bin/vcluster-cli token create --control-plane --expires=1h
```
The output provides a command to run on your control plane node:
```bash
curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -
```
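To check when a token expires, you can read the bootstrap token secret directly; a sketch, assuming kubectl access on the control plane node (the secret name below is a hypothetical example):

```bash
# List bootstrap token secrets in kube-system
kubectl -n kube-system get secrets --field-selector type=bootstrap.kubernetes.io/token

# Decode the expiration timestamp of one token secret (name is illustrative)
kubectl -n kube-system get secret bootstrap-token-abc123 \
  -o jsonpath='{.data.expiration}' | base64 -d
```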
Join each control plane node
For each control plane node that you want to join to the vCluster, run the join command on that node.
The new node will automatically download the necessary binaries and configuration, and join the cluster as an additional control plane node.
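Once the nodes have joined, you can confirm they are registered from any machine with kubectl access to the vCluster; a quick check using the standard node role label:

```bash
# List only control plane nodes; each joined node should appear as Ready
kubectl get nodes -l node-role.kubernetes.io/control-plane
```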
Kubeconfig
To access the cluster from other machines, copy the kubeconfig from /var/lib/vcluster/kubeconfig.yaml on the control plane node, or use the vCluster CLI to generate access credentials.
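For example, to use the cluster from a workstation; a minimal sketch (the host, user, and local path are placeholders, and the server address inside the kubeconfig may need to point at an endpoint reachable from your machine, see controlPlane.endpoint below):

```bash
# Copy the kubeconfig off the control plane node
scp root@<control-plane-ip>:/var/lib/vcluster/kubeconfig.yaml ~/.kube/vcluster-standalone.yaml

# Point kubectl at the copied kubeconfig
export KUBECONFIG=~/.kube/vcluster-standalone.yaml
kubectl get nodes
```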
Add Worker Nodes
After the vCluster control plane is up and running, you can add dedicated worker nodes.
The API Server endpoint must be reachable from the worker nodes. You can additionally configure controlPlane.endpoint and controlPlane.proxy.extraSANs in your vCluster configuration to expose the API Server.
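As an illustration, such a configuration might look like the following in vcluster.yaml (the hostname and port are placeholders):

```yaml
controlPlane:
  endpoint: "<externally-reachable-host>:<port>" # API Server address reachable from worker nodes
  proxy:
    extraSANs:
      - <externally-reachable-host> # Extra certificate SAN so TLS is valid for this name
privateNodes:
  enabled: true
```

Worker nodes then join the same way as control plane nodes, using a node token, presumably created like the control plane token above but without the --control-plane flag.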