Manage Private Nodes
This feature is only available when using private worker nodes.
Upgrade vCluster
Upgrade vCluster control plane
Do not upgrade multiple minor versions at the same time for the control plane, as outlined in the Kubernetes Version Skew Policy. Instead, always upgrade a single minor version, wait until the cluster becomes healthy, and then upgrade to the next minor version. For example: v1.26 -> v1.27 -> v1.28.
To upgrade the vCluster control plane, update the Kubernetes init container version in your vcluster.yaml file.
After the change, vCluster upgrades the control plane automatically. If automatic worker node upgrades are configured, those nodes are also upgraded.
To change the Kubernetes version, add the following to your vcluster.yaml:
...
controlPlane:
  statefulSet:
    image:
      tag: v1.31.1 # Or any other Kubernetes version
...
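After editing vcluster.yaml, apply the change to the running vCluster. As a sketch, assuming the virtual cluster is managed with the vCluster CLI and is named my-vcluster (both assumptions):
vcluster create my-vcluster --upgrade -f vcluster.yaml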
Upgrade vCluster workers
Follow the Kubernetes Version Skew Policy for worker upgrades.
There are two ways worker upgrades can be done:
- (Recommended) Automatically by vCluster when the control plane is updated
- Manually using the vCluster CLI or kubeadm
Use automatic upgrades
Automatic worker node upgrades are enabled by default. vCluster upgrades each node when it detects a version mismatch between the control plane and the worker node. By default, one node is upgraded at a time. The upgrade starts 2 minutes after the control plane is upgraded. There is also a grace period for newly added worker nodes: the upgrade does not start if the node was created less than 2 minutes ago.
You can exclude a node from automatic upgrades by labeling it with vcluster.loft.sh/skip-auto-upgrade: true.
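For example, with kubectl (the node name my-node is a placeholder):
kubectl label node my-node vcluster.loft.sh/skip-auto-upgrade=true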
The upgrade Pod runs directly on the node and completes the following steps:
- Downloads the Kubernetes binary bundle from the vCluster control plane.
- Replaces the kubeadm binary.
- Runs kubeadm upgrade node.
- Cordons the node.
- Replaces other binaries such as containerd, kubelet, etc.
- Restarts containerd and kubelet if necessary.
- Uncordons the node.
Node upgrades typically do not restart pods. However, depending on the Kubernetes version change, restarts might occur in some cases.
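To verify that an upgrade has rolled out, you can check the kubelet version each node reports:
kubectl get nodes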
Perform manual upgrades
You can upgrade a node using the vCluster CLI by running the following command:
vcluster node upgrade my-node
Alternatively, you can upgrade manually by following the official kubeadm guide in the Kubernetes documentation.
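In condensed form, the manual flow mirrors the automatic steps above. This is only a sketch: the node name my-node is a placeholder, and the binary or package upgrade commands depend on your OS, so consult the kubeadm guide for those:
# From a machine with cluster access: drain the node
kubectl drain my-node --ignore-daemonsets
# On the node, after replacing the kubeadm binary with the target version:
kubeadm upgrade node
# Replace the kubelet binary with the target version, then restart it
systemctl restart kubelet
# From a machine with cluster access: make the node schedulable again
kubectl uncordon my-node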
Reuse a node
A node can be reused for another virtual cluster by using the --force-join flag. Before joining the node to a new vCluster, it should be deleted from the original vCluster.
# Delete the node from vCluster
vcluster node delete my-node
# Create a new token from the vCluster you want to join
vcluster token create
# Then on the node itself run this to join it
curl ... | sh -s -- --force-join
If you are rejoining the original vCluster, the kube-flannel and kube-proxy pods are restarted. All workloads will be restarted.
Remove vCluster
When deleting a vCluster with private nodes, it's recommended to remove each node from the vCluster before deleting the vCluster itself.
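Once all worker nodes have been removed (see the next section), the vCluster itself can be deleted, for example with the CLI (the name my-vcluster is a placeholder):
vcluster delete my-vcluster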
Remove worker node
To remove a node from the cluster, follow the standard procedure as with any other Kubernetes distro: first cordon the node, then drain it, which can be done using kubectl commands.
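For example (the node name is a placeholder, and the drain flags depend on your workloads):
kubectl cordon <node-name>
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data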
The vCluster CLI also provides a command to delete worker nodes from the cluster.
- Cordon and drain the node:
vcluster node delete <node-name>
- Once no more workloads are running on the node, log in to the node terminal and run the installation script that you used originally (see the Install section) with the --reset-only flag:
curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -s -- --reset-only
- Then, stop the vCluster systemd service:
systemctl stop vcluster.service
- Now, you can safely remove the service definition and other vCluster-related files:
rm -rf /var/lib/vcluster && rm /etc/systemd/system/vcluster.service
If you shut down, delete, or otherwise make the node instance unreachable before running vcluster node delete, the drain step will fail. To proceed, add --drain=false to the delete command.
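For example (my-node is a placeholder):
vcluster node delete my-node --drain=false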
Clean up orphaned node
There may be cases where the vCluster is deleted before all worker nodes were removed. In these scenarios, the kubelet, kube-proxy, and other components running on the node still try to connect to the now-deleted vCluster control plane.
The node can be cleaned up by using any join script with the --reset-only flag.
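For example, reusing the join script from the removal steps above:
curl -sfLk https://<vcluster-endpoint>/node/join?token=<token> | sh -s -- --reset-only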
Load Docker images to a node
There may be instances where you want to load an image directly to the private node.
Before loading any images, the node has to be in a Ready state and part of a virtual cluster. The command pulls the image onto the machine that is running the command and saves it as a .tar archive. It then creates a pod on the private node, copies the archive to the pod using kubectl cp, and the pod imports the image onto the node.
vcluster node load-image my-node --image nginx:latest
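To confirm the import, one option is to list the node's images with containerd's bundled ctr tool, assuming containerd is the runtime on the node (an assumption based on the upgrade steps above):
ctr -n k8s.io images ls | grep nginx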