Day 3: Multi-Node vind Clusters: Real Scheduling, Real Node Drains
Yesterday we got started with a single-node vind cluster. That’s great for basic development, but if you want to test pod scheduling, node affinity, anti-affinity, topology constraints, or node drains, you need multiple nodes.
With KinD, multi-node configs work but you’re still limited to local Docker containers with no external node support. vind gives you the same multi-node Docker setup, plus the option to add real cloud nodes later (we’ll cover that in Day 4).
Today, let’s create a 4-node cluster and put it through its paces.
Create a multi-node.yaml file:

```yaml
experimental:
  docker:
    nodes:
      - name: worker-1
      - name: worker-2
      - name: worker-3
```

That’s it. This tells vind to create 3 additional worker nodes alongside the control plane. Each worker runs as its own Docker container with kubelet, kube-proxy, and Flannel.

Command:

```bash
vcluster create multi-node -f multi-node.yaml
```

Output:

```
12:57:42 info Using vCluster driver 'docker' to create your virtual clusters, which means the CLI is managing Docker-based virtual clusters locally
12:57:42 info If you prefer to use helm or the vCluster platform API instead, use '--driver helm' or '--driver platform', or run 'vcluster use driver helm' or 'vcluster use driver platform' to change the default
12:57:42 info Ensuring environment for vCluster multi-node...
12:57:43 done Created network vcluster.multi-node
12:57:47 warn Load balancer type services are not supported inside the vCluster because this command was executed with insufficient privileges. To enable load balancer type services, run this command with sudo
12:57:48 info Will connect vCluster multi-node to platform...
12:57:49 info Starting vCluster standalone multi-node
12:57:51 info Adding node worker-1 to vCluster multi-node
12:57:52 info Joining node vcluster.node.multi-node.worker-1 to vCluster multi-node...
12:58:17 info Adding node worker-2 to vCluster multi-node
12:58:17 info Joining node vcluster.node.multi-node.worker-2 to vCluster multi-node...
12:58:24 info Adding node worker-3 to vCluster multi-node
12:58:24 info Joining node vcluster.node.multi-node.worker-3 to vCluster multi-node...
12:58:31 done Successfully created virtual cluster multi-node
12:58:31 info Finding docker container vcluster.cp.multi-node...
12:58:31 info Waiting for vCluster kubeconfig to be available...
12:58:32 info Waiting for vCluster to become ready...
12:58:32 done vCluster is ready
12:58:32 done Switched active kube context to vcluster-docker_multi-node
- Use `vcluster disconnect` to return to your previous kube context
- Use `kubectl get namespaces` to access the vcluster
```

Each node takes about 10 seconds to join. Let’s verify:
Command:

```bash
kubectl get nodes -o wide
```

Output:

```
NAME         STATUS   ROLES                  AGE    VERSION
multi-node   Ready    control-plane,master   122m   v1.35.0
worker-1     Ready    <none>                 122m   v1.35.0
worker-2     Ready    <none>                 122m   v1.35.0
worker-3     Ready    <none>                 122m   v1.35.0
```

Four nodes: one control plane and three workers. Each has its own IP on the Docker network, running Kubernetes v1.35.0. This looks exactly like a real multi-node cluster.
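Before going further, it can be handy to assert that every node reports Ready rather than eyeballing the table. A minimal shell sketch (it assumes the default `kubectl get nodes` layout, where STATUS is the second column):

```bash
#!/bin/sh
# Count nodes whose STATUS column (2nd field) is exactly "Ready".
ready=$(kubectl get nodes --no-headers | awk '$2 == "Ready"' | wc -l | tr -d ' ')
total=$(kubectl get nodes --no-headers | wc -l | tr -d ' ')
echo "${ready}/${total} nodes Ready"
# Fail fast in scripts if any node is not Ready.
[ "$ready" -eq "$total" ] || exit 1
```

With the 4-node cluster above, this prints `4/4 nodes Ready` and exits 0.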
Let’s deploy 6 replicas and see how Kubernetes distributes them:

```bash
kubectl create deployment web --image=nginx:latest --replicas=6
```

```
deployment.apps/web created
```

After a few seconds:

```bash
kubectl get pods -o wide
```

Output:

```
NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
web-ff44d897b-5ffqt   1/1     Running   0          8s    10.244.5.2   worker-3     <none>           <none>
web-ff44d897b-7vm76   1/1     Running   0          8s    10.244.2.3   worker-1     <none>           <none>
web-ff44d897b-hpf9d   1/1     Running   0          8s    10.244.5.3   worker-3     <none>           <none>
web-ff44d897b-l4c7t   1/1     Running   0          8s    10.244.4.2   worker-2     <none>           <none>
web-ff44d897b-p276z   1/1     Running   0          8s    10.244.2.2   worker-1     <none>           <none>
web-ff44d897b-z77gm   1/1     Running   0          8s    10.244.0.4   multi-node   <none>           <none>
```

Look at the NODE column: pods are distributed across all 4 nodes:

- worker-1: 2 pods (10.244.2.x subnet)
- worker-2: 1 pod (10.244.4.x subnet)
- worker-3: 2 pods (10.244.5.x subnet)
- multi-node (control plane): 1 pod (10.244.0.x subnet)
Each node has its own Flannel subnet. The Kubernetes scheduler is doing real scheduling across real (containerized) nodes.
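To see that distribution at a glance instead of reading the table row by row, a small awk one-liner works (a sketch; it assumes field 7 of `kubectl get pods -o wide` is the NODE column, as in the output above):

```bash
# Tally pods per node: field 7 of `kubectl get pods -o wide` is NODE.
kubectl get pods -o wide --no-headers | awk '{count[$7]++} END {for (n in count) print n, count[n]}'
```

For the deployment above this prints one line per node, e.g. `worker-1 2`, `worker-2 1`, and so on.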
This is where multi-node really shines. Let’s drain worker-3 and watch pods get rescheduled:
Command:

```bash
kubectl drain worker-3 --ignore-daemonsets --delete-emptydir-data
```

Output:

```
Warning: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-fpr8n, kube-system/kube-proxy-dsg8b
evicting pod default/web-ff44d897b-hpf9d
evicting pod default/web-ff44d897b-5ffqt
pod/web-ff44d897b-5ffqt evicted
pod/web-ff44d897b-hpf9d evicted
node/worker-3 drained
```

Both pods on worker-3 were evicted. Where did they go?
Command:

```bash
kubectl get pods -o wide
```

Output:

```
NAME                  READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
web-ff44d897b-7vm76   1/1     Running   0          20m   10.244.2.3   worker-1     <none>           <none>
web-ff44d897b-hchpq   1/1     Running   0          19m   10.244.4.3   worker-2     <none>           <none>
web-ff44d897b-l4c7t   1/1     Running   0          20m   10.244.4.2   worker-2     <none>           <none>
web-ff44d897b-p276z   1/1     Running   0          20m   10.244.2.2   worker-1     <none>           <none>
web-ff44d897b-pc4cx   1/1     Running   0          19m   10.244.0.5   multi-node   <none>           <none>
web-ff44d897b-z77gm   1/1     Running   0          20m   10.244.0.4   multi-node   <none>           <none>
```

The scheduler created replacement pods on worker-2 and the control plane node. Zero pods on worker-3. This is exactly how it works in production.
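To confirm the node really is empty without scanning the full listing, you can filter on the NODE column (a sketch reusing the field-7 assumption from `kubectl get pods -o wide`):

```bash
# Show only pods whose NODE column is worker-3.
# After the drain this prints nothing for the default namespace;
# add --all-namespaces and you'd still see only DaemonSet pods (flannel, kube-proxy).
kubectl get pods -o wide --no-headers | awk '$7 == "worker-3"'
```

Alternatively, kubectl has a built-in field selector for this: `kubectl get pods --all-namespaces --field-selector spec.nodeName=worker-3`.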
Uncordon when you’re done:

Command:

```bash
kubectl uncordon worker-3
```

Output:

```
node/worker-3 uncordoned
```

With multi-node, you can test real node affinity rules:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-only
spec:
  replicas: 3
  selector:
    matchLabels:
      app: worker-only
  template:
    metadata:
      labels:
        app: worker-only
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: DoesNotExist
      containers:
        - name: nginx
          image: nginx:latest
```

This ensures pods run only on worker nodes, never the control plane, something you can only test with multiple nodes.
```bash
kubectl apply -f affinity.yaml
```

```
deployment.apps/worker-only created
```

```bash
kubectl get po -owide
```

```
worker-only-86dd84d489-6v98m   1/1   Running   0   15s   10.244.5.4   worker-3   <none>   <none>
worker-only-86dd84d489-hmbq4   1/1   Running   0   15s   10.244.4.4   worker-2   <none>   <none>
worker-only-86dd84d489-xdttw   1/1   Running   0   15s   10.244.2.4   worker-1   <none>   <none>
```

All three pods landed on workers, none on the control plane. Force pods to spread across different nodes:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spread-app
  template:
    metadata:
      labels:
        app: spread-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: spread-app
      containers:
        - name: nginx
          image: nginx:latest
```

With 3 replicas and 3 workers, each worker gets exactly one pod. Try doing that with a single-node cluster.
```bash
kubectl apply -f antiaffinity.yaml
```

```
deployment.apps/spread-app created
```

```bash
kubectl get po -owide | grep spread
```

```
spread-app-596d884c4d-hp585   1/1   Running   0   10s   10.244.4.5   worker-2   <none>   <none>
spread-app-596d884c4d-j4hcm   1/1   Running   0   10s   10.244.5.5   worker-3   <none>   <none>
spread-app-596d884c4d-lj7c5   1/1   Running   0   10s   10.244.2.5   worker-1   <none>   <none>
```

You can pass environment variables to worker containers:
```yaml
experimental:
  docker:
    nodes:
      - name: worker-1
        env:
          - "CUSTOM_VAR=value1"
      - name: worker-2
        env:
          - "CUSTOM_VAR=value2"
      - name: worker-3
```

These are Docker container environment variables, useful for differentiating nodes in testing scenarios.
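To verify a variable actually landed, you can inspect the worker’s container on the Docker side. A sketch; the container name is inferred from the join logs earlier (`vcluster.node.multi-node.worker-1`), so check `docker ps` if your naming differs:

```bash
# Print the container's environment via a Go template and filter for our variable.
# Container name inferred from the vind join logs; verify with `docker ps`.
docker inspect -f '{{range .Config.Env}}{{println .}}{{end}}' vcluster.node.multi-node.worker-1 | grep CUSTOM_VAR
```

With the config above, worker-1 should report `CUSTOM_VAR=value1`.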
Command:

```bash
vcluster delete multi-node
```

Output:

```
16:18:36 info Using vCluster driver 'docker' to delete your virtual clusters, which means the CLI is managing Docker-based virtual clusters locally
16:18:36 info If you prefer to use helm or the vCluster platform API instead, use '--driver helm' or '--driver platform', or run 'vcluster use driver helm' or 'vcluster use driver platform' to change the default
16:18:36 info Removing vCluster container vcluster.cp.multi-node...
16:18:39 info Removing vCluster node worker-3...
16:18:40 info Removing vCluster node worker-2...
16:18:42 info Removing vCluster node worker-1...
16:18:44 info Delete virtual cluster instance p-default/multi-node in platform
16:18:44 info Deleted kube context vcluster-docker_multi-node
16:18:44 done Successfully deleted virtual cluster multi-node
```

Multi-node with Docker containers is powerful, but what if you need real cloud resources? A GPU instance? A specific CPU architecture? Tomorrow, we’ll add a GCP Compute Engine instance as an external worker node to a vind cluster, all connected via VPN. That’s something KinD simply cannot do.
All commands tested on macOS (Apple Silicon M1) with Docker Desktop and vCluster CLI v0.31.0.
Deploy your first virtual cluster today.