
vCluster VPN

Limited vCluster Tenancy Configuration Support

This feature is only available for the following:

Running the control plane as a container and the following worker node types:
  • Private Nodes

vCluster VPN allows you to connect the private worker nodes to the vCluster control plane through vCluster Platform. This is useful in scenarios where the nodes cannot reach the control plane directly, for example if you cannot use LoadBalancer or NodePort type services, but nodes can reach the vCluster Platform URL.

vCluster VPN builds a network across the control plane and the worker nodes, powered by the same technology used by Tailscale. You don't need a Tailscale account for the VPN to work, as vCluster embeds Tailscale as a library under the hood and never reaches out to any Tailscale services directly.

The only requirement for the VPN to work is that the vCluster control plane and any worker node(s) can reach the platform URL (e.g. https://platform.my-domain.com).

vCluster VPN establishes a direct connection between worker nodes and the control plane through WireGuard, which can traverse even complex NAT setups. If WireGuard fails to create a direct connection, traffic is relayed through vCluster Platform instead.


Enabling VPN

privateNodes:
  enabled: true
  vpn:
    enabled: true # This enables node to control plane VPN
    nodeToNode:
      enabled: true # This enables node to node VPN

When to use vCluster VPN

VPN is useful in the following scenarios:

  • Worker nodes cannot reach the control plane directly, because the control plane service cannot be exposed with a NodePort or a LoadBalancer.
  • Worker nodes cannot reach each other directly, because the nodes are in different VPCs or spread across cloud and bare-metal.
  • Traffic between nodes is not encrypted through the CNI and might be sniffed by other parties.

When not to use VPN:

  • Only the control plane cannot reach the worker nodes directly. This is already solved by Konnectivity and does not require vCluster VPN or any extra configuration.
  • Nodes cannot reach the vCluster Platform URL. In this case the VPN cannot be used.
  • You need the maximum possible network throughput. While Tailscale adds only minimal overhead, even that may not be tolerable in some cases.
  • You cannot change the networking setup on the node. vCluster VPN creates a new network interface (tailscale0) and needs permissions to do so.
  • The goal is to isolate node networking. While vCluster VPN encrypts all traffic, it cannot prevent nodes from reaching each other or block traffic between them. If that is the goal, use a proper VLAN or underlay setup instead.
Only use vCluster VPN if you have to

If there is a direct path for node-to-node or node-to-control-plane communication, it is usually preferable to vCluster VPN.

How does it work?

When enabled, the join script sets up a systemd service called vcluster-vpn that connects the node to the platform. The service runs a modified version of tailscaled. Once the node has joined the VPN, the script joins the node into the vCluster itself.

On each vCluster control plane, an integrated tailnet service runs to connect the control plane to the VPN as well. Within the Kubernetes cluster, the kubernetes endpoint is then the Tailscale IP of the control plane.

If node to node communication is enabled, the default CNI (Flannel) is configured to use the tailscale0 network interface on the node, and the internal node IP is replaced with the Tailscale one.

Node to control plane VPN

Node to control plane VPN allows your nodes to communicate with the control plane over the Tailscale VPN. This is useful if you cannot expose the vCluster control plane via a LoadBalancer service or a public IP.

To configure node to control plane tunneling, you need to set the following in your vcluster.yaml:

privateNodes:
  enabled: true
  vpn:
    enabled: true # This enables node to control plane VPN

After you have started the vCluster, you should see that the endpoint of the kubernetes service is a 100.64.x.x Tailscale IP address:

$ kubectl get endpoints
NAME         ENDPOINTS        AGE
kubernetes   100.64.0.4:443   5d19h
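The 100.64.x.x address comes from the CGNAT range 100.64.0.0/10 that Tailscale allocates node addresses from, so valid addresses run from 100.64.0.0 up to 100.127.255.255 (second octet between 64 and 127). A minimal shell sketch for checking whether an address falls in that range (the sample IP is only an illustration; substitute your own endpoint IP):

```shell
# Tailscale hands out addresses from the CGNAT range 100.64.0.0/10,
# i.e. the first octet is 100 and the second octet lies between 64 and 127.
ip="100.64.0.4" # example address; substitute your endpoint or node IP

first="${ip%%.*}"    # first octet
rest="${ip#*.}"
second="${rest%%.*}" # second octet

if [ "$first" -eq 100 ] && [ "$second" -ge 64 ] && [ "$second" -le 127 ]; then
  echo "$ip is in the Tailscale range"
else
  echo "$ip is NOT in the Tailscale range"
fi
```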

Node to node VPN

Node to node VPN allows your nodes to communicate with each other over the Tailscale VPN. This is useful when the nodes have no direct connection to each other, for example if you have nodes in different clouds or you use a mix of bare-metal and cloud nodes.

To configure node to node VPN, you just need to set the following in your vcluster.yaml:

privateNodes:
  enabled: true
  vpn:
    enabled: true # This enables node to control plane VPN
    nodeToNode:
      enabled: true # This enables node to node VPN

Node to node VPN requires node to control plane VPN, so you cannot opt out of the latter.

After you have started the vCluster and joined a node, you should see that the node's internal IP address is a 100.64.x.x Tailscale IP address:

$ kubectl get node -o wide
NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION   CONTAINER-RUNTIME
my-node   Ready    <none>   5d19h   v1.31.6   100.64.0.5    <none>        Ubuntu     6.14.10          containerd://2.0.5

Troubleshooting

If you experience any problems with the connection, you can check the vCluster VPN logs directly or debug with the Tailscale CLI.

Make sure the CNI is configured to use interface tailscale0

If you use your own custom CNI (not Flannel) together with node-to-node communication over the VPN, make sure to configure it to use the tailscale0 interface instead of the default one.
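How the interface is selected depends on the CNI. As an illustration only, Calico selects the node interface through its IP autodetection method; the DaemonSet fragment below is an assumption about a typical calico-node deployment, not something vCluster configures for you:

```yaml
# Illustration: tell Calico's node agent to autodetect the node IP
# from the Tailscale interface instead of the default one.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: calico-node
          env:
            - name: IP_AUTODETECTION_METHOD
              value: interface=tailscale0
```

Consult your CNI's documentation for the equivalent setting; for example, Flannel takes the interface via its --iface flag.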

Check vCluster VPN logs

vCluster VPN is installed as a systemd service on the node that joins the cluster. You can check its logs with the following command on the node:

journalctl -u vcluster-vpn --no-pager

Check with the Tailscale CLI

Do not install the tailscaled daemon

Make sure to install only the CLI and NOT tailscaled: vCluster VPN replaces the daemon, so installing Tailscale the regular way will override and conflict with vCluster VPN.

If the logs look good, you can also install the Tailscale CLI on the node via:

# Architecture: either amd64 or arm64
ARCH=amd64

# Download and install the tailscale CLI
curl -sSLf "https://pkgs.tailscale.com/stable/tailscale_1.86.0_${ARCH}.tgz" > tailscale.tgz \
  && tar -zxf tailscale.tgz \
  && mv "tailscale_1.86.0_${ARCH}/tailscale" /usr/local/bin

Then you can use tailscale CLI commands to debug:

# Try to ping control plane or other nodes (check 'kubectl get endpoints' for the control plane ip or 'kubectl get nodes -o wide' for node ips)
tailscale ping 100.64.X.X

# Check if platform url shows up
tailscale debug derp-map

# Check if peers correctly show up
tailscale debug netmap

Changing settings after node join

When changing the way the virtual cluster control plane is exposed, e.g. switching from LoadBalancer to VPN, you will need to re-join the nodes to pick up the new settings.
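As a sketch, switching from LoadBalancer exposure to the VPN might look like this in vcluster.yaml; the controlPlane.service block is an assumption about how the control plane was previously exposed, so adjust it to your actual configuration:

```yaml
# Before: control plane exposed through a LoadBalancer service
# (assumed previous setup; adjust to your configuration)
controlPlane:
  service:
    spec:
      type: LoadBalancer

# After: control plane reached through vCluster VPN instead
privateNodes:
  enabled: true
  vpn:
    enabled: true
```

After applying a change like this, re-join each node so it picks up the new control plane endpoint.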

Config reference

vpn required object

VPN holds configuration for the private nodes VPN. This can be used to connect the private nodes to the control plane, or to connect the private nodes to each other if they are not running in the same network. A platform connection is required for the VPN to work.

enabled required boolean false

Enabled defines if the private nodes VPN should be enabled.

nodeToNode required object

NodeToNode holds configuration for the node to node VPN. This can be used to connect the private nodes to each other if they are not running in the same network.

enabled required boolean false

Enabled defines if the node to node VPN should be enabled.