
Deploy with isolated workloads

Limited vCluster Tenancy Configuration Support

This feature is only available when using the following worker node types:

  • Host Nodes

    vCluster offers several different policies to automatically isolate workloads in a virtual cluster. For all options within each policy, review the policies documentation.

    In order to isolate workloads on a vCluster, you need to enable the following configuration options:

    1. Set a Pod Security Standard. If you enable either the baseline or restricted policy, it follows the standards outlined in Kubernetes. For example, with baseline as the Pod Security Standard, pods that try to run as a privileged container or mount a host path are not synced to the host cluster (see the example pod after the configuration below). Although Pod Security Standards are a Kubernetes concept and only applicable to certain Kubernetes versions, vCluster supports this regardless of Kubernetes version because it is implemented directly in vCluster. Rejected pods stay Pending in the vCluster, and in newer Kubernetes versions they are denied by the admission controller as well.
    2. Enable a resource quota as well as a limit range. This restricts the resource consumption of vCluster workloads. If enabled, sane defaults for those two resources are chosen.
    3. Enable a network policy that restricts access of vCluster workloads as well as the vCluster control plane to other pods in the host cluster. This only works if your host cluster CNI supports network policies.
    Example of Workload Isolation YAML

    policies:
      # empty, baseline, restricted can be used here
      podSecurityStandard: baseline

      resourceQuota:
        enabled: true

      limitRange:
        enabled: true

      networkPolicy:
        enabled: true
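
    For example, with baseline set as above, a pod that tries to mount a host path is not synced to the host cluster and stays Pending inside the virtual cluster. The pod below is a hypothetical illustration; the names and image are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hostpath-example   # illustrative name
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: host-root
              mountPath: /host
      volumes:
        - name: host-root
          # hostPath volumes violate the baseline Pod Security Standard,
          # so this pod is rejected and never reaches the host cluster.
          hostPath:
            path: /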
    info

    When enabling resource quotas locally, add the --expose-local=false flag to your vcluster create [...] command, because by default the vCluster CLI tries to automatically expose the vCluster using NodePorts when interacting with a local Kubernetes cluster.

    Network-only isolation

    By default, workloads created by vCluster are able to communicate with other workloads in the host cluster through their cluster IPs. This can be beneficial if you want to purposely access a host cluster service, which is a good method to share services between virtual clusters.

    If you do not want pods running inside one vCluster to have access to other workloads in the host cluster, then deploy a network policy in the namespace where vCluster is installed.

    vCluster can automatically deploy this network policy in your host cluster.

    Example of enabling network policy

    policies:
      networkPolicy:
        enabled: true
    warning

    Network policies do not work in all Kubernetes clusters and need to be supported by the underlying CNI plugin.

    Advanced Isolation

    Besides the basic workload isolation using Pod Security Standards and resource quotas, you can also apply more advanced isolation methods.

    Isolation with dedicated nodes

    Pods created in the vCluster can set tolerations, which affect scheduling decisions on the shared host cluster. To prevent these pods from being scheduled to undesirable nodes, you can use sync.fromHost.nodes.selector.labels to select specific nodes from the host cluster.
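
    A minimal vcluster.yaml sketch of this option, assuming the node syncer is enabled; the label key and value (pool: tenant-a) are purely illustrative and should match labels that actually exist on your host cluster nodes:

    sync:
      fromHost:
        nodes:
          enabled: true
          selector:
            # Only host nodes carrying this (illustrative) label are synced
            # into the virtual cluster, so vCluster workloads are scheduled
            # onto these nodes only.
            labels:
              pool: tenant-a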

    Isolation with private nodes

    Instead of using nodes from the shared host cluster, you can use private nodes for the virtual cluster.

    Workload & Network Isolation within the vCluster

    Besides isolating the workloads of one virtual cluster from another, normal workload isolation within the vCluster can also be achieved by deploying resource quotas, limit ranges, admission controllers, and network policies in the virtual cluster. For network policies to function correctly, you need to enable the same configuration options in the virtual cluster's deployment through the vcluster.yaml.
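
    As a sketch of in-cluster network isolation, a regular Kubernetes NetworkPolicy can be applied through the virtual cluster's API server. The policy below denies all ingress traffic to pods in a namespace; the name and namespace are illustrative:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress   # illustrative name
      namespace: default           # namespace inside the virtual cluster
    spec:
      # An empty podSelector matches every pod in the namespace; with no
      # ingress rules defined, all incoming traffic is denied.
      podSelector: {}
      policyTypes:
        - Ingress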

    Secret-based Service Account tokens

    By default, vCluster creates a service account token for each pod and injects it as an annotation in the respective pod's metadata. If this doesn't comply with your security practices, it can be mitigated by enabling an option in the vcluster.yaml that creates a separate secret for each pod's service account token and mounts it accordingly using projected volumes.

    sync:
      toHost:
        pods:
          useSecretsForSATokens: true