Namespaces
This is an Enterprise feature. See our pricing plans or contact our sales team for more information.
Overview
Namespace syncing is an advanced vCluster feature that creates dedicated namespaces on the host cluster corresponding to namespaces in the virtual cluster. This feature preserves original resource names and enables complex integrations with cloud providers and custom schedulers.
Why use namespace syncing?
Use namespace syncing when you need:
- Predictable resource names: Cloud IAM integrations (like GCP Workload Identity) that require specific ServiceAccount names
- Custom scheduler support: Tools like Run:ai that manage hierarchies with quotas and priorities
- Direct namespace mapping: Scenarios where name rewriting would break external integrations
- Multi-tenant isolation: Better separation of resources across different virtual namespaces
By default, namespace syncing is disabled. vCluster rewrites resource names (e.g., test → test-x-my-namespace-x-my-vcluster) to avoid conflicts and syncs everything to the vCluster's host namespace.
How namespace syncing works
When enabled, namespace syncing creates dedicated host namespaces that map to virtual namespaces, preserving original resource names without rewriting. This enables IAM integrations (like GCP Workload Identity) and custom schedulers (like Run:ai) that require predictable resource names.
If namespace syncing is not enabled or if a namespace doesn't match any mapping rule, vCluster syncs resources into the vCluster control plane's namespace on the host using rewritten names to avoid conflicts.
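For illustration of that default behavior, assuming a vCluster named my-vcluster whose control plane runs in the host namespace vcluster-my-vcluster (both names are hypothetical), a pod named test created in the virtual namespace my-namespace appears on the host under a rewritten name:

# Host cluster view with namespace syncing disabled (the default).
# "host" is a placeholder for your host kubeconfig context.
kubectl --context host -n vcluster-my-vcluster get pods
# Expect a rewritten name such as: test-x-my-namespace-x-my-vcluster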
Enable namespace syncing
Configure namespace syncing by enabling the feature and defining mapping rules.
Namespace syncing configuration cannot be modified after vCluster deployment. Plan your mapping strategy carefully before initial deployment.
Basic configuration
sync:
  toHost:
    namespaces:
      enabled: true
      mappings: # mappings are mandatory
        byName:
          example-virtual-namespace: example-host-namespace
When enabled, vCluster creates corresponding namespaces on the host cluster for each namespace created in the vCluster, based on the defined mapping rules.
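To sanity-check a mapping after deployment, create the virtual namespace and look for its counterpart on the host. A minimal sketch, assuming your host kubeconfig context is named kind-vcluster as in the examples later on this page:

# Inside the vCluster: create the mapped virtual namespace
kubectl create namespace example-virtual-namespace

# On the host: the corresponding host namespace should exist
kubectl --context kind-vcluster get namespace example-host-namespace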
Import existing namespaces
Any host namespace matching a defined mapping (exact or pattern) is automatically imported into the vCluster, including all existing workloads.
When a host namespace matches your mapping rules:
- vCluster automatically imports the namespace
- Existing pods and resources appear in the virtual cluster
- Resources created in either direction sync bidirectionally
Configure mapping rule types
Available mapping patterns
The mappings.byName field supports these patterns:
- Exact mappings: Map a specific virtual namespace name to a specific host namespace name.
- Pattern mappings: Use a single wildcard * character to map a range of virtual namespaces to a corresponding range of host namespaces. The wildcard content mirrors between the virtual and host names.
- ${name} variable: This variable is automatically replaced with the vCluster instance's name. Use it to create predictable host namespaces that are clearly associated with a particular vCluster instance.
Combine the ${name} variable with pattern mappings for flexible namespace organization.
Define only exact-to-exact or pattern-to-pattern mappings. Mixing these types (exact virtual namespace to patterned host namespace, or patterned virtual namespace to exact host namespace) is not supported.
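For instance, a sketch of allowed and disallowed combinations (all namespace names here are made up):

sync:
  toHost:
    namespaces:
      enabled: true
      mappings:
        byName:
          "backend": "tenant-backend"   # valid: exact virtual -> exact host
          "team-b-*": "host-team-b-*"   # valid: pattern virtual -> pattern host
          # "backend": "tenant-*"       # invalid: exact virtual -> pattern host
          # "team-b-*": "host-team-b"   # invalid: pattern virtual -> exact host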
Configuration examples
Here are different mapping strategies you can use:
# vCluster-mappings-example.yaml
sync:
  toHost:
    namespaces:
      enabled: true
      mappings:
        byName:
          # Exact mapping:
          # Virtual NS "frontend-prod" -> Host NS "customer-project-alpha-prod"
          "frontend-prod": "customer-project-alpha-prod"
          # Pattern mapping:
          # Virtual NS "team-a-*" -> Host NS "h-team-a-*"
          # e.g., vNS "team-a-dev" -> hNS "h-team-a-dev"
          # e.g., vNS "team-a-staging" -> hNS "h-team-a-staging"
          "team-a-*": "h-team-a-*"
          # Pattern mapping using ${name} variable:
          # Assumes vCluster name is "my-vc"
          # Virtual NS "datasets-*" -> Host NS "my-vc-data-*"
          # e.g., vNS "datasets-raw" -> hNS "my-vc-data-raw"
          # e.g., vNS "datasets-processed" -> hNS "my-vc-data-processed"
          "datasets-*": "${name}-data-*"
Enforce syncing of defined mappings only
By default (mappingsOnly: false), virtual namespaces that don't match any mapping rules still work: they sync to the vCluster's host namespace using name rewriting.
Restrict to mapped namespaces only
To strictly enforce mappings and block unmapped namespaces, enable mappingsOnly: true:
# vCluster-mappings-only.yaml
sync:
  toHost:
    namespaces:
      enabled: true
      mappingsOnly: true # Enable strict mapping
      mappings:
        byName:
          # Only virtual namespaces starting with "synced-" are mapped
          "synced-*": "host-${name}-synced-*"
          # And the virtual namespace "important-configs" maps to "host-configs-critical"
          "important-configs": "host-configs-critical"
Delete a vCluster
When you delete a vCluster, namespaces and resources created by the vCluster on the host are removed. Resources that originated from the host cluster and were imported remain unchanged: vCluster only cleans up what it created.
You can test this behavior by deploying vCluster with the following configuration:
# values-test-cleanup.yaml
sync:
  toHost:
    namespaces:
      enabled: true
      mappings:
        byName:
          # Test mapping: 'sync-*' in vCluster maps to 'host-*' on host
          # This helps demonstrate what gets deleted vs preserved
          "sync-*": "host-*"
Examples
The following end-to-end examples demonstrate namespace syncing behavior in both virtual and host clusters.
Example: Enable namespace mappings
This example demonstrates how different namespace mapping rules affect resource creation and placement on the host cluster.
Deploy a vCluster named mapping-demo using the vCluster-mappings-example.yaml file from the preceding example.

vcluster create mapping-demo -f vCluster-mappings-example.yaml
Inside the vCluster, create a namespace that matches the team-a-* pattern.

kubectl create namespace team-a-ns1
Create a namespace named frontend-prod inside the vCluster. This matches the exact mapping rule "frontend-prod": "customer-project-alpha-prod" defined in your vcluster.yaml.

kubectl create namespace frontend-prod
Create a namespace that does not match any mapping rule:
kubectl create namespace some-other-namespace
Verify the namespaces on the host cluster:
kubectl --context kind-vcluster get namespace
The output is similar to the following:
NAME STATUS AGE
customer-project-alpha-prod Active 12s
default Active 45h
h-team-a-ns1 Active 20s
kube-node-lease Active 45h
kube-public Active 45h
kube-system Active 45h
local-path-storage Active 45h
vcluster-mapping-demo Active 2m11s

Only namespaces matched by mappings sync directly to the host cluster.
Run a pod inside the mapped namespace:
kubectl -n team-a-ns1 run nginx-from-team-a --image nginx
Run a pod inside the non-mapped namespace:
kubectl -n some-other-namespace run nginx-from-other-namespace --image nginx
Verify that both pods run inside the vCluster:
kubectl get pods -A
The output is similar to the following:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-94f599b5-wdgzg 1/1 Running 0 1m
some-other-namespace nginx-from-other-namespace 1/1 Running 0 11s
team-a-ns1 nginx-from-team-a 1/1 Running 0 18s

Verify the pods running on the host cluster:
kubectl --context kind-vcluster get pods -A
The output is similar to the following:
NAMESPACE NAME READY STATUS RESTARTS AGE
h-team-a-ns1 nginx-from-team-a 1/1 Running 0 58s
kube-system coredns-787d4945fb-5ljfb 1/1 Running 1 (20h ago) 45h
kube-system coredns-787d4945fb-zh9q7 1/1 Running 1 (20h ago) 45h
kube-system etcd-vcluster-pro-control-plane 1/1 Running 1 (20h ago) 45h
kube-system kindnet-8mmjr 1/1 Running 1 (20h ago) 45h
kube-system kube-apiserver-vcluster-pro-control-plane 1/1 Running 1 (20h ago) 45h
kube-system kube-controller-manager-vcluster-pro-control-plane 1/1 Running 1 (20h ago) 45h
kube-system kube-proxy-jnwfv 1/1 Running 1 (20h ago) 45h
kube-system kube-scheduler-vcluster-pro-control-plane 1/1 Running 1 (20h ago) 45h
local-path-storage local-path-provisioner-75f5b54ffd-j6pjj 1/1 Running 1 (20h ago) 45h
vcluster-mapping-demo coredns-94f599b5-wdgzg-x-kube-system-x-default-namespace-sync 1/1 Running 0 1m
vcluster-mapping-demo mapping-demo-0 1/1 Running 0 86s
vcluster-default-namespace-sync nginx-from-other-namespace-x-some-other-namespace-x--4b96e8f948 1/1 Running 0 51s

The pod from the mapped namespace syncs directly without renaming, while the pod from the non-mapped namespace uses the default translation logic.
Example: Import existing host namespaces
This example shows how vCluster automatically imports host namespaces that match your mapping rules, including any existing workloads.
Create a namespace on the host that matches one of the mappings. This matches the "team-a-*": "h-team-a-*" mapping.

kubectl --context kind-vcluster create namespace h-team-a-namespace-synced-from-host
Start a pod inside this namespace on the host cluster:
kubectl --context kind-vcluster -n h-team-a-namespace-synced-from-host run nginx-from-host --image nginx
Inside the vCluster, get the namespaces:
kubectl get namespace
The namespace was imported from the host following the defined mapping.
NAME STATUS AGE
default Active 58s
kube-node-lease Active 58s
kube-public Active 58s
kube-system Active 58s
team-a-namespace-synced-from-host Active 14s

Check the pod running in this namespace:
kubectl -n team-a-namespace-synced-from-host get pods
The pod was imported into the vCluster.
NAME READY STATUS RESTARTS AGE
nginx-from-host 1/1 Running 0 36s
Example: Sync only mapped namespaces
This example demonstrates strict namespace control using mappingsOnly: true to prevent the creation of unmapped namespaces.
Deploy a vCluster named strict-vc with the vCluster-mappings-only.yaml configuration:

vcluster create strict-vc -f vCluster-mappings-only.yaml
Inside the vCluster, create a namespace that matches a mapping rule:
kubectl create namespace synced-data-alpha
This matches the synced-* pattern. vCluster creates the host-strict-vc-synced-data-alpha namespace on the host, and resources created in the virtual synced-data-alpha namespace appear there.

Create a namespace that does not match any rule:
kubectl create namespace experimental-app
The output is similar to the following:
Error from server (Forbidden): Virtual namespace 'experimental-app' is not allowed by vcluster mappings. Allowed namespaces are important-configs, synced-* (post namespaces)
vCluster blocks this namespace. Any operation on namespaces not defined in mappings is blocked when mappingsOnly mode is enabled.
Example: Delete a vCluster with namespace syncing
This example demonstrates the cleanup behavior when deleting a vCluster, showing which resources are removed and which are preserved.
Create a namespace on your host cluster that matches host-side mappings in the configuration and start a pod inside of it:
kubectl --context kind-vcluster create namespace host-namespace-from-host
kubectl --context kind-vcluster -n host-namespace-from-host run nginx-from-host --image nginx

The commands run an NGINX pod directly in the host cluster namespace. This pod is imported into the vCluster when the vCluster starts.
Verify that the pod started in the host cluster:
kubectl --context kind-vcluster -n host-namespace-from-host get pods
The output is similar to the following:
NAME READY STATUS RESTARTS AGE
nginx-from-host 1/1 Running 0 50s

Create a new vCluster instance with the namespace mapping configuration that imports existing host namespaces:
vcluster create test-cleanup -f values-test-cleanup.yaml
Create a namespace inside vCluster and start a pod in it:
kubectl create namespace sync-namespace-from-vcluster
kubectl -n sync-namespace-from-vcluster run nginx-from-vcluster --image nginx

This runs an NGINX pod inside the vCluster namespace. The pod syncs to the host cluster.
Verify that both namespaces are available in both the host cluster and the vCluster:
kubectl get ns
The output is similar to the following:
NAME STATUS AGE
default Active 3m35s
kube-node-lease Active 3m35s
kube-public Active 3m35s
kube-system Active 3m35s
sync-namespace-from-host Active 2m21s
sync-namespace-from-vcluster Active 5s

List the namespaces as they exist on the host cluster (this shows both original host namespaces and synced vCluster namespaces):
kubectl --context kind-vcluster get ns
The output is similar to the following:
NAME STATUS AGE
default Active 6d23h
host-namespace-from-host Active 3m3s
host-namespace-from-vcluster Active 47s
kube-node-lease Active 6d23h
kube-public Active 6d23h
kube-system Active 6d23h
local-path-storage Active 6d23h
vcluster-test-cleanup Active 4m43s

Show all nginx pods as seen from inside the vCluster. The following command displays the virtual cluster's view of resources:

kubectl get po -A | grep nginx
The output is similar to the following:
sync-namespace-from-host nginx-from-host 1/1 Running 0 3m58s
sync-namespace-from-vcluster nginx-from-vcluster 1/1 Running 0 114s

Show all nginx pods as seen from the host cluster. The following command displays how the same resources appear on the physical cluster:

kubectl --context kind-vcluster get po -A | grep nginx
The output is similar to the following:
host-namespace-from-host nginx-from-host 1/1 Running 0 4m7s
host-namespace-from-vcluster nginx-from-vcluster 1/1 Running 0 2m3s

Delete the vCluster:
vcluster delete test-cleanup
The output is similar to the following:
12:29:31 info Stopping background proxy...
12:29:31 info Delete vcluster test-cleanup...
12:29:31 done Successfully deleted virtual cluster test-cleanup in namespace vcluster-test-cleanup
12:29:31 info Starting cleanup of vCluster 'test-cleanup' namespaces.
12:29:31 info Namespace 'host-namespace-from-host' was imported, cleaning up its resources and metadata...
12:29:36 info Deleting virtual cluster namespace 'host-namespace-from-vcluster'
12:29:36 info Successfully deleted virtual cluster namespace 'host-namespace-from-vcluster'
12:29:36 info Cleanup of vCluster 'test-cleanup' namespaces finished.
12:29:36 done Successfully deleted virtual cluster namespace vcluster-test-cleanup
12:29:36 info Waiting for virtual cluster to be deleted...
12:29:47 done Virtual Cluster is deleted

Verify the namespaces and resources left on the host:
kubectl --context kind-vcluster get ns
The output is similar to the following:
NAME STATUS AGE
default Active 6d23h
host-namespace-from-host Active 6m55s
kube-node-lease Active 6d23h
kube-public Active 6d23h
kube-system Active 6d23h
local-path-storage Active 6d23h

kubectl --context kind-vcluster get pods -n host-namespace-from-host
The output is similar to the following:
NAME READY STATUS RESTARTS AGE
nginx-from-host 1/1 Running 0 6m48s

Resources synced from the vCluster are gone, while all resources created on the host cluster remain untouched.
Known limitations
Incompatibility with generic sync
The namespace syncing feature is incompatible with the older experimental.genericSync feature. If you enable sync.toHost.namespaces, you must not use genericSync. Instead, rely on the specific sync.toHost and sync.fromHost configurations for resource synchronization.
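As a configuration sketch of what to avoid and what to use instead (the serviceAccounts toggle is only an illustrative example of a built-in syncer):

# Avoid: generic sync cannot be combined with namespace syncing
# experimental:
#   genericSync: {...}

# Use the built-in syncers instead:
sync:
  toHost:
    namespaces:
      enabled: true
    serviceAccounts:
      enabled: true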
Automatic syncing of all Secrets and ConfigMaps
When namespace syncing is enabled, all Secrets and ConfigMaps automatically sync to host namespaces (equivalent to sync.toHost.secrets.all: true and sync.toHost.configMaps.all: true). This cannot be disabled and makes these resources visible on the host cluster.
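In other words, enabling namespace syncing implicitly behaves as if the following were set, regardless of your configuration:

sync:
  toHost:
    secrets:
      all: true
    configMaps:
      all: true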
NetworkPolicy syncing is disabled
The sync.toHost.networkPolicies feature is not supported with namespace syncing. NetworkPolicy objects created in the vCluster won't sync to the host and have no effect on network traffic.
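For example, a minimal deny-all policy like the following can be created inside the virtual cluster but never reaches the host, so it does not restrict any traffic (the namespace name is just an example):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a-ns1
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:
    - Ingress       # with no ingress rules listed, this would normally deny all ingress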
Inter-namespace pod affinity
podAffinity rules that reference pods in other namespaces are not translated by vCluster, causing pods to remain Pending. Intra-namespace podAffinity rules work as expected.
Example: Inter-namespace affinity (fails)
This example attempts to schedule pods in one namespace with affinity to pods in another namespace. It fails because vCluster cannot translate cross-namespace references in pod affinity rules, causing pods to remain in Pending state indefinitely.
# Namespace for the target application
apiVersion: v1
kind: Namespace
metadata:
  name: sync-a
---
# Namespace for the follower application
apiVersion: v1
kind: Namespace
metadata:
  name: sync-b
---
# --- Target App Deployment ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: target-app
  namespace: sync-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: target
  template:
    metadata:
      labels:
        app: target
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - target
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nginx
          image: nginx
---
# --- Follower App with Failing Affinity ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: follower-app
  namespace: sync-b
spec:
  replicas: 2
  selector:
    matchLabels:
      app: follower
  template:
    metadata:
      labels:
        app: follower
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - target
              # This cross-namespace reference will not work
              namespaces: ["sync-a"]
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: busybox
          image: busybox
          command: ["sleep", "3600"]
After applying this manifest, the follower-app pods remain stuck indefinitely in the Pending state.
If you run:
kubectl get po -A -o wide
You should see output similar to the following:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
sync-a target-app-7d8df4d5d4-tlpmc 1/1 Running 0 3s 10.244.2.24 loft-worker <none> <none>
sync-a target-app-7d8df4d5d4-vhct5 1/1 Running 0 3s 10.244.1.17 loft-worker2 <none> <none>
sync-b follower-app-6b5ccc86dd-8ssw6 0/1 Pending 0 3s <none> <none> <none> <none>
sync-b follower-app-6b5ccc86dd-kjkkg 0/1 Pending 0 3s <none> <none> <none> <none>
Example: Intra-namespace affinity (works)
This example shows how to correctly configure pod affinity within the same namespace. By placing both deployments in the same namespace without cross-namespace references, the affinity rules work as expected.
# Namespace for the demo
apiVersion: v1
kind: Namespace
metadata:
  name: sync-demo
---
# --- Target App Deployment ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: target-app
  namespace: sync-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: target
  template:
    metadata:
      labels:
        app: target
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - target
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nginx
          image: nginx
---
# --- Follower App with Working Affinity ---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: follower-app
  namespace: sync-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: follower
  template:
    metadata:
      labels:
        app: follower
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - target
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: busybox
          image: busybox
          command: ["sleep", "3600"]
After applying this manifest, all pods are in the Running state, and the follower-app pods are co-located on the same nodes as the target-app pods.
If you run:
kubectl get po -A -o wide
You should see output similar to the following:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-744d9bd8bf-ck47h 1/1 Running 0 21m 10.244.1.11 loft-worker2 <none> <none>
sync-demo follower-app-6f6df5d4fb-kvvcz 1/1 Running 0 7s 10.244.1.19 loft-worker2 <none> <none>
sync-demo follower-app-6f6df5d4fb-rsfjf 1/1 Running 0 7s 10.244.2.26 loft-worker <none> <none>
sync-demo target-app-7d8df4d5d4-czfrd 1/1 Running 0 8s 10.244.1.18 loft-worker2 <none> <none>
sync-demo target-app-7d8df4d5d4-lblcc 1/1 Running 0 8s 10.244.2.25 loft-worker <none> <none>