
Resource Proxy

Enterprise-Only Feature

This is an Enterprise feature. See our pricing plans or contact our sales team for more information.

vCluster Platform required

This feature requires vCluster Platform. Both the client and target virtual clusters must be managed as VirtualClusterInstance resources within the platform.

The Resource Proxy feature enables vCluster to proxy custom resource requests to other virtual clusters. When enabled, the client virtual cluster transparently stores those resources in a target virtual cluster and delegates their management to it. This enables cross-cluster communication patterns, centralized resource management, and multi-tenant architectures.

How it works

When you configure a client virtual cluster to proxy custom resources, vCluster intercepts API requests for those resources and forwards them to the target virtual cluster through vCluster Platform.

The proxy performs several key functions:

  1. Request interception: The client's Kubernetes API server intercepts requests for configured custom resources and routes them to the proxy.
  2. Authentication: The proxy authenticates to the target using the client's vCluster Platform identity (loft:vcluster:p-<project>:<name>).
  3. Owner labeling: On create and update operations, the proxy adds owner labels to track which client created each resource.
  4. Visibility filtering: When listing resources, the proxy filters results based on the configured access mode (owned or all).
  5. Namespace synchronization: If a namespace doesn't exist on the target, the proxy creates it automatically.

Multi-client isolation

When multiple client virtual clusters proxy to the same target, each client only sees resources it created by default. The proxy achieves this through owner labels and label selector injection.
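
You can observe these owner labels directly on the target: listing proxied resources with their labels shows which client created each one. The exact label keys are an internal implementation detail and may change between versions:

Inspect owner labels on the target
vcluster connect <target> -- kubectl get myresources --show-labels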

Key capabilities

  • Transparent access: Users interact with custom resources as if they were local to their virtual cluster.
  • Centralized storage: A dedicated target virtual cluster stores all resources.
  • Multi-tenant isolation: Each client virtual cluster only sees its own resources by default.
  • RBAC enforcement: The target virtual cluster enforces its own RBAC policies on proxied requests.

Platform RBAC requirements

For the resource proxy to function, the client virtual cluster must be authorized to access the target virtual cluster through vCluster Platform. This requires RBAC configuration on the platform's management cluster.

Platform RBAC configuration

Create a Role and RoleBinding in the project namespace (p-<project-name>) on the platform's management cluster:

platform-proxy-rbac.yaml
# Platform RBAC for resource proxy
# This grants the client vCluster permission to proxy requests to the target
# Apply to the platform's management cluster in the project namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vcluster-proxy-target-access
  namespace: p-default
rules:
- apiGroups: ["management.loft.sh"]
  resources: ["virtualclusterinstances"]
  resourceNames: ["target"] # Target vCluster name
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: client-vcluster-proxy-access
  namespace: p-default
subjects:
- kind: User
  name: "loft:vcluster:p-default:client" # Client vCluster identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vcluster-proxy-target-access
  apiGroup: rbac.authorization.k8s.io

Apply this configuration to the platform's management cluster (not the virtual clusters):

Apply platform RBAC
kubectl apply -f platform-proxy-rbac.yaml --context <platform-context>
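
To confirm the binding took effect, impersonate the client's platform identity and check the use permission (the same check appears in the troubleshooting section below):

Verify platform RBAC
kubectl auth can-i use virtualclusterinstances/target \
  --as="loft:vcluster:p-default:client" \
  -n p-default \
  --context <platform-context>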

Configuration

To enable resource proxying, configure the experimental.proxy.customResources section in your vcluster.yaml:

vcluster.yaml
# Basic Resource Proxy configuration
# Proxies MyResource resources from example.com/v1 to a target virtual cluster
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"

Configuration options

| Field | Type | Description |
| ----- | ---- | ----------- |
| enabled | boolean | Enable or disable the proxy for this resource. |
| targetVirtualCluster | object | Reference to the target VirtualClusterInstance to proxy requests to. |
| targetVirtualCluster.name | string | Name of the target virtual cluster. Required when enabled. |
| targetVirtualCluster.project | string | Project of the target virtual cluster. Defaults to the same project as the client vCluster. |
| accessResources | string | Resource visibility mode: owned (default) or all. See Access modes. |

Resource key format

The resource key follows the format resource.apiGroup/version:

  • myresources.example.com/v1 - proxies MyResource resources from the example.com API group, version v1
  • otherresources.test.io/v2 - proxies OtherResource resources from the test.io API group, version v2
  • additionalresources.acme.org/v1alpha1 - proxies AdditionalResource resources from the acme.org API group, version v1alpha1
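
If you're unsure of a CRD's plural name, API group, or served versions, you can read them from the CustomResourceDefinition itself and assemble the key (shown here for the example CRD used later on this page):

Derive the resource key from a CRD
# Prints "<plural>.<group> <versions>", for example "myresources.example.com v1"
kubectl get crd myresources.example.com \
  -o jsonpath='{.spec.names.plural}.{.spec.group} {.spec.versions[*].name}{"\n"}'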

Example: Basic proxy setup

This example demonstrates a simple two-cluster setup where a client virtual cluster proxies MyResource resources to a target virtual cluster.

  1. Create the target virtual cluster.

    Create a virtual cluster to serve as the target. The target doesn't need any proxy configuration; it just stores the resources and enforces RBAC:

    Create target virtual cluster
    vcluster create target
  2. Install the CustomResourceDefinition in the target virtual cluster.

    The CustomResourceDefinition must exist in the target virtual cluster:

    myresource-crd.yaml
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: myresources.example.com
    spec:
      group: example.com
      names:
        kind: MyResource
        listKind: MyResourceList
        plural: myresources
        singular: myresource
      scope: Namespaced
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  name:
                    type: string
                  priority:
                    type: string

    Apply the CustomResourceDefinition to the target virtual cluster:

    Apply CustomResourceDefinition to target
    vcluster connect target -- kubectl apply -f myresource-crd.yaml
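
    To confirm the CustomResourceDefinition is served, you can query the API group in the target:

    Verify CustomResourceDefinition in target
    vcluster connect target -- kubectl api-resources --api-group=example.com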
  3. Configure RBAC in the target virtual cluster.

    Create RBAC rules to allow the client virtual cluster to access resources. The client virtual cluster authenticates using its vCluster Platform identity in the format loft:vcluster:p-<project>:<name>:

    target-rbac.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: vcluster-proxy-client
    rules:
    - apiGroups: ["example.com"]
      resources: ["myresources"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    - apiGroups: ["example.com"]
      resources: ["myresources/status"]
      verbs: ["get", "update", "patch"]
    - apiGroups: [""]
      resources: ["namespaces"]
      verbs: ["get", "list", "watch", "create"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: vcluster-proxy-client
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: vcluster-proxy-client
    subjects:
    - kind: User
      # vCluster identity format: loft:vcluster:p-<project>:<name>
      name: "loft:vcluster:p-default:client"
      apiGroup: rbac.authorization.k8s.io

    Apply RBAC to the target virtual cluster:

    Apply RBAC to target
    vcluster connect target -- kubectl apply -f target-rbac.yaml
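
    To verify, impersonate the client identity inside the target and check a representative verb:

    Verify target RBAC
    vcluster connect target -- kubectl auth can-i create myresources.example.com \
      --as="loft:vcluster:p-default:client"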
  4. Configure platform RBAC.

    Grant the client virtual cluster permission to access the target through vCluster Platform. Apply this to the platform's management cluster:

    platform-rbac.yaml
    # Platform RBAC for resource proxy
    # This grants the client vCluster permission to proxy requests to the target
    # Apply to the platform's management cluster in the project namespace
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: vcluster-proxy-target-access
      namespace: p-default
    rules:
    - apiGroups: ["management.loft.sh"]
      resources: ["virtualclusterinstances"]
      resourceNames: ["target"] # Target vCluster name
      verbs: ["use"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: client-vcluster-proxy-access
      namespace: p-default
    subjects:
    - kind: User
      name: "loft:vcluster:p-default:client" # Client vCluster identity
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: vcluster-proxy-target-access
      apiGroup: rbac.authorization.k8s.io

    Apply to the platform's management cluster:

    Apply platform RBAC
    kubectl apply -f platform-rbac.yaml --context <platform-context>
  5. Create the client virtual cluster with proxy configuration.

    Configure the client virtual cluster to proxy MyResource resources to the target:

    client-vcluster.yaml
    experimental:
      proxy:
        customResources:
          myresources.example.com/v1:
            enabled: true
            targetVirtualCluster:
              name: "target"

    Deploy the client virtual cluster:

    Deploy client virtual cluster
    vcluster create client -f client-vcluster.yaml
  6. Test the proxy.

    Create a MyResource in the client virtual cluster:

    Create MyResource in client
    vcluster connect client -- kubectl apply -f - <<EOF
    apiVersion: example.com/v1
    kind: MyResource
    metadata:
      name: test-resource
      namespace: default
    spec:
      name: "Test Resource"
      priority: "high"
    EOF

    Verify the resource exists in both virtual clusters:

    Verify resource in both clusters
    # Check in client via proxy
    vcluster connect client -- kubectl get myresources

    # Check in target where resources are stored
    vcluster connect target -- kubectl get myresources
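
    Deletion also flows through the proxy: because the resource is stored only in the target, deleting it from the client removes it from the target as well.

    Delete through the proxy
    vcluster connect client -- kubectl delete myresource test-resource
    vcluster connect target -- kubectl get myresources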

Example: Multi-target proxy

A single virtual cluster can proxy different resources to different target virtual clusters based on API group and version.

client-multi-target.yaml
# Multi-target Resource Proxy configuration
# Proxies different resources to different target virtual clusters
experimental:
  proxy:
    customResources:
      # Proxy MyResource and SecondaryResource to target-a (example.com/v1)
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target-a"
      secondaryresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target-a"
      # Proxy OtherResource and AdditionalResource to target-b (test.io/v2)
      otherresources.test.io/v2:
        enabled: true
        targetVirtualCluster:
          name: "target-b"
      additionalresources.test.io/v2:
        enabled: true
        targetVirtualCluster:
          name: "target-b"

In this configuration:

  • target-a stores MyResource and SecondaryResource resources from example.com/v1
  • target-b stores OtherResource and AdditionalResource resources from test.io/v2
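
After creating resources of each kind in the client, you can confirm the routing by listing each resource type on its respective target (this assumes the corresponding CRDs are installed in each target):

Verify multi-target routing
vcluster connect target-a -- kubectl get myresources,secondaryresources
vcluster connect target-b -- kubectl get otherresources,additionalresources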

Example: Cross-project proxy

By default, the target virtual cluster is assumed to be in the same project as the client. You can proxy to a virtual cluster in a different project by specifying the project field. This works across different host clusters connected to the same vCluster Platform.

client-cross-project.yaml
# Cross-project Resource Proxy configuration
# Proxies resources to a target virtual cluster in a different project
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"
          project: "other-project" # Target is in a different project

This is useful for scenarios where:

  • A shared resource storage cluster exists in a centralized project
  • Teams in different projects need to access common resources
  • Virtual clusters across different host clusters need to share CRDs
  • CI/CD environments need access to centralized resource management
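
When the target lives in another project, the platform Role and RoleBinding described earlier must be created in the target's project namespace, because that is where the target VirtualClusterInstance resides. A minimal sketch, assuming the target sits in other-project and the client in the default project:

Cross-project platform RBAC (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vcluster-proxy-target-access
  namespace: p-other-project # The target's project namespace
rules:
- apiGroups: ["management.loft.sh"]
  resources: ["virtualclusterinstances"]
  resourceNames: ["target"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: client-vcluster-proxy-access
  namespace: p-other-project
subjects:
- kind: User
  # The client identity keeps its own project
  name: "loft:vcluster:p-default:client"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: vcluster-proxy-target-access
  apiGroup: rbac.authorization.k8s.io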

Access modes

The accessResources field controls which resources a client virtual cluster can see in the target:

Access modes configuration
# Access modes configuration examples

# Example 1: "owned" mode (default)
# The virtual cluster only sees resources it created
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"
        # accessResources defaults to "owned" - only see resources we created

---
# Example 2: "all" mode
# The virtual cluster can see all resources regardless of owner
experimental:
  proxy:
    customResources:
      myresources.example.com/v1:
        enabled: true
        targetVirtualCluster:
          name: "target"
        accessResources: all # Can see all resources, not just owned ones

owned mode (default)

Each client virtual cluster only sees resources it created. This enables multi-tenant isolation where multiple client virtual clusters can share a target without seeing each other's resources.

all mode

The client virtual cluster can see all resources in the target, regardless of who created them. This is useful for read-only observers, centralized dashboards, or admin access.

Combine access modes with RBAC

The accessResources mode controls visibility: which resources a virtual cluster can see. RBAC in the target virtual cluster controls permissions: which operations the virtual cluster can perform.

For example, a virtual cluster with accessResources: all but read-only RBAC can see all resources but can't modify any.
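
As a sketch of this combination, an observer virtual cluster could set accessResources: all in its vcluster.yaml while the target grants it only read verbs (the ClusterRoleBinding follows the same pattern as the earlier examples):

Read-only observer ClusterRole (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: vcluster-proxy-observer
rules:
- apiGroups: ["example.com"]
  resources: ["myresources"]
  verbs: ["get", "list", "watch"] # No create/update/delete: everything is visible but immutable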

Example: Multi-client isolation

This example demonstrates how multiple client virtual clusters can share a target while maintaining isolation.

  1. Configure client virtual clusters.

    Both clients proxy to the same target:

    team-a-vcluster.yaml
    experimental:
      proxy:
        customResources:
          myresources.example.com/v1:
            enabled: true
            targetVirtualCluster:
              name: "orchestrator"
            # Uses default accessResources: owned

    team-b-vcluster.yaml
    experimental:
      proxy:
        customResources:
          myresources.example.com/v1:
            enabled: true
            targetVirtualCluster:
              name: "orchestrator"
            # Uses default accessResources: owned
  2. Configure target RBAC for multiple clients.

    Configure RBAC in the target for both clients:

    multi-client-target-rbac.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: vcluster-proxy-client
    rules:
    - apiGroups: ["example.com"]
      resources: ["myresources"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    - apiGroups: ["example.com"]
      resources: ["myresources/status"]
      verbs: ["get", "update", "patch"]
    - apiGroups: [""]
      resources: ["namespaces"]
      verbs: ["get", "list", "watch", "create"]
    ---
    # Bind for team-a
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: vcluster-proxy-team-a
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: vcluster-proxy-client
    subjects:
    - kind: User
      name: "loft:vcluster:p-default:team-a"
      apiGroup: rbac.authorization.k8s.io
    ---
    # Bind for team-b
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: vcluster-proxy-team-b
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: vcluster-proxy-client
    subjects:
    - kind: User
      name: "loft:vcluster:p-default:team-b"
      apiGroup: rbac.authorization.k8s.io

    Apply to the target virtual cluster:

    Apply target RBAC
    vcluster connect orchestrator -- kubectl apply -f multi-client-target-rbac.yaml
  3. Configure platform RBAC for multiple clients.

    Grant both client virtual clusters permission to access the target through vCluster Platform:

    multi-client-platform-rbac.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: vcluster-proxy-target-access
      namespace: p-default
    rules:
    - apiGroups: ["management.loft.sh"]
      resources: ["virtualclusterinstances"]
      resourceNames: ["orchestrator"]
      verbs: ["use"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-a-vcluster-proxy-access
      namespace: p-default
    subjects:
    - kind: User
      name: "loft:vcluster:p-default:team-a"
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: vcluster-proxy-target-access
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-b-vcluster-proxy-access
      namespace: p-default
    subjects:
    - kind: User
      name: "loft:vcluster:p-default:team-b"
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: vcluster-proxy-target-access
      apiGroup: rbac.authorization.k8s.io

    Apply to the platform's management cluster:

    Apply platform RBAC
    kubectl apply -f multi-client-platform-rbac.yaml --context <platform-context>
  4. Test isolation.

    Test multi-client isolation
    # Team A creates a resource
    vcluster connect team-a -- kubectl apply -f - <<EOF
    apiVersion: example.com/v1
    kind: MyResource
    metadata:
      name: team-a-resource
    spec:
      name: "Team A Data Resource"
    EOF

    # Team B creates a resource
    vcluster connect team-b -- kubectl apply -f - <<EOF
    apiVersion: example.com/v1
    kind: MyResource
    metadata:
      name: team-b-resource
    spec:
      name: "Team B ML Resource"
    EOF

    # Team A only sees their resource
    vcluster connect team-a -- kubectl get myresources
    # NAME              AGE
    # team-a-resource   1m

    # Team B only sees their resource
    vcluster connect team-b -- kubectl get myresources
    # NAME              AGE
    # team-b-resource   1m

    # Target orchestrator sees both
    vcluster connect orchestrator -- kubectl get myresources
    # NAME              AGE
    # team-a-resource   2m
    # team-b-resource   1m
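
    With the default owned mode, isolation should also apply to direct reads: fetching another client's resource by name is expected to fail as if the resource didn't exist.

    Confirm cross-client reads fail
    vcluster connect team-a -- kubectl get myresource team-b-resource
    # Expected: a NotFound error, since team-a doesn't own this resource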

Limitations

  • All custom resources in the same API group and version must use the same target virtual cluster.
  • When configuring RBAC for status updates, include permissions for the status subresource.

Troubleshoot

Resources not appearing

If resources don't appear after creation:

  1. Check RBAC configuration: Ensure the client virtual cluster's identity has correct permissions in the target.

  2. Verify the CustomResourceDefinition exists in the target:

    Check CustomResourceDefinition in target
    vcluster connect <target> -- kubectl get crd <resource>.<group>

  3. Check virtual cluster logs for errors:

    Check virtual cluster logs
    kubectl logs -n vcluster-<name> -l app=vcluster --tail=100

Permission denied errors

Permission errors can occur at two levels: the platform level and the target virtual cluster level.

  1. Check platform RBAC: Ensure the client has "use" permission on the target VirtualClusterInstance:

    Check platform RBAC
    kubectl auth can-i use virtualclusterinstances/target \
      --as="loft:vcluster:p-default:client" \
      -n p-default \
      --context <platform-context>

  2. Verify the virtual cluster identity format: The identity follows loft:vcluster:p-<project>:<name>.

  3. Test target RBAC directly:

    Test target RBAC permissions
    vcluster connect <target> -- kubectl auth can-i create myresources.example.com \
      --as="loft:vcluster:p-default:client"

Target unavailable errors

Ensure the target virtual cluster is running and healthy. vCluster automatically attempts to reconnect when the target becomes available.
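
Following the same pattern as the log check above, you can inspect the target's pods on the host cluster:

Check target health
kubectl get pods -n vcluster-<target> -l app=vcluster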

Config reference

experimental.proxy

| Field | Type | Default | Description |
| ----- | ---- | ------- | ----------- |
| customResources | map[string]CustomResourceProxy | {} | Map of resource keys to proxy configuration. |

CustomResourceProxy

| Field | Type | Default | Description |
| ----- | ---- | ------- | ----------- |
| enabled | boolean | false | Enable the proxy for this resource. |
| targetVirtualCluster | VirtualClusterRef | - | Reference to the target virtual cluster. Required when enabled. |
| accessResources | string | "owned" | Resource visibility mode: owned or all. |

VirtualClusterRef

| Field | Type | Default | Description |
| ----- | ---- | ------- | ----------- |
| name | string | - | Name of the target virtual cluster. Required. |
| project | string | Same as source | Project of the target virtual cluster. |