Version: v0.30 Stable

Netris

Limited vCluster Tenancy Configuration Support

This feature is only available when running the control plane as a container, with the following worker node types:

  • Private Nodes
Enterprise-Only Feature

This feature is an Enterprise feature. See our pricing plans or contact our sales team for more information.

The Netris integration enables vCluster to use Netris for network automation for the vCluster control plane and private nodes.

Features​

For more information on the features that integrate with Netris, see:

Configure Netris integration​

  1. Create a Secret in the host cluster in the namespace where vCluster Platform is installed to provide authentication and endpoint information:

    apiVersion: v1
    kind: Secret
    metadata:
      name: netris-credentials
      namespace: vcluster-platform
    stringData:
      url: "https://netris.example.com/api"
      username: "admin"
      password: "password"
  2. Reference the Secret in your vcluster.yaml:

    integrations:
      netris:
        enabled: true
        connector: netris-credentials
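As an alternative to applying a manifest, the same Secret can be created imperatively with kubectl. This is a sketch only; the URL and credentials are placeholders you must replace with your own Netris endpoint and account:

```shell
# Create the connector Secret imperatively (values are placeholders)
kubectl create secret generic netris-credentials \
  --namespace vcluster-platform \
  --from-literal=url="https://netris.example.com/api" \
  --from-literal=username="admin" \
  --from-literal=password="password"
```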

Automatically configure kube-vip​

info

This requires Multus to be installed in the host cluster.

When the Netris integration is enabled, you can have it automatically set up kube-vip for your vCluster control plane. The platform looks up the server cluster in Netris and uses that information to create the necessary Multus NetworkAttachmentDefinition and configure kube-vip.

Add the kubeVip configuration to your Netris integration:

integrations:
  netris:
    enabled: true
    connector: netris-credentials
    kubeVip:
      # Netris server cluster to use for VIP allocation
      serverCluster: vcluster-control-plane
      # Bridge name for the NetworkAttachmentDefinition
      bridge: br-netris
      # Optional: IP ranges to use for allocation instead of CIDR-based allocation
      ipRange: "10.0.0.10-10.0.0.20"
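After creating a vCluster with this configuration, you can verify that the platform created the Multus attachment. The namespace below is an example; the attachment name follows the netris-kubevip-&lt;vcluster-name&gt; pattern:

```shell
# List Multus attachments in the namespace where the vCluster runs
# (namespace name is an example)
kubectl get network-attachment-definitions -n vcluster-my-team
```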

Configuration fields​

serverCluster​

Type: string
Required: Yes

The name of the Netris server cluster from which to allocate the control plane VIP address.

bridge​

Type: string
Required: Yes

The bridge name to use in the Multus NetworkAttachmentDefinition for connecting the control plane to the Netris network. The bridge must exist on the nodes where the vCluster pods are running and be connected to the Netris fabric (e.g. through VLAN).
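How the bridge gets onto the nodes is environment-specific and outside the scope of this integration. Purely as an illustrative sketch, a VLAN-backed bridge could be created with iproute2; the uplink name eth0 and VLAN ID 100 are assumptions for your environment:

```shell
# Create the bridge the control plane attaches to (run on each node;
# all names here are examples, not values the platform requires)
ip link add br-netris type bridge
# Tag a VLAN sub-interface on the uplink that reaches the Netris fabric
ip link add link eth0 name eth0.100 type vlan id 100
# Enslave the VLAN interface to the bridge and bring both up
ip link set eth0.100 master br-netris
ip link set br-netris up
ip link set eth0.100 up
```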

ipRange​

Type: string (comma-separated IP ranges)
Required: No

Specifies IP ranges to use for VIP allocation instead of using CIDR-based allocation from the server cluster's subnet.

Example: 10.0.0.10-10.0.0.20,10.0.0.30-10.0.0.40

When not specified, the VIP is allocated from the full subnet of the Netris server cluster.
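To make the range format concrete, here is a small sketch (not the platform's actual allocation code) that expands an ipRange string such as 10.0.0.10-10.0.0.20,10.0.0.30-10.0.0.40 into the candidate VIPs it covers, using Python's ipaddress module:

```python
import ipaddress

def expand_ip_ranges(spec: str) -> list[str]:
    """Expand a comma-separated range string like
    '10.0.0.10-10.0.0.20,10.0.0.30-10.0.0.40' into candidate addresses.
    Illustrative only -- not vCluster Platform's allocation code."""
    candidates = []
    for part in spec.split(","):
        start_s, end_s = part.strip().split("-")
        addr = ipaddress.IPv4Address(start_s)
        end = ipaddress.IPv4Address(end_s)
        # Walk the range inclusively, one address at a time
        while addr <= end:
            candidates.append(str(addr))
            addr += 1
    return candidates

vips = expand_ip_ranges("10.0.0.10-10.0.0.20,10.0.0.30-10.0.0.40")
print(len(vips))  # 22 candidate addresses (11 per range, endpoints inclusive)
```

Note that both endpoints of each range are included, so 10.0.0.10-10.0.0.20 yields eleven candidates, not ten.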

How it works​

When configured, vCluster Platform will:

  1. Query the specified Netris server cluster to get subnet and gateway information
  2. Allocate a VIP from the subnet (or from the specified IP range if ipRange is set)
  3. Create a NetworkAttachmentDefinition with the specified bridge
  4. Transparently configure kube-vip for the vCluster instance to announce the VIP

You don't need to configure any of this yourself. The equivalent manual configuration would look like this:

# transparently configured
controlPlane:
  # Set the endpoint to the allocated VIP
  endpoint: 10.0.0.11:8443
  statefulSet:
    annotations:
      # Reference the auto-created NetworkAttachmentDefinition
      k8s.v1.cni.cncf.io/networks: netris-kubevip-<vcluster-name>
  advanced:
    kubeVip:
      # Enable kube-vip
      enabled: true
      # Set to the Multus interface
      interface: net1
      # Set gateway and subnet prefix for policy-based routing
      gateway: 10.0.0.1/24

For more information on manual kube-vip configuration, see Kube-vip.

Additional resources​