Istio
This is an Enterprise feature. See our pricing plans or contact our sales team for more information.
Istio integration
This guide shows how to set up the Istio integration with your virtual cluster. The integration lets you use one Istio installation from the host cluster instead of installing Istio in each virtual cluster.

You can include your virtual workloads in the mesh by setting the istio.io/dataplane-mode=ambient label on virtual Namespaces or Pods. You can exclude your virtual workloads from the mesh by setting the istio.io/dataplane-mode=none label on either the Namespace or the Pod.
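For example, to opt a single Pod out of the mesh while the rest of its namespace stays enrolled (my-pod and my-namespace below are placeholder names):

```bash
# Exclude one Pod from the ambient mesh without touching the namespace label
kubectl label pod my-pod --namespace my-namespace istio.io/dataplane-mode=none
```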
Prerequisites

- Administrator access to a Kubernetes cluster: See Accessing Clusters with kubectl for more information. Run `kubectl auth can-i create clusterrole -A` to verify that your current kube-context has administrative privileges.

  info: To obtain a kube-context with admin access, ensure you have the necessary credentials and permissions for your Kubernetes cluster. This typically involves using `kubectl config` commands or authenticating through your cloud provider's CLI tools.
- `helm`: Helm v3.10 is required for deploying the platform. Refer to the Helm Installation Guide if you need to install it.
- `kubectl`: Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.
- `istio` installed on your host cluster in ambient mode with DNS capture disabled. To disable DNS capture, set `values.cni.ambient.dnsCapture: false` in your Istio configuration.
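For example, when installing Istio with istioctl, the setting might look like the following (a sketch assuming an istioctl-based ambient installation):

```bash
# Install Istio in ambient mode with DNS capture disabled
istioctl install --set profile=ambient --set values.cni.ambient.dnsCapture=false
```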
This integration works only with Istio in Ambient mode. Sidecar mode is not supported.
Enable the integration

Enable the Istio integration in your virtual cluster configuration:

```yaml
integrations:
  istio:
    enabled: true
```
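If you manage the virtual cluster with the vcluster CLI, you can apply this configuration when creating or upgrading the virtual cluster (my-vcluster and vcluster.yaml are placeholder names):

```bash
# Create the virtual cluster with the Istio integration enabled,
# or upgrade it in place if it already exists
vcluster create my-vcluster --upgrade --values vcluster.yaml
```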
This configuration:

- Enables the integration.
- Installs custom resource definitions for DestinationRules, Gateways, and VirtualServices into the virtual cluster.
- Exports DestinationRules, Gateways, and VirtualServices from the virtual cluster to the host, and rewrites service references to the services' translated names in the host.
- Adds the istio.io/dataplane-mode label to synced Pods based on the value of this label set in the virtual namespace.

Only DestinationRules, Gateways, and VirtualServices from the networking.istio.io/v1 API version are synced to the host cluster. Other kinds are not yet supported.
Set up cluster contexts

Setting up the host and virtual cluster contexts makes it easier to switch between them:

```bash
export HOST_CTX="your-host-context"
export VCLUSTER_CTX="vcluster-ctx"
export VCLUSTER_HOST_NAMESPACE="vcluster"
```

You can find your contexts by running `kubectl config get-contexts`.
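If you created the virtual cluster with the vcluster CLI, connecting to it adds and switches to a kube-context you can use as ${VCLUSTER_CTX} (my-vcluster is a placeholder name):

```bash
# Adds a kube-context for the virtual cluster and switches to it
vcluster connect my-vcluster
```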
Route requests based on the version label of the app

In this tutorial, you set the Kubernetes Service name as a host in the VirtualService spec.hosts. For this to work, you need a Waypoint proxy in the virtual cluster's host namespace; in many other cases it is optional. Refer to the Istio documentation for more information on Waypoint proxies.

Create waypoint proxy in the host

First, install the Gateway CRD in the host cluster:

Install Gateway CRD
```bash
kubectl --context="${HOST_CTX}" get crd gateways.gateway.networking.k8s.io &> /dev/null || \
kubectl --context="${HOST_CTX}" apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml
```

This is the Gateway for the Waypoint you need:
waypoint-gateway.yaml
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  labels:
    istio.io/waypoint-for: service
spec:
  gatewayClassName: istio-waypoint
  listeners:
    - name: mesh
      port: 15008
      protocol: HBONE
```

Create it in the host cluster:

Create Waypoint Gateway
```bash
kubectl --context="${HOST_CTX}" create -f waypoint-gateway.yaml --namespace="${VCLUSTER_HOST_NAMESPACE}"
```
Create virtual namespace with ambient mode enabled

First, create the test namespace:

Create test namespace
```bash
kubectl --context="${VCLUSTER_CTX}" create namespace test
```

Then label it with istio.io/dataplane-mode: ambient:

Label test namespace
```bash
kubectl --context="${VCLUSTER_CTX}" label namespace test istio.io/dataplane-mode=ambient
```
Create two versions of your app

Next, create three Deployments: two run an NGINX server, and the third is used to curl the other two.

Create NGINX Deployments that respond with different response bodies based on the contents of their respective ConfigMaps:
configmap1.yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-v1
  namespace: test
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx v1!</title>
    </head>
    <body>
    <h1>Hello from Nginx Version 1!</h1>
    </body>
    </html>
```

deployment1.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-v1
  namespace: test
  labels:
    app: nginx
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: v1
  template:
    metadata:
      labels:
        app: nginx
        version: v1
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-index-v1
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
      volumes:
        - name: nginx-index-v1
          configMap:
            name: nginx-configmap-v1
```

Create v1 config map
```bash
kubectl --context="${VCLUSTER_CTX}" create -f configmap1.yaml --namespace test
```

Create v1 deployment
```bash
kubectl --context="${VCLUSTER_CTX}" create -f deployment1.yaml --namespace test
```
Make sure that this NGINX app is up and running:

Wait for v1 pods
```bash
kubectl --context="${VCLUSTER_CTX}" wait --for=condition=ready pod -l app=nginx --namespace test --timeout=300s
```
Create an additional NGINX deployment configured to serve a different response body, using a separate ConfigMap:
configmap2.yaml
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-v2
  namespace: test
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx v2!</title>
    </head>
    <body>
    <h1>Hello from Nginx Version 2!</h1>
    </body>
    </html>
```

deployment2.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-v2
  namespace: test
  labels:
    app: nginx
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      version: v2
  template:
    metadata:
      labels:
        app: nginx
        version: v2
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-index-v2
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
      volumes:
        - name: nginx-index-v2
          configMap:
            name: nginx-configmap-v2
```

Create v2 config map
```bash
kubectl --context="${VCLUSTER_CTX}" create -f configmap2.yaml --namespace test
```

Create v2 deployment
```bash
kubectl --context="${VCLUSTER_CTX}" create -f deployment2.yaml --namespace test
```
To ensure your NGINX application is up and running, use the following command:

Wait for v2 pods
```bash
kubectl --context="${VCLUSTER_CTX}" wait --for=condition=ready pod -l app=nginx --namespace test --timeout=300s
```
Create a Service that targets Pods from both Deployments by using a shared label:
service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: test
  labels:
    app: nginx
    istio.io/use-waypoint: "waypoint"
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
```

The istio.io/use-waypoint: waypoint label directs Istio to route traffic for the labeled resource through the waypoint proxy within the same namespace. This enables the Layer 7 (L7) policy enforcement and observability features provided by the waypoint proxy. Applying this label to a namespace ensures that all Pods and Services within that namespace use the specified waypoint proxy.
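As noted above, the label can also be applied at the namespace level instead of per Service. A sketch of that variant, assuming you want every workload in the test namespace to use the waypoint:

```bash
# Enroll the whole namespace with the waypoint proxy
kubectl --context="${VCLUSTER_CTX}" label namespace test istio.io/use-waypoint=waypoint
```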
To deploy the Service defined in the service.yaml file within the test namespace of the cluster specified by the ${VCLUSTER_CTX} context, use the following command:

Create service
```bash
kubectl --context="${VCLUSTER_CTX}" create -f service.yaml --namespace test
```
To test connectivity between the two NGINX deployments, deploy a temporary Pod equipped with curl:

client_deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  namespace: test
  labels:
    app: client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```

Create client deployment
```bash
kubectl --context="${VCLUSTER_CTX}" create -f client_deployment.yaml --namespace test
```
Configure your desired traffic routing using DestinationRule and VirtualService

You can create DestinationRules and VirtualServices in the virtual cluster. Create a pair that routes requests based on the request path:

- Requests to the /v2 endpoint are routed to Pods with the version=v2 label.
- All other requests are routed to version=v1 Pods.

Save the following DestinationRule and VirtualService definitions, and apply them in the virtual cluster:

destination_rule.yaml
```yaml
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: nginx-destination
  namespace: test
spec:
  host: nginx-service.test.svc.cluster.local # vCluster translates it to the host service automatically
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```

virtual_service.yaml
```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: nginx-service
  namespace: test
spec:
  hosts:
    - nginx-service.test.svc.cluster.local # vCluster translates it to the host service automatically
  http:
    - name: "nginx-v2"
      match:
        - uri:
            prefix: "/v2"
      rewrite:
        uri: "/"
      route:
        - destination:
            host: nginx-service.test.svc.cluster.local # vCluster translates it to the host service automatically
            subset: v2
    - name: "nginx-v1"
      route:
        - destination:
            host: nginx-service.test.svc.cluster.local # vCluster translates it to the host service automatically
            subset: v1
```

To apply the DestinationRule and VirtualService to the virtual cluster specified by the ${VCLUSTER_CTX} context, use the following commands:

Create destination rule
```bash
kubectl --context="${VCLUSTER_CTX}" create -f destination_rule.yaml
```

Create virtual service
```bash
kubectl --context="${VCLUSTER_CTX}" create -f virtual_service.yaml
```
Verify that DestinationRule and VirtualService are synced to the host cluster

Check destination rule in the host cluster
```bash
kubectl --context="${HOST_CTX}" get destinationrules --namespace "${VCLUSTER_HOST_NAMESPACE}"
```

Check virtual service in the host cluster
```bash
kubectl --context="${HOST_CTX}" get virtualservices --namespace "${VCLUSTER_HOST_NAMESPACE}"
```

You should see a DestinationRule named nginx-destination-x-test-x-vcluster and a VirtualService named nginx-service-x-test-x-vcluster.
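To see the rewritten service references, you can inspect the synced object directly, using the translated name from the output above:

```bash
# Print the synced VirtualService; spec.hosts and route destinations now
# point at the translated host service names
kubectl --context="${HOST_CTX}" get virtualservice nginx-service-x-test-x-vcluster --namespace "${VCLUSTER_HOST_NAMESPACE}" -o yaml
```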
Test traffic routing

Execute a curl command from within the client Pod to verify responses from the two NGINX deployments. Depending on the request path, you should receive either "Hello from Nginx Version 1!" or "Hello from Nginx Version 2!" in the response:

Query version 2
```bash
kubectl --context="${VCLUSTER_CTX}" exec -it -n test deploy/client -- curl nginx-service/v2
```

```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx v2!</title>
</head>
<body>
<h1>Hello from Nginx Version 2!</h1>
</body>
</html>
```

Query version 1
```bash
kubectl --context="${VCLUSTER_CTX}" exec -it -n test deploy/client -- curl nginx-service
```

```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx v1!</title>
</head>
<body>
<h1>Hello from Nginx Version 1!</h1>
</body>
</html>
```

Seeing this output means that the request was intercepted by Istio and routed as specified in the DestinationRule and VirtualService.
Summary

The Istio integration enables you to re-use one Istio instance from the host cluster for multiple virtual clusters. Virtual cluster users can define their own Gateway, DestinationRule, and VirtualService resources without interfering with each other.
Fields translated during the sync to host

The following fields of Gateway are modified by vCluster during the sync to the host:

- The reference to the TLS Secret in spec.servers[*].tls.credentialName is re-written. The Secret is automatically synced to the host cluster.
- The namespace, as well as a . or * prefix followed by /, is stripped from spec.servers[*].hosts[*], so e.g. foo-namespace/loft.sh becomes loft.sh in the host object.

For additional information on how Secret and Service references are translated, read How does syncing work?
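As an illustration of the hosts rewriting, a hypothetical Gateway server defined in the virtual cluster and its host-side counterpart might look like this (a sketch, not exact vCluster output):

```yaml
# In the virtual cluster (hypothetical example)
spec:
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: my-tls-secret # reference is re-written; Secret is synced to the host
      hosts:
        - "foo-namespace/loft.sh"
# After the sync to the host, the namespace prefix is stripped:
#      hosts:
#        - "loft.sh"
```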
The following fields of DestinationRule are modified by vCluster during the sync to the host:

- The reference to the virtual Kubernetes Service is re-written for spec.host.
- The references to the TLS Secret in spec.trafficPolicy.portLevelSettings[*].tls.credentialName and spec.trafficPolicy.tls.credentialName are re-written. Secrets are automatically synced to the host cluster.
- Additional labels vcluster.loft.sh/managed-by: [YOUR VIRTUAL CLUSTER NAME] and vcluster.loft.sh/namespace: [VIRTUAL NAMESPACE] are automatically added to spec.subsets[*].labels.
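Putting these rules together, the DestinationRule from the tutorial above might look roughly like this after the sync (an illustrative sketch, not exact vCluster output):

```yaml
# Synced host object (illustrative sketch)
spec:
  host: nginx-service-x-test-x-vcluster.vcluster.svc.cluster.local # re-written Service reference
  subsets:
    - name: v1
      labels:
        version: v1
        vcluster.loft.sh/managed-by: vcluster # label added by vCluster
        vcluster.loft.sh/namespace: test      # label added by vCluster
```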
The following fields of VirtualService are modified by vCluster during the sync to the host:

- The reference to the virtual Kubernetes Service is re-written for:
  - spec.hosts[*]
  - spec.http[*].route[*].destination.host
  - spec.http[*].mirrors[*].destination.host
  - spec.tcp[*].route[*].destination.host
  - spec.tls[*].route[*].destination.host
- The reference to the networking.istio.io/v1 kind Gateway is re-written for:
  - spec.gateways[*]
  - spec.http[*].match[*].gateways[*]
  - spec.tls[*].match[*].gateways[*]
  - spec.tcp[*].match[*].gateways[*]
- The reference to the networking.istio.io/v1 kind VirtualService is re-written for:
  - spec.http[*].delegate
Fields not supported in VirtualService:

- spec.exportTo
- spec.http[*].match[*].sourceLabels
- spec.http[*].match[*].sourceNamespace
- spec.tcp[*].match[*].sourceLabels
- spec.tcp[*].match[*].sourceNamespace
- spec.tls[*].match[*].sourceLabels
- spec.tls[*].match[*].sourceNamespace
Config reference

istio required object pro

Istio syncs DestinationRules, Gateways and VirtualServices from the virtual cluster to the host.

enabled required boolean false pro

Enabled defines if this option should be enabled.