Deploy with Flux
This guide shows how to deploy and manage virtual clusters with Flux, a GitOps tool that keeps your Kubernetes clusters in sync with the desired state defined in your Git repository.
Prerequisites

- Administrator access to a Kubernetes cluster: See Accessing Clusters with kubectl for more information. Run `kubectl auth can-i create clusterrole -A` to verify that your current kube-context has administrative privileges.
  To obtain a kube-context with admin access, ensure you have the necessary credentials and permissions for your Kubernetes cluster. This typically involves using `kubectl config` commands or authenticating through your cloud provider's CLI tools.
- `helm`: Helm v3.10 is required for deploying the platform. Refer to the Helm Installation Guide if you need to install it.
- `kubectl`: The Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.

Additionally, you'll need:

- A Kubernetes cluster with the Flux controllers installed
- The `flux` CLI tool installed on your machine (see the Flux Installation Guide)
- The `vcluster` CLI tool installed on your machine
- A basic understanding of GitOps principles
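Before you continue, you can confirm the tooling is in place with a quick check; `flux check --pre` validates that the target cluster meets the Flux prerequisites:

```shell
# Verify the CLI tools are installed
flux --version
vcluster version
kubectl version --client

# Verify the cluster satisfies the Flux prerequisites
flux check --pre
```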
Architecture options
When using Flux with virtual clusters, choose an architecture that aligns well with your GitOps workflow. You can configure Flux to work with virtual clusters in several ways, each offering different benefits and limitations.
Unlike Argo CD, which treats other clusters as first-class objects, Flux manages workloads on other clusters through KubeConfig references in `HelmRelease` and `Kustomization` resources. This difference in design influences how you'll structure your GitOps workflows when working with virtual clusters.
The standalone approach—deploying Flux in each virtual cluster—might not scale effectively when managing large numbers of virtual clusters, particularly in ephemeral environments like pull request previews. In these cases, a hub-and-spoke model or a Flux instance per host cluster is typically more resource-efficient and can reduce the overhead of running multiple Flux controllers.
The following are common approaches for integrating Flux with virtual clusters, each suited to different use cases:
1. Flux instance per host cluster

With this approach, a Flux instance runs on each host cluster and manages the virtual clusters within that environment. This is recommended if you already use Flux for each traditional cluster and want to maintain a similar management pattern.
- One Flux instance per host cluster
- Each Flux instance manages multiple virtual clusters on that host
- Virtual cluster KubeConfig Secret management is simplified since Secrets are local to the cluster
- Clear separation of responsibilities by host cluster
- Recommended if you already use a Flux instance per traditional cluster
- Provides better resource utilization since the Flux controllers are shared
2. Hub and spoke model
With this approach, a central Flux instance manages multiple virtual clusters across different host clusters. This is a good option if you already use a single Flux instance with multiple Kubernetes clusters or if you want centralized control of all virtual environments.
- One central Flux instance manages multiple virtual clusters across different hosts
- Works well with existing hub and spoke Flux setups
- Requires secure KubeConfig Secret management between clusters (see the sketch after this list)
- More efficient for large numbers of virtual clusters
- Provides a single control point for all virtual cluster management
- Can simplify GitOps workflows by having a single source of truth
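In a hub-and-spoke setup, the KubeConfig Secret exported by a vCluster lives on the host cluster that runs it, so it must be copied to the cluster where the central Flux instance runs. A minimal sketch, assuming kube-contexts named `spoke-host` and `flux-hub` and the Secret name used later in this guide; note that the `server` URL in the exported KubeConfig must also be reachable from the hub cluster (for example through a LoadBalancer or Ingress), because the in-cluster service DNS name only resolves on the host cluster:

```shell
# Extract the KubeConfig from the spoke host cluster...
kubectl --context spoke-host get secret vcluster-flux-kubeconfig \
  -n vcluster-namespace -o jsonpath='{.data.config}' | base64 -d > vcluster.kubeconfig

# ...and recreate the Secret on the hub cluster where Flux runs
kubectl --context flux-hub create namespace vcluster-namespace --dry-run=client -o yaml \
  | kubectl --context flux-hub apply -f -
kubectl --context flux-hub create secret generic vcluster-flux-kubeconfig \
  -n vcluster-namespace --from-file=config=vcluster.kubeconfig
```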
3. Flux inside virtual clusters
While possible, running Flux inside every virtual cluster adds resource overhead and management complexity. This approach might be suitable when virtual clusters need complete isolation and independent GitOps workflows.
- Each virtual cluster runs its own Flux instance
- Provides complete isolation between environments
- Teams can manage their own GitOps workflows independently
- Increased resource overhead (each vCluster needs its own Flux controllers)
- More complex to manage at scale
- Suitable for environments where strict isolation is required
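If you do run Flux inside a virtual cluster, you can bootstrap it the same way as on any other cluster after connecting to the vCluster. A minimal sketch, assuming a GitHub repository; the owner, repository, and path values are placeholders:

```shell
# Switch the kube-context to the virtual cluster
vcluster connect my-vcluster -n my-vcluster-namespace

# Bootstrap Flux inside the virtual cluster
flux bootstrap github \
  --owner=my-org \
  --repository=fleet-config \
  --branch=main \
  --path=clusters/my-vcluster

# Switch back to the host cluster context
vcluster disconnect
```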
Enable KubeConfig export

To enable Flux to deploy to virtual clusters, you must create a KubeConfig Secret that Flux can reference. Add the following to your vCluster configuration:
exportKubeConfig:
# Set a meaningful context name
context: default
# Use a server URL that is accessible from the Flux controllers
server: https://vcluster-name.vcluster-namespace.svc.cluster.local:443
# Skip TLS verification when Flux connects to the vCluster
insecure: true
# Specify the secret where the KubeConfig is stored
secret:
name: vcluster-flux-kubeconfig
syncer:
extraArgs:
# Add TLS SAN for the server URL to ensure certificate validity
- --tls-san=vcluster-name.vcluster-namespace.svc.cluster.local
This configuration:

- Exports the virtual cluster KubeConfig as a Secret in the host namespace
- Makes the Secret available for Flux to use with the `spec.kubeConfig` field
- Uses a server URL that is accessible from the Flux controllers (replace `vcluster-name` and `vcluster-namespace` with your actual values)
- Sets `insecure: true` to automatically skip TLS certificate verification
- Adds a TLS SAN (Subject Alternative Name) that matches the server URL, which helps prevent certificate verification errors
The vCluster `exportKubeConfig` configuration creates a Secret with the KubeConfig data stored under the key `config`. When referring to this Secret in Flux resources, you must specify this key in the `secretRef.key` field, as shown in the examples below.
# In Flux HelmRelease
spec:
kubeConfig:
secretRef:
name: vcluster-flux-kubeconfig
key: config # Must match the key used in the vCluster-generated Secret
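Once the virtual cluster is running, you can confirm that the exported Secret exists and contains the expected `config` key before wiring it into Flux (a quick check, assuming the names used above):

```shell
# Verify the exported KubeConfig Secret and inspect its server URL
kubectl get secret vcluster-flux-kubeconfig -n vcluster-namespace
kubectl get secret vcluster-flux-kubeconfig -n vcluster-namespace \
  -o jsonpath='{.data.config}' | base64 -d | grep server
```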
When using vCluster with Flux, proper TLS certificate configuration is essential:
- Set `exportKubeConfig.insecure: true` in your vCluster configuration
- Configure proper TLS SANs with the `--tls-san` flag in your vCluster configuration
- Ensure the server URL matches the certificate's SAN
# In your vCluster configuration
syncer:
extraArgs:
- --tls-san=vcluster-name.vcluster-namespace.svc.cluster.local
exportKubeConfig:
server: https://vcluster-name.vcluster-namespace.svc.cluster.local:443
insecure: true
See the Troubleshooting section for solutions to certificate issues.
Deploy virtual clusters with Flux
Git Repository: Create the vCluster Helm repository definition
Create a source for the vCluster Helm charts in your Git repository:
clusters/sources/vcluster-repository.yaml

---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
name: vcluster
namespace: flux-system
spec:
interval: 1h
  url: https://charts.loft.sh

Git Repository: Define your vCluster configuration
Create a vCluster configuration file in your Git repository:
clusters/production/vcluster-demo.yaml

---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: vcluster-demo
namespace: vcluster-demo
spec:
interval: 10m
chart:
spec:
chart: vcluster
version: "0.24.x"
sourceRef:
kind: HelmRepository
name: vcluster
namespace: flux-system
values:
# Configure TLS SAN for the certificate
syncer:
extraArgs:
- --tls-san=vcluster-demo.vcluster-demo.svc.cluster.local
exportKubeConfig:
# Set a meaningful context name
context: default
# Use a server URL that matches the TLS SAN
server: https://vcluster-demo.vcluster-demo.svc.cluster.local:443
# Skip TLS verification when Flux connects to the vCluster
insecure: true
# Specify the secret where the KubeConfig is stored
secret:
name: vcluster-flux-kubeconfig
sync:
toHost:
ingresses:
enabled: true
controlPlane:
coredns:
enabled: true
embedded: true
backingStore:
etcd:
embedded:
            enabled: true

You can include any standard vCluster configuration in the `values` section.

Kubernetes Cluster: Apply the vCluster namespace
Before applying the `HelmRelease`, ensure that the target namespace exists:

kubectl create namespace vcluster-demo
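Alternatively, if you prefer to keep the namespace under GitOps control as well, you can commit a plain Namespace manifest alongside the HelmRelease so Flux creates it for you (a minimal sketch; the file path is only a suggestion):

```yaml
# clusters/production/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: vcluster-demo
```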
Git Repository: Commit and push your changes
git add clusters/
git commit -m "Add vCluster demo configuration"
git push

Flux detects the changes and deploys the vCluster according to your configuration.
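You can follow the reconciliation from the host cluster while Flux applies the change:

```shell
# Check the HelmRelease status and the resulting vCluster pods on the host cluster
flux get helmreleases -n vcluster-demo
kubectl get pods -n vcluster-demo
```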
Deploy applications to virtual clusters
After the vCluster is up and running, use Flux to deploy applications directly into the virtual cluster.
Git Repository: Create a Helm repository source
vcluster-apps/sources/podinfo-repository.yaml

---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
name: podinfo
namespace: vcluster-demo
spec:
interval: 1h
  url: https://stefanprodan.github.io/podinfo

Git Repository: Create a HelmRelease targeting the vCluster
vcluster-apps/apps/podinfo-app.yaml

---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: podinfo
namespace: vcluster-demo
spec:
chart:
spec:
chart: podinfo
reconcileStrategy: ChartVersion
sourceRef:
kind: HelmRepository
name: podinfo
version: '*'
interval: 30m
kubeConfig:
secretRef:
name: vcluster-flux-kubeconfig
key: config
# Skip TLS verification for the target cluster
# Available in Flux v0.40.0 and later
skipTLSVerify: true
releaseName: podinfo
targetNamespace: podinfo
install:
createNamespace: true
values:
ui:
message: "Deployed by Flux to virtual cluster"
ingress:
enabled: true
hosts:
- host: podinfo.example.com
paths:
- path: /
              pathType: Prefix

Handling TLS certificate verification

The `kubeConfig` section references the `Secret` created by the vCluster using the `exportKubeConfig` setting. There are several approaches to handle TLS certificate verification:

- Recommended approach (Flux v0.40.0+): Use `skipTLSVerify: true` in the `kubeConfig` section as shown above, which tells Flux to skip certificate verification when connecting to the virtual cluster.
- Alternative approach: Configure both the TLS SAN and `insecure: true` in your vCluster configuration, as shown in the example.
- If you still encounter certificate errors: Use a modified `Secret` created with the solution in the troubleshooting section:

kubeConfig:
  secretRef:
    name: vcluster-flux-kubeconfig-modified # Use the modified Secret
    key: config
Git Repository: Commit and push your changes
git add vcluster-apps/
git commit -m "Add podinfo application for vCluster demo"
git push

Virtual Cluster: Verify deployment
After Flux reconciles the changes, you can connect to your vCluster and verify the application is deployed:
vcluster connect vcluster-demo -n vcluster-demo
# Check the deployment in the virtual cluster
kubectl get namespace podinfo
kubectl get pods -n podinfo
kubectl get ingress -n podinfo
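When you're done inspecting the application, switch your kube-context back to the host cluster:

```shell
# Return to the host cluster context
vcluster disconnect
```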
Manage multiple virtual clusters
When managing multiple virtual clusters with Flux, you can use Kustomize to organize your configurations.
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- sources/vcluster-repository.yaml
- development/vcluster-dev.yaml
- staging/vcluster-staging.yaml
- production/vcluster-prod.yaml
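This kustomization.yaml only lists the manifests; a Flux Kustomization pointing at the directory is what actually reconciles them. A minimal sketch, assuming the files are stored in a GitRepository source named `flux-system` (names and paths are placeholders):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: virtual-clusters
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```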
Bootstrap review environments with pre-installed Flux
A common scenario is having Flux already installed on the host cluster and wanting to use it for ephemeral review environments with virtual clusters. This approach follows GitOps principles while avoiding the need to install Flux separately for each environment.
Use ResourceSet feature for review environments

If you run the Flux Operator, you can use its ResourceSet feature to manage ephemeral environments efficiently. This is well suited to creating review environments that include virtual clusters.
apiVersion: fluxcd.controlplane.io/v1
kind: ResourceSet
metadata:
name: pr-123
namespace: reviews
spec:
interval: 5m
serviceAccountName: flux-reconciler
resources:
# First create the vCluster
- apiVersion: v1
kind: Namespace
metadata:
name: review-123
- apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: vcluster-pr-123
namespace: review-123
spec:
interval: 10m
chart:
spec:
chart: vcluster
version: "0.24.x"
sourceRef:
kind: HelmRepository
name: vcluster
namespace: flux-system
values:
exportKubeConfig:
context: default
server: https://vcluster-pr-123.review-123.svc.cluster.local:443
insecure: true
secret:
name: vcluster-pr-123-kubeconfig
# Deploy the PR app code into the vCluster
- apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: pr-app
namespace: review-123
spec:
interval: 5m
chart:
spec:
chart: ./charts/app
sourceRef:
kind: GitRepository
name: pull-request-123
namespace: flux-system
kubeConfig:
secretRef:
name: vcluster-pr-123-kubeconfig
key: config
skipTLSVerify: true
values:
image:
tag: pr-123
This approach allows you to:
- Create the vCluster and deploy applications in a single resource
- Use the Flux Operator directly without having to create custom CI scripts
- Simplify the cleanup when a PR is closed or merged
- Manage the entire lifecycle of review environments through GitOps practices
For more details, see the Flux Operator ResourceSet
documentation and an example implementation.
Use existing Flux for review environments
When Flux is already installed on your host cluster, you can create a GitOps workflow for review environments that:
- Uses the existing Flux installation on the host cluster
- Deploys virtual clusters for each review environment
- Uses Flux to deploy applications to these virtual clusters
- Reduces overhead and speeds up environment bootstrapping
Git Repository: Create a structure for review environments
├── clusters/
│ ├── sources/
│ │ └── vcluster-repository.yaml # HelmRepository for vCluster
│ └── reviews/
│ ├── review-env-template.yaml # Template for new review environments
│ └── pr-123/ # Directory for a specific PR review
│ └── vcluster.yaml # vCluster definition for PR-123
└── apps/
├── sources/
│ └── app-repository.yaml # Application source repositories
└── reviews/
└── pr-123/ # Apps for PR-123 environment
            └── deployment.yaml # Application deployment targeting PR-123 vCluster

Git Repository: Create a template for review environments
clusters/reviews/review-env-template.yaml

---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: vcluster-${PR_NUMBER}
namespace: review-${PR_NUMBER}
spec:
interval: 10m
chart:
spec:
chart: vcluster
version: "0.24.x"
sourceRef:
kind: HelmRepository
name: vcluster
namespace: flux-system
values:
sync:
toHost:
ingresses:
enabled: true
exportKubeConfig:
context: default
server: https://kubernetes.default.svc.cluster.local:443
secret:
        name: vcluster-${PR_NUMBER}-kubeconfig

CI Pipeline: Create a CI workflow that generates environments
CI workflow (conceptual)

steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Create PR-specific vCluster config
run: |
export PR_NUMBER=${GITHUB_REF#refs/pull/}
export PR_NUMBER=${PR_NUMBER%/merge}
mkdir -p clusters/reviews/pr-${PR_NUMBER}
# Generate the vCluster config from template
cat clusters/reviews/review-env-template.yaml | \
sed "s/\${PR_NUMBER}/$PR_NUMBER/g" > \
clusters/reviews/pr-${PR_NUMBER}/vcluster.yaml
- name: Commit and push to GitOps repo
run: |
git add clusters/reviews/pr-${PR_NUMBER}
git commit -m "Add review environment for PR #${PR_NUMBER}"
      git push

With this approach, your CI/CD pipeline creates the necessary configuration in your GitOps repository, and Flux (already running on the host cluster) automatically provisions the vCluster and deploys applications to it.
Host Cluster: Existing Flux detects and applies changes
The Flux controllers already running on your host cluster will:
- Detect the new vCluster configuration
- Create the required namespace
- Deploy the vCluster using the Helm chart
- Create the KubeConfig `Secret`
- Use the exported KubeConfig to deploy apps to the vCluster
This entire process follows GitOps principles, with your Git repository as the source of truth, and Flux handling the reconciliation—all without requiring manual intervention or imperative commands.
In production environments, implement automatic cleanup of preview environments when pull requests are closed or merged. You can do this by adding a CI workflow step that deletes the corresponding directory from your GitOps repository.
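A minimal sketch of such a cleanup job, using the same GitHub Actions style as the workflow above (the trigger and paths are assumptions based on the directory layout shown earlier):

```yaml
# Conceptual CI job that removes a review environment when its PR is closed or merged
on:
  pull_request:
    types: [closed]

jobs:
  cleanup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Remove review environment from the GitOps repo
        run: |
          PR_NUMBER=${{ github.event.pull_request.number }}
          git rm -r clusters/reviews/pr-${PR_NUMBER} apps/reviews/pr-${PR_NUMBER}
          git commit -m "Remove review environment for PR #${PR_NUMBER}"
          git push
```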
This pattern allows you to use an existing Flux installation rather than deploying Flux separately for each review environment. This significantly reduces overhead and bootstrap time.
For organizations managing many virtual clusters, especially in dynamic, ephemeral environments, vCluster Platform offers advanced lifecycle management capabilities that integrate smoothly with GitOps workflows. It supports automatic creation of KubeConfig Secrets, access control management, and simplified bootstrapping of virtual clusters with Flux.
Troubleshoot

On the host cluster:

- Verify that the virtual cluster KubeConfig `Secret` exists with the correct format
- Check the Flux controller logs for errors
- Ensure Flux has the necessary permissions to access the `Secret`

kubectl logs -n flux-system deployment/source-controller
kubectl logs -n flux-system deployment/helm-controller

For the virtual cluster:

- Verify that resources are being created in the virtual cluster
- Check that the `exportKubeConfig` setting is properly configured
- Ensure the server URL is reachable from the Flux controllers

kubectl get secret -n vcluster-namespace vcluster-flux-kubeconfig -o yaml
Common issues

TLS certificate verification errors
If you see TLS certificate verification errors in Flux controller logs like:
tls: failed to verify certificate: x509: certificate signed by unknown authority
This is a common issue when Flux attempts to connect to a vCluster, because the vCluster generates a self-signed certificate. Follow these solutions in order:
Solution 1: Properly configure vCluster certificate SANs
The most reliable approach is to configure proper TLS SANs when deploying vCluster:
syncer:
extraArgs:
- --tls-san=vcluster-name.vcluster-namespace.svc.cluster.local
exportKubeConfig:
server: https://vcluster-name.vcluster-namespace.svc.cluster.local:443
insecure: true
secret:
name: vcluster-flux-kubeconfig
This ensures the certificate includes the correct SAN for the service DNS name.
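To confirm which SANs the served certificate actually contains, you can inspect it from your workstation. A hedged sketch that port-forwards the vCluster service and assumes `openssl` is installed locally (replace the names with your own):

```shell
# Forward the vCluster API server port to localhost
kubectl port-forward -n vcluster-namespace svc/vcluster-name 8443:443 &

# Print the Subject Alternative Names of the certificate being served
openssl s_client -connect localhost:8443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
```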
Solution 2: Use a modified KubeConfig Secret
If you're still encountering issues, create a modified KubeConfig Secret
with TLS verification disabled:
# Set your environment variables
NAMESPACE="vcluster-namespace"
VCLUSTER_NAME="vcluster-name"
KUBECONFIG_SECRET="vcluster-flux-kubeconfig"
# Create a temporary directory
TMPDIR=$(mktemp -d)
cd $TMPDIR
# Extract original kubeconfig
kubectl get secret -n $NAMESPACE $KUBECONFIG_SECRET -o jsonpath='{.data.config}' | base64 -d > original-kubeconfig.yaml
# Extract client certificates
CLIENT_CERT=$(grep "client-certificate-data:" original-kubeconfig.yaml | awk '{print $2}')
CLIENT_KEY=$(grep "client-key-data:" original-kubeconfig.yaml | awk '{print $2}')
# Create KubeConfig without certificate-authority-data and with insecure-skip-tls-verify enabled
cat > modified-kubeconfig.yaml << EOF
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: https://${VCLUSTER_NAME}.${NAMESPACE}.svc.cluster.local:443
name: default
contexts:
- context:
cluster: default
user: default
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
user:
client-certificate-data: ${CLIENT_CERT}
client-key-data: ${CLIENT_KEY}
EOF
# Create Secret
kubectl create secret generic ${KUBECONFIG_SECRET}-modified -n $NAMESPACE --from-file=config=modified-kubeconfig.yaml
# Clean up
rm -rf $TMPDIR
Update your Flux resource to use this Secret:
spec:
kubeConfig:
secretRef:
name: vcluster-flux-kubeconfig-modified
key: config
Solution 3: Use Flux's built-in TLS verification options
For later versions of Flux (v0.40.0+), you can use Flux's native TLS verification options in your `HelmRelease` or `Kustomization` resources:
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: podinfo
namespace: vcluster-demo
spec:
# Other fields...
kubeConfig:
secretRef:
name: vcluster-flux-kubeconfig
key: config
# Skip TLS verification for target cluster
skipTLSVerify: true
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: app-deployment
namespace: vcluster-demo
spec:
# Other fields...
kubeConfig:
secretRef:
name: vcluster-flux-kubeconfig
key: config
# Skip TLS verification for target cluster
skipTLSVerify: true
This approach has the advantage of not requiring you to modify the KubeConfig Secret
manually while still resolving TLS certificate verification issues.
Connection refused errors
If you see "connection refused" errors in the Flux controller logs, it might indicate:
- The virtual cluster's API server is not accessible from Flux
- Network policies are blocking the communication
- The virtual cluster is not running or healthy
- The server URL in the KubeConfig is not correctly configured
You might see errors in the Flux controller logs like:
connect: connection refused
To troubleshoot:
- Check if the virtual cluster is running and ready:
kubectl get pods -n <vcluster-namespace>
- Verify the server URL in your `exportKubeConfig` setting:
kubectl get secret -n <vcluster-namespace> <kubeconfig-secret-name> -o jsonpath='{.data.config}' | base64 -d | grep server
- Ensure the server URL is accessible from the Flux controllers. Using the service DNS name is generally more reliable:
server: https://vcluster-name.vcluster-namespace.svc.cluster.local:443
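If the URL looks correct but connections still fail, you can test reachability from inside the host cluster with a temporary pod. A hedged sketch using a public curl image (replace the service name and namespace with your own); even an HTTP 401/403 response shows that the endpoint itself is reachable:

```shell
# Run a one-off pod and probe the vCluster API server endpoint
kubectl run vcluster-connectivity-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -k https://vcluster-name.vcluster-namespace.svc.cluster.local:443/healthz
```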