
Deploy vCluster on GKE

This guide provides step-by-step instructions for deploying vCluster on Google Kubernetes Engine (GKE).

Prerequisites​

Before starting, ensure you have the following tools installed:

  • kubectl: Kubernetes command-line tool for interacting with the cluster. See Install and Set Up kubectl for installation instructions.
  • vCluster CLI
    brew install loft-sh/tap/vcluster

    The binaries in the tap are signed using the Sigstore framework for enhanced security.

    Confirm that you've installed the correct version of the vCluster CLI.

    vcluster --version
  • Google Cloud SDK (gcloud CLI)
    note

    Ensure you have the necessary IAM permissions to create clusters and manage cloud services.
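
As an optional sanity check, confirm that all three tools are available on your PATH before continuing; the exact version numbers in the output will vary:

Verify installed tools
kubectl version --client
vcluster --version
gcloud --version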

Create GKE cluster​

Start by creating a zonal GKE cluster using the gcloud CLI. First, set up your environment variables:

tip

The project ID can be found in the Google Cloud Console under the project name. Alternatively, run gcloud projects list to list all projects and their IDs. To check which project is active, use gcloud config get-value project.

Modify the following with your specific values:
export PROJECT_ID=development
export CLUSTER_NAME=vcluster-demo
export ZONE=europe-west1-b
export MACHINE_TYPE=e2-standard-4

Configure gcloud by setting the default project and enabling the required Kubernetes Engine API:

Configure gcloud
gcloud config set project $PROJECT_ID
gcloud services enable container.googleapis.com
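
If you want to confirm that the Kubernetes Engine API was enabled successfully, you can filter the list of enabled services. This check is optional:

Verify the API is enabled
gcloud services list --enabled --filter="name:container.googleapis.com"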

Create the cluster:

Create GKE cluster
gcloud container clusters create $CLUSTER_NAME \
--zone $ZONE \
--machine-type $MACHINE_TYPE \
--num-nodes 2
info

This process typically takes about 10-15 minutes.

This command creates a GKE cluster named vcluster-demo in the europe-west1-b zone with two nodes of type e2-standard-4.

kubeconfig update

This command automatically updates your kubeconfig file with the new cluster configuration.
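
If the kubeconfig entry is missing, or you later work from a different machine, you can recreate it manually with gcloud:

Fetch cluster credentials
gcloud container clusters get-credentials $CLUSTER_NAME --zone $ZONE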

Verify the host cluster creation​

Verify the cluster is running by listing it with gcloud:

List GKE clusters
gcloud container clusters list

You should see output similar to:

NAME           LOCATION        MASTER_VERSION      MASTER_IP      MACHINE_TYPE   NODE_VERSION        NUM_NODES  STATUS
vcluster-demo  europe-west1-b  1.30.5-gke.1443001  35.187.66.218  e2-standard-4  1.30.5-gke.1443001  2          RUNNING

You can also confirm that kubectl points at the new cluster by listing its nodes:

List cluster nodes
kubectl get nodes

Predeployment configuration options​

Before deploying, review the configuration options that cannot be changed after deployment. Changing these options later requires deploying a new vCluster rather than upgrading the existing one.

Control Plane Options​

Decide how you want your control plane deployed:

  • High availability - Run multiple copies of vCluster components.
  • Rootless mode - Deploy the vCluster pod without root access to the host cluster.
  • Backing Store - Decide how the data of your cluster is stored.
    Backing store options

    vCluster supports etcd or a relational database (via KINE) as its backing store. This gives vCluster operators the flexibility to select a data store that fits their use case.

    warning

    After deploying your vCluster, there are limited migration paths to change your backing store. Review the backing store migration options before deploying.

    The embedded SQLite database is the default, so you don't need to configure anything. If you want to set it explicitly, use the following (an embedded etcd alternative is sketched after this list):

    controlPlane:
      backingStore:
        database:
          embedded:
            enabled: true
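
As a point of comparison, embedded etcd is configured under the same controlPlane.backingStore section. The snippet below is a sketch based on the vcluster.yaml schema; check the vcluster.yaml reference for the exact fields supported by your version:

controlPlane:
  backingStore:
    etcd:
      embedded:
        enabled: true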

Worker Nodes​

Decide where you want your worker nodes to come from:

  • Nodes from the host cluster - (Default) All worker nodes of the shared host cluster are used by the virtual cluster, and all resources are synced to the single namespace the vCluster is deployed in (see the sync example after this list).
  • Private Nodes - Enable adding individual nodes to the virtual cluster.
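
For the default setup that uses host cluster nodes, you can optionally sync the real node objects into the virtual cluster. The following snippet is a sketch that assumes the sync.fromHost.nodes fields from the vcluster.yaml reference; verify them against the reference docs for your version:

sync:
  fromHost:
    nodes:
      enabled: true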

Deploy vCluster on GKE​

YAML configuration

If you're not sure which options to configure, you can update most settings later by upgrading your vCluster with an updated vcluster.yaml. However, some settings, such as the type of worker nodes or the backing store, can only be set during the initial deployment and cannot be changed during an upgrade.

All of the deployment options below have the following assumptions:

  • A vcluster.yaml is provided. Refer to the vcluster.yaml reference docs to explore all configuration options. This file is optional and can be omitted from the examples; a minimal example follows this list.
  • The vCluster is called my-vcluster.
  • The vCluster is deployed into the team-x namespace.
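
If you don't have a vcluster.yaml yet, a minimal file can contain a single setting, or it can be omitted entirely to use the defaults. The example below is illustrative only and assumes the sync.toHost fields from the vcluster.yaml reference:

vcluster.yaml
sync:
  toHost:
    ingresses:
      enabled: true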

The vCluster CLI provides the most straightforward way to deploy and manage virtual clusters.

  1. Install the vCluster CLI, if you haven't already done so in the prerequisites:

     brew install loft-sh/tap/vcluster

    The binaries in the tap are signed using the Sigstore framework for enhanced security.

    Confirm that you've installed the correct version of the vCluster CLI:

    vcluster --version
  2. Deploy vCluster:

    Modify the following with your specific values:
    vcluster create my-vcluster --namespace team-x --values vcluster.yaml
    note

    After installation, vCluster automatically switches your Kubernetes context to the new virtual cluster. You can now run kubectl commands against the virtual cluster.
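
To confirm the deployment from inside the virtual cluster and then return to the host cluster context, you can run the following; vcluster disconnect and vcluster list are standard vCluster CLI commands:

Verify and disconnect
kubectl get namespaces
vcluster disconnect
vcluster list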

Next steps​

Now that you have vCluster running on GKE, consider setting up the platform UI to manage your virtual clusters.
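
One way to try the platform is to deploy it from the CLI into the host cluster context. The command below is a sketch; make sure you are disconnected from the virtual cluster first, and see the platform documentation for the full set of installation options:

Deploy the platform UI
vcluster platform start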

Workload Identity​

Enterprise-Only Feature

This feature is an Enterprise feature. See our pricing plans or contact our sales team for more information.

When using the platform, you can easily enable Workload Identity.
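
For background, GKE Workload Identity links a Kubernetes service account to a Google service account through an annotation on the service account. The manifest below shows the standard GKE annotation for reference only; the service account and project names are placeholders, and the platform integration is what makes this easy to enable for workloads inside the virtual cluster:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-ksa
  annotations:
    iam.gke.io/gcp-service-account: my-gsa@PROJECT_ID.iam.gserviceaccount.com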

Cleanup​

If you deployed the GKE cluster for this tutorial and want to clean up its resources, run the following command:

Clean up resources
gcloud container clusters delete $CLUSTER_NAME --zone $ZONE --quiet
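
If you only want to remove the virtual cluster and keep the GKE cluster, delete it with the vCluster CLI instead:

Delete only the vCluster
vcluster delete my-vcluster --namespace team-x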