Reimagining Local Kubernetes: Replacing Kind with vind — A Deep Dive
An open-source alternative to KinD with native LoadBalancer support, free UI, pull-through caching, and the ability to attach external nodes to your local cluster
Kubernetes developers have long relied on tools like KinD (Kubernetes in Docker) to spin up disposable clusters locally for development, testing, and CI/CD workflows. While KinD is a solid tool, it has limitations: no support for Services of type LoadBalancer, no easy way to reach homelab clusters from the web, and no way to add GPU nodes to your local cluster. Introducing vind (vCluster in Docker) - an open source alternative that runs Kubernetes clusters as first-class Docker containers, offering improved performance, modern features, and an enhanced developer experience.
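As a quick illustration of what native LoadBalancer support means in practice: the manifest below is a plain Kubernetes Service of type LoadBalancer. On vanilla KinD it typically sits in Pending unless you bolt on something like MetalLB, whereas a local tool with built-in LoadBalancer handling can assign it an address directly. The names are placeholders, not taken from vind's documentation.

```yaml
# Standard Service of type LoadBalancer; names are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  type: LoadBalancer   # stays <pending> on plain KinD without an add-on
  selector:
    app: demo-web
  ports:
    - port: 80
      targetPort: 8080
```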
Pragmatic Hybrid AI: Bursting Across Private GPUs and Public Cloud Without Leaking Data or Dollars
Hybrid AI That Works: Network Isolation, Data Gravity, and Workload Placement in the Real World
For the past two years, the AI infrastructure debate has been framed as binary: go all-in on on-prem GPU estates or stay all-in on the cloud. Neither approach is sustainable at enterprise scale. The winning pattern is intelligent placement—keep sensitive or data-heavy jobs local, burst elastic workloads into the cloud. Success depends on strict isolation, careful placement, and scheduling that is cost-aware from the start.
Why the nodes/proxy Kubernetes RCE Does Not Apply to vCluster
How vCluster provides more security than vanilla Kubernetes when using nodes/proxy permissions for monitoring stacks
A security researcher recently disclosed that Kubernetes nodes/proxy permissions can be exploited for remote code execution. Kubernetes labeled it "working as intended" and issued no CVE. Since vCluster was mentioned in the disclosure, we investigated how this vulnerability affects our users. The conclusion: vCluster is not affected and actually provides more security than vanilla Kubernetes when using features that require the nodes/proxy permission.
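For context, nodes/proxy is the RBAC subresource monitoring stacks commonly request so they can scrape kubelet endpoints. A minimal, illustrative ClusterRole of that kind is sketched below; it is not taken from the disclosure itself.

```yaml
# Illustrative ClusterRole granting nodes/proxy, the permission at the
# center of the disclosure; monitoring agents use it to reach kubelet APIs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-kubelet-reader
rules:
  - apiGroups: [""]
    resources: ["nodes", "nodes/proxy", "nodes/metrics"]
    verbs: ["get", "list", "watch"]
```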
Launching vCluster Free - Get vCluster Enterprise Features at No Cost
A free tier that makes advanced Kubernetes multi-tenancy accessible—without trials or sales gates.
We’re launching vCluster Free to make advanced Kubernetes multi-tenancy available to more builders.
Isolating Workloads in a Multi-Tenant GPU Cluster
Practical strategies for securing shared GPU environments with Kubernetes-native isolation, hardware partitioning, and operational best practices
Sharing GPU access across teams maximizes hardware ROI, but multitenant environments introduce critical performance and security challenges. This guide explores proven workload isolation strategies, from Kubernetes RBAC and network policies to NVIDIA MIG and time-slicing, that enable you to build secure, scalable GPU clusters. Learn how to prevent resource contention, enforce tenant boundaries, and implement operational safeguards that protect both workloads and data in production AI infrastructure.
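As a taste of one technique mentioned above, time-slicing with the NVIDIA device plugin is driven by a small sharing config. The sketch below follows the plugin's documented config format; the replica count is an arbitrary example, and how you mount the config (ConfigMap name, plugin flags) depends on your deployment.

```yaml
# Sketch of an NVIDIA k8s-device-plugin time-slicing config:
# each physical GPU is advertised as 4 schedulable nvidia.com/gpu replicas.
version: v1
sharing:
  timeSlicing:
    resources:
      - name: nvidia.com/gpu
        replicas: 4
```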
Separate Clusters Aren’t as Secure as You Think — Lessons from a Cloud Platform Engineering Director
Lessons in Intentional Tenancy and Security at Scale from a Cloud Platform Director
“If a workload needs isolation, give it its own cluster.” It sounds safe, but at scale this logic breaks down. Learn why consistency, not separation, is the real security challenge in modern Kubernetes environments.
Solving GPU-Sharing Challenges with Virtual Clusters
Why MPS and MIG fall short—and how virtual clusters deliver isolation without hardware lock-in
GPUs are expensive, but most organizations only achieve 30-50% utilization. The problem? GPUs weren't designed for sharing. Software solutions like MPS lack isolation. Hardware solutions like MIG lock you into specific vendors. vCluster takes a different approach—solving GPU multitenancy at the Kubernetes orchestration layer.
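For reference, here is what consuming a hardware-partitioned GPU looks like from the tenant side: under NVIDIA's MIG mixed strategy, a pod requests a specific MIG profile as an extended resource. The profile name below is one example (an A100 1g.5gb slice) and the image is illustrative; both depend on your hardware and workload.

```yaml
# Example pod requesting a single MIG slice under the mixed strategy;
# the exact resource name depends on the GPU model and MIG profile.
apiVersion: v1
kind: Pod
metadata:
  name: mig-inference
spec:
  restartPolicy: Never
  containers:
    - name: app
      image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi", "-L"]   # prints the visible MIG device
      resources:
        limits:
          nvidia.com/mig-1g.5gb: 1
```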
vCluster Ambassador Program
Introducing the first vCluster Ambassadors shaping the future of Kubernetes multi-tenancy and platform engineering
Meet the first vCluster Ambassadors - community leaders and practitioners advancing Kubernetes multi-tenancy, platform engineering, and real-world developer platforms.
Architecting a Private Cloud for AI Workloads
How to design, build, and operate a cost-effective private cloud infrastructure for enterprise AI at scale
Public clouds are convenient for AI experimentation, but production workloads often hit walls. For enterprises running continuous training and inference, a private cloud can deliver better ROI, data sovereignty, and performance. This comprehensive guide walks through architecting a private cloud for AI workloads from the ground up.
GPU Multitenancy in Kubernetes: Strategies, Challenges, and Best Practices
How to safely share expensive GPU infrastructure across teams without sacrificing performance or security
GPUs don't support native sharing between isolated processes. Learn four approaches for running multitenant GPU workloads at scale without performance hits.
AI Infrastructure Isn’t Limited By GPUs. It’s Limited By Multi-Tenancy.
What the AI Infrastructure 2025 Survey Reveals, And How Platform Teams Can Respond
The latest AI Infrastructure 2025 survey shows that most organizations are struggling not because of GPU scarcity, but because of poor GPU utilization caused by limited multi-tenancy capabilities. Learn how virtual clusters and virtual nodes help platform teams solve high costs, sharing issues, and low operational maturity in Kubernetes environments.
KubeCon + CloudNativeCon North America 2025 Recap
Announcing the Infrastructure Tenancy Platform for NVIDIA DGX—plus what we learned from 100+ conversations at KubeCon about GPU efficiency, isolation, and the future of AI on Kubernetes.
KubeCon Atlanta 2025 was packed with energy, launches, and conversations that shaped the future of AI infrastructure. At Booth #421, we officially launched the Infrastructure Tenancy Platform for NVIDIA DGX—a Kubernetes-native platform designed to maximize GPU efficiency across private AI supercomputers, hyperscalers, and neoclouds. Here's what happened, what we announced, and why it matters for teams scaling AI workloads.
Scaling Without Limits: The What, Why, and How of Cloud Bursting
A practical guide to implementing cloud bursting using vCluster VPN, Private Nodes, and Auto Nodes for secure, elastic, multi-cloud scalability.
Cloud bursting lets you expand compute capacity on demand without overprovisioning or re-architecting your systems. In this guide, we break down how vCluster VPN connects Private and Auto Nodes securely across environments—so you can scale beyond limits while keeping costs and complexity in check.
vCluster and Netris Partner to Bring Cloud-Grade Kubernetes to AI Factories & GPU Clouds With Strong Network Isolation Requirements
vCluster Labs and Netris team up to bring cloud-grade Kubernetes automation and network-level multi-tenancy to AI factories and GPU-powered infrastructure.
vCluster Labs has partnered with Netris to revolutionize how AI operators run Kubernetes on GPU infrastructure. By combining vCluster’s Kubernetes-level isolation with Netris’s network automation, the integration delivers a full-stack multi-tenancy solution, simplifying GPU cloud operations, maximizing utilization, and enabling cloud-grade performance anywhere AI runs.
Recapping The Future of Kubernetes Tenancy Launch Series
How vCluster’s Private Nodes, Auto Nodes, and Standalone releases redefine multi-tenancy for modern Kubernetes platforms.
From hardware-isolated clusters to dynamic autoscaling and fully standalone control planes, vCluster’s latest launch series completes the future of Kubernetes multi-tenancy. Discover how Private Nodes, Auto Nodes, and Standalone unlock new levels of performance, security, and flexibility for platform teams worldwide.
Bootstrapping Kubernetes from Scratch with vCluster Standalone: An End-to-End Walkthrough
Bootstrapping Kubernetes from scratch, no host cluster, no external dependencies.
Kubernetes multi-tenancy just got simpler. With vCluster Standalone, you can bootstrap a full Kubernetes control plane directly on bare metal or VMs, no host cluster required. This walkthrough shows how to install, join worker nodes, and run virtual clusters on a single lightweight foundation, reducing vendor dependencies and setup complexity for platform and infrastructure teams.
GPU on Kubernetes: Safe Upgrades, Flexible Multitenancy
How vCluster and NVIDIA’s KAI Scheduler reshape GPU workload management in Kubernetes - enabling isolation, safety, and maximum utilization.
GPU workloads have become the backbone of modern AI infrastructure, but managing and upgrading GPU schedulers in Kubernetes remains risky and complex.
This post explores how vCluster and NVIDIA’s KAI Scheduler together enable fractional GPU allocation, isolated scheduler testing, and multi-team autonomy, helping organizations innovate faster while keeping production safe.
A New Foundation for Multi-Tenancy: Introducing vCluster Standalone
Eliminating the “Cluster 1 problem” with vCluster Standalone v0.29 – the unified foundation for Kubernetes multi-tenancy on bare metal, VMs, and cloud.
vCluster Standalone changes the Kubernetes tenancy spectrum by removing the need for external host clusters. With direct bare metal and VM bootstrapping, teams gain full control, stronger isolation, and vendor-supported simplicity. Explore how vCluster Standalone (v0.29) solves the “Cluster 1 problem” while supporting Shared, Private, and Auto Nodes for any workload.
Introducing vCluster Auto Nodes — Practical deep dive
Auto Nodes extend Private Nodes with provider-agnostic, automated node provisioning and scaling across clouds, on-prem, and bare metal.
Kubernetes makes pods elastic, but node scaling often breaks outside managed clouds. With vCluster Platform 4.4 + vCluster v0.28, Auto Nodes fix that gap, combining isolation, elasticity, and portability. Learn how Auto Nodes extend Private Nodes with automated provisioning and dynamic scaling across any environment.
Introducing vCluster Auto Nodes: Karpenter-Based Dynamic Autoscaling Anywhere
Dynamic, isolated, and cloud-agnostic autoscaling for every virtual cluster.
vCluster Auto Nodes brings dynamic, Karpenter-powered autoscaling to any environment: public cloud, private cloud, or bare metal. Combined with Private Nodes, it delivers true isolation and elasticity for Kubernetes, letting every virtual cluster scale independently without cloud-specific limits.
How vCluster Auto Nodes Delivers Dynamic Kubernetes Scaling Across Any Infrastructure
Kubernetes pods scale elastically, but node scaling often stops at the provider boundary. Auto Nodes extend Private Nodes to bring elasticity and portability to isolated clusters across clouds, private datacenters, and bare metal.
Pods autoscale in Kubernetes, but nodes don’t. Outside managed services, teams fall back on brittle scripts or costly overprovisioning. With vCluster Platform 4.4 + vCluster v0.28, Auto Nodes close the gap, bringing automated provisioning and elastic scaling to isolated clusters across clouds, private datacenters, and bare metal.
The Case for Portable Autoscaling
Kubernetes has pods and deployments covered, but when it comes to nodes, scaling breaks down across clouds, providers, and private infrastructure. Auto Nodes change that.
Kubernetes makes workloads elastic until you hit the node layer. Managed services offer partial fixes, but hybrid and isolated environments still face scaling gaps and wasted resources. vCluster Auto Nodes close this gap by combining isolation, just-in-time elasticity, and environment-agnostic portability.
Running Dedicated Clusters with vCluster: A Technical Deep Dive into Private Nodes
A technical walkthrough of Private Nodes in vCluster v0.27 and how they enable true single-tenant Kubernetes clusters.
Private Nodes in vCluster v0.27 take Kubernetes multi-tenancy to the next level by enabling fully isolated, dedicated clusters. In this deep dive, we walk through setup, benefits, and gotchas, from creating a vCluster with Private Nodes to joining worker nodes and deploying workloads. If you need stronger isolation, simpler lifecycle management, or enterprise-grade security, this guide covers how Private Nodes transform vCluster into a powerful single-tenant option without losing the flexibility of virtual clusters.
We’re Now vCluster Labs
A new name, the same mission, building the best Kubernetes tenancy tools for teams everywhere.
Loft Labs is now vCluster Labs, a name that reflects our focus on building the best Kubernetes multi-tenancy and infrastructure engineering tools. The same team, projects, and mission remain, but with a clearer brand aligned to our product, vCluster.
vCluster v0.27: Introducing Private Nodes for Dedicated Clusters
Dedicated, tenant-owned nodes with a managed control plane: full isolation without running separate clusters.
Private Nodes complete vCluster’s tenancy spectrum: tenants connect their own nodes to a centrally managed control plane for full isolation, custom runtimes (CRI/CNI/CSI), and consistent performance, ideal for AI/ML, HPC, and regulated environments. Learn how it works and what’s next with Auto Nodes.
How to Scale Kubernetes Without etcd Sharding
Rethinking Kubernetes scale: avoid the risks of etcd sharding with virtual clusters built for performance, stability, and multi-tenant environments.
Is your Kubernetes cluster slowing down under load? etcd doesn’t scale well with multi-tenancy or 30k+ objects. This blog shows how virtual clusters offer an easier, safer way to isolate tenants and scale your control plane, no sharding required.
Three Tenancy Modes, One Platform: Rethinking Flexibility in Kubernetes Multi-Tenancy
Why covering the full Kubernetes tenancy spectrum is critical, and how Private Nodes bring stronger isolation to vCluster
In this blog, we explore why covering the full Kubernetes tenancy spectrum is essential, and how vCluster’s upcoming Private Nodes feature introduces stronger isolation for teams running production, regulated, or multi-tenant environments without giving up Kubernetes-native workflows.
Scaling Kubernetes Without the Pain of etcd Sharding
Why sharding etcd doesn’t scale, and how virtual clusters eliminate control plane bottlenecks in large Kubernetes environments.
OpenAI’s outage revealed what happens when etcd breaks at scale. This post explains why sharding isn’t enough, and how vCluster offloads API load with virtual control planes. Benchmark included.
vCluster: The Performance Paradox – How Virtual Clusters Save Millions Without Sacrificing Speed
How vCluster Balances Kubernetes Cost Reduction With Real-World Performance
Can you really save millions on Kubernetes infrastructure without compromising performance? Yes, with vCluster. In this blog, we break down how virtual clusters reduce control plane overhead, unlock higher node utilization, and simplify multi-tenancy, all while maintaining lightning-fast performance.
5 Must-See KubeCon + CloudNativeCon India 2025 Sessions
A curated list of impactful, technical, and thought-provoking sessions to catch at KubeCon + CloudNativeCon India 2025 in Hyderabad.
KubeCon + CloudNativeCon India 2025 is back in Hyderabad on August 6–7! With so many exciting sessions, it can be hard to choose. Here are 5 standout talks you shouldn't miss, from real-world Kubernetes meltdowns to scaling GitOps at Expedia, and even why Kubernetes is moving to nftables.