Tech Blog by vCluster

Separate Clusters Aren’t as Secure as You Think — Lessons from a Cloud Platform Engineering Director

Jan 14, 2026 | 4 min read

For years, Kubernetes security decisions have followed a familiar pattern:

if a workload needs stronger isolation, give it its own cluster.

The logic feels sound. Separate clusters create clean boundaries, a smaller blast radius, and an easy answer to the question, “Is this isolated?”

But according to a Director of Cloud Platform Engineering at one of the most technically advanced companies in the fast food industry, that assumption starts to break down at scale.

As he put it plainly during a recent conversation:

“Separate clusters aren’t more secure either.”

The original promise of separate clusters

Early on, the platform team defaulted to separate clusters for higher-security workloads. It aligned with industry best practices and made conversations with security and compliance stakeholders straightforward.

“From a conceptual standpoint, separate clusters feel safer. You have a clean boundary, and you can say, ‘This thing can’t impact that thing.’”

At small scale, this approach worked. But as the organization grew, and the number of clusters grew with it, the operational reality began to change.

When security issues start to multiply

The most important realization wasn’t that Kubernetes isolation failed.

It was that security problems don’t stay isolated when the same pattern is repeated hundreds of times.

“If you have a vulnerability in one cluster, now you potentially have that same vulnerability replicated across 150 clusters.”

Instead of shrinking risk, cluster sprawl amplified it.

Each cluster introduced slight differences:

  • Kubernetes versions drifted
  • Policies evolved unevenly
  • Admission controls weren’t always identical
  • Upgrade timelines slipped

What looked like strong isolation on a whiteboard became fragile in practice.

“The assumption is that every cluster is configured perfectly and stays that way forever. That’s just not how real systems work.”
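The drift described above can be made concrete with a small sketch. Assuming a hypothetical inventory of per-cluster settings (in practice this would come from a cluster registry or tooling that queries each cluster), a fleet audit reduces to diffing every cluster against a golden baseline:

```python
# Illustrative sketch: auditing configuration drift across a cluster fleet.
# The cluster inventory and setting names are hypothetical.

GOLDEN = {
    "kubernetes_version": "1.31",
    "pod_security": "restricted",
    "audit_logging": True,
}

fleet = {
    "prod-us-east": {"kubernetes_version": "1.31", "pod_security": "restricted", "audit_logging": True},
    "prod-eu-west": {"kubernetes_version": "1.29", "pod_security": "restricted", "audit_logging": True},
    "staging":      {"kubernetes_version": "1.31", "pod_security": "baseline",   "audit_logging": False},
}

def find_drift(fleet, golden):
    """Return {cluster: {setting: (expected, actual)}} for every deviation."""
    drift = {}
    for name, config in fleet.items():
        diffs = {k: (v, config.get(k)) for k, v in golden.items() if config.get(k) != v}
        if diffs:
            drift[name] = diffs
    return drift

for cluster, diffs in find_drift(fleet, GOLDEN).items():
    for setting, (expected, actual) in diffs.items():
        print(f"{cluster}: {setting} expected {expected!r}, found {actual!r}")
```

The point of the sketch is the shape of the problem, not the tooling: with 150 clusters, this diff is something you have to run, triage, and remediate continuously.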

Consistency became the real security challenge

As the fleet expanded, the platform team discovered that consistency, not separation, was the hardest thing to maintain.

“Security isn’t just about where the boundary is. It’s about whether you can actually enforce the same controls everywhere, every time.”

Auditing became more complex. Confidence in the overall security posture declined. Even well-intentioned teams introduced drift simply by operating independently.

Separate clusters had quietly changed the trust model:

instead of trusting a system, the organization was now trusting that every cluster was being operated correctly at all times.

Isolation isn’t binary

One of the most important shifts in thinking was recognizing that tenancy isn’t a yes-or-no decision.

“Not every workload needs the same level of isolation. Treating them all the same forces you into the most expensive and least flexible option by default.”

Different workloads carry different risk profiles:

  • Development and test environments
  • Internal services
  • Sensitive or regulated workloads
  • Platform extensions and shared components

Defaulting all of them to “needs its own cluster” removed the ability to make intentional tradeoffs.

Stronger isolation doesn’t have to mean more clusters

The takeaway wasn’t that isolation is unnecessary.

“There are absolutely cases where you want stronger guarantees. The mistake is assuming the only way to get there is another cluster.”

Instead, the team began thinking in terms of graduated tenancy models:

  • Shared environments for low-risk workloads
  • Dedicated or private compute for stronger isolation
  • Clear control-plane boundaries where autonomy is required
  • Separate clusters only when regulatory or organizational constraints demand them

The conversation shifted from how many clusters to how reliably isolation could be enforced.
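One way to read that shift: the tenancy model becomes a per-workload decision rather than a fixed default. A minimal sketch of such a decision, using hypothetical workload attributes (none of this is a vCluster API), might look like:

```python
def choose_tenancy(workload: dict) -> str:
    """Pick the least expensive tenancy model that still meets the
    workload's isolation needs, mirroring graduated tenancy tiers."""
    if workload.get("regulated"):
        return "separate cluster"        # regulatory/organizational constraints
    if workload.get("needs_own_control_plane"):
        return "isolated control plane"  # autonomy without another full cluster
    if workload.get("risk") == "high":
        return "dedicated compute"       # stronger isolation than shared nodes
    return "shared environment"          # default for dev/test and low-risk services

print(choose_tenancy({"risk": "low"}))                      # shared environment
print(choose_tenancy({"risk": "high"}))                     # dedicated compute
print(choose_tenancy({"regulated": True, "risk": "high"}))  # separate cluster
```

The ordering is the point: the most expensive option is reached only when nothing cheaper satisfies the constraint, instead of being the reflex answer.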

Why defaulting to separate clusters is risky

Separate clusters still have a place. But treating them as the default response to security concerns comes with real downsides.

“Every cluster you add is another thing you have to secure, audit, upgrade, and keep in sync. That overhead doesn’t disappear, it compounds.”

Over time:

  • Security teams lose visibility
  • Platform teams lose leverage
  • Developers lose velocity
  • Cost and risk grow together

The real lesson

The lesson from this platform team wasn’t that separate clusters are unsafe.

It was that security doesn’t scale just because separation does.

When every workload gets its own cluster by default, vulnerabilities don’t disappear, they multiply. Policies drift. Audits get harder. Confidence erodes.

The most mature platforms don’t ask “Is this isolated?”

They ask “Is this the right isolation model for this workload, and can we operate it consistently?”

That shift, from binary decisions to intentional tenancy, is where real security at scale starts.

What this means for platform teams

Taken together, the takeaways are clear:

  • Security is an operational problem, not just an architectural one
  • Cluster count is a blunt instrument for isolation
  • Consistency matters more than physical separation alone
  • Separate clusters should be a deliberate choice, not the default

Modern platform engineering is about offering safe options, not forcing one model everywhere.

How vCluster supports intentional tenancy

Once tenancy is treated as a spectrum rather than a binary choice, the question becomes how to offer different isolation models without fragmenting the platform.

This is where virtual clusters fit naturally.

vCluster enables platform teams to:

  • Provide isolated Kubernetes control planes per team or workload
  • Enforce centralized governance and lifecycle management
  • Choose between shared nodes, dedicated nodes, or private nodes depending on security and compliance needs
  • Maintain consistency by default, even as isolation requirements vary

Instead of creating a new cluster for every security concern, platform teams can apply the right tenancy model per workload, without multiplying control planes or allowing configurations to drift out of sync.

Importantly, this doesn’t replace separate clusters. It complements them by giving platform teams more options before reaching for the most expensive form of isolation.
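As a deliberately minimal illustration, some of these per-tenant controls can be expressed declaratively in a vcluster.yaml. The keys below follow the v0.20+ configuration layout; treat the exact field names as an assumption and check the vCluster docs for the schema your version uses:

```yaml
# Minimal, illustrative vcluster.yaml sketch — verify field names against
# the vCluster documentation for your version before using.
sync:
  fromHost:
    nodes:
      enabled: true    # let tenant workloads see a view of host nodes
policies:
  resourceQuota:
    enabled: true      # cap what this tenant can consume on shared nodes
  limitRange:
    enabled: true      # apply default per-container limits inside the tenant
```

Because the configuration lives with the virtual cluster rather than in 150 independently operated clusters, the consistency problem from earlier in the article shrinks to keeping one template correct.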

Learn more

Want to explore what a tenancy spectrum looks like in practice?

Learn how platform teams use vCluster to offer isolated Kubernetes environments without the operational overhead of managing hundreds of separate clusters.

👉 Explore vCluster tenancy models

Ready to take vCluster for a spin?

Deploy your first virtual cluster today.