Separate Clusters Aren’t as Secure as You Think — Lessons from a Cloud Platform Engineering Director
For years, Kubernetes security decisions have followed a familiar pattern:
if a workload needs stronger isolation, give it its own cluster.
The logic feels sound. Separate clusters create clean boundaries, smaller blast radius, and an easy answer to the question, “Is this isolated?”
But according to a Director of Cloud Platform Engineering at one of the most technically advanced companies in the fast food industry, that assumption starts to break down at scale.
As he put it plainly during a recent conversation:
“Separate clusters aren’t more secure either.”
Early on, the platform team defaulted to separate clusters for higher-security workloads. It aligned with industry best practices and made conversations with security and compliance stakeholders straightforward.
“From a conceptual standpoint, separate clusters feel safer. You have a clean boundary, and you can say, ‘This thing can’t impact that thing.’”
At small scale, this approach worked. But as the organization grew, and the number of clusters grew with it, the operational reality began to change.
The most important realization wasn’t that Kubernetes isolation failed.
It was that security problems don’t stay isolated when the same pattern is repeated hundreds of times.
“If you have a vulnerability in one cluster, now you potentially have that same vulnerability replicated across 150 clusters.”
Instead of shrinking risk, cluster sprawl amplified it.
Each cluster introduced slight differences, and what looked like strong isolation on a whiteboard became fragile in practice.
“The assumption is that every cluster is configured perfectly and stays that way forever. That’s just not how real systems work.”
As the fleet expanded, the platform team discovered that consistency, not separation, was the hardest thing to maintain.
“Security isn’t just about where the boundary is. It’s about whether you can actually enforce the same controls everywhere, every time.”
Auditing became more complex. Confidence in the overall security posture declined. Even well-intentioned teams introduced drift simply by operating independently.
Separate clusters had quietly changed the trust model:
instead of trusting a system, the organization was now trusting that every cluster was being operated correctly at all times.
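The drift problem above can be made concrete with a small sketch. Assuming each cluster's security-relevant settings have been exported into a plain dictionary (the cluster names, setting keys, and baseline values here are hypothetical, not the team's actual controls), a fleet audit reduces to comparing every cluster against one baseline:

```python
# Minimal drift check: compare each cluster's exported security settings
# against a single fleet-wide baseline. All names and values are illustrative.

BASELINE = {
    "pod_security_level": "restricted",
    "audit_logging": True,
    "network_policy_default_deny": True,
}

def find_drift(fleet: dict[str, dict]) -> dict[str, dict]:
    """Return, per cluster, the settings that differ from the baseline."""
    drift = {}
    for cluster, settings in fleet.items():
        diffs = {
            key: settings.get(key)
            for key, expected in BASELINE.items()
            if settings.get(key) != expected
        }
        if diffs:
            drift[cluster] = diffs
    return drift

fleet = {
    "prod-east": dict(BASELINE),                        # compliant
    "prod-west": {**BASELINE, "audit_logging": False},  # drifted
}
print(find_drift(fleet))  # -> {'prod-west': {'audit_logging': False}}
```

The point of the sketch is the trust model: once there are 150 entries in `fleet`, security depends on this comparison passing everywhere, every time, rather than on any single boundary.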
One of the most important shifts in thinking was recognizing that tenancy isn’t a yes-or-no decision.
“Not every workload needs the same level of isolation. Treating them all the same forces you into the most expensive and least flexible option by default.”
Different workloads carry different risk profiles.
Defaulting all of them to “needs its own cluster” removed the ability to make intentional tradeoffs.
The takeaway wasn’t that isolation is unnecessary.
“There are absolutely cases where you want stronger guarantees. The mistake is assuming the only way to get there is another cluster.”
Instead, the team began thinking in terms of graduated tenancy models.
The conversation shifted from how many clusters to how reliably isolation could be enforced.
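One way to make “graduated” concrete is a decision table that maps a workload's risk profile to the cheapest tenancy tier that still meets its needs. The tier names, inputs, and rules below are an illustrative sketch, not the team's actual policy:

```python
# Illustrative decision table mapping workload risk to a tenancy tier.
# Tier names and rules are hypothetical examples, not a prescribed policy.

def tenancy_tier(handles_regulated_data: bool,
                 needs_control_plane_isolation: bool,
                 internet_facing: bool) -> str:
    """Pick the least expensive tier that satisfies the workload's risk profile."""
    if handles_regulated_data:
        # Strongest guarantee: a fully separate cluster.
        return "dedicated-cluster"
    if needs_control_plane_isolation or internet_facing:
        # Control-plane isolation without another physical cluster.
        return "virtual-cluster"
    # Everything else shares a namespace on a multi-tenant cluster.
    return "shared-namespace"

print(tenancy_tier(False, True, False))  # -> virtual-cluster
print(tenancy_tier(True, False, False))  # -> dedicated-cluster
print(tenancy_tier(False, False, False))  # -> shared-namespace
```

Encoding the tradeoff this way keeps the dedicated cluster available for workloads that genuinely need it, while stopping it from becoming the default answer.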
Separate clusters still have a place. But treating them as the default response to security concerns comes with real downsides.
“Every cluster you add is another thing you have to secure, audit, upgrade, and keep in sync. That overhead doesn’t disappear, it compounds.”
Over time, that overhead compounds into configuration drift, heavier audits, and declining confidence in the overall posture.
The lesson from this platform team wasn’t that separate clusters are unsafe.
It was that security doesn’t scale just because separation does.
When every workload gets its own cluster by default, vulnerabilities don’t disappear; they multiply. Policies drift. Audits get harder. Confidence erodes.
The most mature platforms don’t ask “Is this isolated?”
They ask “Is this the right isolation model for this workload, and can we operate it consistently?”
That shift, from binary decisions to intentional tenancy, is where real security at scale starts.
Taken together, the takeaways are clear:
Modern platform engineering is about offering safe options, not forcing one model everywhere.
Once tenancy is treated as a spectrum rather than a binary choice, the question becomes how to offer different isolation models without fragmenting the platform.
This is where virtual clusters fit naturally.
vCluster enables platform teams to apply the right tenancy model per workload. Instead of creating a new cluster for every security concern, teams can offer isolated environments without multiplying control planes or allowing configurations to drift out of sync.
Importantly, this doesn’t replace separate clusters. It complements them by giving platform teams more options before reaching for the most expensive form of isolation.
Want to explore what a tenancy spectrum looks like in practice?
Learn how platform teams use vCluster to offer isolated Kubernetes environments without the operational overhead of managing hundreds of separate clusters.
Deploy your first virtual cluster today.