Virtual Kubernetes Clusters: Tips and Tricks
2022-10-23, 12:15–12:45, Room 2

Interest in virtual Kubernetes clusters and the open source tool vcluster has grown rapidly over the last year. vcluster allows platform teams to provide virtual Kubernetes clusters to their users. A virtual cluster appears to its users to be a full-blown Kubernetes cluster, but it runs within a namespace of the host cluster. This allows users to have admin access to the virtual cluster, use multiple namespaces in it, and manage cluster-wide objects like CRDs.

During the last year, many new features have been added to vcluster, and we’ve seen it used for use cases that we hadn’t even imagined. This talk will provide tips and tricks to help teams get more from their virtual clusters and show off some fun things you can do with them.

We’ll cover: How to share resources like ingresses from the host cluster, using vcluster’s isolated mode to automatically add network policies and Pod Security Standards to your virtual clusters, pausing and resuming virtual clusters, monitoring and backing up virtual clusters, and writing plugins with the vcluster SDK. We’ll also cover some weirder examples like using vcluster for shadow IT (users don’t need to have elevated privileges in the host cluster to start a virtual cluster) and running a virtual cluster inside a virtual cluster.
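As a taste of the workflow the talk walks through, here is a minimal sketch using the vcluster CLI. The cluster name and namespace are illustrative, and flag behavior reflects the vcluster CLI around the time of this talk, so check `vcluster --help` against your installed version:

```shell
# Create a virtual cluster in its own host namespace; --isolate enables
# isolated mode, which adds network policies, resource quotas, and a
# restricted pod security enforcement for the virtual cluster's workloads.
vcluster create demo-vcluster --namespace team-demo --isolate

# Connect to the virtual cluster (updates your kubeconfig context).
vcluster connect demo-vcluster --namespace team-demo

# Pause the virtual cluster to scale its workloads down to zero and
# free host resources, then resume it later with state intact.
vcluster pause demo-vcluster --namespace team-demo
vcluster resume demo-vcluster --namespace team-demo
```

Pausing is handy for dev and preview environments that only need to exist during working hours; the virtual cluster's data is kept, so resuming brings everything back.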

Multi-tenancy in Kubernetes is hard. Teams generally default to either namespace isolation or provisioning tons of clusters, and neither of those solutions is very satisfactory. Virtual Kubernetes clusters allow teams to share clusters while giving users the access they need.

Since we open sourced vcluster in April of 2021, we've seen many users embrace the tool and find interesting uses for it. It's easy to get started with virtual clusters, but there's much more you can do to get the most out of them. I'd like to share tips on using virtual clusters while also highlighting some of the fun and weird ideas we've heard from the community.

Note: vcluster isn't currently a CNCF project, but it is a CNCF-certified Kubernetes distribution.