Getting started with vcluster

What are virtual Kubernetes clusters (vcluster)?

Virtual clusters are fully working Kubernetes clusters that run on top of other Kubernetes clusters. Compared to fully separate “real” clusters, virtual clusters reuse worker nodes and networking of the host cluster. They have their own control plane and schedule all workloads into a single namespace of the host cluster. Like virtual machines, virtual clusters partition a single physical cluster into multiple separate ones.

For more about vcluster, refer to its official website. The goal of this post is to learn how to deploy vcluster in an existing Kubernetes cluster.

Install the vcluster CLI

Requirements:

  • kubectl (check via kubectl version)
  • helm v3 (check with helm version)
  • a working kube-context with access to a Kubernetes cluster (check with kubectl get namespaces)

To install kubectl, refer to this post.
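
For reference, installing kubectl on Linux x86_64 typically looks like this (a sketch following the upstream Kubernetes docs):

$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl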

$ kubectl version
Kustomize Version: v4.5.7
[...]

To install Helm (the package manager for Kubernetes):

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

$ chmod 700 get_helm.sh

$ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.13.1-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm

$ helm version
version.BuildInfo{Version:"v3.13.1" ...

To install the vcluster CLI:

$ curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

$ vcluster --version
vcluster version 0.16.4

Method 1 - Deploy vcluster with kubectl

Check the Kubernetes cluster nodes:

$ export KUBECONFIG=./kubeconfig
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node11 Ready <none> 39h v1.27.2 10.10.0.11 <none> CentOS Linux 7 (Core) 5.7.12-1.el7.elrepo.x86_64 docker://23.0.6
node12 Ready control-plane 39h v1.27.2 10.10.0.12 <none> CentOS Linux 7 (Core) 5.7.12-1.el7.elrepo.x86_64 docker://23.0.6
node13 Ready <none> 39h v1.27.2 10.10.0.13 <none> CentOS Linux 7 (Core) 5.7.12-1.el7.elrepo.x86_64 docker://24.0.6
node14 Ready <none> 39h v1.27.2 10.10.0.14 <none> CentOS Linux 7 (Core) 5.7.12-1.el7.elrepo.x86_64 docker://24.0.6

Create a namespace for the vcluster:

$ kubectl create namespace vcluster-my-vcluster
$ kubectl get ns
NAME STATUS AGE
vcluster-my-vcluster Active 7s
[...]

Make a StorageClass the default for the vcluster:

$ kubectl patch storageclass px-db -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
px-db (default) kubernetes.io/portworx-volume Delete Immediate true 29h
[...]
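
If another StorageClass was previously marked as default, you can unset its annotation the same way (a sketch; <old-default> is a placeholder for the actual class name):

$ kubectl patch storageclass <old-default> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'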

Deploy vcluster:

$ helm template my-vcluster vcluster --repo https://charts.loft.sh -n vcluster-my-vcluster | kubectl apply -f -
serviceaccount/vc-my-vcluster created
serviceaccount/vc-workload-my-vcluster created
configmap/my-vcluster-coredns created
configmap/my-vcluster-init-manifests created
role.rbac.authorization.k8s.io/my-vcluster created
rolebinding.rbac.authorization.k8s.io/my-vcluster created
service/my-vcluster created
service/my-vcluster-headless created
statefulset.apps/my-vcluster created
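
Equivalently, you can let Helm manage the release directly instead of piping rendered templates through kubectl (a sketch; a Helm-managed release can later be removed with helm uninstall or vcluster delete):

$ helm upgrade --install my-vcluster vcluster --repo https://charts.loft.sh --namespace vcluster-my-vcluster --create-namespace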

Verify vcluster:

$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@cluster.local cluster.local kubernetes-admin

$ vcluster list

NAME | NAMESPACE | STATUS | VERSION | CONNECTED | CREATED | AGE | PRO
--------------+----------------------+---------+---------+-----------+-------------------------------+--------+------
my-vcluster | vcluster-my-vcluster | Running | 0.16.4 | | 2023-10-31 21:40:04 +0000 UTC | 52m30s |

$ kubectl get statefulset -n vcluster-my-vcluster
NAME READY AGE
my-vcluster 1/1 52m

$ kubectl get pod -n vcluster-my-vcluster
NAME READY STATUS RESTARTS AGE
coredns-68bdd584b4-79rpj-x-kube-system-x-my-vcluster 1/1 Running 0 27m
my-vcluster-0 2/2 Running 0 52m

$ kubectl get pvc -n vcluster-my-vcluster
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-my-vcluster-0 Bound pvc-97bddb5c-47ee-4b62-916a-240c96e84676 5Gi RWO px-db 52m

$ kubectl get ns
NAME STATUS AGE
vcluster-my-vcluster Active 54m
[...]

$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
px-db (default) kubernetes.io/portworx-volume Delete Immediate true 30h
[...]

Connect to vcluster:

$ vcluster connect my-vcluster -n vcluster-my-vcluster
22:08:33 done Switched active kube context to vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local
22:08:33 warn Since you are using port-forwarding to connect, you will need to leave this terminal open
- Use CTRL+C to return to your previous kube context
- Use `kubectl get namespaces` in another terminal to access the vcluster
Forwarding from 127.0.0.1:11459 -> 8443
Forwarding from [::1]:11459 -> 8443
Handling connection for 11459

Open a new terminal and verify the vcluster nodes:

$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node11 Ready <none> 4m43s v1.28.2+k3s1 10.10.62.211 <none> Fake Kubernetes Image 4.19.76-fakelinux docker://19.3.12

Note: By default, only a single fake node is visible inside the vcluster (vcluster creates fake nodes for the nodes where its pods are scheduled).
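
If the vcluster should see the real host nodes instead, the chart exposes node sync options (a sketch using the same values shown in Method 2 below):

$ helm template my-vcluster vcluster --repo https://charts.loft.sh -n vcluster-my-vcluster --set sync.nodes.enabled=true --set sync.nodes.syncAllNodes=true | kubectl apply -f -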

Disconnect from vcluster:

$ vcluster disconnect
22:37:06 info Successfully disconnected from vcluster: my-vcluster and switched back to the original context: kubernetes-admin@cluster.local
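
Besides switching the kube context, vcluster connect can run a single command against the virtual cluster and return immediately, for example:

$ vcluster connect my-vcluster -n vcluster-my-vcluster -- kubectl get namespaces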

Open a new bash session with the vcluster KUBECONFIG defined:

$ vcluster connect my-vcluster -n vcluster-my-vcluster -- bash
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local

$ kubectl get ns
NAME STATUS AGE
kube-system Active 45m
kube-public Active 45m
kube-node-lease Active 45m
default Active 45m

$ vcluster disconnect
22:51:03 info Successfully disconnected from vcluster: my-vcluster and switched back to the original context: kubernetes-admin@cluster.local

Delete vcluster:

$ vcluster list

NAME | NAMESPACE | STATUS | VERSION | CONNECTED | CREATED | AGE | PRO
--------------+----------------------+---------+---------+-----------+-------------------------------+----------+------
my-vcluster | vcluster-my-vcluster | Running | 0.16.4 | | 2023-10-31 21:40:04 +0000 UTC | 1h12m38s |

$ kubectl delete namespace vcluster-my-vcluster

$ vcluster list

NAME | NAMESPACE | STATUS | VERSION | CONNECTED | CREATED | AGE | PRO
-------+-----------+--------+---------+-----------+---------+-----+------
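
Deleting the host namespace removes everything because all vcluster components live inside it. For a vcluster installed as a Helm release or created with the CLI (as in Method 2 below), the CLI also offers a delete command:

$ vcluster delete my-vcluster -n vcluster-my-vcluster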

Method 2 - Deploy vcluster with the vcluster CLI

$ cat vcluster_values.yaml
sync:
  services:
    enabled: true
  configmaps:
    enabled: true
  secrets:
    enabled: true
  endpoints:
    enabled: true
  pods:
    enabled: true
    ephemeralContainers: false
    status: false
  events:
    enabled: true
  persistentvolumeclaims:
    enabled: true
  ingresses:
    enabled: true
  fake-nodes:
    enabled: true # will be ignored if nodes.enabled = true
  fake-persistentvolumes:
    enabled: true # will be ignored if persistentvolumes.enabled = true
  nodes:
    enabled: true
    # If nodes sync is enabled, and syncAllNodes = true, the virtual cluster
    # will sync all nodes instead of only the ones where some pods are running.
    syncAllNodes: true
    # nodeSelector is used to limit which nodes get synced to the vcluster,
    # and which nodes are used to run vcluster pods.
    # A valid string representation of a label selector must be used.
    nodeSelector: ""
    # syncNodeChanges allows vcluster user edits of the nodes to be synced down to the host nodes.
    # Write permissions on node resource will be given to the vcluster.
    syncNodeChanges: false
  persistentvolumes:
    enabled: true
  storageclasses:
    enabled: false
  legacy-storageclasses:
    enabled: true
  priorityclasses:
    enabled: true
  networkpolicies:
    enabled: true
  volumesnapshots:
    enabled: false
  poddisruptionbudgets:
    enabled: true
  serviceaccounts:
    enabled: true

# Scale up etcd
etcd:
  replicas: 2
  fsGroup: 12345
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
    runAsGroup: 12345
    runAsNonRoot: true
    runAsUser: 12345
    seccompProfile:
      type: RuntimeDefault

# Scale up controller manager
controller:
  replicas: 2
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
    runAsGroup: 12345
    runAsNonRoot: true
    runAsUser: 12345
    seccompProfile:
      type: RuntimeDefault

# Scale up api server
api:
  replicas: 2
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
    runAsGroup: 12345
    runAsNonRoot: true
    runAsUser: 12345
    seccompProfile:
      type: RuntimeDefault

# Scale up DNS server
coredns:
  replicas: 2
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
    runAsGroup: 12345
    runAsNonRoot: true
    runAsUser: 12345
    seccompProfile:
      type: RuntimeDefault

ingress:
  # Enable ingress record generation
  enabled: true
  # Ingress path type
  pathType: ImplementationSpecific
  apiVersion: networking.k8s.io/v1
  ingressClassName: "nginx"
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"

init:
  helm:
    # public chart
    - chart:
        name: metrics-server
        repo: https://kubernetes-sigs.github.io/metrics-server/
        version: 3.9.0
      # optional field
      values: |-
        replicas: 2
        defaultArgs:
          - --cert-dir=/tmp
          - --kubelet-use-node-status-port
          - --metric-resolution=15s
        args:
          - /metrics-server
          - --kubelet-insecure-tls=true
          - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
      release:
        name: metrics-server
        namespace: kube-system

multiNamespaceMode:
  enabled: true

$ vcluster create my-vcluster -f vcluster_values.yaml
23:00:34 info Creating namespace vcluster-my-vcluster
23:00:34 info Create vcluster my-vcluster...
23:00:34 info execute command: helm upgrade my-vcluster /tmp/vcluster-0.16.4.tgz-1464010202 --kubeconfig /tmp/2856381900 --namespace vcluster-my-vcluster --install --repository-config='' --values /tmp/901005439 --values vcluster_values.yaml
23:00:35 done Successfully created virtual cluster my-vcluster in namespace vcluster-my-vcluster
23:00:35 info Waiting for vcluster to come up...
23:00:51 warn vcluster is waiting, because vcluster pod my-vcluster-0 has status: ContainerCreating
23:01:02 done Switched active kube context to vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local
23:01:02 warn Since you are using port-forwarding to connect, you will need to leave this terminal open
- Use CTRL+C to return to your previous kube context
- Use `kubectl get namespaces` in another terminal to access the vcluster
Forwarding from 127.0.0.1:11704 -> 8443
Forwarding from [::1]:11704 -> 8443

Press CTRL+C to switch back to the original context:

$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@cluster.local cluster.local kubernetes-admin
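
Tip: to create the vcluster without connecting right away (and avoid the open port-forward terminal), the CLI can skip the connect step; the --connect flag defaults to true:

$ vcluster create my-vcluster -f vcluster_values.yaml --connect=false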

Check the vcluster nodes:

$ vcluster connect my-vcluster -n vcluster-my-vcluster -- bash
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local vcluster_my-vcluster_vcluster-my-vcluster_kubernetes-admin@cluster.local

$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node12 Ready control-plane 9m59s v1.27.2 10.10.50.155 <none> CentOS Linux 7 (Core) 5.7.12-1.el7.elrepo.x86_64 docker://23.0.6
node13 Ready <none> 9m59s v1.27.2 10.10.9.202 <none> CentOS Linux 7 (Core) 5.7.12-1.el7.elrepo.x86_64 docker://24.0.6
node14 Ready <none> 9m59s v1.27.2 10.10.7.186 <none> CentOS Linux 7 (Core) 5.7.12-1.el7.elrepo.x86_64 docker://24.0.6
node11 Ready <none> 9m59s v1.27.2 10.10.53.174 <none> CentOS Linux 7 (Core) 5.7.12-1.el7.elrepo.x86_64 docker://23.0.6
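
The values file also installs metrics-server into the vcluster via init.helm, so you can verify that it came up and serves node metrics (assuming the init chart deployed cleanly):

$ kubectl get deploy metrics-server -n kube-system
$ kubectl top nodes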

Check the node labels:

$ kubectl get nodes -o wide --show-labels
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME LABELS
node14 Ready <none> 14m v1.27.2 10.10.7.186 <none> CentOS Linux 7 (Core) 5.7.12-1.el7.elrepo.x86_64 docker://24.0.6 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/fio=true,kubernetes.io/hostname=node14,kubernetes.io/os=linux
[...]

Note: A node label that was created in the host cluster is also available in the vcluster. In this example, the label is kubernetes.io/fio=true.
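
Because the labels are synced, you can select or schedule against them from inside the vcluster, for example:

$ kubectl get nodes -l kubernetes.io/fio=true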

Disconnect from the vcluster:

$ vcluster disconnect
$ exit
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@cluster.local cluster.local kubernetes-admin

$ vcluster list

NAME | NAMESPACE | STATUS | VERSION | CONNECTED | CREATED | AGE | PRO
--------------+----------------------+---------+---------+-----------+-------------------------------+------+------
my-vcluster | vcluster-my-vcluster | Running | 0.16.4 | | 2023-10-31 23:00:35 +0000 UTC | 6m6s |

Create a StorageClass in the host cluster:

$ cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fio-sc
provisioner: pxd.portworx.com
parameters:
  repl: "1"
  sharedv4: "true"
  nodes: "20a547ae-b610-4939-94a8-3a87c87ee9b6"
reclaimPolicy: Retain
allowVolumeExpansion: true

$ kubectl apply -f sc.yaml

Verify the StorageClass in the vcluster:

$ vcluster connect my-vcluster -n vcluster-my-vcluster -- bash

$ kubectl get sc | grep fio-sc
fio-sc pxd.portworx.com Retain Immediate true 44s

Note: The StorageClass created in the host cluster is also available in the vcluster, likely because the values file enables sync.legacy-storageclasses, which makes host storage classes visible inside the vcluster.

Create a namespace in the vcluster:

$ kubectl create ns app-fio

$ kubectl get ns
NAME STATUS AGE
default Active 22m
kube-system Active 22m
kube-public Active 22m
kube-node-lease Active 22m
app-fio Active 4s

Verify the namespace in the host cluster context:

$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@cluster.local cluster.local kubernetes-admin

$ kubectl get ns
NAME STATUS AGE
default Active 41h
kube-node-lease Active 41h
kube-public Active 41h
kube-system Active 41h
portworx Active 31h
vcluster-my-vcluster Active 24m

Note: The namespace created in the vcluster does not appear in the host cluster context.

Create a fio pod in the vcluster:

$ cat fio-pod.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fio-pvc1
  namespace: app-fio
  annotations:
    volume.beta.kubernetes.io/storage-class: fio-sc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: fio-pod1
  namespace: app-fio
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/fio
                operator: In
                values:
                  - "true"
  containers:
    - name: fio-pod-container1
      image: ljishen/fio:3.6
      command: [ "sleep" ]
      args: [ "30d" ]
      volumeMounts:
        - name: fio-vol1
          mountPath: /mnt/data1
  volumes:
    - name: fio-vol1
      persistentVolumeClaim:
        claimName: fio-pvc1

$ kubectl apply -f fio-pod.yaml

Verify the pod and PVC in the vcluster:

$ kubectl get pods -n app-fio -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
fio-pod1 1/1 Running 0 9m29s 10.10.126.209 node14 <none> <none>

$ kubectl get pvc -n app-fio
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
fio-pvc1 Bound pvc-9997f6a0-b584-4b1b-b520-4ebe1afefffe 500Gi RWX fio-sc 4m2s

Connect to the pod in the vcluster:

$ kubectl exec -it fio-pod1 -n app-fio -- sh
/ # hostname
fio-pod1
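
Since the image ships the fio binary, you can run a short benchmark against the mounted volume from inside the pod (a sketch; the job parameters are illustrative):

/ # fio --name=seqwrite --directory=/mnt/data1 --rw=write --bs=1M --size=1g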

Verify the pod and PVC in the host cluster context:

$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* kubernetes-admin@cluster.local cluster.local kubernetes-admin

$ kubectl get ns
NAME STATUS AGE
default Active 41h
kube-node-lease Active 41h
kube-public Active 41h
kube-system Active 41h
portworx Active 31h
vcluster-my-vcluster Active 51m

$ kubectl get pod
No resources found in default namespace.
$ kubectl get pvc
No resources found in default namespace.

$ kubectl get pod -n app-fio
No resources found in app-fio namespace.
$ kubectl get pvc -n app-fio
No resources found in app-fio namespace.

Note: The app-fio namespace created in the vcluster doesn't appear in the host cluster context, so the related pod and PVC don't appear there either.
