
Deploy an application with a Kubernetes StatefulSet

StatefulSet is the workload API object used to manage stateful applications.

Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.

If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed.

Source: Kubernetes StatefulSet documentation

The example below demonstrates the components of a StatefulSet.

$ cat perfbench-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: perfbench
spec:
  serviceName: perfbench
  replicas: 1
  selector:
    matchLabels:
      app: perfbench
  template:
    metadata:
      labels:
        app: perfbench
    spec:
      containers:
      - name: perfbench
        image: noname/perfbench:latest
        volumeMounts:
        - name: perfbench-data
          mountPath: /perfdata
        - name: perfbench-log
          mountPath: /perflog
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: perfbench-data
    spec:
      storageClassName: <storage-class>
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
  - metadata:
      name: perfbench-log
    spec:
      storageClassName: <storage-class>
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
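
Note that spec.serviceName refers to a headless Service that the StatefulSet expects to exist; it is what gives each Pod a stable DNS identity (for example, perfbench-0.perfbench.default.svc). The manifest above does not include one, so a minimal sketch, assuming the same perfbench name and labels, could look like:

```yaml
# Headless Service backing the StatefulSet's serviceName.
# clusterIP: None means no virtual IP is allocated; DNS resolves
# the Service name directly to the individual Pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: perfbench
spec:
  clusterIP: None
  selector:
    app: perfbench
```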

The StatefulSet can be created by applying the manifest as shown below.

$ kubectl apply -f perfbench-statefulset.yaml
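
Once applied, you can confirm that the controller has created the Pod and that the rollout has completed. The commands below are a sketch; the names match the manifest above.

```shell
# Check that the replica count matches the spec (READY column)
# and wait for the rollout to finish.
kubectl get statefulset perfbench
kubectl rollout status statefulset/perfbench
```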

If the StatefulSet fails to create its Pod, you can check the events with the following command.

$ kubectl describe pod perfbench-0
Name:         perfbench-0
Namespace:    default
Priority:     0
Node:         <hostname>/<ip-address>
Start Time:   Sat, 26 Feb 2022 06:22:27 +0000
Labels:       app=perfbench
              controller-revision-hash=perfbench-7657fb8779
              statefulset.kubernetes.io/pod-name=perfbench-0
Annotations:  cni.projectcalico.org/containerID: c8569d9a3f01f546ca92fb2d0cba98d4d971933e53ab83373ed34c91040d92bc
              cni.projectcalico.org/podIP: 192.168.201.145/32
              cni.projectcalico.org/podIPs: 192.168.201.145/32
Status:       Running
IP:           192.168.201.145
IPs:
  IP:           192.168.201.145
Controlled By:  StatefulSet/perfbench
Containers:
  perfbench:
    Container ID:   docker://b6a2c838837395c1c85913d4735e3167d3cea6b5ed3ade276584461d643eaee5
    Image:          noname/perfbench:latest
    Image ID:       docker-pullable://noname/perfbench@sha256:5460e3c04ea972afcde3db092b514919867f87974d012f046c538ac816c7aaae
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 26 Feb 2022 06:22:35 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /perfdata from perfbench-data (rw)
      /perflog from perfbench-log (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x8qhw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  perfbench-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  perfbench-data-perfbench-0
    ReadOnly:   false
  perfbench-log:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  perfbench-log-perfbench-0
    ReadOnly:   false
  kube-api-access-x8qhw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  20s (x2 over 21s)  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         18s                default-scheduler  Successfully assigned default/perfbench-0 to <hostname>
  Normal   Pulling           16s                kubelet            Pulling image "noname/perfbench:latest"
  Normal   Pulled            11s                kubelet            Successfully pulled image "noname/perfbench:latest" in 4.368378217s
  Normal   Created           10s                kubelet            Created container perfbench
  Normal   Started           10s                kubelet            Started container perfbench
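
The FailedScheduling warning above is transient: the Pod cannot be scheduled until the PersistentVolumeClaims created from the volumeClaimTemplates are bound, and the scheduler retries once they are. If the Pod stays Pending instead, inspect the claims directly. A sketch, using the PVC names generated for this StatefulSet (template name plus Pod name):

```shell
# PVCs from volumeClaimTemplates are named <template-name>-<pod-name>;
# each should reach the Bound phase once a volume is provisioned.
kubectl get pvc
kubectl describe pvc perfbench-data-perfbench-0
```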

You can check the Pod status and log in to the Pod's container as shown below.

$ kubectl get pod
NAME         READY   STATUS    RESTARTS   AGE
perfbench-0   1/1     Running   0          3m44s

$ kubectl exec -it perfbench-0 -- bash
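
Because each replica gets its own claims, scaling the StatefulSet creates additional per-Pod PVCs, and deleting the StatefulSet retains them so data survives. A sketch of both operations, using the names from this example:

```shell
# Scale up: perfbench-1 and perfbench-2 are created in order,
# each with its own perfbench-data and perfbench-log claims.
kubectl scale statefulset perfbench --replicas=3

# Deleting the StatefulSet does NOT delete the PVCs; remove them
# explicitly once the data is no longer needed.
kubectl delete statefulset perfbench
kubectl delete pvc perfbench-data-perfbench-0 perfbench-log-perfbench-0
```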


This post is licensed under CC BY 4.0 by the author.
