
Understanding thin provisioning volumes and snapshots

Thin provisioning volume

A logical volume can be thinly provisioned, which allows a storage administrator to overcommit the physical storage. In other words, it is possible to create a logical volume that is larger than the available extents.
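Overcommitment can be illustrated with simple arithmetic (the volume names and sizes below are hypothetical, not from the example that follows):

```shell
# Sketch of overcommitment: the sum of the thin volumes' virtual sizes may
# exceed the pool's physical size (sizes here are hypothetical).
pool_gib=500
total=0
for vol_gib in 300 200 200; do    # three thin volumes, 700GiB in total
  total=$((total + vol_gib))
done
echo "provisioned ${total}GiB from a ${pool_gib}GiB pool"
```

All three volumes can be created successfully; the pool only runs out of space once the blocks actually written exceed its physical size.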

Create a thin provisioned volume

In the following example, we create a 500GiB thin pool and a 100GiB thin volume.

$ vgcreate vg1 /dev/nvme0n1
  Physical volume "/dev/nvme0n1" successfully created.
  Volume group "vg1" successfully created

$ vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  centos   1   3   0 wz--n- 893.05g      0
  vg1      1   0   0 wz--n- 931.51g 931.51g

$ lvcreate -L 500G --thinpool thinpool1 vg1
  Thin pool volume with chunk size 256.00 KiB can address at most 63.25 TiB of data.
  Logical volume "thinpool1" created.

$ lvs
  LV        VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home      centos -wi-ao---- 839.05g
  root      centos -wi-ao----  50.00g
  swap      centos -wi-ao----   4.00g
  thinpool1 vg1    twi-a-tz-- 500.00g             0.00   10.41

$ lvs -ao name,size,stripesize,chunksize,metadata_percent
  LV                LSize   Stripe Chunk   Meta%
  home              839.05g     0       0
  root               50.00g     0       0
  swap                4.00g     0       0
  [lvol0_pmspare]   128.00m     0       0
  thinpool1         500.00g     0  256.00k 10.41
  [thinpool1_tdata] 500.00g     0       0
  [thinpool1_tmeta] 128.00m     0       0
  
$ lvcreate -V 100G --thin -n thinvol1 vg1/thinpool1
  Logical volume "thinvol1" created.

$ lvs
  LV        VG     Attr       LSize   Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home      centos -wi-ao---- 839.05g
  root      centos -wi-ao----  50.00g
  swap      centos -wi-ao----   4.00g
  thinpool1 vg1    twi-aotz-- 500.00g                  0.00   10.42
  thinvol1  vg1    Vwi-a-tz-- 100.00g thinpool1        0.00

Thin pool volume chunk size

By default, lvm2 starts with a 64KiB chunk size and increases it when the resulting size of the thin pool metadata device would otherwise grow above 128MiB.

In the previous example, the 500GiB thin pool results in a 256KiB chunk size. In the following example, a 100MiB thin pool results in a 64KiB chunk size.
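The heuristic can be sketched as follows, assuming (as an approximation, not the exact lvm2 algorithm) about 64 bytes of pool metadata per chunk and a 128MiB metadata target:

```shell
# Rough sketch of the default chunk-size heuristic: double the chunk size
# until the estimated metadata (~64 bytes per chunk, an assumption) fits
# under the 128MiB target.
pool_bytes=$((500 * 1024 * 1024 * 1024))   # 500GiB pool
chunk=$((64 * 1024))                       # start at 64KiB
while [ $((pool_bytes / chunk * 64)) -gt $((128 * 1024 * 1024)) ]; do
  chunk=$((chunk * 2))
done
echo "$((chunk / 1024))KiB"                # 256KiB for a 500GiB pool
```

The same calculation leaves a 100MiB pool at the 64KiB starting point, matching the transcripts above and below.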

$ lvcreate  -L 100M --thinpool thinpool2 vg1
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "thinpool2" created.

$ lvs -ao name,size,stripesize,chunksize,metadata_percent
  LV                LSize   Stripe Chunk   Meta%
  home              839.05g     0       0
  root               50.00g     0       0
  swap                4.00g     0       0
  [lvol0_pmspare]   128.00m     0       0
  thinpool1         500.00g     0  256.00k 10.42
  [thinpool1_tdata] 500.00g     0       0
  [thinpool1_tmeta] 128.00m     0       0
  thinpool2         100.00m     0   64.00k 10.84
  [thinpool2_tdata] 100.00m     0       0
  [thinpool2_tmeta]   4.00m     0       0
  thinvol1          100.00g     0       0  

The “-c” option can be used to specify a desired chunk size explicitly.

$ lvcreate -c 128k -L 100M --thinpool thinpool3 vg1
  Thin pool volume with chunk size 128.00 KiB can address at most 31.62 TiB of data.
  Logical volume "thinpool3" created.

$ lvs -ao name,size,stripesize,chunksize,metadata_percent
  LV                LSize   Stripe Chunk   Meta%
  home              839.05g     0       0
  root               50.00g     0       0
  swap                4.00g     0       0
  [lvol0_pmspare]   128.00m     0       0
  thinpool1         500.00g     0  256.00k 10.42
  [thinpool1_tdata] 500.00g     0       0
  [thinpool1_tmeta] 128.00m     0       0
  thinpool2         100.00m     0   64.00k 10.84
  [thinpool2_tdata] 100.00m     0       0
  [thinpool2_tmeta]   4.00m     0       0
  thinpool3         100.00m     0  128.00k 10.84
  [thinpool3_tdata] 100.00m     0       0
  [thinpool3_tmeta]   4.00m     0       0
  thinvol1          100.00g     0       0

Use the following criteria when choosing a chunk size:

  • A smaller chunk size requires more metadata and hinders performance, but provides better space utilization with snapshots.
  • A bigger chunk size requires less metadata manipulation, but makes snapshots less space efficient.
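The metadata side of the trade-off can be quantified with the same rough assumption as before (about 64 bytes of pool metadata per chunk, an approximation) for a 500GiB pool:

```shell
# Sketch of the metadata cost for different chunk sizes on a 500GiB pool,
# assuming roughly 64 bytes of pool metadata per chunk (an approximation).
pool_bytes=$((500 * 1024 * 1024 * 1024))
for chunk_kib in 64 256 1024; do
  chunks=$((pool_bytes / (chunk_kib * 1024)))
  echo "${chunk_kib}KiB chunks -> $((chunks * 64 / 1024 / 1024))MiB of metadata"
done
```

Quadrupling the chunk size cuts the metadata to a quarter, at the cost of copying or remapping more data per changed chunk when snapshots are in use.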

Normal snapshot volume

An LVM snapshot provides the ability to create a virtual image of a device at a point in time without service interruption.

When an original data block is overwritten after the snapshot is taken, the original data is first copied to the snapshot volume. This introduces copy-on-write overhead the first time each original block is overwritten. The state of the original data can then be reconstructed from the snapshot.
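The copy-on-write mechanism can be sketched as a toy model with plain files (not real LVM, where this happens per chunk at the block layer):

```shell
# Toy copy-on-write model with plain files (not real LVM): before the origin
# block is overwritten, its old contents are copied into the snapshot area.
echo "original data" > origin.img      # the origin volume's block
cp origin.img snap.img                 # copy-on-write: save the old block first
echo "new data" > origin.img           # ...then the overwrite proceeds
cat snap.img                           # the snapshot reconstructs "original data"
```

The extra copy on the write path is exactly the overhead described above; blocks that are never overwritten cost nothing.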

Thinly-provisioned snapshot volume

Unlike a normal snapshot volume, a thin snapshot is all about metadata. When a thin volume is snapshotted, its metadata is copied for the thin snapshot volume to use. When data in the origin volume is subsequently changed, the new data is written to new blocks and only the origin's metadata is updated; the snapshot's metadata still addresses the original data blocks. In other words, overwrites actually write data to new blocks, so the original blocks remain addressable through the snapshot volume's metadata after the change.
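This redirect-on-write behavior can also be sketched as a toy model with plain files (again, not real LVM; the "map" files stand in for the pool metadata):

```shell
# Toy redirect-on-write model (not real LVM): a "map" file stands in for the
# metadata, and data blocks are never overwritten in place.
echo "old data" > block0      # shared data block
echo "block0"   > vol.map     # the volume's metadata points at block0
cp vol.map snap.map           # taking a snapshot copies only the metadata
echo "new data" > block1      # an overwrite lands in a NEW block...
echo "block1"   > vol.map     # ...and only the volume's map is updated
cat "$(cat snap.map)"         # the snapshot still reads "old data" via block0
cat "$(cat vol.map)"          # the volume reads "new data" via block1
```

Because only metadata is copied at snapshot time, taking a thin snapshot is fast and there is no per-write copy of old data afterwards.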

Create the snapshot volume

$ lvcreate -s -L 100G -n thinvol1-snap /dev/vg1/thinvol1
  Logical volume "thinvol1-snap" created.

$ ls /dev/vg1
thinpool2  thinpool3   thinvol1  thinvol1-snap

$ lvs
  LV            VG     Attr       LSize   Pool      Origin   Data%  Meta%  Move Log Cpy%Sync Convert
  home          centos -wi-ao---- 839.05g
  root          centos -wi-ao----  50.00g
  swap          centos -wi-ao----   4.00g
  thinpool1     vg1    twi-aotz-- 500.00g                    0.00   10.42
  thinpool2     vg1    twi-a-tz-- 100.00m                    0.00   10.84
  thinpool3     vg1    twi-a-tz-- 100.00m                    0.00   10.84
  thinvol1      vg1    owi-a-tz-- 100.00g thinpool1          0.00
  thinvol1-snap vg1    swi-a-s--- 100.00g           thinvol1 0.00

$ lvs -ao name,size,stripesize,chunksize,metadata_percent
  LV                LSize   Stripe Chunk   Meta%
  home              839.05g     0       0
  root               50.00g     0       0
  swap                4.00g     0       0
  [lvol0_pmspare]   128.00m     0       0
  thinpool1         500.00g     0  256.00k 10.42
  [thinpool1_tdata] 500.00g     0       0
  [thinpool1_tmeta] 128.00m     0       0
  thinpool2         100.00m     0   64.00k 10.84
  [thinpool2_tdata] 100.00m     0       0
  [thinpool2_tmeta]   4.00m     0       0
  thinpool3         100.00m     0  128.00k 10.84
  [thinpool3_tdata] 100.00m     0       0
  [thinpool3_tmeta]   4.00m     0       0
  thinvol1          100.00g     0       0
  thinvol1-snap     100.00g     0    4.00k

The chunk size of the snapshot volume can be specified with the “-c” option.

$ lvcreate -s -c 128k -L 100G -n thinvol1-snap2 /dev/vg1/thinvol1
  Logical volume "thinvol1-snap2" created.

$ lvs -ao name,size,stripesize,chunksize,metadata_percent
  LV                LSize   Stripe Chunk   Meta%
  home              839.05g     0       0
  root               50.00g     0       0
  swap                4.00g     0       0
  [lvol0_pmspare]   128.00m     0       0
  thinpool1         500.00g     0  256.00k 10.42
  [thinpool1_tdata] 500.00g     0       0
  [thinpool1_tmeta] 128.00m     0       0
  thinpool2         100.00m     0   64.00k 10.84
  [thinpool2_tdata] 100.00m     0       0
  [thinpool2_tmeta]   4.00m     0       0
  thinpool3         100.00m     0  128.00k 10.84
  [thinpool3_tdata] 100.00m     0       0
  [thinpool3_tmeta]   4.00m     0       0
  thinvol1          100.00g     0       0
  thinvol1-snap     100.00g     0    4.00k
  thinvol1-snap2    100.00g     0  128.00k

Remove the snapshot volume

$ lvremove /dev/vg1/thinvol1-snap
Do you really want to remove active logical volume vg1/thinvol1-snap? [y/n]: y
  Logical volume "thinvol1-snap" successfully removed
$ lvremove /dev/vg1/thinvol1-snap2
Do you really want to remove active logical volume vg1/thinvol1-snap2? [y/n]: y
  Logical volume "thinvol1-snap2" successfully removed

$ lvs
  LV        VG     Attr       LSize   Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home      centos -wi-ao---- 839.05g
  root      centos -wi-ao----  50.00g
  swap      centos -wi-ao----   4.00g
  thinpool1 vg1    twi-aotz-- 500.00g                  0.00   10.42
  thinpool2 vg1    twi-a-tz-- 100.00m                  0.00   10.84
  thinpool3 vg1    twi-a-tz-- 100.00m                  0.00   10.84
  thinvol1  vg1    Vwi-a-tz-- 100.00g thinpool1        0.00

Remove the volume and pool

$ lvremove /dev/vg1/thinvol1 -f
  Logical volume "thinvol1" successfully removed

$ lvremove /dev/vg1/thinpool1
Do you really want to remove active logical volume vg1/thinpool1? [y/n]: y
  Logical volume "thinpool1" successfully removed  

This post is licensed under CC BY 4.0 by the author.
