
Setting up LVM volumes on a mdraid array

Setting up mdraid array

$ mdadm --create /dev/md0 --name=mdvol --level=raid0 --raid-devices=4 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

$ cat /proc/mdstat
Personalities : [raid0] [linear]
md0 : active raid0 nvme4n1[3] nvme3n1[2] nvme2n1[1] nvme1n1[0]
      15002423296 blocks super 1.2 512k chunks

unused devices: <none>

$ mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Feb  1 19:07:32 2022
        Raid Level : raid0
        Array Size : 15002423296 (13.97 TiB 15.36 TB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Tue Feb  1 19:07:32 2022
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : mdvol
              UUID : aa6fe868:3aa2391d:37a21dc8:d0f4c5f1
            Events : 0

    Number   Major   Minor   RaidDevice State
       0     259        8        0      active sync   /dev/nvme1n1
       1     259        9        1      active sync   /dev/nvme2n1
       2     259       10        2      active sync   /dev/nvme3n1
       3     259       11        3      active sync   /dev/nvme4n1
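
The array definition is not persisted automatically. A minimal sketch of making it reassemble at boot, assuming a standard mdadm setup (the config file path and initramfs tool vary by distribution):

```shell
# Append the array definition so it is assembled automatically at boot.
# Config path varies: /etc/mdadm/mdadm.conf on Debian/Ubuntu,
# /etc/mdadm.conf on RHEL/Fedora.
mdadm --detail --scan | tee -a /etc/mdadm.conf

# On initramfs-based distros, regenerate the initramfs afterwards, e.g.:
#   update-initramfs -u    # Debian/Ubuntu
#   dracut --force         # RHEL/Fedora
```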

Setting up LVM volumes on mdraid array

$ pvcreate /dev/md0
$ pvs | egrep "PV|md"
  PV             VG     Fmt  Attr PSize  PFree
  /dev/md0              lvm2 ---  13.97t 13.97t
$ vgcreate testvg /dev/md0  

$ vgs | egrep "VG|testvg"
  VG     #PV #LV #SN Attr   VSize  VFree
  testvg   1   0   0 wz--n- 13.97t 13.97t

$ lvcreate -n testlv11 -L 500G testvg -Wy --yes

$ lvs | egrep "LV|testvg"
  LV       VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  testlv11 testvg -wi-a----- 500.00g  

$ lvs -ao name,size,stripesize,chunksize,metadata_percent | egrep "LV|testlv"
  LV       LSize   Stripe Chunk Meta%
  testlv11 500.00g     0     0  

$ mkfs.ext4 /dev/testvg/testlv11

$ mkdir -p /mnt/testmnt11

$ mount /dev/testvg/testlv11 /mnt/testmnt11

$ lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
<omitted...>
nvme1n1             259:8    0  3.5T  0 disk
└─md0                 9:0    0   14T  0 raid0
  └─testvg-testlv11 253:3    0  500G  0 lvm   /mnt/testmnt11
nvme2n1             259:11   0  3.5T  0 disk
└─md0                 9:0    0   14T  0 raid0
  └─testvg-testlv11 253:3    0  500G  0 lvm   /mnt/testmnt11
nvme3n1             259:10   0  3.5T  0 disk
└─md0                 9:0    0   14T  0 raid0
  └─testvg-testlv11 253:3    0  500G  0 lvm   /mnt/testmnt11
nvme4n1             259:9    0  3.5T  0 disk
└─md0                 9:0    0   14T  0 raid0
  └─testvg-testlv11 253:3    0  500G  0 lvm   /mnt/testmnt11  
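
Note that `lvs` reports `Stripe 0 Chunk 0` for the LV: striping here is done by md, not LVM, so the LV is linear. The `mkfs.ext4` call above used default geometry; ext4 can optionally be told the RAID layout via `stride` and `stripe-width`. A sketch of deriving those values from the array shown above (512K chunk, 4 data disks; the 4K filesystem block size is an assumed default):

```shell
# Derive ext4 RAID hints from the md array geometry.
CHUNK_KB=512      # chunk size from mdadm --detail
DATA_DISKS=4      # raid0: all 4 devices hold data
BLOCK_KB=4        # assumed ext4 block size

STRIDE=$((CHUNK_KB / BLOCK_KB))          # fs blocks per chunk
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))    # fs blocks per full stripe

echo "mkfs.ext4 -E stride=${STRIDE},stripe-width=${STRIPE_WIDTH} /dev/testvg/testlv11"
```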
This post is licensed under CC BY 4.0 by the author.
