LVM RAID 5 not resulting in the expected logical volume size

I’m having an issue with LVM RAID 5: it won’t let me create an LV that uses the space on all four drives in the VG. What is particularly annoying is that I created this very same VG/LV layout, using the same model of drives, on this same system two years ago, and I don’t recall having this problem.

Here’s the output of pvs and vgs before I attempt to create the RAID 5 LV:

Output of pvs:

PV         VG          Fmt  Attr PSize   PFree 
/dev/sda1  vg_sklad02  lvm2 a--    2.73t  2.73t
/dev/sdb1  vg_sklad01  lvm2 a--    2.73t     0 
/dev/sdc1  vg_sklad02  lvm2 a--    2.73t  2.73t
/dev/sdd1  vg_sklad01  lvm2 a--    2.73t     0 
/dev/sde1  vg_sklad01  lvm2 a--    2.73t     0 
/dev/sdf1  vg_sklad02  lvm2 a--    2.73t  2.73t
/dev/sdg1  vg_sklad02  lvm2 a--    2.73t  2.73t
/dev/sdh1  vg_sklad01  lvm2 a--    2.73t     0 
/dev/sdi2  vg_bootdisk lvm2 a--  118.75g 40.00m
/dev/sdj2  vg_bootdisk lvm2 a--  118.75g 40.00m

Output of vgs:

VG          #PV #LV #SN Attr   VSize   VFree 
vg_bootdisk   2   2   0 wz--n- 237.50g 80.00m
vg_sklad01    4   1   0 wz--n-  10.92t     0 
vg_sklad02    4   0   0 wz--n-  10.92t 10.92t
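
For reference, the extent size and extent counts LVM is working with can be pulled directly as report fields; a quick check along these lines (standard vgs field names, with the extent size defaulting to 4 MiB) shows what the allocator sees:

vgs -o vg_name,vg_extent_size,vg_extent_count,vg_free_count vg_sklad02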

The command I used last time to create the LV, using the same model of drives on the same system, was:

lvcreate --type raid5 -L 8.18T -n lv_sklad01 vg_sklad01

When I issue the same command, changing only the VG and LV names, I get:

lvcreate --type raid5 -L 8.18T -n lv_sklad02 vg_sklad02

Using default stripesize 64.00 KiB.
Rounding up size to full physical extent 8.18 TiB
Insufficient free space: 3216510 extents needed, but only 2861584 available

This doesn’t make sense, as I have four drives of 2.73 TiB each: 4 × 2.73 = 10.92 TiB. Subtracting one drive’s worth for parity gives me 8.19 TiB, which is the size of the original LV on this system. Banging. My. Head. Against. Monitor. 😕
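
The numbers in the error message do line up, though, once extents are converted back to sizes (the counts imply the default 4 MiB extent size):

awk 'BEGIN {
    pe = 4 / (1024 * 1024)                          # one extent, in TiB
    printf "available: %.2f TiB\n", 2861584 * pe    # the whole VG: 10.92 TiB
    printf "requested: %.2f TiB\n", 3216510 * pe    # 12.27 TiB = 8.18 TiB * 3/2
}'

So lvcreate was asking for 8.18 TiB × 3/2 of raw space, which is what a RAID 5 with only two data stripes plus one parity would need, rather than a three-plus-one layout spanning all four drives.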

Grasping at straws, I also tried:

[root@sklad ~]# lvcreate --type raid5 -l 100%VG -n lv_sklad02 vg_sklad02
  Using default stripesize 64.00 KiB.
  Logical volume "lv_sklad02" created.

This results in an LV two thirds the size I expect (5.46 TiB is exactly two thirds of 8.19 TiB). Output from lvs:

LV         VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
lv_root    vg_bootdisk rwi-aor--- 102.70g                                    100.00
lv_swap    vg_bootdisk rwi-aor---  16.00g                                    100.00
lv_sklad01 vg_sklad01  rwi-aor---   8.19t                                    100.00
lv_sklad02 vg_sklad02  rwi-a-r---   5.46t                                    0.18
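
A quick way to see where those 5.46 TiB landed is to ask lvs for the stripe count and backing devices; a report along these lines (devices and stripes are standard lvs report fields) should show the RAID sub-LVs occupying only three of the four PVs:

lvs -a -o name,size,stripes,devices vg_sklad02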

After issuing the above lvcreate command, the output of pvs, vgs, and lvs is as follows:

[root@sklad ~]# pvs
  PV         VG          Fmt  Attr PSize   PFree 
  /dev/sda1  vg_sklad02  lvm2 a--    2.73t     0 
  /dev/sdb1  vg_sklad01  lvm2 a--    2.73t     0 
  /dev/sdc1  vg_sklad02  lvm2 a--    2.73t     0 
  /dev/sdd1  vg_sklad01  lvm2 a--    2.73t     0 
  /dev/sde1  vg_sklad01  lvm2 a--    2.73t     0 
  /dev/sdf1  vg_sklad02  lvm2 a--    2.73t     0 
  /dev/sdg1  vg_sklad02  lvm2 a--    2.73t  2.73t
  /dev/sdh1  vg_sklad01  lvm2 a--    2.73t     0 
  /dev/sdi2  vg_bootdisk lvm2 a--  118.75g 40.00m
  /dev/sdj2  vg_bootdisk lvm2 a--  118.75g 40.00m

[root@sklad ~]# vgs
  VG          #PV #LV #SN Attr   VSize   VFree 
  vg_bootdisk   2   2   0 wz--n- 237.50g 80.00m
  vg_sklad01    4   1   0 wz--n-  10.92t     0 
  vg_sklad02    4   1   0 wz--n-  10.92t  2.73t

[root@sklad ~]# lvs
  LV         VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv_root    vg_bootdisk rwi-aor--- 102.70g                                    100.00          
  lv_swap    vg_bootdisk rwi-aor---  16.00g                                    100.00          
  lv_sklad01 vg_sklad01  rwi-aor---   8.19t                                    100.00          
  lv_sklad02 vg_sklad02  rwi-a-r---   5.46t                                    2.31            

For some reason there is still unallocated space in vg_sklad02 (the VG I’m working on). Shouldn’t -l 100%VG have used all available space in the VG?

lv_sklad01 and lv_sklad02 should be the same size, since they are built from the same model of drives, and as far as I recall I used the same create command.

Does anyone have any suggestions as to what I’m doing wrong?

Solution:

As I said in my question, I’ve done this before and have a capture log of what I did to accomplish it two years ago. For some reason the identical lvcreate command didn’t work this time. To get this LV created, I had to specify the number of stripes with -i 3. The working command was:

lvcreate -i 3 --type raid5 -L 8.18T -n lv_sklad02 vg_sklad02
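
If you would rather not compute the size by hand, specifying the stripe count together with a percentage allocation should achieve the same thing (standard lvcreate usage, untested here). Note that for raid5, -i counts only the data stripes, so four PVs means -i 3 plus the implicit parity device:

lvcreate --type raid5 -i 3 -l 100%FREE -n lv_sklad02 vg_sklad02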

I guess something changed in updates to the LVM tools?

UPDATE

They did indeed make a change to LVM2. From rpm -q --changelog lvm2:

* Fri Jul 29 2016 Peter Rajnoha <prajnoha@redhat.com> - 7:2.02.162-1
<...>
- Add allocation/raid_stripe_all_devices to reinstate previous behaviour.
- Create raid stripes across fixed small numbers of PVs instead of all PVs.
<...>
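
Judging by that changelog entry, the old stripe-across-everything behaviour can also be reinstated globally instead of passing -i every time; a sketch of the lvm.conf setting named there (check your local lvm.conf comments for the exact syntax):

# /etc/lvm/lvm.conf
allocation {
    # Reinstate pre-2.02.162 behaviour: stripe RAID LVs across all
    # usable PVs instead of a fixed small number.
    raid_stripe_all_devices = 1
}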

Nice to know I wasn’t completely insane. 🙂 I RTFM’d, but not the right FM I guess. :-))
