Linux automatically creating LVM partitions on RAID members?

Linux automatically creating LVM partitions on RAID members? – This post covers a problem with Linux software RAID and LVM: after a kernel update and reboot, a RAID1 array with LVM directly on /dev/md0 fails to come up cleanly, and partitions of type “Linux LVM” start appearing on the raw member disks even though none were created by hand. The original question and the solution that was eventually settled on are below.

I’ve had a software RAID1 array in production for over a year with LVM partitions on top of /dev/md0. I rebooted over the weekend to apply some kernel patches and now the array won’t come up: I get the “Continue to wait; or Press S to skip mounting or M for manual recovery” prompt on boot. I hit M, log in as root, and the RAID array is up, but none of the LVM partitions are available. It’s as if everything is gone.

I stopped the array and brought it up on a single disk (it’s RAID1) with --run, and the LVM volumes are there again. So I added a new disk to the degraded array and it started rebuilding. Then I ran fdisk on the disk I had just added, and there’s a brand-new partition on it of type “Linux LVM”. I did not create that partition. What’s going on? I’m not even using partitions; I’m just using the raw devices.
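For reference, here is a rough sketch of the recovery sequence described above, using mdadm and the LVM tools. The device names (/dev/md0 for the array, /dev/sda for the surviving member, /dev/sdb for the replacement disk) and the volume group name vg0 are placeholders, not the actual layout from the question:

    # Stop the half-assembled array, then force-start it degraded on one member
    mdadm --stop /dev/md0
    mdadm --assemble --run /dev/md0 /dev/sda

    # Rescan for LVM physical volumes and activate the volume group
    pvscan
    vgchange -ay vg0

    # Add the replacement disk so the mirror rebuilds, then watch the resync
    mdadm --manage /dev/md0 --add /dev/sdb
    cat /proc/mdstat

    # Inspect what the freshly added member reports (this is where the
    # unexpected "Linux LVM" partition showed up)
    fdisk -l /dev/sdb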

Solution:

The only way I could get the software RAID array stable was to leave LVM off and hard-partition. I tried using both raw devices and type 0xFD partitions, and as soon as I put LVM on top of /dev/md0, the partition types on all of the RAID members would automatically change from 0xFD to “Linux LVM”. Very, very strange. I’ve been using LVM over Linux software RAID for nearly a decade and have never seen this problem before. I’m buying an Areca card.
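For completeness, a minimal sketch of the LVM-free layout described above, assuming two blank disks /dev/sda and /dev/sdb (placeholder names) and an ext4 filesystem on the array:

    # One type-0xfd (Linux raid autodetect) partition spanning each disk
    echo ',,fd' | sfdisk /dev/sda
    echo ',,fd' | sfdisk /dev/sdb

    # Build the mirror on the partitions rather than the raw disks
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # Plain filesystem directly on the array, no LVM layer
    mkfs.ext4 /dev/md0

    # Sanity check: wipefs with no options only lists the signatures
    # visible on a raw member, it does not erase anything
    wipefs /dev/sda

One thing that may be worth checking in a setup like the original one is the md metadata version (mdadm --examine /dev/sda reports it): with 0.90 or 1.0 superblocks the metadata sits at the end of the device, so the first sectors of a raw member are byte-for-byte the first sectors of /dev/md0, and an LVM label or partition table written to the array is also visible to tools that scan the bare disk.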
