LVM/RAID recovery question
Rich Braun
richb-RBmg6HWzfGThzJAekONQAQ at public.gmane.org
Tue Sep 23 19:48:07 EDT 2008
Matthew Gillen offered:
> Add a device to your
> LVM, then do a 'pvmove' (the man page has a good example). Then remove
> your broken "RAID" device from the LVM and re-create it properly, add it
> back to LVM and 'pvmove' back. I think you can even do all that without
> taking the system offline.
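The cycle Matthew describes can be sketched as follows. This is only an outline; the device names (/dev/sdX1 for the replacement disk, /dev/md0 for the rebuilt array, VG "system") are placeholders to be adjusted for the actual layout:

```shell
# 1. Put an LVM label on the new device and add it to the volume group
pvcreate /dev/sdX1
vgextend system /dev/sdX1

# 2. Move all extents off the broken device onto the new one
#    (pvmove works online; the VG stays usable during the move)
pvmove /dev/broken_dev /dev/sdX1

# 3. Drop the old device from the VG, re-create it properly as RAID1,
#    then bring it back and move the data home again
vgreduce system /dev/broken_dev
# ... re-create the array on it with mdadm --create ...
pvcreate /dev/md0
vgextend system /dev/md0
pvmove /dev/sdX1 /dev/md0
vgreduce system /dev/sdX1
```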
This thing is just totally *gonzo*. I'm pretty amazed at the behavior I'm
seeing.
So I plug in a new drive (hot-swap, no reboot so far). Its name becomes
/dev/sdh. I set up the partition table to match /dev/sda (two RAID1
partitions). I do this:
# mdadm --create --level=raid1 --raid-devices=2 --chunk=256 /dev/md5 \
    /dev/sdh2 missing
That should, in theory, create a new, untainted device into which I can
move the system volume group. Wrong! Here's what I get:
# pvmove /dev/sda2 /dev/md5
Physical Volume "/dev/sda2" not found in Volume Group "system"
So where is the system vg?
# pvdisplay -C
PV VG Fmt Attr PSize PFree
...
/dev/md5 system lvm2 a- 232.00G 4.00G
LVM thinks the volume group is already on the newly-created md device (but
with the old size)! (The first time I tried this I used the original md
device name, /dev/md1, but the same thing happened with the new device
/dev/md5.) If I then do this, I see the original:
# mdadm --stop /dev/md5
# pvdisplay -C
PV VG Fmt Attr PSize PFree
...
/dev/sda2 system lvm2 a- 325.97G 97.97G
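One way to check whether the two device nodes are exposing the same LVM label (a sketch using standard LVM2 reporting options):

```shell
# Compare the LVM UUIDs seen on each device node. If /dev/md5 and
# /dev/sda2 report the same pv_uuid, they are two views of the same
# on-disk LVM label, not two independent physical volumes.
pvs -o pv_name,vg_name,pv_uuid

# Or query a single device directly:
pvdisplay /dev/sda2 | grep 'PV UUID'
```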
I'm not sure it's safe to do a pvremove to wipe /dev/md5 (pvdisplay seems to
show the UUID of the /dev/sda2 device for some reason), and I suspect there
is some other underlying corruption of this volume.
So I have no way of telling pvmove where the source data lives. I suppose I
could move it from /dev/sda2 to /dev/sdh2 (leaving /dev/md5 stopped), but
that defeats the purpose: I'm trying to convert this from a raw partition
into a RAID1 volume.
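For reference, the conversion I'm after would look roughly like this, assuming the degraded array /dev/md5 created above were actually clean (device names are this system's; steps are a sketch, not tested here):

```shell
pvcreate /dev/md5              # put a fresh LVM label on the new array
vgextend system /dev/md5       # add it to the volume group
pvmove /dev/sda2 /dev/md5      # migrate all extents off the raw partition
vgreduce system /dev/sda2      # drop the raw partition from the VG
pvremove /dev/sda2             # wipe its LVM label
mdadm --add /dev/md5 /dev/sda2 # resync it as the second mirror half
```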
I'll try a reboot to see if that clears this weirdness.
-rich
More information about the Discuss mailing list