On Mon, Aug 16, 2010 at 3:53 PM, Jerry Feldman <gaf-mNDKBlG2WHs at public.gmane.org> wrote:
> We had a power failure here last week and one of my systems failed to
> come up, with serious fsck issues on the boot drive.
> I've got 3 SATA drives: one 160GB drive for boot and OS only, and
> two 2TB drives configured as either an LVM or possibly as a RAID1. In
> any case, I reinstalled RHEL 5.3 and now I need to recover the volume
> group and logical volumes. I tried vgchange, vgscan and pvscan with no
> results. I may have used either LVM with mirroring or RAID1. I just
> want to make sure I don't damage any data on the drives. The partitions
> on both /dev/sdb and /dev/sdc are Linux LVM (8e).
> I just don't want to do anything destructive at this point.

You might try "mdadm -Q /dev/sdXXX" on the partitions to see whether they really are Linux RAID1 components. Also try "cat /proc/mdstat". I'm not sure about RHEL, but it is possible that if you didn't configure LVM/MD during the OS install, your initial ramdisk doesn't automatically load the kernel modules for the Linux MD drivers. You might try loading the raid1 module manually and then re-checking /proc/mdstat.

Good Luck,
Bill Bogstad
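
In case it helps, here is a sketch of the non-destructive checks described above, assuming the two data drives show up as /dev/sdb and /dev/sdc with one partition each (the device names are only examples; substitute whatever fdisk actually reports):

    # All of these are read-only checks; none of them write to the drives.
    mdadm -Q /dev/sdb1 /dev/sdc1    # says whether each partition is an md component
    mdadm -E /dev/sdb1 /dev/sdc1    # dumps any md superblock found on the partitions
    cat /proc/mdstat                # shows arrays the kernel currently knows about

    # If mdstat comes up empty, load the RAID1 driver and look again:
    modprobe raid1
    cat /proc/mdstat

    # If the partitions turn out to be plain LVM physical volumes instead:
    pvscan                          # look for LVM physical volumes
    vgscan                          # rescan for volume groups
    vgchange -ay                    # activate any volume groups that turn up

If mdadm -E finds superblocks, the drives were set up as MD RAID1 and the array would need to be assembled before LVM can see anything; if pvscan finds physical volumes directly on the partitions, it was LVM mirroring and vgchange -ay should bring the logical volumes back.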