RAID 1 failure
Jim Van Zandt
jrvz at comcast.net
Sat Jan 17 09:12:11 EST 2004
I have been using two SCSI disk partitions in a RAID 1 configuration
for several months. However, I now get this message at system boot:
fsck.ext3: Invalid argument while trying to open /dev/md0
/dev/md0:
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
The suggested command fails with the same message.
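An aside on that suggestion: -b 8193 assumes a 1 KB block size, while a ~30 GB
ext3 partition more likely uses 4 KB blocks, which would put the first backup
superblock at 32768. A dry-run mke2fs would list the candidate locations without
writing anything, though neither command helps while the device cannot be opened
at all:

mke2fs -n /dev/md0        # -n: report only, nothing is written
e2fsck -b 32768 /dev/md0  # retry against the likely 4 KB-block backup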
However, I can manually mount each of the partitions:
mount /dev/sda2 /mnt -oro
kjournald starting. Commit interval 5 seconds
EXT3-fs: mounted filesystem with ordered data mode.
The data seems fine.
If each partition can be mounted on its own, the ext3 superblocks must be okay.
Why, then, can't the pair be assembled and mounted as a RAID 1 volume?
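If I understand correctly, the RAID persistent superblock is separate from the
ext3 superblock and lives near the end of each member partition, so a clean
filesystem does not say much about the md metadata. A sketch of how it could be
inspected, assuming mdadm is available (it may well not be installed here):

mdadm --examine /dev/sda2   # dump the md superblock from one member
mdadm --examine /dev/sdb2   # and from the other; compare UUIDs and event counts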
What's the best way to recover? I do have a complete copy of the
data on another disk, so I can start all over again if need be.
However, I'd like to know what happened first.
- Jim Van Zandt
----------- details -------------
/etc/raidtab is set up like this:
raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        chunk-size              4
        persistent-superblock   1
        device                  /dev/sda2
        raid-disk               0
        device                  /dev/sdb2
        raid-disk               1
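For reference, a sketch of bringing the array up by hand from that raidtab,
assuming the raidtools are installed; mdadm, if available, could assemble it
without consulting raidtab at all:

raidstart /dev/md0                              # start the array per /etc/raidtab
cat /proc/mdstat                                # see whether md0 came up
dmesg | tail                                    # any md/raid1 errors land here

mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2   # explicit alternative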
fdisk -l reports:
vanzandt:/proc# fdisk -l
Disk /dev/sda: 255 heads, 63 sectors, 4492 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1             1         5     40131   83  Linux
/dev/sda2             6      3895  31246425   83  Linux
/dev/sda3          3896      4381   3903795   83  Linux
/dev/sda4          4382      4492    891607+  82  Linux swap

Disk /dev/sdb: 255 heads, 63 sectors, 4492 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1       596   4787338+  83  Linux
/dev/sdb2           597      4486  31246425   83  Linux
(I've not changed the partition IDs to 0xfd for autodetection.)
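If autodetection were wanted, a sketch of the change using fdisk; it rewrites
only the type byte in the partition table, not the partition contents, and I
have not run it on these disks:

fdisk /dev/sda
   t     (change a partition's system id)
   2     (the RAID member, /dev/sda2)
   fd    (Linux raid autodetect)
   w     (write the table and exit)

and the same for /dev/sdb2.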
/proc/mdstat reports:
Personalities : [raid1]
read_ahead not set
unused devices: <none>