
BLU Discuss list archive



RAID 1 failure



Not sure, but it sounds like, if you can figure out which one has a bad
superblock (or if not, choose one), you might break it out of the mirror,
then re-join it, letting the array rebuild onto the half you broke out.
Then you should be able to go back to the original superblock.

Please get more info before acting on this! .. JC
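A rough sketch of that break-and-rejoin procedure with the raidtools
commands of that era (a hypothetical outline, not a tested recipe — note
it assumes the array is actually running, which per the /proc/mdstat
below it currently is not, and /dev/sdb2 is only a placeholder for
whichever half you decide to sacrifice):

```shell
# Hypothetical break-and-rejoin sketch (raidtools 0.90). Do NOT run blindly.
raidsetfaulty /dev/md0 /dev/sdb2   # mark one mirror half as failed
raidhotremove /dev/md0 /dev/sdb2   # detach it from the array
# ...verify the surviving half mounts and fscks cleanly, then:
raidhotadd /dev/md0 /dev/sdb2      # re-attach; the kernel resyncs onto it
cat /proc/mdstat                   # watch the rebuild progress
```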

-----Original Message-----
From: discuss-admin at blu.org [mailto:discuss-admin at blu.org]On Behalf Of
Jim Van Zandt
Sent: Saturday, January 17, 2004 8:12 AM
To: discuss at blu.org
Subject: RAID 1 failure



I have been using two SCSI disk partitions in a RAID 1 configuration
for several months.  However, I now get this message at system boot:

  fsck.ext3: Invalid argument while trying to open /dev/md0
  /dev/md0:
  The superblock could not be read or does not describe a correct ext2
  filesystem.  If the device is valid and it really contains an ext2
  filesystem (and not swap or ufs or something else), then the superblock
  is corrupt, and you might try running e2fsck with an alternate superblock:
      e2fsck -b 8193 <device>

The suggested command fails with the same message.
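Worth noting: -b 8193 is the first backup superblock only for filesystems
made with 1 KB blocks; a ~30 GB partition like this one was very likely
made with 4 KB blocks, whose first backup sits at block 32768 instead.
The arithmetic (backups live at the start of later block groups; a group
spans 8 x block-size blocks, and 1 KB filesystems add one boot block):

```shell
# First backup superblock = start of block group 1.
echo $(( 1 + 1024 * 8 ))   # 1 KB blocks -> 8193
echo $(( 4096 * 8 ))       # 4 KB blocks -> 32768
# `mke2fs -n /dev/sda2` (-n is a dry run, writes nothing) prints the exact
# backup locations; then one could try:  e2fsck -b 32768 /dev/sda2
```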

However, I can manually mount each of the partitions:

  mount /dev/sda2 /mnt -oro
  kjournald starting.  Commit interval 5 seconds
  EXT3-fs: mounted filesystem with ordered data mode.

The data seems fine.

If each partition can be mounted, the superblocks must be okay.  Why
then can't they be mounted as a RAID 1 volume?

What's the best way to recover?  I do have a complete copy of the
data on another disk, so I can start all over again if need be.
However, I'd like to know what happened first.

	     - Jim Van Zandt


  -----------  details  -------------

/etc/raidtab is set up like this:

	raiddev /dev/md0
		raid-level	1
		nr-raid-disks	2
		nr-spare-disks	0
		chunk-size	4
		persistent-superblock	1
		device		/dev/sda2
		raid-disk	0
		device		/dev/sdb2
		raid-disk	1

fdisk -l /dev/sda reports:

  vanzandt:/proc# fdisk -l

  Disk /dev/sda: 255 heads, 63 sectors, 4492 cylinders
  Units = cylinders of 16065 * 512 bytes

     Device Boot    Start       End    Blocks   Id  System
  /dev/sda1             1         5     40131   83  Linux
  /dev/sda2             6      3895  31246425   83  Linux
  /dev/sda3          3896      4381   3903795   83  Linux
  /dev/sda4          4382      4492    891607+  82  Linux swap

  Disk /dev/sdb: 255 heads, 63 sectors, 4492 cylinders
  Units = cylinders of 16065 * 512 bytes

     Device Boot    Start       End    Blocks   Id  System
  /dev/sdb1             1       596   4787338+  83  Linux
  /dev/sdb2           597      4486  31246425   83  Linux

(I've not changed the partition IDs to 0xfd for autodetection.)
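(For reference, marking the partitions type 0xfd so the kernel can
autostart the array at boot would be a one-time partition-table change —
a sketch only, and only wanted if you decide autodetection is desirable:)

```shell
# Interactively: fdisk /dev/sda, then t -> partition 2 -> type fd, then w.
# Non-interactively with sfdisk:
sfdisk --change-id /dev/sda 2 fd   # Linux raid autodetect
sfdisk --change-id /dev/sdb 2 fd
```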

/proc/mdstat reports:

  Personalities : [raid1]
  read_ahead not set
  unused devices: <none>
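(That /proc/mdstat output shows no arrays at all, which suggests md0 was
simply never started at boot — consistent with the partitions not being
type 0xfd and no raidstart having run. A hedged sketch of diagnostics,
assuming raidtools is installed:)

```shell
lsraid -a /dev/md0      # dump the persistent md superblocks, if readable
raidstart /dev/md0      # start the array as described in /etc/raidtab
cat /proc/mdstat        # md0 should now show [UU] if both halves joined
fsck.ext3 -n /dev/md0   # read-only check before mounting
```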

_______________________________________________
Discuss mailing list
Discuss at blu.org
http://www.blu.org/mailman/listinfo/discuss





BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.