Daniel Feenberg wrote:
> If I have 10 disks in a JBOD, I will need to mount at least 10
> partitions, and none will depend on any other.

OK. I had inferred that you meant concatenation by your usage of JBOD.
My mistake.

> Well, there are two ways to use two disks without redundancy. You can
> interleave sectors or place one drive after the other. The former may
> gain you some speed over a single drive and has a name - RAID 0. I
> don't know the name of the other, or even if it is available.

As mentioned in my prior message, Linear mode - as in linear
concatenation of addresses/blocks/cylinders/partitions/etc. - is the
name used by the Linux MD subsystem for concatenating drives
sequentially. Of course this can also be accomplished with LVM. And I
discovered that some people use the JBOD acronym as a generic way to
refer to a collection of logically concatenated drives.

> It would have the advantage that partitions might be expandable by
> adding drives.

Indeed, that's the case with Linear mode, and it's pretty much the
reason for the existence of LVM.

>> That aside, I'd be curious to know which setup, if either, would
>> prove to be more easily (partially) recovered: a two-drive Linear
>> array or a two-drive LVM set, if one of the two drives fails.
>
> I wouldn't think either would be recoverable - there isn't any
> redundancy and individual files could be spread across both drives.
> One can imagine a filesystem that covers multiple drives, and only
> loses some of the files when a drive dies, but I am not aware of one.

I wouldn't be so quick to jump to that conclusion. Consider the simple
case of a single drive with a file system: if half of the blocks on the
drive become unreadable, you should still be able to recover a portion
of your data.
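The difference between the two non-redundant layouts can be sketched as
a block-address mapping. This is purely illustrative - made-up drive
sizes and a simplified model, not the MD subsystem's actual code:

```python
# Sketch of how a logical block maps to (drive, physical block) under
# the two non-redundant layouts. Drive sizes are hypothetical.

DRIVES = [1000, 2000]  # capacity of each drive, in blocks

def linear_map(block):
    """Linear mode: drives are concatenated end to end."""
    for drive, size in enumerate(DRIVES):
        if block < size:
            return (drive, block)
        block -= size
    raise ValueError("block beyond end of array")

def raid0_map(block, chunk=4):
    """RAID 0: fixed-size chunks are interleaved across the drives."""
    chunk_no, offset = divmod(block, chunk)
    drive = chunk_no % len(DRIVES)
    stripe = chunk_no // len(DRIVES)
    return (drive, stripe * chunk + offset)

print(linear_map(1500))  # past drive 0's 1000 blocks, so drive 1
print(raid0_map(1500))   # same logical block, different placement
```

Note the consequence for recovery: under linear mode, a contiguous run
of logical blocks stays on one drive; under RAID 0, every few blocks
alternate drives, so a single drive failure punches holes through
everything.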
Data stored where the blocks failed will obviously be unrecoverable,
fragmented files that overlap the bad blocks will be (partially)
unrecoverable, and the recovery process itself may be very difficult if
critical file system metadata (like allocation tables) is lost, but it
shouldn't be a total loss.

In contrast, if you overlay MD or LVM on top of several physical
devices, it's less clear what will happen if one of the physical
devices fails. Will those subsystems attempt to use all of the
remaining working physical devices, passing through the bad device as a
range of bad blocks, or will they simply fail to create the virtual
block device if everything at the physical layer isn't perfectly to
their liking? This is where I think there can be a lot of variability
between the different choices of subsystems.

I found this:

http://tldp.org/HOWTO/Software-RAID-HOWTO-1.html

  Linear mode ... If one disk crashes you will most probably lose all
  your data. You can however be lucky to recover some data, since the
  filesystem will just be missing one large consecutive chunk of data.

which implies that, at least for RAID Linear, the subsystem will still
function after the loss of a physical device.

This page:

http://www.linuxdocs.org/HOWTOs/Antares-RAID-sparcLinux-HOWTO/Antares-RAID-sparcLinux-HOWTO-5.html

says:

  RAID-linear...decreases the overall reliability: if any one drive
  fails, the combined drive will fail.

But that's stating the obvious and doesn't address partial recovery.

A thread from someone actually trying to recover a 4-drive Linear array
with one failed drive:

http://oss.sgi.com/archives/xfs/2004-07/msg00057.html

is not particularly informative - it never came to a conclusion, with
the OP not saying whether they were successful - but it suggested that
once the bad drive was replaced, repairing the file system for partial
recovery was possible.
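The "missing one large consecutive chunk" point can be made concrete
with a toy model: in a linear array, a file is recoverable only if all
of its blocks fall on surviving drives. The sizes and file layout below
are invented for illustration:

```python
# Toy model of partial recovery from a two-drive linear array. A file
# survives only if every block it occupies is on a surviving drive.
# Drive sizes and the file layout are made up for illustration.

DRIVE_SIZES = [100, 100]       # blocks per drive, concatenated
BOUNDARY = DRIVE_SIZES[0]      # logical blocks >= 100 live on drive 1

files = {                      # name -> (first block, length)
    "a.txt": (0, 50),          # entirely on drive 0
    "b.txt": (90, 20),         # straddles the drive boundary
    "c.txt": (120, 30),        # entirely on drive 1
}

def recoverable(first, length, failed_drive):
    blocks = range(first, first + length)
    if failed_drive == 0:
        return all(b >= BOUNDARY for b in blocks)
    return all(b < BOUNDARY for b in blocks)

for failed in (0, 1):
    ok = [name for name, (f, n) in files.items()
          if recoverable(f, n, failed)]
    print(f"drive {failed} fails -> recoverable: {ok}")
```

Files that straddle the boundary ("b.txt" here) are lost either way,
which matches the observation above about fragmented files overlapping
the bad region.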
On the LVM side of things, this Linux Journal article:

  Recovery of RAID and LVM2 Volumes
  http://www.linuxjournal.com/article/8874

actually isn't that relevant to my question (it doesn't deal with
failed or corrupt drives, but with LVM volume group naming conflicts),
but the reader comments suggest partial recovery of an LVM volume set
is possible after some of the physical devices have failed.

Digging around in the LVM list archives:

http://linux.msede.com/lvm_mlist/archive/

turns up mention of a "partial" mode:

http://linux.msede.com/lvm_mlist/archive/2003/08/0189.html

which lets you use an LVM volume with missing physical devices.

So it seems the capability to deal with missing or failed physical
devices is present in both subsystems. There seems to be a lot more
chatter on the topic of LVM recovery than RAID Linear recovery -
perhaps because LVM is newer or more popular. It also seems to be more
complex (LVM2 consists of several layers, with different commands used
for each layer).

While searching I also ran across this tangentially related article:

  Using LVM snapshots for filesystem recovery
  http://blog.madduck.net/geek/2006.08.30-lvm-for-filesystem-recovery

which illustrates how LVM could actually prove advantageous in a disk
recovery situation. The author used LVM's snapshot feature (I wasn't
aware it had that capability) to mount a corrupt file system in
copy-on-write mode, then ran the tools to repair the file system. This
way any changes made by the repair tools get written to the
copy-on-write file on another disk and, if the repair is unsuccessful,
can be discarded while preserving the original file system for further
repair attempts.

 -Tom

-- 
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile:  http://tmetro.venturelogic.com/
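The copy-on-write idea behind that snapshot trick can be sketched in a
few lines. This is only a conceptual model - LVM implements this at the
block layer, not in Python, and the class below is invented for
illustration:

```python
# Sketch of copy-on-write as used by LVM snapshots for repair: writes
# made during a repair attempt land in an overlay, reads fall through
# to the original, and discarding the overlay reverts everything.
# (Illustrative only; not how LVM is actually implemented.)

class CowOverlay:
    def __init__(self, original):
        self.original = original     # pristine "device" contents
        self.overlay = {}            # block index -> modified contents

    def read(self, block):
        # Prefer the overlay; fall through to the original device.
        return self.overlay.get(block, self.original[block])

    def write(self, block, data):
        # The repair tool's writes never touch the original.
        self.overlay[block] = data

    def discard(self):
        # Failed repair attempt? Throw the overlay away.
        self.overlay.clear()

disk = ["good", "CORRUPT", "good"]
snap = CowOverlay(disk)
snap.write(1, "repaired")            # fsck "fixes" block 1
print(snap.read(1))                  # repaired view
snap.discard()                       # revert the unsatisfactory repair
print(snap.read(1))                  # original, untouched
```

This mirrors the workflow in the article above: run the repair tools
against the snapshot, keep the result if it's good, or discard it and
try a different approach against the still-pristine original.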