On Thu, 2008-03-06 at 18:13 -0500, Ben Holland wrote:
> I didn't think that half of each file was on each disk. I thought that
> there were fairly large swaths of files -- say per page, or every 100MB:
> the first 100MB written to one disk, the next to the other, and back and
> forth; or even pages, inodes, file structure, the whole deal.

Most RAID setups use a stripe size of somewhere between 32K and 256K.
Assuming a 2-disk RAID 0 array with a 128K stripe size, for every 128K of
data written to disk, 64K goes to one disk and 64K to the other, and
they're written as simultaneously as possible.

The I/O of RAID 0 is quite good, because you're getting roughly
(# of disks) x (bandwidth of a single disk) of total storage bandwidth
(assuming you don't have bottlenecks elsewhere, of course). But the down
side is that any disk failing means the entire array is horked.

Just google, and you can find plenty on the topic. Really, this is quite
old news. :)

> On Thu, Mar 6, 2008 at 5:57 PM, Jarod Wilson <[hidden email]> wrote:
> > On Thu, 2008-03-06 at 16:11 -0500, Ben Holland wrote:
> > > Alright, I was going to say something about how RAID 0 just stripes
> > > with no parity, RAID 1 mirrors (so you get a whole drive of
> > > redundancy), and RAID 5 is stripe with parity, so you need 3 drives
> > > minimum. If you lose a RAID 0, you've lost half of your data and
> > > can't recover from it, and chances are you've lost all of the other
> > > side as well, because your file system is totally hosed (though
> > > could someone please verify that?).
> >
> > Yes, that is correct; the file system is completely hosed. You've
> > probably got zero chance at recovery without taking the entire array,
> > including the original failed disk, to a recovery specialist.
> >
> > NEVER EVER EVER use RAID 0 for data you can't afford to lose. It
> > should only be used when you need high-performance short-term storage
> > (such as for a build system or test system that does heavy disk I/O,
> > but the data doesn't need to be kept around).
> >
> > > On Thu, Mar 6, 2008 at 3:45 PM, Vince Kimball <[hidden email]> wrote:
> > > > I believe RAID 0 has no redundancy, so a single drive failure
> > > > destroys the volume.
> > > >
> > > > Scott R. Ehrlich wrote:
> > > > > I have a Dell PowerEdge 2950 with PERC 5/i, and 6 disks. Two
> > > > > disks form one logical volume via a hardware RAID 1 and hold
> > > > > CentOS 5 64-bit; the remaining four comprise a logical volume
> > > > > via a hardware RAID 0 and are all user data.
> > > > >
> > > > > One drive on the RAID 0 went bad. I removed it while the system
> > > > > was on, tried a reboot, and the system hangs at "RedHat
> > > > > Linux... Starting".
> > > > >
> > > > > I tried to boot from a Fedora 8 CD, which sees the boot drives
> > > > > fine, but not the RAID 0 partitions.
> > > > >
> > > > > Visiting the PERC controller setup claims the RAID 0 volume is
> > > > > unavailable, or something similar, though it is defined, with
> > > > > one of the disks labelled as missing, since I removed it from
> > > > > the system.
> > > > >
> > > > > How do I get the partitions on the RAID 0 setup back? I have
> > > > > some of the data, but need the rest, if possible, and the
> > > > > remaining three disks appear physically healthy. I'm also going
> > > > > to work with Dell for some answers, and I've done a lot of
> > > > > googling.
> > > > >
> > > > > Thanks.
> > > > >
> > > > > Scott
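To make the round-robin striping concrete, here's a toy Python sketch (not any real controller's or md's implementation; the function name and chunk size are just illustrative) that maps a logical byte offset to a (disk, on-disk offset) pair for an N-disk RAID 0 with a fixed chunk size. It also shows why losing one disk hoses everything: every other chunk of the address space lives on the missing member.

```python
# Toy model of RAID 0 striping: the logical address space is carved into
# fixed-size chunks that are dealt round-robin across the member disks.
# Illustrative only -- real RAID controllers and the Linux md driver do
# this in firmware/kernel code, not like this.

def raid0_map(logical_offset: int, num_disks: int, chunk_size: int):
    """Map a logical byte offset to (disk_index, offset_on_that_disk)."""
    chunk = logical_offset // chunk_size     # which chunk overall
    within = logical_offset % chunk_size     # position inside that chunk
    disk = chunk % num_disks                 # chunks round-robin over disks
    stripe = chunk // num_disks              # stripe row on each disk
    return disk, stripe * chunk_size + within

# 2 disks, 64K chunks: the first 64K lands on disk 0, the next 64K on
# disk 1, so a 128K write is split evenly between the two disks.
print(raid0_map(0, 2, 65536))        # -> (0, 0)
print(raid0_map(65536, 2, 65536))    # -> (1, 0)
print(raid0_map(131072, 2, 65536))   # -> (0, 65536)
```

If disk 1 dies in that two-disk example, every second 64K chunk of the volume is gone, which is why the filesystem on top is generally unrecoverable rather than merely half-missing.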