On Mon, Feb 23, 2009 at 04:28:12PM -0500, Jerry Natowitz wrote:
> A year or two ago Google published results of a big study they did on
> disk reliability.  I don't remember much about it, except that they
> found that RAIDs are usually built using disks from the same
> manufacturing lot, and that failure modes are often quite similar for
> disks from a particular lot.  This results in a higher probability than
> expected that a disk failure will involve multiple drives.
>
> I read this to mean that RAID 6 or RAID 1+0 (sometimes called 10), or
> possibly 5+0 should be used rather than RAID 5.

Robin Harris from ZDNet/Storage Mojo had a few posts on the Google paper
and RAID 5 in general:

Google's Disk Failure Experience
http://storagemojo.com/2007/02/19/googles-disk-failure-experience/

Why RAID 5 stops working in 2009
http://blogs.zdnet.com/storage/?p=162

NetApp Weighs In On Disks
http://storagemojo.com/2007/02/26/netapp-weighs-in-on-disks/

Which has this nice incendiary line:

"RAID 5 today verges on professional malpractice"

Fun stuff.

-b

--
irrationality is the square root of all evil.  <douglas hofstadter>
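
[The "stops working in 2009" claim is basically an unrecoverable-read-error
(URE) arithmetic exercise: read enough bits during a RAID 5 rebuild and the
odds of hitting a URE, and losing the rebuild, get uncomfortably high.  A
rough Python sketch of that math follows; the 1-per-10^14-bit URE spec and
1 TB disk size are illustrative assumptions typical of drives of that era,
not figures quoted in this post or the linked articles.]

    # Back-of-the-envelope URE math behind the "RAID 5 stops working" argument.
    # Assumed numbers below are illustrative, not taken from the linked posts.
    import math

    URE_PER_BIT = 1e-14      # spec'd unrecoverable read error rate: 1 per 10^14 bits
    DISK_TB = 1.0            # assumed capacity of each disk, in terabytes
    DATA_DISKS = 5           # surviving disks that must be read in full to rebuild

    # Total bits scanned during a rebuild of the failed disk.
    bits_read = DATA_DISKS * DISK_TB * 1e12 * 8

    # Probability of at least one URE over that many bits, assuming
    # independent errors at the spec'd rate (Poisson approximation).
    p_failed_rebuild = 1 - math.exp(-URE_PER_BIT * bits_read)

    print("bits read during rebuild: %.2e" % bits_read)
    print("chance of a URE killing the rebuild: %.1f%%" % (100 * p_failed_rebuild))
    # With these numbers: roughly 33%, and it only grows with disk size --
    # which is the crux of the argument for RAID 6 or RAID 1+0 instead.

[Run it and swap in your own disk sizes; the point is that the failure
probability scales with array capacity, independent of the correlated-lot
failures Jerry mentions above.]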