
BLU Discuss list archive


RAID5 for Linux

"Rich Braun" <richb at> writes:

> - You don't get any penalty for running RAID1 in software, and you can't get a
> performance boost running RAID1 in hardware, on a 2-drive system.  You would
> get a performance boost running hardware RAID5 vs. software RAID5, but the
> boost may not be measurable if your application is not I/O-bound.

Uh, actually, there is a penalty.  With software RAID1 the kernel has
to perform two writes across the PCI/IDE bus (one to each RAID1 mirror
drive), whereas with hardware RAID1 the data crosses the PCI bus only
once and the RAID controller then sends the duplicate writes out
across the disk buses.  That extra transfer causes a performance
penalty (on writes) for software RAID that you won't see with hardware
RAID.
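The double-write point above can be sketched as back-of-the-envelope arithmetic.  This is my own illustration, not anything from the original post; the function name and block size are made up for the example.

```python
# Sketch: bytes that must cross the host's PCI bus to commit ONE logical
# write, comparing software vs. hardware RAID1.  Numbers are illustrative.

def pci_bytes_per_write(logical_bytes: int, software_raid: bool) -> int:
    """Return how many bytes the host pushes across the PCI bus."""
    if software_raid:
        # The kernel issues one write per mirror; both copies cross
        # the shared PCI/IDE path from the host.
        return 2 * logical_bytes
    # A hardware RAID controller receives the data once, then duplicates
    # it on its own disk channels, behind the PCI bus.
    return logical_bytes

block = 64 * 1024  # one 64 KiB logical write (assumed size)
print(pci_bytes_per_write(block, software_raid=True))   # 131072
print(pci_bytes_per_write(block, software_raid=False))  # 65536
```

So for write-heavy workloads, software RAID1 roughly doubles the host-side bus traffic; reads are unaffected, since only one mirror is read.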

Also, historically it has NOT been recommended to use both positions
on an IDE channel, because the master/slave relationship reduces the
bus throughput.  Has this changed recently?  This is another reason a
hardware RAID can be better: a good card gives each drive a dedicated
channel.  It is also why you typically see CD-ROM drives on hdc rather
than hdb.
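The master/slave effect amounts to bandwidth sharing: only one device on an IDE cable can transfer at a time.  A tiny sketch, with an assumed channel rate purely for illustration:

```python
# Assumed figure: a UDMA/100-class IDE channel.  The point is the ratio,
# not the absolute number.
CHANNEL_MBPS = 100.0

def per_drive_mbps(drives_on_channel: int) -> float:
    # Devices on one IDE cable cannot transfer simultaneously, so
    # concurrent streams effectively split the channel's bandwidth.
    return CHANNEL_MBPS / drives_on_channel

print(per_drive_mbps(1))  # dedicated channel per drive (good RAID card)
print(per_drive_mbps(2))  # master/slave sharing one cable
```

With one drive per channel each drive sees the full rate; with a master/slave pair writing concurrently, each effectively gets half.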


       Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
       Member, MIT Student Information Processing Board  (SIPB)
       URL:    PP-ASEL-IA     N1NWH
       warlord at MIT.EDU                        PGP key available

BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.

