RAID5 for Linux
Bob Keyes
bob at sinister.com
Wed Apr 28 22:16:27 EDT 2004
On Wed, 28 Apr 2004, Derek Atkins wrote:
> "Rich Braun" <richb at pioneer.ci.net> writes:
>
> > - On a 2-drive system, you don't get any penalty for running RAID1 in
> > software, and you can't get a performance boost from running RAID1 in
> > hardware. You would get a performance boost running hardware RAID5 vs.
> > software RAID5, but the boost may not be measurable if your application
> > is not I/O-bound.
>
> Uh, actually, there is a penalty. With software RAID1 the kernel has to
> perform two writes across the PCI/IDE bus (one to each RAID1 mirror
> drive), whereas with hardware RAID1 you only need to write across the
> PCI bus once and the RAID controller then sends the duplicate writes
> out across the disk busses. This extra writing will definitely cause a
> performance penalty (on writes) for software RAID that you won't see
> with a hardware RAID.
>
> Also, historically it has NOT been recommended to use both positions
> (master and slave) on an IDE bus, because the master/slave relationship
> reduces the bus throughput. Has this changed recently? This is another
> reason why a hardware RAID is better: a good card gives each drive its
> own dedicated bus. This is also why you see CD-ROM drives on hdc and
> not hdb.
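The double-write point above is easy to put rough numbers on. Here is a
quick back-of-the-envelope sketch in Python; the 133MB/s figure is just
the theoretical peak of plain 32-bit/33MHz PCI and the traffic model is
deliberately simplistic, so treat it as an illustration, not a benchmark:

# Illustrative arithmetic only: how much data crosses the PCI bus to
# commit a given amount of application writes, software vs. hardware RAID1.
PCI_BUS_MB_PER_SEC = 133.0  # theoretical peak of 32-bit/33MHz PCI

def pci_traffic_mb(app_write_mb, mode):
    """MB that must cross the PCI bus to commit app_write_mb of writes."""
    if mode == "software_raid1":
        # The kernel issues one write per mirror, so the data crosses twice.
        return app_write_mb * 2
    if mode == "hardware_raid1":
        # The data crosses once; the controller duplicates it to both disks.
        return app_write_mb
    raise ValueError(mode)

for mode in ("software_raid1", "hardware_raid1"):
    traffic = pci_traffic_mb(1000, mode)      # writing 1GB of data
    seconds = traffic / PCI_BUS_MB_PER_SEC    # best-case time on the bus
    print(f"{mode}: {traffic:.0f}MB on the PCI bus, "
          f"at least {seconds:.1f}s of bus time")

Either way the data ends up on two physical disks; the only question is
how many times it has to cross the shared PCI bus on the way there.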
Well, I am going to try out a software RAID first, but use a second IDE PCI
controller so I have 4 channels, one per drive. If the performance isn't
good enough, then I'll move up to hardware RAID. I figure I'll get four
160GB drives, as they are the cheapest per byte.
Then I'll try to decide whether I should boot from the RAID, or have a
simple 10GB or whatever drive to boot the OS from, with the RAID just for
data. Hrm... that means another controller.
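For planning purposes, here is a tiny sketch of what four 160GB drives
give me in RAID5, plus a toy demonstration of the XOR parity that the
kernel (or a hardware card) has to compute on every stripe. The block
contents are dummy data and the sizes are nominal marketing gigabytes:

# Minimal RAID5 sketch for a 4-drive array: usable capacity, plus a toy
# check that XOR parity lets you rebuild any single lost block.
from functools import reduce

DRIVES = 4
DRIVE_GB = 160
usable_gb = (DRIVES - 1) * DRIVE_GB   # one drive's worth goes to parity
print(f"{DRIVES} x {DRIVE_GB}GB in RAID5 -> about {usable_gb}GB usable")

# One stripe: three data blocks plus one parity block (XOR of the data).
data = [b"alpha...", b"bravo...", b"charlie."]   # dummy 8-byte blocks
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data))

# Pretend the drive holding data[1] died: XOR the survivors with the
# parity block to reconstruct the missing block.
survivors = [data[0], data[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == data[1]
print("lost block rebuilt from parity:", rebuilt)

That XOR is exactly the work a hardware card would take off the CPU;
with one drive per channel, at least the disks won't be fighting over a
master/slave pair while it happens.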