The difference between hardware and software RAIDs is tremendous, and the differences among software RAIDs themselves, and among hardware RAIDs themselves, are also pretty substantial.

Software RAIDs:
 o simple meta-devices that allow for striping & mirroring (RAID0 & 1)
   o Solaris MD & Linux LVM
 o logical volume managers (LVMs) that allow for disk-swapping & RAID5
   o Apple OS X Server Disk Utility?
 o more elaborate LVMs that manage disk pools, snapshots, & corruption
   o Solaris ZFS

Hardware RAIDs:
 o simple RAID cards or onboard RAID controllers for stripes & mirrors
   o RAID0, RAID1, RAID1+0
 o more elaborate RAID controller cards
   o RAID0, 1, & 5
   o maybe hot-swap or global spare disks
 o standard Enterprise RAIDs
   o RAID0, 1, & 5; hot-swappable disks and controllers
   o Fibre Channel controllers or backplanes/disks
   o maybe SAS disks (Serial Attached SCSI)
   o LUNs, partitions
 o Advanced Enterprise RAIDs
   o same as above, but with 4Gb interfaces & RAID6, 50 & 60
   o host-to-LUN mapping, and perhaps some SAN functionality
   o everything except the enclosure hot-swappable

This list is _really_ simplified, and the range of available functionality is very generalized. In reality each product is different from the rest, and you really need to assess what your needs and desires are for the specific goal you are trying to achieve. I usually recommend that you base your decision partly on what the acceptable downtime is, and what that amount of downtime will cost your company. If it's going to cost you tens of thousands of dollars to be down for half a day, then spending $10k or more on a reliable, high-availability RAID is not unreasonable. You also need to assess what the application is, as databases, file services, and application services all have different requirements.

As a note, Apple's OS X Leopard is supposed to include the ZFS filesystem/LVM. That should be an incredible improvement over the HFS+ filesystem being employed presently.

Grant M.
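As a minimal sketch of the simplest software-RAID category above (a striping/mirroring meta-device on Linux), here is how a two-disk RAID1 mirror might be assembled with mdadm. The device names, mount point, and filesystem choice are all placeholder assumptions, not anything from the original post:

```shell
# Sketch: create a two-disk RAID1 (mirror) software meta-device on Linux.
# /dev/sdb and /dev/sdc are placeholder device names -- substitute your own.
# Requires root and the mdadm package.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the new meta-device and mount it.
mkfs.ext3 /dev/md0
mkdir -p /mnt/mirror
mount /dev/md0 /mnt/mirror

# Watch the initial sync and the ongoing health of the array.
cat /proc/mdstat
```

If one member disk fails, the array keeps running in degraded mode, and a replacement can be added back with `mdadm --add /dev/md0 /dev/sdX` while the system stays up.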
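The downtime-cost trade-off described above can be made concrete with a little arithmetic. The dollar figures here are hypothetical, chosen only to illustrate the break-even reasoning:

```python
# Hypothetical figures: weigh a $10k high-availability RAID against
# the cost of a single half-day outage.
downtime_cost_per_hour = 5000   # assumed cost to the company, $/hour
outage_hours = 12               # half a day of downtime
raid_cost = 10000               # price of the high-availability RAID

outage_cost = downtime_cost_per_hour * outage_hours
print(outage_cost)              # 60000: one outage costs 6x the RAID

# The RAID pays for itself if it prevents even a fraction of one outage.
print(raid_cost < outage_cost)  # True
```

Under these assumptions a single prevented outage covers the hardware several times over, which is why acceptable downtime, not raw capacity, should drive the purchasing decision.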
--
Grant Mongardi
Senior Systems Engineer
NAPC
gmongardi-cGmSLFmkI3Y at public.gmane.org
http://www.napc.com/
781.894.3114 phone
781.894.3997 fax
NAPC | technology matters