[Discuss] Why NOT use Linux?

Shirley Márquez Dúlcey mark at buttery.org
Thu Feb 13 16:21:31 EST 2014


On Thu, Feb 13, 2014 at 3:43 PM, Joe Polcari <joe at polcari.com> wrote:
> Really - which linux does this?
> I've built numerous servers with 24 SSDs and on the servers with no raid
> controllers, the disks are always enumerated in the order of what sata
> controller port they are plugged into for RH, Cent and Debian variants.

If you have nothing but drives on one type of motherboard controller
(SATA or PATA), it's not a problem; every Linux distro I have ever
worked with does the right thing there, enumerating the drives to
match the hardware controller sequence. But other cases are more
problematic: a mix of SATA and PATA drives, drives connected to PCI
interface cards, USB external drives or SD cards with file systems
combined with internal drives, and so on.

The enumeration order is NOT consistent when more than one type of
controller is involved, nor is Linux consistent about the order in
which it enumerates multiple non-motherboard controllers of the same
type. I had problems back in the day with a storage server that had
two PCI PATA controllers, until Linux went to the hack of remembering
filesystem UUIDs and assigning drive names based on those. But the
UUID approach created a new problem: replacing a dead drive takes
extra steps, because the UUID of the new drive doesn't match the old
one.
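As a sketch of how the UUID approach looks in practice: you can read a
filesystem's UUID with blkid (or from the symlinks udev maintains under
/dev/disk/) and reference it in /etc/fstab, so the mount no longer
depends on enumeration order. The device names and UUID below are made
up for illustration.

```shell
# Look up a filesystem's UUID (device name here is hypothetical):
blkid /dev/sdb1            # prints something like: /dev/sdb1: UUID="3f4e..." TYPE="ext4"

# udev also keeps stable symlinks for every block device:
ls -l /dev/disk/by-uuid/   # keyed to the filesystem UUID
ls -l /dev/disk/by-id/     # keyed to the drive's model and serial number
ls -l /dev/disk/by-path/   # keyed to the controller port it's plugged into

# Example /etc/fstab entry that mounts by UUID instead of by /dev/sdX,
# so it survives enumeration-order changes (the UUID is made up):
# UUID=3f4ec1a2-9c1d-4b6e-8a7f-2d5c1e9b0a3d  /srv/storage  ext4  defaults  0  2
```

The by-id and by-path symlinks are alternatives when you care about
which physical drive or which controller port you're addressing rather
than which filesystem is on it.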

In case you are curious: the system in question had a RAID 5 setup
using five 200GB drives. Each PCI controller (cheap Promise cards that
had come bundled with a couple of the drives; companies were doing
that at the time because many motherboards didn't yet support Ultra
ATA speeds) had two PATA channels, and the motherboard had two more.
If you are setting up RAID, putting two PATA drives on the same
channel doesn't work well, because a failure of one drive, either
master or slave, tends to make the other drive on the channel stop
working. So I needed six PATA channels for reliability: one for each
of the five hard drives, plus a sixth for the optical drive. I built
that system circa 2000 and used it for a number of years; it weathered
two non-simultaneous drive failures. When a third drive failed, I
replaced it all with a RAID 1 system using a pair of 1.5TB drives and
repurposed the remaining 200GB and 250GB drives. (The first two failed
200GB drives had been replaced with 250GB drives, because 200GB drives
were no longer being made when I bought the replacements.) The five
original drives came from three different manufacturers - Maxtor
(still a separate company back then), Seagate, and Western Digital -
and no two were from the same manufacturing lot, so I had followed
best practice for avoiding correlated failures.
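For comparison, a five-drive software RAID 5 like the one described
would be managed with mdadm on a current system. This is only a
sketch, not the original configuration, and the device names are
hypothetical; it also shows the extra steps a drive replacement takes.

```shell
# Hypothetical sketch of a comparable five-drive software RAID 5
# (device names are made up; not the original setup):
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Replacing a failed member is a multi-step operation:
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd   # mark the dead disk failed, then remove it
mdadm /dev/md0 --add /dev/sdg                      # add the replacement; the rebuild starts
cat /proc/mdstat                                   # watch the resync progress
```

With md RAID the array itself keeps a stable identity across member
replacements, which sidesteps some of the UUID-mismatch hassle of
swapping bare drives referenced directly in /etc/fstab.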



More information about the Discuss mailing list