[Discuss] Redundant array of inexpensive servers: clustering?

Derek Martin invalid at pizzashack.org
Tue Apr 1 14:33:44 EDT 2014


On Tue, Apr 01, 2014 at 12:21:20PM -0400, Richard Pieri wrote:
> Like I wrote yesterday, the hard part is clustering applications.

It really depends, but mostly it isn't.  One of my earliest gigs was
managing just such an environment, which consisted mostly of a few
custom applications that were not designed to be clustered.  With the
right hardware, it's fairly trivial.  But the right hardware is
expensive.

Most services can be divided into the code that runs it, the hardware
that the code runs on, the data that the service relies on/produces,
and the storage on which that data lives.  So, you:

 - provide redundant hardware, each running your software (usually cheap)
 - provide redundant, replicating data storage (expensive: think of a
   replicating disk array, such as the ones EMC sells for exactly
   this purpose)
 - connect your storage to all of the nodes in your cluster
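
To put a rough shape on that, here's a toy version of what the standby
node does in a classic active/passive setup.  This is my own sketch,
not any particular product; the peer address, shared device path and
service name are invented for illustration:

    #!/usr/bin/env python3
    # Standby-node failover loop, roughly sketched.  A real cluster
    # would use Pacemaker/Heartbeat or the vendor's software, plus
    # proper fencing, rather than a ping loop like this.
    import subprocess
    import time

    PEER = "10.0.0.2"                  # hypothetical active node
    SHARED_DEV = "/dev/mapper/shared"  # hypothetical LUN on the shared array
    MOUNT_POINT = "/srv/app"
    SERVICE = "myapp"                  # hypothetical init-script name

    def peer_alive():
        # Crude liveness check: a single ping over the heartbeat link.
        return subprocess.call(
            ["ping", "-c", "1", "-W", "2", PEER],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

    def take_over():
        # In real life you fence (power off) the old node first, so it
        # can never write to the shared disks again; then mount the
        # storage and start the application on this node.
        subprocess.check_call(["mount", SHARED_DEV, MOUNT_POINT])
        subprocess.check_call(["service", SERVICE, "start"])

    if __name__ == "__main__":
        misses = 0
        while True:
            misses = 0 if peer_alive() else misses + 1
            if misses >= 3:            # three missed heartbeats -> fail over
                take_over()
                break
            time.sleep(5)

In practice you let the cluster software do this, mostly because
getting the fencing right by hand is how you corrupt the shared
storage.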

In many cases, the rest of the problem takes care of itself.  The main
remaining problem is loss of transactions which were in progress when
your hardware failed.  Some applications, like databases and those
built upon databases, already have a means of dealing with that
problem.  Some applications can simply ignore transactions that failed
in progress (like e-mail transmission, DNS requests, etc.).  The rest,
that's where the interesting bits lie... but that's a relatively small
piece of the puzzle.
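
To illustrate why the ignore-and-retry category is easy (again just a
sketch, with an invented address, port and "protocol"): if the cluster
presents one virtual IP that moves to the surviving node, the client
only has to resend an idempotent request and never needs to know which
node answered:

    #!/usr/bin/env python3
    # Client-side retry against a cluster's floating service address.
    import socket
    import time

    VIP = ("10.0.0.100", 8080)         # hypothetical virtual IP and port

    def send_request(payload, attempts=5, delay=2.0):
        for attempt in range(1, attempts + 1):
            try:
                with socket.create_connection(VIP, timeout=5) as sock:
                    sock.sendall(payload)
                    sock.shutdown(socket.SHUT_WR)      # end of request
                    return sock.makefile("rb").read()  # reply from whichever node is active
            except OSError:
                if attempt == attempts:
                    raise               # the whole cluster really is down
                time.sleep(delay)       # failover in progress; try again

    if __name__ == "__main__":
        print(send_request(b"PING\n"))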

The only hard part is paying for the redundant storage.  My now-
defunct former employer was developing a (relatively) cheap solution
that replaced the expensive EMC array with cheaper, redundant RAID
arrays, but that product was subsumed by Red Hat.  I can't comment on
how good it is.

-- 
Derek D. Martin    http://www.pizzashack.org/   GPG Key ID: 0xDFBEAD02
-=-=-=-=-
This message is posted from an invalid address.  Replying to it will result in
undeliverable mail due to spam prevention.  Sorry for the inconvenience.


