
BLU Discuss list archive



getting into virtual computing...



Stephen Adler asked:
> Over the past week or so, I've kinda slipped into virtual computing,
> setting up way too many virtual systems on my desktop. This got me to
> thinking... If I had all the money in the world, what would be the
> best computing setup to run virtual hosts most effectively, especially
> when I configure the virtual hosts with multiple CPUs? AMD or Intel?
> What motherboard and chipset? What memory bandwidth? Etc., etc.

And Dan Ritter suggested the IBM Z architecture, which sounds damn
powerful for server work...


Looking for a good bang-for-the-buck sweet spot, I recently specced out
a smaller server solution (one that was not going to be heavy on disk
use):

- two matching 1U Supermicro servers
- a server model supporting a pair of fast (45 nm, was it?) multi-core
Intel CPUs
- matching CPUs one notch slower than the fastest available, to save
some money (the sweet-spot issue)
- populate both sockets if the number of VMs loaded at once will often
exceed the number of cores in one CPU, or sooner if you trust SMP
inside your VMs
- dual hot-swap power supplies in each server
- an extra gigabit Ethernet board in each for a dedicated link between
the two boxes
- a ton of RAM in each box
- GFS2 on top of DRBD on top of RAID 1 (maybe some LVM in there too, I
forget); see the storage sketch after this list
- four large 1.5 TB SATA disks in each box (mix manufacturers within
each RAID 1 pair!)
- possibly a pair of smallish UPSs, each feeding one AC input on each
server (in case the hosting facility messes up the power)
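
For the GFS2/DRBD/RAID 1 item above, here is a minimal sketch of the
DRBD layer, assuming an md RAID 1 pair at /dev/md0 on each box and
made-up hostnames (alpha, beta) and addresses on the dedicated link;
the exact options vary with the DRBD version:

    # /etc/drbd.d/r0.res -- mirror the local RAID 1 set between the boxes
    # (the RAID 1 pair itself would be something like
    #  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1)
    resource r0 {
        protocol C;                          # synchronous replication
        startup { become-primary-on both; }  # dual-primary, needed for GFS2
        net     { allow-two-primaries; }
        on alpha {
            device    /dev/drbd0;
            disk      /dev/md0;              # the local RAID 1 pair
            address   10.0.0.1:7788;         # over the dedicated gigabit link
            meta-disk internal;
        }
        on beta {
            device    /dev/drbd0;
            disk      /dev/md0;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

A GFS2 filesystem then goes on top of /dev/drbd0 (something like
mkfs.gfs2 -p lock_dlm -t cluster:vmstore -j 2 /dev/drbd0), with the
cluster locking provided by the HA stack mentioned below.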

Configure the whole thing with KVM and live migration, where either box
alone can carry the whole load.
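
As a rough sketch of the migration side (guest and host names made up;
the shared GFS2 volume is what lets both hosts see the same disk
images), moving a running guest off one box looks like:

    # push running guest "vm1" from this box to the other, no downtime
    virsh migrate --live vm1 qemu+ssh://beta/system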

Try to put them a few feet apart so that if one catches fire the other
won't be damaged, run some of the available high-availability
heartbeat software (a rough sketch follows the list below), and you
can build a very survivable system whose single points of failure are
reduced to:

- matching software means software bugs are duplicated (though the host
OS versions need not be exactly the same; one can lag the other for
some components some of the time)

- matching configurations means configuration errors are duplicated

- the two boxes still have to live somewhere and that place might lose
air conditioning, flood, be hit by lightning, catch fire, get blown
over, crumble in an earthquake, lose all power, lose all internet
connections, be robbed, be served with a court order, etc.

- a single administrator could get confused (or angry) and mangle both
machines at once

- a single design for motherboard, CPU, etc. means any design flaw is
duplicated

- duplicated hardware from the same manufacturing lot possibly
duplicates any manufacturing flaws (try to get different manufacturing
dates)
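
For the heartbeat/HA layer, one option (just a sketch; the guest name
and paths here are placeholders) is Pacemaker managing each guest with
the VirtualDomain resource agent, which also drives the live
migration:

    # Pacemaker crm shell: manage guest "vm1" and allow live migration
    primitive vm1 ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/vm1.xml" \
               hypervisor="qemu:///system" \
               migration_transport="ssh" \
        meta allow-migrate="true" \
        op monitor interval="30s" timeout="30s"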

But otherwise, put a bullet through *any* single component and the
system can keep running. (One of the trickiest failure modes is when
both machines keep running but lose communication with each other:
pull the crossover cable between them and add a network fault that
selectively drops their traffic on the normal network, and
higher-level data will get out of sync.)
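
A partial mitigation for that split-brain case (again just a sketch;
which policy is right depends on whose data you would rather lose) is
to tell DRBD what to do when the two sides reconnect, backed up by
fencing so only one side keeps writing:

    # in the net section of the DRBD resource
    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;  # neither side wrote: easy
        after-sb-1pri discard-secondary;     # one side wrote: keep its data
        after-sb-2pri disconnect;            # both wrote: stop, let a human decide
    }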

It would have cost under $15K (but was never assembled).


For interactive (non-server) use, um, I presume more computrons and
more RAM are good, and getting specialized hardware (a GPU, some
add-on device) visible inside a VM can be tricky... but I haven't
looked at the details.


-kb, the Kent who is still pissed he didn't get to build that system.






