
BLU Discuss list archive



[Discuss] Dev Ops - architecture (local not cloud)



> From: discuss-bounces+blu=nedharvey.com at blu.org [mailto:discuss-
> bounces+blu=nedharvey.com at blu.org] On Behalf Of Kent Borg
> 
> Sure, the
> server will have somewhat faster parts, but it might also have more than one
> user. And the network might have some congestion.

Depends on the users.  Suppose you have 24 engineers who all run maximum-intensity simulations all the time.  Then you're not going to gain anything by building a centralized server 24x as powerful as a single workstation.

But suppose you have 24 engineers who are sometimes drawing pictures, sometimes doing email, browsing the web, attending meetings...  You build a centralized server in the closet 24x as powerful as a workstation, and you provision 24 VMs that each have 12x the processing capacity of a single workstation.  (You're obviously overcommitting your hardware: 24 x 12 = 288x of nominal capacity against 24x of real capacity.)  Now, whenever an individual kicks off a big simulation run, it goes 12x faster than it would have on their own workstation.  In the worst case, every individual hammers the system simultaneously, the load balances evenly, and each gets the meager performance of an individual workstation.  So in the worst case they're no worse off than they otherwise would have been, but in the best case they get much more done, much faster.
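The capacity math above can be sketched as a toy model (illustrative, not from the original post; it assumes the hypervisor load-balances perfectly among active users):

```python
# Toy model of the oversubscription example: a server with 24x the power of
# one workstation, carved into VMs capped at 12x each.  All figures are in
# units of "one workstation's throughput".

def per_user_speed(server_capacity, vm_cap, active_users):
    """Throughput each active user sees, in workstation units.

    server_capacity: total server power (workstation units)
    vm_cap:          per-VM ceiling (workstation units)
    active_users:    number of users running heavy jobs right now
    """
    fair_share = server_capacity / active_users  # assume even balancing
    return min(vm_cap, fair_share)

# Best case: one engineer runs a big simulation alone -> 12x a workstation.
print(per_user_speed(server_capacity=24, vm_cap=12, active_users=1))   # 12

# Worst case: all 24 hammer the server at once -> 1x, same as a workstation.
print(per_user_speed(server_capacity=24, vm_cap=12, active_users=24))  # 1.0
```

The `min()` captures the whole argument: the VM cap limits the best case, the fair share limits the worst case, and the worst case never drops below one workstation's worth.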


> Whenever the power blinks at my job my computer stays happy, because I
> have a tiny UPS that can ride out short outages.  But the rest of the services
> on our network seem to take the better part of an hour to all come back.

Sounds like a symptom of bad IT.


> Something else I long ago observed: Because ethernet degrades gracefully it
> always operates degraded.

Ethernet does NOT degrade gracefully.  Graceful degradation would look like this: you have 11 machines on a network, 1 server and 10 clients.  All 10 clients hammer the server, and each gets 10% of the bandwidth the server can sustain.  That is the behavior of other network switching topologies (in particular IB and FC), but it is not the behavior of Ethernet.  Ethernet is asynchronous, buffered, store-and-forward, with flow-control packets and collisions.  Sure, the most intelligent switches can eliminate collisions, but flow control is still necessary, and buffering is still necessary.  You pay network overhead, and congestion degrades efficiency, so each of the 10 clients might be getting only 5% of the bandwidth.  That is an ungraceful degradation.
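The distinction can be put in numbers (an illustrative sketch, not from the original post; the 0.5 efficiency factor is a hypothetical stand-in for the flow-control and buffering overhead described above, not a measured value):

```python
# Graceful vs. ungraceful sharing of one server's sustainable bandwidth
# among N hammering clients.  Bandwidth is normalized to 1.0.

def graceful_share(server_bw, clients):
    # Ideal fabric (the IB/FC behavior described above): each client
    # gets an even slice of the server's full sustainable bandwidth.
    return server_bw / clients

def congested_share(server_bw, clients, efficiency=0.5):
    # Ethernet under congestion: protocol overhead eats into the link
    # first, and only the remainder is divided among the clients.
    # efficiency=0.5 is a made-up figure for illustration.
    return server_bw * efficiency / clients

# 10 clients hammering 1 server:
print(graceful_share(1.0, 10))    # 0.1  -> each gets 10%
print(congested_share(1.0, 10))   # 0.05 -> each gets only 5%
```

The point is that the degradation is superlinear in load: the per-client share shrinks not just because there are more clients, but because the fabric itself loses efficiency under congestion.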



BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.




Boston Linux & Unix / webmaster@blu.org