> From: Bill Bogstad [mailto:bogstad at pobox.com]
> Sent: Friday, December 13, 2013 5:49 PM
> To: Edward Ned Harvey (blu)
> Cc: GNHLUG; blu
> Subject: Re: [Discuss] Dev Ops - architecture (local not cloud)
>
> On Fri, Dec 13, 2013 at 1:42 PM, Edward Ned Harvey (blu)
> <blu at nedharvey.com> wrote:
>
> >> From: discuss-bounces+blu=nedharvey.com at blu.org [mailto:discuss-
> >> bounces+blu=nedharvey.com at blu.org] On Behalf Of Kent Borg
> >>
> >> Something else I long ago observed: Because ethernet degrades
> >> gracefully it always operates degraded.
> >
> > Ethernet does NOT degrade gracefully. A graceful degradation would be:
> > You have 11 machines on a network together. 1 is a server, and 10 are
> > clients. All 10 clients hammer the server, and all 10 of them each get
> > 10% of the bandwidth that the server can sustain. This is the behavior
> > of other network switching topologies (in particular IB and FC), but it
> > is not the behavior of Ethernet. Because Ethernet is asynchronous,
> > buffered, store and forward, with flow control packets and collisions...
> > Sure, the most intelligent switches can eliminate collisions, but flow
> > control is still necessary, and buffering is still necessary... You have
> > network overhead, and congestion leads to degradation of efficiency.
> > Each of the 10 clients might be getting 5% of the bandwidth, which is an
> > ungraceful degradation.
>
> Ed: Can you define what you mean by "collision" in the context of an
> Ethernet switch where twisted pair wiring is being used? (i.e. any
> of the commonly used *BaseT wiring systems)

Did you stop reading at the first instance of the word "collision"? Because I think I went into that immediately thereafter. Switches eliminate collisions (although hubs did not), but everything else is still relevant.
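
To put the arithmetic above in concrete terms, here is a minimal Python sketch of the 10%-vs-5% example. The 1 Gb/s server uplink and the 50% efficiency figure are illustrative assumptions chosen to reproduce the numbers in the message, not measurements of any particular switch or fabric:

# Back-of-the-envelope model of per-client bandwidth when many clients
# saturate a single server link. efficiency = 1.0 is the "graceful" ideal
# fair share; efficiency < 1.0 stands in for capacity lost to flow control,
# buffering, retransmits, and other congestion overhead.

def per_client_share(link_capacity_mbps, num_clients, efficiency=1.0):
    """Bandwidth each client sees when all clients hammer one server link."""
    return link_capacity_mbps * efficiency / num_clients

link = 1000.0   # hypothetical 1 Gb/s server uplink (assumption)
clients = 10

ideal = per_client_share(link, clients)                     # 100 Mb/s -> 10% each
degraded = per_client_share(link, clients, efficiency=0.5)  # 50 Mb/s  -> 5% each

print("ideal fair share : %.0f Mb/s (%.0f%% of the link)" % (ideal, 100 * ideal / link))
print("congested share  : %.0f Mb/s (%.0f%% of the link)" % (degraded, 100 * degraded / link))

The point of the sketch is only that the fabrics being compared differ in the efficiency term under contention, not in how the fair share is divided.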