Bill Horne <bill at horne.net> wrote:

> At some point, the Internet will need a major overhaul.

Will it?

> For what common carriers are trying to do...TCP/IP can't be made to
> fit.

In 2001 I was working at a router company with a solution in search of
this problem. The company blew through its VC funding while customers
just threw fatter pipes (and big iron from Juniper) at the problem.

Over time, I've become pretty well convinced that the latency problem
can and will ultimately be solved by fatter pipes, not fancy new
protocols. It's just plain cheaper to put monitoring on a link, and
route around it between upgrades, than to overhaul TCP/IP. Even with
the occasional glitch, the system works vastly better than the phone
system ever did in its pre-ESS (electronic switching) days, and it got
that way over a 20-year period, not 120 years.

> This fight will be about which mega-corporations carve out virtual
> slices of Internet bandwidth so that they can avoid paying for their
> own.

That part I agree with. But I think those mega-corps will be running
IPv4 on their backbone links for the next 10-15 years, even if China
gets going with IPv6 in the next 3-5 years. Few of them are holding
board-level discussions about the claimed "bufferbloat" problem. ;-)

The gearheads who recognize bufferbloat will ultimately do the obvious:
crank down the buffering, adjust the retry parameters, and flatten out
the number of hops from source to destination. (A sketch of the
buffer-size knob follows below.)

I worked at a disk-drive company back in the 1990s that had to deal
with virtually the same issue: on a contract with Time-Warner to ship
disks for video-on-demand disk arrays (back when I/O performance for
video was a bottleneck), we had to adjust or gut the retry logic to
punt on bad sectors rather than impact latency. (A sketch of that
trade-off also follows.)

-rich
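
To make "crank down the buffering" concrete, here is a minimal sketch
of the idea at the application layer, assuming a POSIX sockets API: a
smaller kernel send buffer means the sender feels backpressure sooner
instead of stuffing a deep queue. The 32 KB figure is purely
illustrative, and the real bufferbloat fix lives in router and driver
queues, not here; this only shows the shape of the knob.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Shrink the kernel send buffer so the application, not a
         * bloated queue, absorbs the backpressure.  32 KB is an
         * arbitrary illustrative value, not a recommendation. */
        int sndbuf = 32 * 1024;
        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                       &sndbuf, sizeof(sndbuf)) < 0)
            perror("setsockopt(SO_SNDBUF)");

        /* Read back the effective size; Linux doubles the request
         * to account for bookkeeping overhead. */
        int actual = 0;
        socklen_t len = sizeof(actual);
        if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &actual, &len) == 0)
            printf("effective SO_SNDBUF: %d bytes\n", actual);

        close(fd);
        return 0;
    }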
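
And the disk-drive trade-off looks roughly like this in miniature.
Everything here is made up for illustration (read_sector,
read_with_budget, the 50 ms budget); it is not the actual firmware,
just the shape of "retry only while the latency budget allows, then
punt":

    #include <stdio.h>
    #include <time.h>

    /* Stub that fails the first two attempts, simulating a marginal
     * sector; a real implementation would talk to the media. */
    static int read_sector(unsigned long lba, unsigned char *buf)
    {
        static int calls = 0;
        (void)lba;
        buf[0] = 0xAB;                 /* pretend data */
        return (++calls < 3) ? -1 : 0; /* 0 = success  */
    }

    static long now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
    }

    /* Retry a bad sector only while inside the latency budget; past
     * that, punt so the video stream never stalls.  budget_ms and
     * max_tries are illustrative knobs. */
    int read_with_budget(unsigned long lba, unsigned char *buf,
                         long budget_ms, int max_tries)
    {
        long start = now_ms();
        for (int attempt = 0; attempt < max_tries; attempt++) {
            if (read_sector(lba, buf) == 0)
                return 0;                  /* good data          */
            if (now_ms() - start >= budget_ms)
                break;                     /* budget blown: punt */
        }
        return -1;  /* caller skips or substitutes the block */
    }

    int main(void)
    {
        unsigned char buf[512];
        if (read_with_budget(12345UL, buf, 50, 8) == 0)
            printf("sector recovered within budget\n");
        else
            printf("punting on sector to protect latency\n");
        return 0;
    }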