I'm having a bit of an argument with an online backup vendor about the robustness of their application when transferring large files.

Vendor: You're dropping too many packets.
Rich: I can't reproduce this with anything less than 100MB of transfer.
Vendor: You're dropping too many packets.

Indeed, apparently I am dropping too many. A predecessor in my job had the same kind of run-in with Dell and Red Hat concerning drivers for the gigE interfaces on the 1950-class servers. I *think* that's where the packet loss (actually I think it's corruption) is occurring, at such a low rate (under 1 in 10,000) that the only applications that can reproduce it are utilities that rely on openssh, such as scp. My hunch is that the vendor is using openssh for its file transfers.

I'm already well on the path of dumping the vendor anyway in favor of an on-site solution (I want to try bacula, but the learning curve is *immense*; it comes with a nice 800-page user manual!), but I'd like to get to the underlying problem if I can.

Any thoughts on running scp for files over 2GB or so, on real-world networks with flaky gigE drivers? (Tried reducing the port speed to 100/full; it doesn't help.) A reproduction sketch follows below.

-rich

P.S. Do you use bacula? Email me...
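One reason scp would be the canary here: openssh verifies a MAC on every packet it receives and tears down the connection on a mismatch, so it surfaces low-rate wire corruption that TCP's weak 16-bit checksum can let slide through, which would explain why openssh-based tools hit the problem while other traffic looks fine. Below is a minimal Python sketch of a round-trip test for this; the remote host, paths, size, and run count are placeholders to adjust for your setup. It scp's a ~2GB random file out and back and counts transfers that fail outright or come back with a different checksum.

#!/usr/bin/env python3
"""Round-trip a large random file over scp and count failed or
corrupted transfers. Host, paths, and sizes are placeholders."""

import hashlib
import os
import subprocess
import sys

REMOTE = "user@backuphost"      # placeholder: any ssh-reachable host
REMOTE_PATH = "/tmp/scp_probe"  # placeholder: writable remote scratch file
SIZE_MB = 2048                  # ~2GB, the range where the problem shows up
RUNS = 5

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def main():
    local, fetched = "scp_probe.bin", "scp_probe.back"
    # Random payload so nothing in the path can compress it away.
    with open(local, "wb") as f:
        for _ in range(SIZE_MB):
            f.write(os.urandom(1 << 20))
    want = sha256(local)
    bad = 0
    for i in range(1, RUNS + 1):
        try:
            subprocess.run(["scp", "-q", local, f"{REMOTE}:{REMOTE_PATH}"],
                           check=True)
            subprocess.run(["scp", "-q", f"{REMOTE}:{REMOTE_PATH}", fetched],
                           check=True)
        except subprocess.CalledProcessError:
            # ssh aborts the session on a MAC failure, which is how
            # low-rate wire corruption usually shows up through scp.
            print(f"run {i}: TRANSFER FAILED")
            bad += 1
            continue
        ok = sha256(fetched) == want
        bad += not ok
        print(f"run {i}: {'OK' if ok else 'CORRUPT'}")
    print(f"{bad}/{RUNS} round trips failed or corrupted")
    sys.exit(1 if bad else 0)

if __name__ == "__main__":
    main()

If runs fail mid-transfer rather than producing a corrupt file, that's consistent with ssh's per-packet MAC check tripping on bad data; comparing ethtool -S counters on the interface before and after may show the same events as CRC or checksum errors.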