On Thu, Jan 27, 2011 at 12:18 PM, Tom Metro <tmetro-blu-5a1Jt6qxUNc at public.gmane.org> wrote:
> Daniel Feenberg wrote:
>> We would like to speed up NFS traffic between a file server...
>>
>> We understand that "ganging" several 1 gig ethernet ports won't speed up a
>> single connection, only allow multiple 1 gig connections. That won't help
>> us, since we typically would only have one (large) file open on the
>> server.
>
> It seems like this should be addressable in the NFS client (or perhaps
> even the next layer up). For example, a hypothetical NFS driver that
> implements a read-ahead cache, resulting in separate, parallel requests
> for chunks of the file.
>
> Given the big price difference between 10G and 1G Ethernet, there's
> motivation for a software solution, so I wouldn't be surprised if
> someone has created an NFS driver or set of kernel parameters tuned for
> these circumstances.

I've actually seen a number of questions similar to this on various
sysadmin lists and wondered the same thing. (Here's one on lopsa-tech:
https://lopsa.org/pipermail/tech/2010-November/005225.html)

After some thinking and discussion on the subject elsewhere, I've come to
the conclusion that bonding multiple network channels together to speed up
a single flow is tricky. Just because you can send packets to the same
machine from both ports doesn't mean they will arrive at different ports
on the destination machine.

If you use an Ethernet switch to connect everybody, you not only need to
distribute packets from a single flow across multiple output ports, but
also make sure they carry different Ethernet destination addresses. The
problem is that the switch will have learned a single port for a given
destination address, so your two-port output will get funneled back into a
single-port input.

OTOH, if you are willing to dedicate ports to point-to-point connections,
the physical topology of the network prevents that from happening. This
still won't be good enough, because if a packet doesn't carry the
receiving port's Ethernet destination address, that port will reject it as
not meant for it. You might be able to play some games to get past that
point, such as:

1. Putting the ports into promiscuous mode so all packets are accepted at
the Ethernet level. With many OSes, packets that come in on the "wrong"
physical interface will still be accepted if the machine is configured
with that IP address on any interface.

2. Not depending on automatic ARP for IP-to-Ethernet address mapping.
Instead, force an entry into the ARP table on both sides so that traffic
for the destination IP address is sent to the Ethernet broadcast address
(or possibly a multicast address).

The above still depends on a way to round-robin packets across multiple
physical ports. I did a quick look at FreeBSD and it seems to be able to
do so. Not sure whether Linux can.

Even if this works, you still might not get good throughput. Depending on
low-level scheduling issues in the OS, and even on the sender's bus, the
destination may see packets from your single flow arrive out of order.
This is legal (it should work), but I suspect that most network code isn't
optimized for this atypical case. I don't have access to a test lab, and I
see no way to reason around this concern without actual testing. Those who
do, and who can afford to pick up four extra 1 Gbit cards and two
crossover cables, might consider trying it out. If you do, please report
back here. I'm curious. :-)

Bill
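
P.S. For anyone who wants to poke at the client side of this, here is a
rough user-space sketch of the kind of parallel chunked reads Tom
describes, done with a Python thread pool (the mount path, chunk size,
and worker count below are made up for illustration). Keep in mind this
alone won't spread traffic across links: with NFS over TCP the reads
still go out over one connection, so the bonding questions above still
apply.

#!/usr/bin/env python3
"""Sketch: issue several concurrent reads against different offsets of
one large file on an NFS mount, approximating a user-space read-ahead."""

import os
from concurrent.futures import ThreadPoolExecutor

PATH = "/mnt/nfs/bigfile"   # hypothetical NFS-mounted file
CHUNK = 8 * 1024 * 1024     # 8 MiB per request
WORKERS = 4                 # number of parallel readers

def read_chunk(offset):
    # Each worker opens its own descriptor so the reads are truly
    # independent requests rather than serialized on one file object.
    with open(PATH, "rb") as f:
        f.seek(offset)
        return f.read(CHUNK)

def parallel_read():
    size = os.path.getsize(PATH)
    offsets = range(0, size, CHUNK)
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for data in pool.map(read_chunk, offsets):
            pass  # hand each chunk to the consumer here

if __name__ == "__main__":
    parallel_read()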