Daniel Feenberg wrote:

> How do you then share the disk among multiple machines? We went to 10GB
> ethernet so that multiple computers could access the same file system on
> a NAS box. With a Fibrechannel SAN, I couldn't figure out how to share
> the file system, except to have one of the SAN clients be an NFS server,
> which means we'd need 10GBE to get the good performance anyway. Was I
> wrong?

Not wrong in your reasoning. Wrong, perhaps, in your conclusions.

GigE is not gigabit throughput. It's 500Mbit throughput in each direction.
You won't ever get performance near local disk over GigE without lots of
very specific optimizations for a very limited set of I/O operations.

If consolidation is a requirement and performance is a requirement, then
I'd take a serious look at hybrid 10GigE NICs that can do TCP/IP and FCoE,
use fibre channel to access disk volumes, use a cluster-aware file system
for shared volumes when possible, and figure out what to do about the rest
of the nodes. Then use lots of fast spindles on the storage system.

> We also have a strong need for a very fast /tmp local to each machine. I
> put 2 new Samsung SSD drives in a RAID 0, but for long sequential data
> (our situation) the performance was similar to local 7,200 RPM drives.

As a data point: I've repeatedly stated on this list that flash SSD
sustained write performance is terrible. The only way to get reasonable
flash write performance is to use LOTS of flash chips in large,
distributed arrays. And by "lots" I mean many dozens, maybe hundreds of
chips, not the 4 or 8 or 16 you find in consumer SSDs, with a price tag
around 100 times higher than what you paid for those Samsung disks.

-- Rich P.
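[Editor's note: the back-of-envelope arithmetic behind the GigE claim above can be sketched as follows. The 500 Mbit/s per-direction figure is the post's own claim; the 7,200 RPM sequential rate is an assumed typical value, not from the post.]

```python
# Rough throughput comparison for the discussion above.
# GIGE_EFFECTIVE_MBIT comes from the post's claim of ~500 Mbit/s per direction;
# HDD_SEQ_MB_PER_S is an assumed typical 7,200 RPM sequential rate (hypothetical).
GIGE_EFFECTIVE_MBIT = 500
HDD_SEQ_MB_PER_S = 100  # assumed, for illustration only

# Convert megabits per second to megabytes per second (8 bits per byte).
gige_mb_per_s = GIGE_EFFECTIVE_MBIT / 8

print(f"GigE effective:            {gige_mb_per_s:.1f} MB/s")
print(f"Local 7,200 RPM (assumed): {HDD_SEQ_MB_PER_S} MB/s")
```

Even before protocol overhead, the effective GigE rate (62.5 MB/s) sits well below a single local spindle's sequential rate, which is the gap the post is pointing at.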