I am using Ubuntu now, and have used a number of distros in the past. I've played with file systems, and a few years ago I did some benchmarks for a project. At that time:

ReiserFS was good for many small files, but performed badly for larger files and in environments with heavy writes and file creation. It was also a bit buggy.

JFS and XFS behaved similarly to each other: big files and moderately large numbers of moderate-to-small files worked well, and both held up in a high-write, high-file-creation environment. IBM's JFS seemed more stable and had a better tool chain.

EXT3 was a stodgy, lame performer all around, and one of the worst in dynamic environments.

EXT2 had pretty good performance, but that can be attributed to its lack of journaling.

For discussion, what is the general consensus on file systems now? Are the above assumptions still valid? Opinions?

I have a project that may require a million-plus directories. Ideally, I'd like to have them all at the same level and perform well, but if I have to I can use the hierarchical hash-bucket strategy, i.e. top/0/0/0, top/0/0/1, top/0/0/2 ... top/0/1/0, etc., to keep the number of files per directory under 1000.
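
To make that fallback concrete, here is a rough Python sketch of the hash-bucket layout I'm describing. The bucket_path helper, the md5 hashing, and the levels/fanout numbers are just placeholders for illustration, not a design I'm committed to.

    import hashlib
    import os

    def bucket_path(top, name, levels=3, fanout=10):
        # Hash the name and peel off `levels` digits, each in [0, fanout).
        # With levels=3 and fanout=10 that is 10**3 = 1000 leaf buckets,
        # so a million entries land at roughly 1000 per directory.
        h = int(hashlib.md5(name.encode("utf-8")).hexdigest(), 16)
        parts = []
        for _ in range(levels):
            parts.append(str(h % fanout))
            h //= fanout
        parts.reverse()                      # e.g. top/0/0/1
        return os.path.join(top, *parts, name)

    # Usage: create the bucket directories on demand, then place the entry.
    p = bucket_path("top", "entry-000042")
    os.makedirs(os.path.dirname(p), exist_ok=True)
    print(p)

The lookup cost is one hash per name, and raising fanout or levels trades deeper paths for fewer entries per directory.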