
BLU Discuss list archive



How to best do zillions of little files?



I once used each file's inode as a unique key from which I could
divine the appropriate directory[0] (inode % num-of-dirs), and which
assured me of zero name collisions and a "random" distribution across
the directory "buckets".
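
Roughly, the bucketing looked like this.  This is a minimal sketch I
just typed up, not the original tools; NUM_DIRS and the data/ layout
are made up for illustration:

    #include <stdio.h>
    #include <sys/stat.h>

    #define NUM_DIRS 256  /* bucket count; illustrative */

    /* Map an existing file to its bucket via its inode number.
     * An inode is unique within a filesystem, so naming the file
     * by inode means no two injectors ever pick the same name. */
    int bucket_path(const char *file, char *out, size_t outlen)
    {
        struct stat st;

        if (stat(file, &st) != 0)
            return -1;
        snprintf(out, outlen, "data/%03lu/%lu",
                 (unsigned long)(st.st_ino % NUM_DIRS),
                 (unsigned long)st.st_ino);
        return 0;
    }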

My goals were atomicity, simplicity, and letting an arbitrary number
of file-injector processes run without generating collisions or
requiring locking.  I wrote a suite of low-level C-based tools to do
this stuff, which I may be able to dig up if you're interested.

Not feasible under ReiserFS. :-)

[0] I actually hashed by 2-3 directory layers.
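
The layered version just peels digits off the inode, something like
this (the layer sizes here are made up, not the real constants):

    #include <stdio.h>

    #define DIRS_L1 64  /* first-layer fanout; illustrative */
    #define DIRS_L2 64  /* second-layer fanout; illustrative */

    /* Two directory layers: inode 123456 -> data/xx/yy/123456,
     * keeping each directory to a manageable number of entries. */
    void bucket2_path(unsigned long ino, char *out, size_t outlen)
    {
        snprintf(out, outlen, "data/%02lu/%02lu/%lu",
                 (ino / DIRS_L2) % DIRS_L1, ino % DIRS_L2, ino);
    }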


#if Derek Atkins /* Oct 02, 10:49 */
> I had a scenario where I was trying to create 300,000 files in
> one directory.  The files were named V[number] where [number] was
> monotonically increasing from 0-299999.  I killed the process after
> waiting a couple hours.
#endif /* warlord at MIT.EDU */

-- 
Andy Davidoff
Sen. Unix SysAdmin
Tufts University



