
[ossig] Two obscure questions: directory access times and atomic rename()



Hi all... sorry for the cross-post... I'm writing a queue processor
and I've got two specific questions that I hope somebody can help me
out with:

1.  It is certainly true that the time taken to open() a file
increases with the number of entries in the directory - e.g. opening
1 file out of 100 is faster than opening 1 file out of 1000.  Are
there any guidelines for how this overhead scales?  What do MTA
authors do to minimize lookup time within a directory while
conserving inodes?  If I put every file in its own directory then
obviously each open() will be a snap, but I'll burn inodes twice as
fast.  Is there a watershed point at which performance takes a huge
dive - at, say, 1024 entries in a directory?  Is this (horrors!)
fs-specific, where e.g. JFS scales better than XFS?  Aargh.
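
For concreteness, here's a rough sketch of the hashed fan-out I'm
considering - the bucket count (256) and the spool_path() helper are
placeholders I made up for illustration, not anything taken from a
real MTA:

    #include <stdio.h>

    #define NBUCKETS 256   /* fixed fan-out: caps entries per directory */

    /* Map a queue-file name to "spool/<bucket>/<name>" using a string
     * hash, so files spread evenly across NBUCKETS subdirectories
     * instead of piling up in one. */
    static void spool_path(const char *name, char *buf, size_t len)
    {
        unsigned h = 5381;                  /* djb2 string hash */
        for (const char *p = name; *p; p++)
            h = h * 33 + (unsigned char)*p;
        snprintf(buf, len, "spool/%02x/%s", h % NBUCKETS, name);
    }

    int main(void)
    {
        char path[256];
        spool_path("msg-000123", path, sizeof path);
        printf("%s\n", path);   /* prints spool/<2-hex-digit bucket>/msg-000123 */
        return 0;
    }

The idea is that no directory ever holds more than total/256 files,
at the cost of 256 extra inodes for the subdirectories - hence the
question about where the per-directory pain actually starts.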

2.  This is a Linux-specific question, as I've heard that the BSD man
pages are actually explicit on this point: the Linux man page for
rename() indicates that the call is atomic when *replacing* the
target file, but makes no mention of atomicity when *creating* it.
Is rename() atomic when <newname> doesn't yet exist?
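
For context, this is the write-then-rename pattern my queue writer is
built on (a rough C sketch with minimal error handling; the enqueue()
name and buffer sizes are my own invention).  Whether the final
rename() is safe when <newname> doesn't exist is exactly what I'm
asking:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Write data to a temp name in the same directory, flush it, then
     * rename() into the final name.  If rename() is atomic even when
     * the target doesn't exist, a reader scanning the queue directory
     * sees either no file or a complete file, never a partial one. */
    static int enqueue(const char *dir, const char *name,
                       const char *data, size_t len)
    {
        char tmp[512], dst[512];
        snprintf(tmp, sizeof tmp, "%s/.tmp.%s.%ld",
                 dir, name, (long)getpid());
        snprintf(dst, sizeof dst, "%s/%s", dir, name);

        int fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0600);
        if (fd < 0)
            return -1;
        if (write(fd, data, len) != (ssize_t)len || fsync(fd) < 0) {
            close(fd);
            unlink(tmp);
            return -1;
        }
        if (close(fd) < 0) {
            unlink(tmp);
            return -1;
        }
        if (rename(tmp, dst) < 0) {   /* the step my question is about */
            unlink(tmp);
            return -1;
        }
        return 0;
    }

The reader side just readdir()s the queue directory and skips
dotfiles, so half-written temp files are never picked up.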


-- 
% You are in a maze of twisty passages, all alike.
Christopher DeMarco <cdemarco@fastmail.fm>          
PGP public key ID 0x2E76CF5C @ pgp.mit.edu
+6012 232 2106
