
BLU Discuss list archive



Asynchronous File I/O on Linux



On 05/16/2010 01:30 PM, Nathan Meyers wrote:
> Sorry... yes, I'm addressing a different level of buffering. Consider me
> ingloriously run down between first and second.
>
Not so inglorious, but the discussion has been about I/O at the direct
system call level.
Back in the 1980s, before Linux existed, I benchmarked some I/O
performance issues and found that stdio actually outperformed direct
I/O. The question came up when one of my colleagues (a former Bell Labs
guy) claimed that stdio would always be slower than direct I/O because
of its double buffering. It is hard to make a definitive statement
because both the code in libc and the code in the Linux kernel have
improved since then. The basic difference is that stdio buffering is
all in user space, while the system calls (e.g. open(2), read(2),
write(2)) buffer in kernel space. Direct I/O also pays the overhead of
a kernel crossing on every call, whereas fopen(3) and friends run in
user space and issue the underlying system calls behind the scenes only
when their buffers fill or drain. (Note: I don't remember which Unix we
were using at the time; it could have been Santa Cruz.)

--
Jerry Feldman <gaf-mNDKBlG2WHs at public.gmane.org>
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB  CA3B 4607 4319 537C 5846








BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.



