(This isn't my program, but I'll try to provide all the relevant information to the best of my knowledge.)
There is a program that reads binary files roughly 300 MB in size, processes them, and outputs some information. It uses std::ifstream for file input, and the streams are properly opened and closed for each read.
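The read pattern is essentially the following (a simplified sketch from memory, since the code isn't mine; the file name and buffer size are made up):

```cpp
#include <chrono>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    // Hypothetical path; the real program loops over many ~300 MB files.
    const char* path = "data/sample.bin";

    auto start = std::chrono::steady_clock::now();

    std::ifstream in(path, std::ios::binary);
    std::vector<char> buffer(1 << 20); // read in 1 MiB chunks
    while (in.read(buffer.data(), buffer.size()) || in.gcount() > 0) {
        // ... process buffer[0 .. in.gcount()) ...
    }
    in.close();

    auto seconds = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
    std::cout << "read took " << seconds << " s\n";
}
```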
The program has to read each file multiple times. Reading a file for the first time takes about 3 seconds, and each subsequent read takes about 0.1 seconds. If several files are processed, going back to the first file still yields fast reads, but after some time re-reading a file becomes slow again.
Additionally, if a file is copied to another location, the first read of the copy takes roughly 0.1 seconds.
If you do the math, the first read runs at roughly the advertised speed of the hard drive (300 MB in ~3 s ≈ 100 MB/s), while the consecutive reads (300 MB in ~0.1 s ≈ 3 GB/s) are far faster than the disk could possibly deliver.
All this looks like the file contents are being cached by either the OS or the hard drive, so that on consecutive reads the data is served from memory instead of being fetched from the disk.
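One way I can think of to test this (untested sketch below): evict the file from the OS page cache with posix_fadvise and time the next read (or drop all caches with `echo 3 > /proc/sys/vm/drop_caches` as root). If the read goes back to ~3 seconds, the caching is being done by the OS rather than by the drive:

```cpp
// Cache-eviction test (Linux-specific sketch; the path is hypothetical).
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("data/sample.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    // Ask the kernel to drop its cached pages for this file
    // (offset 0, length 0 means the whole file).
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

    close(fd);
    return 0;
}
```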
Does anyone know what exactly causes the slowdown on the initial read, and whether it can be prevented? Three seconds may not seem like much, but multiplied over all the files it adds about 5 hours to the total processing time.
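One workaround I've been considering (untested, Linux-specific): hint the kernel to start reading the next file into the page cache while the current one is still being processed, so the slow initial disk access overlaps with computation:

```cpp
// Prefetch hint (Linux-specific sketch; the path is hypothetical).
#include <fcntl.h>
#include <unistd.h>

void prefetch(const char* path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return;

    // POSIX_FADV_WILLNEED starts asynchronous readahead of the given
    // range (0, 0 = the whole file) into the OS page cache.
    posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);

    close(fd);
}
```

This wouldn't reduce the total time spent on disk I/O, but it might hide it behind the processing of the previous file. I don't know whether it would help here, though, so other suggestions are welcome.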
Also, the program runs on Fedora 14 and Scientific Linux, both with their default file systems.
Any ideas would be appreciated.