  • Always add bs=1M or something to dd, otherwise you're mostly measuring syscall overhead. smartctl -H is useless. Check the kernel logs, the full SMART data, partition alignment, and tune2fs settings, and benchmark the disks individually/directly without the filesystem. If nothing comes up, replace the cables, and the disk anyway. (A dd sketch follows this thread.)
    Commented Apr 20, 2014 at 15:30
  • bs=1M did not change the test (it was indeed running faster on the other servers), but thanks for the pointer. UPDATE: after leaving the disk doing nothing for 10 hours, THE SPEED IS HIGH AGAIN. The dd test copies hundreds of megabytes per second, like on all the other servers. So it appears that something "happens" to the file system incrementally: after a few hours, things get slower and slower, but if I stop all activity and wait for a few hours, things go back to normal. I guess this has something to do with delayed writes, but frankly I don't know what I should change.
    – seba
    Commented Apr 20, 2014 at 17:15
  • What scheduler do you use? none, cfq, deadline?
    – UnX
    Commented Apr 20, 2014 at 18:31
  • cfq. Thanks, really, for pointing this out; I didn't even know the scheduler was settable. I think deadline would be more appropriate for what we are doing: we need the system to be as dumb as possible, as we alternate phases in which we make large writes to different files. (A sketch for checking and switching the scheduler follows this thread.)
    – seba
    Commented Apr 20, 2014 at 18:58
  • For the time being, I'm trying to disable journalling altogether. (A tune2fs sketch for that follows this thread.)
    – seba
    Commented Apr 20, 2014 at 18:59
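
On the bs=1M point: with dd's default 512-byte blocks, most of the elapsed time is per-call overhead rather than disk throughput, so the reported rate says little about the drive. A minimal sketch, assuming a scratch file on the filesystem under test (the /mnt/data path and the 4 GiB size are placeholders); conv=fdatasync makes dd flush to disk before reporting a rate, so delayed writes are included in the measurement:

    # Sequential write test: large block size, data flushed to disk before
    # the rate is reported.
    dd if=/dev/zero of=/mnt/data/ddtest bs=1M count=4096 conv=fdatasync

    # Sequential read test: drop the page cache first (needs root) so cached
    # pages do not inflate the result.
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/data/ddtest of=/dev/null bs=1M

    rm /mnt/data/ddtest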
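On the scheduler: the active I/O scheduler is exposed per block device under /sys and can be changed at runtime. A minimal sketch, assuming the disk is sda (the device name is a placeholder); the change is lost at reboot unless it is also set on the kernel command line:

    # List the available schedulers; the active one is shown in brackets.
    cat /sys/block/sda/queue/scheduler
    # e.g.: noop deadline [cfq]

    # Switch this device to deadline until the next reboot (needs root).
    echo deadline > /sys/block/sda/queue/scheduler

    # To make it the default for all devices, add elevator=deadline to the
    # kernel command line in the bootloader configuration.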
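On disabling journalling: for ext3/ext4 this is done offline with tune2fs. A minimal sketch, assuming the filesystem is on /dev/sdb1 and mounted at /mnt/data (both placeholders); running without a journal reduces write traffic but gives up crash consistency:

    umount /mnt/data

    # Remove the journal feature; the ^ prefix means "turn this feature off".
    tune2fs -O ^has_journal /dev/sdb1

    # Check the filesystem before remounting.
    e2fsck -f /dev/sdb1

    mount /dev/sdb1 /mnt/data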