
I have a similar issue as this user. You can pretty much take his screenshot and apply it to my issue: Windows 8.1, 99% disk usage (or in that vicinity) for minutes at a time. Yet the individual disk usage stats don't add up to more than roughly 1MB/Sec. I have an SSD capable of much higher throughput.

The question I linked to doesn't provide much answer, but one comment hints to low free disk space. That is an issue with my disk. I have only ~1GB left of free space on my C drive. But my follow-up question is why would that matter? Windows does pre-allocate 6GB of virtual drive, doesn't it? At least that's how I interpret "currently allocated" in the Virtual Memory settings screen under System Properties → Performance options.

I've tried to use the Process Monitor tool to gain some information on what's using the disk, but the amount of data logged in just a couple of seconds is so vast that I don't know where to start. The only thing I see is that services.exe accesses the registry a lot... But I guess that's normal? If anyone cares to take a look, the log dump is here (19MB)

To summarize:

  1. Does the disk usage really go up when free space declines? If so, why?
  2. Can anyone deduce from my Procmon log file what program or process causes 99% disk usage?

edit: To comment on the "malware"-answer in the other post: I'd like to know, not guess. I doubt it is malware in my case, but if it is, my logs should show that.

update: Similar case just happened. Any clues from these two ResMon screenshots?

ResMon Disk usage. "System" on top with 5MB per second, MsMpEng.exe trailing with 2MB per second

ResMon Network usage activity. No significant traffic visible

"System" doesn't really tell me much.

  • As usage increases, so too does fragmentation, which is a big performance problem for NTFS in particular, and results in multiple IO operations for each file access. Not sure that would explain it all, but I'm also not sure how Windows assesses IO load. Commented Apr 29, 2014 at 12:35
  • It's less of an issue because scan/seek times are now constant, but any time you have to reassemble a file from 29 different locations it will still be slower than accessing a single disk location. That said, your article makes good points about wear-leveling algorithms, which do trump performance concerns in most folks' minds. Commented Apr 29, 2014 at 13:05
  • Forget third-party software. If you are truly interested in getting into the nitty-gritty of your OS, then look into the built-in Windows Performance Monitor.
    – MonkeyZeus
    Commented Apr 29, 2014 at 13:09
  • Use Resource Monitor (press Win+R, type resmon.exe, hit Enter). Open it, go to the Disk tab and tell us which process(es) have the highest disk usage when it happens. Or better, upload a screenshot.
    – Jet
    Commented Apr 29, 2014 at 16:04
  • MsMpEng (Windows Defender) can cause SIGNIFICANT disk and processor usage; I always remove it from my Windows installations. And as visible in the screenshot, you get up to 100MB/s usage of your disk. Try disconnecting your PC from the internet, then temporarily disabling Windows Defender, the search indexer, Windows Update, and any other non-Microsoft services, processes AND scheduled tasks. There have also been reports of bad Windows installs which, after a clean re-install, worked normally.
    – Gizmo
    Commented May 2, 2014 at 11:01

5 Answers


I've just seen that your problem has gone away, but the below should be helpful for next time.

I downloaded your Process Monitor log and counted the number of times each file was accessed during the 6-second trace you captured. You can see this yourself in Process Monitor by going to Tools -> Count Occurrences, selecting Path from the Column drop-down, and clicking the Count button. Click the Count column header to sort the counts in ascending/descending order, and double-click on any of the file paths to focus on it.
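If you'd rather script the same count, Procmon can export a trace as CSV (File -> Save... -> CSV format) and a few lines of Python will tally it. This is just a sketch; it assumes the default `Path` column header that Procmon writes in its CSV export:

```python
import csv
from collections import Counter


def count_path_accesses(csv_path, top_n=4):
    """Count how often each path appears in a Procmon CSV export
    and return the top_n most-accessed paths."""
    counts = Counter()
    # utf-8-sig tolerates the BOM that Windows tools often prepend.
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            counts[row["Path"]] += 1
    return counts.most_common(top_n)
```

Run it against the exported file, e.g. `count_path_accesses("Logfile.CSV")`, and you get `(path, count)` pairs sorted by frequency, which is the same view as the Count Occurrences dialog.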

The top 4 file paths accessed are:

  • 3276 times: HKCU\Software\Classes\Local Settings\MuiCache\de\417C44EB
  • 3276 times: HKLM\SYSTEM\CurrentControlSet\Control\MUI\StringCacheSettings
  • 2188 times: HKCU
  • 1457 times: C: (this is the search indexer that's accessing this)

Processes involved in the above accesses include SearchIndexer.exe and services.exe, and they seem to be repeating the same requests over and over again. I'm not sure I know what's going on beyond that, but I suggest you try disabling your Windows Search indexer and see how that affects performance.

I also had another look at the trace in an unfiltered format and saw lots of accesses to various Software Distribution folders, with paths prefixed with C:\Windows\SoftwareDistribution\Download....., so it's worth checking/disabling this and seeing how that affects performance.
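If you want to see how much the Windows Update download cache is actually holding before you clear it, here's a quick stdlib-only sketch (point it at the SoftwareDistribution\Download folder; you may need an elevated prompt to read everything):

```python
import os


def dir_size_bytes(root):
    """Total size in bytes of all regular files under root,
    e.g. C:\\Windows\\SoftwareDistribution\\Download."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            fp = os.path.join(dirpath, name)
            if not os.path.islink(fp):  # skip symlinks/junction targets
                total += os.path.getsize(fp)
    return total
```

For example, `dir_size_bytes(r"C:\Windows\SoftwareDistribution\Download") / 1e9` gives the cache size in GB; on a drive with only ~1GB free, even a modest cache there matters.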

  • +1 for teaching me about the count aggregation in Procmon and for providing a concrete answer to #2
    – Nilzor
    Commented May 2, 2014 at 12:36
  1. Does the disk usage really go up when free space declines? If so, why?

Yes. In Windows, as in other OSs, a nearly full main disk will make the OS behave slowly or badly, in my experience. Why? Fragmentation on disk (and the resulting increase in processing fragments in memory, as well as in the number of disk reads/writes). The small remaining area will be fragmented, probably spread over different parts of the disk. There is no spare space, so defragmentation cannot help much until more space is freed up. All processes need to write temporary files (e.g. the internet browser cache), and these files will be fragmented. For magnetic disks, the heads have to travel further, which means slower file reads/writes. For SSDs I was not sure; this video is a little bit boring but explains it well:
http://www.youtube.com/watch?v=VfYkJoqfG-k "Why Fragmentation is Still a Problem with SSDs".

In memory (in the file allocation table held in memory), fragmented files take up more space and cause more CPU and bus activity to deal with the different fragments. The video ends with a sales pitch for Diskeeper, which claims to improve the problem for SSDs, BUT keeping enough spare space on disk would probably be the best strategy.
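The fragmentation cost on a magnetic disk can be sketched with a back-of-envelope model: one seek per fragment plus the sequential transfer time. The seek time and throughput numbers below are illustrative assumptions, not measurements of any particular drive:

```python
def hdd_read_ms(size_mb, fragments, seek_ms=9.0, throughput_mb_s=100.0):
    """Rough HDD read-time model: one head seek per fragment,
    plus the time to stream size_mb at throughput_mb_s."""
    return fragments * seek_ms + size_mb / throughput_mb_s * 1000.0
```

With these assumed numbers, a contiguous 100MB file reads in `hdd_read_ms(100, 1)` = 1009 ms, while the same file in 500 fragments takes `hdd_read_ms(100, 500)` = 5500 ms, over 5x slower, which is why a full, fragmented disk feels so sluggish even though the raw throughput hasn't changed.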

This is a good investigation into the effect of defragging SSDs: http://www.pcworld.com/article/2047513/fragging-wonderful-the-truth-about-defragging-your-ssd.html

Conclusion: There is no point in running a de-frag tool with SSD.

  2. Can anyone deduce from my Procmon log file what program or process causes 99% disk usage?

I think you need to make some space on the disk first and get the system to behave a little better. Then, if there is still a problem, look at disk usage (space or activity).


I would be cautious about what you are using to determine that the disk is running at 100%. What does this mean?

That the disk is running at max throughput? No, how does your computer know what the maximum speed of the disk is? Given the optimistic speeds provided by manufacturers, I'd be staggered if you ever reach 100% of top speed for any length of time.

I think more likely the 99% you are looking at is an approximate measure of how busy the disk is... i.e. at the time of observation it is continually being written to/read from. This doesn't mean it is working at full speed. You are more likely hitting a larger number of smaller files. SSDs don't have the same mechanics as a conventional disk, but there are still overheads when reading/writing lots of small files.
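You can get a feel for the small-file overhead with a rough sketch like the one below (not a proper benchmark; on a fast, cached filesystem the gap will be smaller than on a busy disk, but the per-file open/close/fsync cost is the same effect that keeps a "100% busy" disk well below its headline throughput):

```python
import os
import tempfile
import time


def time_writes(dirpath, n_files, size_each):
    """Write n_files files of size_each bytes, fsync'ing each one,
    and return the elapsed seconds."""
    payload = b"\0" * size_each
    start = time.perf_counter()
    for i in range(n_files):
        with open(os.path.join(dirpath, f"f{i}.bin"), "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force it to storage, like real workloads do
    return time.perf_counter() - start


# Same 1 MB total, written two ways.
with tempfile.TemporaryDirectory() as d:
    one_big = time_writes(d, 1, 1_000_000)
with tempfile.TemporaryDirectory() as d:
    many_small = time_writes(d, 100, 10_000)

print(f"1 x 1MB file:     {one_big:.4f}s")
print(f"100 x 10KB files: {many_small:.4f}s")
```

The total bytes are identical, so any difference you see is pure per-file overhead rather than throughput.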

If you run one of the dozen benchmarking applications out there, you'll find out exactly how fast your disk works, and it will be way higher than 1MB/s! If you look at your first image, you see that the highest (green) peak is approx 60-70MB/s, which is reasonable-if-unremarkable real-world speed for your disk, but most of the time it is running slower. On the same image, we can see that you have a queue depth of 10... it means that disk operations are stacking up.

However, as others have pointed out.... you could do with quite a bit more free space for Windows and other applications to work. Also more RAM means more in-memory caching and therefore less disk work.

  • Good point there. 100% or not, it gives noticeable overall performance degradation to the point of temporary freezes, so it's in any case a problem. Freeing up space is a priority now, although that also is a non-trivial task on an ultrabook with a 128 GB HD ;)
    – Nilzor
    Commented May 2, 2014 at 12:42
  • You don't have much room to manoeuvre in 128GB... perhaps you can splash out on a 240GB replacement? Not the solution you are hoping for but, even if you clear 10GB out, you use it up in no time and you'll forever be battling with it.
    – CJM
    Commented May 2, 2014 at 13:16

Alright, I was fooling around with gaming settings, updating drivers, stuff like that, because I had the same problem. Then I came across an option in Control Panel (I am on Windows 8.1) under Control Panel > System > Advanced System Settings > Performance. There I disabled a bunch of visual effects and TA DAH, I'm at 4% on my disk. Great, ain't it? You can also try updating your graphics card driver. Hope it helps.


You should run CrystalDiskMark or a similar tool to understand what the real limits of your disk are. I've seen HDDs scoring less than 1MB/s in random read/write tests, so a single process reading a heavily fragmented file (or just making a lot of non-sequential reads/writes, e.g. indexing a database) can consume 100% of your HDD throughput at only 1MB/s.
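If you want a quick-and-dirty feel for the sequential/random gap without installing anything, here's a Python sketch. CrystalDiskMark is still the better tool, and on a freshly written, OS-cached temp file both numbers will be inflated; on a cold spinning disk the random pattern is where you'd see throughput collapse toward 1MB/s:

```python
import os
import random
import tempfile
import time


def read_mb_per_s(path, offsets, block=4096):
    """Read `block` bytes at each offset and return the MB/s achieved."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(block)
    return len(offsets) * block / (time.perf_counter() - start) / 1e6


# Build a scratch file, then read it sequentially vs. in shuffled order.
block, count = 4096, 2000
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(block * count))
    path = tmp.name

seq = [i * block for i in range(count)]
rnd = seq[:]
random.shuffle(rnd)

seq_mbps = read_mb_per_s(path, seq)
rnd_mbps = read_mb_per_s(path, rnd)
print(f"sequential: {seq_mbps:8.1f} MB/s")
print(f"random:     {rnd_mbps:8.1f} MB/s")
os.unlink(path)
```

The same total data is read in both passes; only the access order changes, which is exactly the difference between a benchmark's headline number and what a fragmented or database-style workload actually gets.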

Another possible cause is your disk being near its end of life, when many files sit on unstable or remapped sectors. Remapped sectors have the same effect as fragmentation, except they cannot be defragmented because they are outside the disk area accessible to software. Unstable sectors can be much worse, taking several seconds to read. Any SMART tool can display these for you, e.g. CrystalDiskInfo.

