
I want to optimize the pagefile size to fit my actual needs. And I want to follow the formula suggested by Mark Russinovich: The Minimum should be Peak Commit minus Physical RAM, and the Maximum should be double that.
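For concreteness, here is a small sketch of that formula in Python. The input numbers are purely illustrative, not measured values:

```python
# Sketch of the sizing formula from Mark Russinovich's
# "Pushing the Limits of Windows: Virtual Memory":
# minimum = peak commit - physical RAM; maximum = 2x the minimum.
# All figures below are illustrative examples, not real measurements.

def russinovich_pagefile_mb(peak_commit_mb, physical_ram_mb):
    """Return (minimum, maximum) pagefile sizes in MB."""
    minimum = max(peak_commit_mb - physical_ram_mb, 0)  # never negative
    return minimum, 2 * minimum

# e.g. a hypothetical 3500 MB peak commit with ~2509 MB usable RAM:
print(russinovich_pagefile_mb(3500, 2509))  # (991, 1982)
```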

In order to be sure I do everything right I'd like to clarify the following:

1) Is this number in red the Peak Commit mentioned by Mr. Russinovich?

On the screen is the System Information window of Process Explorer.


2) In Control Panel > System it says that my installed RAM is 4 GB but only 2.45 GB is usable, and accordingly, in Task Manager my total Physical Memory is 2509 MB. That is because my OS is 32-bit. Do I understand correctly that this 2509 MB (not the 4 GB) is the number I should use for my pagefile size calculation?

I know this may be a silly thing to ask, but as I already said, I want to make sure I do it right.

Thank you very much.

1 Answer


You're no doubt referring to "Pushing the Limits of Windows: Virtual Memory" by Mark Russinovich.

  1. Yes, that's the number.
  2. Yes, you only have 2.5 GB RAM usable (approximately - there's no need for excess precision here).

However, you write of "doing it right". I must point out that Mark's article is about "pushing the limits", and that formula gives you an absolute bare minimum for pagefile size. If you want your system to perform well, you don't want to "push the limits".

There is no recommendation in that article that says you should make the settings that small. Only that you can get away with it if you really want to. (Assuming that you never actually need any more committed memory.)

(And you might well need more. Do remember that this "peak" you've seen here is only the peak for the duration of the Process Explorer run. It is only meaningful if you happen to run your maximum app workload during that time. That doesn't just mean starting up all the apps you think you will ever have running at one time. You also need to get each app to use the maximum private memory ("committed" memory) you will ever ask of it. That's a difficult thing to set up and test. And who's to say you will never add another app to your typical workload? So the info you get with monitoring tools doesn't come with a guarantee that you'll absolutely never need more.)

Note that if you should happen to actually hit that peak again, and you have the pagefile set to the minimum you calculate that way, there will be no room in RAM for pages backed by mapped files - such as code files (EXEs and DLLs). (This is because the count of "committed" virtual memory does not include code pages, or other pages backed by mapped files.) Since code has to be in RAM to execute, that is a problem.

OK, since you are enabling pagefile expansion, it's not a "showstopper" problem. But it will certainly be a performance hit.

So unless you are trying for some academic reason to demonstrate the absolute minimum pagefile size that won't result in app crashes, regardless of performance problems, I wouldn't do it that way. (I can't think of a practical reason to do it.)

If you really want to cut things to a practical minimum, i.e. without restricting the RAM available for code, simply set your pagefile minimum size to your observed maximum commit charge. This means there will still be RAM available for code even if apps and the OS have created that much committed memory (and referenced it all).
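As a sketch of that alternative, the practical minimum is simply the observed peak commit charge itself (again with illustrative numbers only; the optional headroom factor is my own addition, not part of the answer's rule):

```python
# Practical-minimum sketch: set the pagefile minimum to the observed
# peak commit charge, so RAM stays free for code pages even if the
# system reaches that commit level again. Figures are hypothetical.

def practical_pagefile_mb(peak_commit_mb, headroom=1.0):
    """Pagefile minimum in MB; headroom > 1.0 adds margin for
    workloads that weren't captured during monitoring."""
    return int(peak_commit_mb * headroom)

print(practical_pagefile_mb(3500))       # 3500
print(practical_pagefile_mb(3500, 1.5))  # 5250
```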

Nor is there a reason to limit the maximum size to only 2x the min, unless you are very concerned with disk usage.

You see, it does no harm (other than disk space usage) to have a pagefile larger than the system will ever need. In particular, a larger-than-necessary pagefile will not "attract" more paging to it. And there is no advantage to making it only just barely big enough (again, other than disk space usage). So why bother? I.e., the only thing this exercise "optimizes" is the disk space occupied.

Given that running out of commit limit can cause app crashes (and therefore loss of data), and in rare cases even system crashes, I would not be nearly so interested in making my pagefile as small as possible.

On the other hand, if the system has to write to the pagefile a lot - and on a Windows 7 system with only 2.5 GB RAM usable, I imagine it will - having a pagefile that is considerably larger than the system will ever use will speed things up. Why? Because the space allocation algorithm used for the pagefile can be a lot faster if there's a lot of free space.

Because of this, I like to see the PerfMon counter Paging File > % Usage stay under 25 percent.
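A quick sketch of that rule of thumb as arithmetic: given a pagefile's size and its current usage, check whether usage is comfortably under the 25% mark. The figures are hypothetical:

```python
# Rough check of the "keep pagefile usage under 25%" rule of thumb.
# used_mb / size_mb is what the Paging File % Usage counter reports
# as a percentage. All numbers here are hypothetical examples.

def pagefile_usage_ok(used_mb, size_mb, threshold=0.25):
    """True if the pagefile is comfortably oversized."""
    return used_mb / size_mb <= threshold

print(pagefile_usage_ok(800, 4096))   # True  (~19.5% used)
print(pagefile_usage_ok(1500, 4096))  # False (~36.6% used)
```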

  • Thank you for your beautiful answer, Jamie! I hope someone else (and not just me) will also find it useful, because it looks like you spent so much time on it. In fact, I hadn't read the original article; I had only seen a reference to it here: lifehacker.com/5426041/…. So, thank you for the link to the original, for explaining what it was really about, and for your view on the issue.
    – Ahu Lee
    Commented Feb 10, 2018 at 6:33
  • All of what you said makes sense to me, but the author of the lifehacker.com article (see the link above), which I read first, argued that a huge pagefile would make your system extremely slow if you open 12 GB worth of in-use applications, with your hard drive grinding to the point where your PC is fairly unusable. So is he or she simply wrong? Thank you so much!
    – Ahu Lee
    Commented Feb 10, 2018 at 6:34
  • They're simply wrong. First, when you're "opening" an app, none of its storage is in the pagefile, so there would be no reason to read anything from it. If we're talking about coming back to a long-idle app, then yes, Windows may well have paged some of it out (not all of it to the pagefile, by the way), but that's only done to the oldest stuff in the process working set, which is least likely to be needed again. Windows normally doesn't put anything in your pagefile unless and until something else you're doing needs the RAM. Finally, Windows is a "demand paged" OS, meaning that ... Commented Feb 10, 2018 at 9:16
  • ... it only pages in stuff that you actually need. So when you come back to a long-idle process that has some stuff paged out, it does not automatically try to page everything back in. That would be stupid, and contrary to popular belief the Windows OS kernel people aren't stupid. It only pages in those pages that the process actually tries to access. (The same thing happens when you start an app, for that matter; the app is never loaded as a whole.) So even if it actually has written many GB to the pagefile from different processes, it doesn't have to read it all before doing anything else. Commented Feb 10, 2018 at 9:18
