You're no doubt referring to "Pushing the Limits of Windows: Virtual Memory" by Mark Russinovich.
- Yes, that's the number.
- Yes, you only have 2.5 GB RAM usable (approximately - there's no need for excess precision here).
However, you write of "doing it right". I must point out that Mark's article is about "pushing the limits", and that formula gives you the absolute bare minimum pagefile size. If you want your system to perform well, you don't want to "push the limits".
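For concreteness, here is the arithmetic that "bare minimum" formula implies, as a rough sketch. The 3.1 GB peak commit figure is just a placeholder; substitute whatever peak you actually observed.

```python
def bare_minimum_pagefile(peak_commit_gb: float, ram_gb: float) -> float:
    """The 'pushing the limits' floor: just enough pagefile that
    RAM + pagefile together can back the largest commit charge
    ever observed. Commit beyond RAM must come from the pagefile."""
    return max(0.0, peak_commit_gb - ram_gb)

# Placeholder numbers: 2.5 GB usable RAM, 3.1 GB observed peak commit.
print(f"{bare_minimum_pagefile(3.1, 2.5):.1f} GB")  # prints "0.6 GB"
```

Note that if your peak commit never exceeded RAM, this formula says you could get away with no pagefile at all - which is exactly the "academic minimum, not a recommendation" point above.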
There is no recommendation in that article that says you should make the settings that small. Only that you can get away with it if you really want to. (Assuming that you never actually need any more committed memory.)
(And you might well need more. Do remember that this "peak" you've seen here is only the peak for the duration of the Process Explorer run. It is only meaningful if you happen to run your maximum app workload during that time. That doesn't just mean starting up all the apps you think you will ever have running at one time. You also need to get each app to use the maximum private memory ("committed" memory) you will ever ask of it. That's a difficult thing to set up and test. And who's to say you will never add another app to your typical workload? So the info you get with monitoring tools doesn't come with a guarantee that you'll absolutely never need more.)
Note that if you should happen to actually hit that peak again, and you have the pagefile set to the minimum you calculated that way, there will be no room in RAM for pages backed by mapped files - such as code files, like .exe's and .dll's. (This is because the count of "committed" virtual memory does not include code pages, or other pages backed by mapped files.) Since code has to be in RAM to execute, that is a problem.
OK, since you are allowing pagefile expansion, it's not a "showstopper" problem. But it will certainly be a performance hit.
So unless you are trying for some academic reason to demonstrate the absolute minimum pagefile size that won't result in app crashes, regardless of performance problems, I wouldn't do it that way. (I can't think of a practical reason to do it.)
If you really want to cut things to a practical minimum, i.e. without restricting the RAM available for code, simply set your pagefile minimum size to your observed maximum commit charge. This means there will still be RAM available for code even if apps and the OS have created that much committed memory (and referenced it all).
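To put the two approaches side by side, here is a sketch with placeholder numbers (again, substitute your own observed peak commit charge):

```python
RAM_GB = 2.5           # usable RAM (the figure from this thread)
PEAK_COMMIT_GB = 3.1   # observed peak commit charge (placeholder)

# "Push the limits" floor: pagefile only covers commit beyond RAM,
# so at peak, committed pages fill RAM and crowd out code pages.
bare_min = max(0.0, PEAK_COMMIT_GB - RAM_GB)

# Practical minimum: pagefile covers the entire peak commit charge,
# so RAM stays available for mapped-file (code) pages even at peak.
practical_min = PEAK_COMMIT_GB

print(f"bare minimum:      {bare_min:.1f} GB")
print(f"practical minimum: {practical_min:.1f} GB")
```

The difference between the two is exactly your RAM size - that's the room you're leaving for code and other mapped-file pages.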
Nor is there a reason to limit the maximum size to only 2x the min, unless you are very concerned with disk usage.
You see, it does no harm (other than disk space usage) to have a pagefile larger than the system will ever need. In particular a larger pagefile than necessary will not "attract" more paging to it. And there is no advantage to making it only just barely big enough (again, other than disk space usage). So why bother?
I.e., you are not "optimizing" anything except the disk space occupied by this exercise.
Given that running out of commit limit can cause app crashes (and therefore loss of data), and in rare cases even system crashes, I would not be nearly so interested in making my pagefile as small as possible.
On the other hand, if the system has to write to the pagefile a lot (and on a Windows 7 system with only 2.5 GB RAM usable, I imagine it will), having a pagefile that is considerably larger than the system will ever use will speed things up. Why? Because the space allocation algorithm used for the pagefile can run a lot faster when there's a lot of free space.
Because of this, I like to see the PerfMon "Paging File: % Usage" counter stay under 25 percent.
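That rule of thumb translates directly into a size: make the pagefile at least four times the peak in-use pagefile space you expect. A quick sketch (the 1 GB peak-use figure is a placeholder):

```python
def pagefile_size_for_target_usage(peak_use_gb: float,
                                   target_usage: float = 0.25) -> float:
    """Size the pagefile so that peak in-use space stays at or below
    the target %Usage, leaving ample free space for the allocator."""
    return peak_use_gb / target_usage

# 1 GB of peak pagefile use at a 25% target -> a 4 GB pagefile.
print(f"{pagefile_size_for_target_usage(1.0):.1f} GB")  # prints "4.0 GB"
```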