67 events
when | toggle | format | what | by | license | comment
Aug 10, 2021 at 16:10 comment added David Schwartz @user4779 It will write them to the pagefile, but it will not remove them from RAM. This lets the system go faster if memory pressure does arrive, because it eliminates the need to write them out at the moment the system encounters memory pressure and I/O is precious. Under memory pressure, everything you do will slow down no matter what you do. There's no magic way to fix memory pressure when the working set exceeds RAM. As I said, the people who designed your operating system's behavior are not fools. If you really think they are, use a different OS.
Aug 10, 2021 at 1:53 comment added dns @DavidSchwartz If you have 5000 EXABYTES of RAM, you'll still definitely need a pagefile because the "experts" said so. ;-)
Jul 3, 2021 at 1:33 comment added user4779 I fundamentally disagree with this answer and agree with Fred in these comments. You keep referring to "stale stuff" that can be paged out. For people like me who have a lot of processes open but the RAM to support it, I want instant alt-tab switching to old processes without any delay. Using a paging file, even with 90% of RAM free, doesn't allow that. Windows will page out processes that haven't been used for a long time, even when it isn't necessary. For many people like me, this is an unacceptable and unnecessary performance cost that is merely coded into Windows for no good reason.
Jan 18, 2019 at 16:49 comment added Cherry "The people who designed your operating system's behavior are not fools" - and those people added a setting for disabling the page file - so they definitely optimized memory management for this case. :)
Jun 29, 2018 at 15:56 comment added Gabriel L. Oliveira I'd like to add my 2 cents that, on systems (especially laptops) with restricted disk space (250 GB SSD) and plenty of RAM (32 GB), it's a good idea to shrink the pagefile to a custom size (in my case, 1 GB minimum and 4 GB maximum). That's because Windows 7, by default, sets the pagefile as large as the installed RAM (32 GB, in this case), i.e., it ties up almost 13% of your disk. With these custom settings, you still have a pagefile to dump some of the kernel stuff and to page out old and unused memory, without sacrificing a lot of your SSD.
Mar 18, 2018 at 11:31 comment added Sergey.quixoticaxis.Ivanov Modern gaming PCs with 64 GB, 128 GB, and hopefully (I hope Intel will present a good gaming CPU, not the half-baked i7 and i9 they are pushing now) 256 GB of fast RAM will most probably benefit even more from disabling the pagefile in gaming scenarios.
Mar 18, 2018 at 11:28 comment added Sergey.quixoticaxis.Ivanov Side note: no, it's not funny, because I didn't give a baseless answer. I've run some tests, though: i7 4790k at stock, Titan X Pascal at stock, 32 GB DDR3 1333; the OS, all games, and the pagefile resided on the same Kingston 256 GB SSD. No Superfetch, system rebooted after each run. All tests were run 3 times. Time Spy benchmark: +0.2% score, +7% min fps; Fire Strike Ultra benchmark: +0.1% score, +13% min fps; Hitman: Absolution benchmark: +-0 avg fps, +13% min fps; FFXV benchmark: -2 avg fps, +15 min fps. Please note that this is a very old PC from the era when games demanded 2-4 GB of RAM.
Mar 18, 2018 at 1:55 comment added Sergey.quixoticaxis.Ivanov I think Microsoft supposes that those few people who have gaming PCs can disable the pagefile in a few clicks if they need to. If you're interested, here's an article; it's a bit old, but it shows some benchmarks at the end (tweakhound.com/2011/10/10/…). Unfortunately, my new PC will be traveling for one more month, but I guess I'll run some benchmarks in the morning on my current 32 GB machine and provide you with numbers (or find some links). Not now, it's 5 am for me.
Mar 18, 2018 at 1:03 comment added David Schwartz @Sergey.quixoticaxis.Ivanov Funny that you complain that I'm not citing sources and your response is "all papers written on the subject that I know" and then you follow up with "plenty of tests". But your main thesis is just silly -- if there was some way to make an OS better at bog standard tasks that everyone uses the OS for, it would come set that way. You're seriously arguing that experts in VM design with decades of experience left a simple switch set in the wrong position. It's a comically silly position. You think they can't tell what a game is or think people like low frame rates?
Mar 17, 2018 at 23:45 comment added Sergey.quixoticaxis.Ivanov Yet again, no sources except "I believe" and "VM experts". That's not a concrete criterion. Papers, please. From OS developers, if possible, because all the papers on the subject from OS devs that I know of mention the corner cases and note that the OS is not tuned for everything possible and that disabling the pagefile can provide a performance boost in some scenarios. Also there are plenty of tests that show how disabling the pagefile, while obviously doing nothing to average fps, can (also obviously) increase min fps in games if you have enough RAM. And it can be easily reproduced.
Mar 17, 2018 at 20:26 comment added David Schwartz @Sergey.quixoticaxis.Ivanov The exact criterion is that such a swap will produce improved performance under the conditions the OS is measuring, based on a complex set of heuristics tuned over many years of experience. The point of VirtualLock is primarily to handle cases where data cannot be swapped for security reasons, or to meet the occasional timeliness requirement for a small piece of data even at the expense of worse system behavior overall. The OS comes tuned by VM experts for the best overall system behavior under a wide variety of conditions.
Mar 16, 2018 at 23:56 comment added Sergey.quixoticaxis.Ivanov Okay. The main question: what is the exact criterion in Windows 10 that is used to determine whether a committed page can be swapped to disk and the underlying physical RAM reused? The side question: if the OS is some godlike object that operates according to ideal algorithms, what is the point of the VirtualLock function?
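(For readers unfamiliar with it: VirtualLock is the Win32 call that pins pages of a process's virtual address space into physical RAM so the memory manager will not page them out. A minimal sketch follows; the 1 MiB buffer size and the error handling are illustrative only, not taken from this thread.)

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 1 << 20;  /* 1 MiB: an arbitrary, illustrative buffer size */

    /* Commit a private read/write region. The commit charge rises now,
       whether or not the pages are ever touched. */
    void *buf = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (buf == NULL) {
        fprintf(stderr, "VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    /* Pin the pages in physical RAM so the OS will not page them out.
       Locking large regions may first require raising the process
       working-set limits with SetProcessWorkingSetSize. */
    if (!VirtualLock(buf, size)) {
        fprintf(stderr, "VirtualLock failed: %lu\n", GetLastError());
    }

    /* ... use the locked memory here ... */

    VirtualUnlock(buf, size);
    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}
```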
Mar 13, 2018 at 4:41 comment added David Schwartz Sure, ask a question about it and if I don't answer it, point it out to me. OS developers have been optimizing and fine-tuning virtual memory systems with precision for decades and yet people who don't understand it insist on seeking a magic "go faster" button that does not exist.
Mar 12, 2018 at 17:38 comment added Sergey.quixoticaxis.Ivanov Can you provide anything beyond words about "every major system" and "it's junk"? Anything beyond your own beliefs? A cache strategy description, maybe? Or a swapping strategy description from official or any other sources? At least tests that show your point? Also I can't see what's "very typical" about this scenario.
Mar 12, 2018 at 16:30 comment added David Schwartz @Sergey.quixoticaxis.Ivanov OSes were tuned the way you say years ago. The virtual memory system of every major operating system you're likely to use on a PC has been tweaked extensively since then. People asking for such handles are almost always doing so out of mistaken notions of how memory management works. If the OS doesn't come sensibly tuned for these very typical use cases, it's junk. The elusive "make my very typical use case go lots faster" button does not exist, or it would already come pushed.
Mar 12, 2018 at 9:58 comment added Sergey.quixoticaxis.Ivanov Microsoft has already been asked lots of times to actually provide some handles for tuning swapping and caching strategies like Unix-based OSes provide. Unfortunately, while this has been considered, it has not been implemented.
Mar 12, 2018 at 9:46 comment added Sergey.quixoticaxis.Ivanov OS developers are not fools, but these mechanisms were tuned for the case when "RAM < size of code + size of data it will access", which is false for, for example, modern gaming PCs, where often "size of RAM > size of all code + size of all data it would ever access". And disabling the pagefile could be noticeable.
Mar 12, 2018 at 9:45 comment added Sergey.quixoticaxis.Ivanov Of course I would not care if the memory was not changed. What was the point of your comment? But in practice it can be changed even when there are tons of memory. Your arguments all float around "the OS knows better if it has something better to do" and "OS devs are not fools" without providing detailed information.
Mar 12, 2018 at 8:46 comment added David Schwartz @NickSotiros More RAM is always going to be better than an equivalent amount of swap. But that's almost never the decision you're faced with. Of course if you have a choice of 16GB more RAM or 16GB more swap, take the RAM.
Mar 12, 2018 at 6:55 comment added Nick Sotiros So I am running Windows with 16 GB of RAM and 16 GB of swap. It seems that somehow this system with 64 GB of RAM should be able to do what my system can do, even with no swap. It seems like at the very least it should be able to divide its RAM in half and have 32 GB of free-use RAM and 32 GB of ramdisk swap.
Mar 12, 2018 at 6:04 comment added David Schwartz @Sergey.quixoticaxis.Ivanov You wouldn't care whether the OS swapped it out or not because it would stay in memory even after being written out unless the system had some better use for that RAM. Writing something to swap does not necessarily mean that it is discarded.
Mar 12, 2018 at 1:19 comment added Sergey.quixoticaxis.Ivanov The accepted answer is really one-sided and is given under the assumption that the user 1) wants the system to be able to use RAM efficiently and 2) is using more than one application at a time. And both could be false in some scenarios. For example, when you play a video game and the engine has cached 30 GB of resources, you'd be really disappointed when the OS swapped out some that hadn't been used for a long time (even when swapping to a fast SSD).
Jun 20, 2017 at 0:43 comment added Jamie Hanrahan ...continuing: The fact that Windows may appear to be "using the pagefile" when a user's impression is that it "shouldn't have to" does not mean it's doing harm; it mostly means that the user doesn't have enough information to conclude "it shouldn't have to" or "it's doing harm". The fact that the pagefile is nonempty doesn't mean it's slowing you down. Allowing the OS to page out long-ago-referenced stale stuff lets it make more RAM available for things that are being accessed more often.
Jun 20, 2017 at 0:40 comment added Jamie Hanrahan @FredHamilton I believe a more correct summary is: pagefiles are often helpful, sometimes necessary, and if they are not needed, they do no harm (in the vast majority of cases). See the extensive test reports here: tweakhound.com/2011/10/10/… But if your workload one day needs a pagefile and you don't have one, that can cause program crashes, etc. The obvious conclusion is that you should have a pagefile. Think of it like a safety net: it won't encourage the trapeze artists to fall, but if they happen to fall, it's sure a good thing to have.
Feb 17, 2016 at 18:18 comment added David Schwartz @BamsBamx If you reach that point, you either need to get more RAM, get a faster device for swap, reduce the load on the machine, or tolerate the reduction in performance. Hardware can only do what it can do.
Feb 17, 2016 at 16:51 comment added BamsBamx And what about having a very slow HDD? It could cause a bottleneck in the entire system
Sep 15, 2015 at 0:54 review Suggested edits (completed Sep 15, 2015 at 7:22)
Jul 13, 2015 at 20:04 comment added Fred Hamilton I realized what was riling me about all this is that one side seems to be saying "pagefiles never do anything but good" and the other side is "pagefiles are terrible" and then people get entrenched on one side or the other. The "truth" is that in some cases they're very useful and even crash-preventing and in other cases they are not needed and can actually cause performance to decrease. I'm happy with that as my final statement regardless. Live long and prosper, @David Schwartz.
Jul 13, 2015 at 18:46 comment added David Schwartz @FredHamilton So your complaint is with what I didn't say?
Jul 13, 2015 at 5:08 comment added Fred Hamilton I just don't like hearing that "pagefiles are magic, so don't turn them off or you'll be sorry" when I know that under many common circumstances you can turn them off and safely see an improvement in performance, because you are no longer increasing disk I/O by 100% or more whenever the MMS wants to do something. All I want to hear from the people on the other side of this debate is "yes, there are circumstances where you can turn them off and reduce disk I/O that can result in thrashing". I'm not saying pagefiles are always bad; maybe you can say that they're not always necessary.
Jul 13, 2015 at 5:00 comment added Fred Hamilton I get that MMS designers might be preparing for some sudden worst-case memory allocation request, but I don't understand why the pagefile had to be so frequently written/read during the hours/days/years of use when the running tasks asked for a small fraction of RAM. I'm not saying we don't need pagefiles, I'm just saying on a scale of zero to perfect, this particular MMS fell short enough of perfect that I (and from what I can tell many others) got significantly faster hard disk I/O with no negative consequences by disabling the pagefile.
Jul 13, 2015 at 4:49 comment added Fred Hamilton I didn't say that the MMS should write the page file the instant before it runs out of RAM, I said "until it was actually needed", and my point is that it should not be needed if you've got a very large amount of RAM and you're using programs that only ASK FOR a fraction of that. Yet Windows (at least XP, the last time I ran Windows with a pagefile) would be reading/writing the pagefile enough to cause thrashing even if only 10%-20% of the RAM was being used.
Jul 13, 2015 at 0:27 comment added David Schwartz @FredHamilton No, that would be a bad thing. The worst time to start writing the pagefile is when you need it -- that's when I/O causes the most harm to the system because you need the I/O capability to read from the pagefile and to write recently-modified data. You actually want to start opportunistic writing earlier, when I/O is not precious, so that more pages are discardable later. I don't mean to be rude, but I know exactly what I'm talking about and have spent decades working on this stuff, it's very annoying to be constantly told I'm wrong by people who don't have a clue.
Jul 12, 2015 at 14:43 comment added Fred Hamilton I do agree that a system with a pagefile doesn't HAVE to be thrashy - I just know from experience that with every version of Windows I have used it IS more thrashy, because Windows is reading/writing the pagefile even when it's only using 10% of the available RAM. No one would be arguing with you if the system didn't use a pagefile until it was actually needed (I believe Linux is a lot closer to that, if not already there). That would be a great thing. But, at least with Windows, that's not how it works.
Jul 12, 2015 at 14:37 comment added Fred Hamilton @DavidSchwartz 'Saying "pagefiles can cause thrashing" is equivalent to saying that the memory management system is terribly, terribly broken.' I'd say the MMS you're describing IS terribly broken. The only universe in which a system with 4GB RAM and a 4GB pagefile is more reliable and less thrashy than a system with 64GB and no pagefile is a universe where the memory management system is terribly, terribly broken.
Jul 12, 2015 at 10:43 comment added David Schwartz @FredHamilton Saying "pagefiles can cause thrashing" is equivalent to saying that the memory management system is terribly, terribly broken. You're essentially claiming that giving the system more choices (because that's all a pagefile does, it doesn't require the system to do anything) makes performance worse. I don't understand how you can say, "Only if they are doing massive things that use more than the 4GB - 8GB that most users use". As I've explained several times, it doesn't matter how much RAM they have free.
Jul 10, 2015 at 23:28 comment added Fred Hamilton The only acceptable reason to say that everyone has to use a pagefile all the time, regardless of the amount of RAM they have and how they use their PC, would be if the memory management system was terribly, terribly broken. Does someone running Win8 with 4GB RAM need a pagefile? Absolutely. Does someone running Win8 with 64GB RAM need a pagefile? Only if they are doing massive things that use more than the 4GB - 8GB that most users use. And of course pagefiles can cause thrashing. How can people even argue these points?
Jul 10, 2015 at 23:23 comment added Fred Hamilton @DavidSchwartz I really don't understand why people are so vehemently pro-pagefile, when common sense (and actual experience) indicates that running without one works fine. Your analogy "That's like saying you don't have to worry about money because you have lots of checks in your checkbook" is not correct. The analogy is that you were getting along fine on $50,000/year and you win $10M. You no longer have to worry about money if you keep reasonably close to the same lifestyle. If all you want to do is play World of Warcraft and surf the web, I'm certain you can do that without a pagefile on a 64GB machine.
Jun 4, 2015 at 16:47 comment added David Schwartz @Bigwheels Without a pagefile, a 1GB private, modifiable mapping prevents 1GB of RAM from ever holding pages that are discardable. With a 1GB pagefile, it does not. So "Every single argument you have made against not having a pagefile is also true when you have a pagefile." is simply false. "I am not sure why you insist on pushing the agenda that having no pagefile is bad" because it is bad, for all the reasons I've explained. (And which I see no evidence that you understand.)
Jun 4, 2015 at 15:34 comment added Jason Wheeler @DavidSchwartz Every single argument you have made against not having a pagefile is also true when you have a pagefile. With respect to your arguments, adding 1GB of RAM is equivalent to adding 1GB of pagefile. I am not sure why you insist on pushing the agenda that having no pagefile is bad, but it is a disservice to the people reading your answer.
Jun 3, 2015 at 18:51 comment added David Schwartz @Bigwheels "But that's not a problem until all of the RAM has been committed." No, that's not true. With no page file, every byte of RAM that's committed but not yet used is one byte of RAM that is limited to holding only clean, backed pages. This can impact performance (by forcing the early discard of clean pages that are part of the working set) long before anything is anywhere close to running out.
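(As a rough illustration of "committed but not yet used": the sketch below, using a hypothetical 256 MiB allocation, commits memory without ever touching it and prints the system-wide commit figures via GlobalMemoryStatusEx. With no pagefile, the commit limit is roughly the size of RAM, so such allocations shrink what is left for everything else even though no physical page has been assigned yet.)

```c
#include <windows.h>
#include <stdio.h>

static void print_commit(const char *label)
{
    MEMORYSTATUSEX ms = { .dwLength = sizeof(ms) };
    GlobalMemoryStatusEx(&ms);
    /* ullTotalPageFile is the commit limit (RAM + pagefile);
       ullAvailPageFile is how much of it is still uncommitted. */
    printf("%s: commit limit %llu MiB, available %llu MiB\n",
           label,
           ms.ullTotalPageFile / (1024 * 1024),
           ms.ullAvailPageFile / (1024 * 1024));
}

int main(void)
{
    print_commit("before");

    /* Commit 256 MiB without touching it: no physical pages are assigned
       yet, but the commit charge rises immediately. */
    void *p = VirtualAlloc(NULL, (SIZE_T)256 * 1024 * 1024,
                           MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    print_commit("after commit");

    if (p) VirtualFree(p, 0, MEM_RELEASE);
    print_commit("after release");
    return 0;
}
```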
Feb 22, 2015 at 2:21 comment added David Schwartz @Bigwheels I don't come to that conclusion. Read my answer again.
Feb 21, 2015 at 11:27 comment added Jason Wheeler @DavidSchwartz The information you give is technically correct, and it's good information to know. But the conclusion that you come to that you should always have a page file regardless of how much RAM you have is not correct and I stand by my claim that this should not be the accepted answer.
Feb 21, 2015 at 11:22 comment added Jason Wheeler @DavidSchwartz I'm clear on that point. One process commits some amount of memory, and then no other process will be able to use the memory that's been committed, even if that RAM is never used. But that's not a problem until all of the RAM has been committed. Btw, this isn't just a problem when you don't have a page file. The page file just gives you more memory to commit before you reach the limit. To make the point even further, I could decrease my page file by 1GB and add 1GB of RAM to my system and be no better or worse off, with respect to the amount of RAM that can be committed.
Feb 20, 2015 at 23:05 comment added David Schwartz @Bigwheels Because the backing store is committed. It's the same reason you can't write a check just because you have enough money in your bank account. Windows will not overcommit backing store, so if it doesn't have a paging file, it can't overcommit RAM either. It doesn't matter how much RAM is free if it's committed, it can't be used to hold anything that's not discardable unless there's sufficient uncommitted backing store. An unsharable, writable file mapping commits as much backing store as the size of the mapping, even if it never actually uses any RAM.
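(A sketch of the kind of mapping described above, with a hypothetical file name. Per the CreateFileMapping/MapViewOfFile documentation, a copy-on-write view is charged against the commit limit for its full size when the view is mapped, because the process could potentially dirty every page, and dirtied pages can only be backed by RAM or the pagefile, never written back to the file.)

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "bigfile.bin" stands in for any large data file (say, ~1 GB). */
    HANDLE file = CreateFileW(L"bigfile.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    /* PAGE_WRITECOPY: the view will be private and modifiable; changes
       are never written back to the file. */
    HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_WRITECOPY, 0, 0, NULL);
    if (mapping == NULL) { CloseHandle(file); return 1; }

    /* Mapping the copy-on-write view is where the full view size is
       charged against the commit limit. With no pagefile and little
       uncommitted RAM, this call can fail even though most RAM is "free". */
    void *view = MapViewOfFile(mapping, FILE_MAP_COPY, 0, 0, 0);
    if (view == NULL) {
        fprintf(stderr, "MapViewOfFile failed: %lu\n", GetLastError());
    } else {
        /* ... read and locally modify the mapped data ... */
        UnmapViewOfFile(view);
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```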
Feb 20, 2015 at 20:50 comment added Jason Wheeler @David Schwartz: "if that RAM is committed, even if unused, the system will have to refuse subsequent allocations." Why? Why would the system have to refuse subsequent allocations?
Feb 20, 2015 at 8:53 comment added David Schwartz @Bigwheels It doesn't matter how much RAM you have. That's like saying you don't have to worry about money because you have lots of checks in your checkbook. If that RAM is committed, even if unused, the system will have to refuse subsequent allocations. (Consider a program that makes an unsharable, modifiable mapping of a 1GB file. Even if no RAM is used by that mapping, if there's no paging file, 1GB of RAM will be restricted to use for only clean, discardable pages.)
Feb 20, 2015 at 8:36 comment added Jason Wheeler This should not be the accepted answer. The conclusion is incorrect since the answer doesn't take into account the OP's postulation that he has "tons of RAM", which I take to mean that he has much more RAM than he's using. In such a scenario, 1) the size of the disk cache is not affected, and 2) "wasting" RAM due to allocations that will never be read from is also a non-issue.
Oct 22, 2014 at 8:35 vote accept user1306322
Oct 8, 2014 at 8:00 audit First posts (completed Oct 8, 2014 at 8:01)
Oct 2, 2014 at 16:30 audit First posts (completed Oct 2, 2014 at 16:30)
Oct 1, 2014 at 7:16 audit First posts (completed Oct 1, 2014 at 7:16)
Sep 18, 2014 at 23:59 audit First posts (completed Sep 19, 2014 at 0:01)
Sep 17, 2014 at 0:18 audit First posts (completed Sep 17, 2014 at 0:19)
Sep 15, 2014 at 20:48 audit First posts (completed Sep 15, 2014 at 20:49)
Sep 14, 2014 at 11:35 comment added Victor Zakharov @DanNeely: +1. One of the Warhammer games does it too. :)
Sep 12, 2014 at 21:06 comment added Dan Is Fiddling By Firelight I suspect page file in a ramdrive started out as a cargo cult "workaround" for the fact that some software will refuse to start if it detects there isn't a page file. (I've been told Adobe's graphics/video tools do this.)
Sep 12, 2014 at 16:27 comment added spudone I was contradicting the downvote. But to your last comment: I run 2 different machines w/o swap. It's perfectly fine if you know exactly how the machine is going to be used.
Sep 12, 2014 at 15:51 comment added David Schwartz @viraptor If that was correct (and I'm not 100% certain how Windows behaves in this case) it would make my argument even stronger -- not having a paging file could cause applications to needlessly crash, again, because the OS couldn't use the RAM it had efficiently.
Sep 12, 2014 at 15:11 comment added viraptor Does Windows really always preallocate backing for writable private mmapped files? On Linux I don't think anyone would do that with huge files - MAP_NORESERVE and MAP_POPULATE are options that give you some control over the reservations and early population of pages, and they also allow you to mmap files bigger than physical memory + swap. Sure, you can run into situations where you SIGSEGV on access, but that's a known trade-off.
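(Roughly what that looks like on Linux, with a hypothetical file name; only MAP_NORESERVE is exercised here, and MAP_POPULATE is just noted in a comment.)

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* "bigfile.bin" stands in for a file larger than RAM + swap. */
    int fd = open("bigfile.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

    /* MAP_PRIVATE + PROT_WRITE would normally reserve swap for the whole
       copy-on-write mapping up front; MAP_NORESERVE skips that reservation,
       so the mmap succeeds even if the file exceeds RAM + swap. The
       trade-off: dirtying too many pages later can get the process killed
       or delivered SIGSEGV/SIGBUS. MAP_POPULATE, by contrast, pre-faults
       the pages eagerly at mmap time. */
    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* ... pages are faulted in (and copied on write) on demand ... */

    munmap(p, (size_t)st.st_size);
    close(fd);
    return 0;
}
```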
Sep 12, 2014 at 10:25 comment added vgru @Eliah: yes, but that is correct: there is no point in doing that. Which is the same thing written by Russinovich in that link by spudone: "putting the pagefile on a RAM disk is ridiculous". That's why I don't understand how the comment is relevant at all (being what seems to be a possible explanation to David's question about the downvote).
Sep 12, 2014 at 10:21 comment added user105707 @Groo Last paragraph: "There's no point in trying to put a paging file in RAM. ..."
Sep 12, 2014 at 10:19 comment added vgru @spudone: Where did David mention a RAM disk?
Sep 12, 2014 at 4:37 comment added Mattisdada David Schwartz is absolutely correct. If you are really concerned about the possibility of paging to disk slowing down your system, you can buy a 64GB SSD for pretty much nothing ($40AUD) and use that as the paging/swap disk. Also, reserving RAM for the possibility of needing more of it later has other performance implications. Windows (and other OSes) will put commonly used items into the memory cache, which can be instantly discarded when the RAM is required, but which can be accessed much faster than the disk when it's needed.
Sep 11, 2014 at 17:28 comment added spudone Wasn't me, but read this: overclock.net/t/1193401/… (specifically, the part about the commit limit, and the email comment from Mark Russinovich, who is an expert on the topic).
Sep 11, 2014 at 16:27 history edited David Schwartz CC BY-SA 3.0
added 5 characters in body
Sep 11, 2014 at 15:43 history answered David Schwartz CC BY-SA 3.0