This article correctly states that committed memory is that in use + memory that's being paged out. Now why would you want to know the committed memory over the actual physical RAM in use?
I can trivially create an app that memory-maps a massive file and shows several GB of committed memory. That memory won't actually be in use, of course; memory-mapping files so that the OS will page them in and out as required is intentional. Those GB of committed memory aren't something you should care about. I'd be scared if someone looked at the committed memory use of a program that correctly uses mmap and exclaimed, "OMG, this uses TB of RAM!".
Task Manager is doing the right thing here. It's showing you what's actually paged in and in use right now.
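A minimal sketch of that scenario in Python, using a sparse temp file so it runs anywhere (the 256 MiB size is arbitrary):

```python
import mmap
import os
import tempfile

# Create a large *sparse* file: seek past the end and write one byte.
# Most filesystems allocate almost no disk blocks for the hole.
SIZE = 256 * 1024 * 1024  # 256 MiB
fd, path = tempfile.mkstemp()
os.lseek(fd, SIZE - 1, os.SEEK_SET)
os.write(fd, b"\0")

# Map the whole file. The process's virtual size jumps by 256 MiB
# immediately, but no physical RAM is used for the data yet.
view = mmap.mmap(fd, SIZE, access=mmap.ACCESS_READ)
print(len(view))   # 268435456 bytes of address space claimed
first = view[0:4]  # touching this page faults in ~4 KiB, not 256 MiB
print(first)

view.close()
os.close(fd)
os.remove(path)
```

Whether a mapping like this is billed as commit charge depends on its protection (a read-only view is cheap), which is exactly the point of contention further down the thread.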
> Now why would you want to know the committed memory over the actual physical RAM in use?
Because in Windows, committed size is accounted against physical memory plus the pagefile. You can commit a lot more than RAM, but watch your pagefile grow.
malloc() can fail on Windows for this reason. This is not the same on Linux or any of the BSDs I've tried. :)
I experienced/discovered this last August. Sometimes understanding a lot about Linux can make you blind to the architectural differences Windows has.
You can choose an overcommit policy on Linux, but most library developers on Linux have chosen the default and regularly allocate wide swaths they don't intend to use.
This is a real pain when moving an application from FreeBSD to Linux, as effective limits on memory are lost (a ulimit set at ~90% of RAM results in a malloc failure and a clean crashdump, rather than death by thrashing or an untrappable OOM kill).
There could maybe be a middle ground where malloc would allocate large chunks of address space for ease of administration, and then ask the OS to commit those pages in smaller chunks as needed. Often there's not a lot you can do when allocation fails, but it's far more actionable if the failure is returned from a syscall than if it surfaces when you write to an unbacked page, which could happen basically anywhere in your program.
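That middle ground can be sketched with the POSIX analogue of the pattern: mmap with PROT_NONE to reserve address space, then mprotect to "commit" pieces of it (on Windows this would be VirtualAlloc with MEM_RESERVE and MEM_COMMIT). Note that under Linux's default overcommit the "commit" step is itself optimistic unless strict accounting is enabled:

```python
import ctypes
import mmap

libc = ctypes.CDLL(None, use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]

PROT_NONE = 0
SIZE = 64 * 1024 * 1024   # reserve 64 MiB of address space
CHUNK = 1 * 1024 * 1024   # commit 1 MiB of it for now

# "Reserve": map a large range with no access rights. This claims
# address space but asks for no backing memory yet.
addr = libc.mmap(None, SIZE, PROT_NONE,
                 mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS, -1, 0)
assert addr not in (None, ctypes.c_void_p(-1).value), "mmap failed"

# "Commit" the first chunk on demand. The key property: failure is a
# syscall return code you can handle right here, not a fault at some
# arbitrary write elsewhere in the program.
rc = libc.mprotect(addr, CHUNK, mmap.PROT_READ | mmap.PROT_WRITE)
assert rc == 0, "mprotect failed"

ctypes.memset(addr, 0xAB, CHUNK)  # safe: this range is now accessible
print("committed", CHUNK, "of", SIZE, "reserved bytes")
```

This is roughly the pattern quotemstr describes below as common on Windows: reserve big, commit a little at a time.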
This doesn't occur if you memory map a file though (barring certain flags that you can set as stated by quotemstr below)
You can legitimately have Windows reporting many GBs of committed memory without actually using that RAM, and without touching the system's pagefile/swap. It's also common for this to occur: pretty much every program capable of opening large files (GB+) in a non-sequential fashion does this.
Hm, maybe I wasn't clear. It's not actually /using/ the memory when it's committed.
But the sum of your committed memory across all applications must exist in some form on the host system.
For example, it's a common performance optimisation in C++ to double the amount of allocated space whenever you grow a container, because malloc() and zeroing are kinda slow. What this means is that you're not actually using all the space yet.
So, say you have 128MB of RAM available as your program's address space and you just doubled your array from 75MB to 150MB. That extra 22MB beyond physical RAM must exist somewhere, even though you're only using 76MB and the OS shows the rest as free (which it will).
Them's the rules, and I promise you I have thoroughly tested this, as it was causing a really nice crash on my servers even though we had more than 50% of memory "free".
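The doubling growth strategy described above, as a toy sketch (the class name is made up; `std::vector` and friends do essentially this internally):

```python
class GrowableBuffer:
    """Toy dynamic array that doubles its allocation when full."""

    def __init__(self):
        self.capacity = 1  # bytes allocated ("committed")
        self.length = 0    # bytes actually in use
        self.data = bytearray(self.capacity)

    def append(self, byte):
        if self.length == self.capacity:
            # Double the allocation: the new half is memory the program
            # has asked for but not yet written to.
            self.capacity *= 2
            grown = bytearray(self.capacity)
            grown[:self.length] = self.data
            self.data = grown
        self.data[self.length] = byte
        self.length += 1

buf = GrowableBuffer()
for b in range(100):
    buf.append(b)
print(buf.length, buf.capacity)  # 100 bytes in use, 128 allocated
```

The gap between `length` and `capacity` is exactly the committed-but-unused memory described above; scale it to hundreds of MB and it's the difference that crashed those servers.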
Memory-mapped files do exist in some form or another, though: as the file itself. That's the point of memory-mapping files. You can, right now, memory map every file on your computer. That's TBs of files. There will be no physical RAM usage and no swap file usage unless you start to actually work with those files (at which point they will be paged in). This will show as 'committed memory' in Task Manager.
Your example above isn't memory mapping files. It's just allocating RAM. That does have to exist in RAM or the swap file. But that's not what 'committed memory' above shows. Which is the whole point. The column the article is telling people to use is misleading.
I had a problem on my windows 10 pc for a long time, where I would clearly be running out of ram, task manager would show 100% usage, everything getting slow. However if I added up all of the processes using ram, it was nowhere close.
So something was using up ram that task manager gave me no visibility into. I had to download obscure tools and do some guesswork to figure it out. It would have been really nice if Task Manager could just report it in the first place (it turned out to be a network card driver with a memory leak in it)
Yep, this is particularly evident if you use VirtualBox a lot - the memory use isn't in the actual process but in one of the drivers. It has also happened to me with VPN drivers.
Best tool I've seen to start finding where it's going is RAMmap [1].
> committed memory is that in use + memory that's being paged out
That's not actually true, though. You can see an increase in a process's commit charge without anything new being written to the pagefile. Commit is a check that the kernel writes to applications; you're mistaking that check for cash in a wallet. You can also have commit without any virtual address space to blame for it, through section handles or other tricks. It's complicated.
I'm not going to dispute that but just want to highlight it doesn't change the fact that committed RAM can show as extremely high just by working with memory mapped files.
Memory-mapping a GB log file, for example, will absolutely show GBs of committed memory, but in reality you'll only have the last page in actual physical RAM.
Right. When you memory-map a file, what you've essentially done is add a temporary new pagefile to the system (your mapped file), and when you work with memory backed by that file, it's no different from working with "anonymous" memory backed by the system-wide pagefile.
Aside: I have to say this is a genius perspective on mmap, and I now finally understand why the same syscall is used for both. I never understood the link.
I understand commit can happen without physical page file being written to. But you say commit can happen even without any increase in virtual address space usage? That seems strange, could you explain how it can happen?
Change a PAGE_READONLY mapping to a PAGE_WRITECOPY one. The commit charge is billed at VirtualProtect time, and the protection can fail if you run out of commit. The kernel doesn't have to commit anything for a PAGE_READONLY mapping because all pages in such a mapping are guaranteed to be clean and trivially evictable. Not so once you introduce the possibility of COW faults. Reserving a big range of address space and committing it a little bit at a time is a very common pattern.
Another thing: create a 1GB section. Map it, and fill it up. Unmap it. Map it again. What you wrote is still there. Between the map and remap, you have commit without corresponding address space.
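A rough POSIX analogue of that experiment, using Python's `multiprocessing.shared_memory` (on Windows the section would come from a pagefile-backed `CreateFileMapping`; the 4 KiB size here is arbitrary):

```python
from multiprocessing import shared_memory

# Create a section and map a view of it.
shm = shared_memory.SharedMemory(create=True, size=4096)
shm.buf[:5] = b"hello"
name = shm.name

# Unmap the view. The section object itself lives on, so its contents
# (and the memory backing them) survive with no address space in this
# process pointing at them.
shm.close()

# Map it again: what we wrote is still there.
again = shared_memory.SharedMemory(name=name)
print(bytes(again.buf[:5]))  # b'hello'

again.close()
again.unlink()  # finally destroy the section
```

Between the `close()` and the reattach, the data has to be backed by something, which is the "commit without corresponding address space" being described.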
For versions of Windows before 10, I'd agree with this 100%. Windows 10 (or maybe 8-8.1?) pulled in a lot of functionality from Process Explorer and is much improved.
For developers, Process Explorer (and ProcMon and a few other utils) is likely an improvement, but frankly, if you're doing Windows development you should already have learned about them, and probably some of Nir Sofer's tools as well (nirsoft.net). 90% of people (even developers) probably don't need what Process Explorer provides.
Side note, in Process Explorer if you turn on the lower pane (View menu or Ctrl-L) you can view all handles that a process has open, including file handles. That can be useful for identifying unrecognized processes.
This article gives a high-level overview of how virtual memory works without ever mentioning it by name… I find the terminology use rather strange too. Saying that the virtual address space is "reserved by the OS for each process" was a bit confusing to me.
The article confuses address space and memory reservations, as you note. It repeats that confusion when it suggests the virtual size column in Process Explorer is a reflection of the address space[1]. It also suggests that working set, private bytes, and committed memory are the same thing[2]. The article does not even live up to its click-baity title: Task Manager's default memory column of "private working set" is a decent measure of how much memory a process is uniquely using, and Task Manager can add all the other measures of memory usage that the article mentions.
For a better explanation of virtual memory in Windows, I recommend Mark Russinovich's article[3]. His tool VMMap[4] is useful for visualizing the memory usage of an individual process.
[2]: Task Manager and Process Explorer add to the confusion by calling the same memory different things (Process Explorer's "private bytes" number is the same as Task Manager's "commit size" number on Windows 10 1809).
The bigger problem is that there's no good number that captures the memory impact of a process on modern unified-memory page-cache-ful architectures. Practically nobody gets this right, and the author of the article himself glosses over some important details. Every byte of address space is either allocated and backed by some memory object ("reserved" memory) or it's unallocated. Commit charge is a measure of the amount of memory that the system has guaranteed will be available in the worst case, should all faultable address ranges be faulted, but that's not the same thing as memory actually being used. For example, if you MapViewOfFile a 1GB file PAGE_READONLY, you burn very little commit --- just enough for the page metadata --- but if you change page protections to PAGE_WRITECOPY, then you incur an extra 1GB of commit charge, even though you still haven't faulted anything into memory or reserved more address space --- and that's because you could legally COW-fault every page in that file, and the kernel has to commit (thus the word) to providing that memory in the worst case.
I work on Linux these days, which is even more confusing, because thanks to overcommit, most people don't distinguish these different kinds of memory allocation, even though the distinction between commit and reserved memory exists on Linux too. (The kernel just lies about satisfying commit charges unless you tell it not to lie to you. Most people are happy with overcommit's optimism.)
Anyway, the key thing to realize about modern virtual memory subsystems is that "memory consumption" is an incoherent concept. You can derive lots of different numbers from memory management statistics, but each of these numbers is useful for a specific purpose. There is no one number that will give you an accurate measure of the impact of a particular process in all scenarios. People constantly say, "Look: just give me a number that I can plot on a dashboard and drive down over time". No such thing exists.
Task manager has to pick one of these numbers to show users by default, and its choice, roughly equivalent to Linux Private_Dirty, probably isn't terrible, since it's a decent proxy for how much RAM you get back if you kill the process. I don't think total commit charge is as good a choice, since with a large pagefile (which everyone should have) total commit can be much larger than total resident memory. Linux PSS is another popular choice, since (unlike Private_Dirty) it reflects the impact of a program's use of shared memory, but PSS behaves in perverse ways --- e.g., starting an instance of memory-hungry process can make PSS decrease because some pages in this program are distributed across more processes, increasing the denominator in the PSS calculation.
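The perverse PSS behaviour is plain arithmetic (the numbers here are invented for illustration):

```python
def pss_kb(private_kb, shared_kb, sharers):
    """Proportional Set Size: private pages plus an even share of shared pages."""
    return private_kb + shared_kb / sharers

# One instance of a process with 20 MB private and 100 MB of shared mappings:
print(pss_kb(20_000, 100_000, 1))  # 120000.0

# Start a second instance sharing those same 100 MB: each instance's PSS
# *drops* to 70 MB, even though total system memory use went up by the
# second instance's private pages.
print(pss_kb(20_000, 100_000, 2))  # 70000.0
```

So a dashboard plotting per-process PSS can improve while the machine as a whole gets more loaded.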
Are you worried about running out of page file space? Yes, you want to look at commit. Are you wondering why you're seeing a large number of page faults starting a game? Commit won't help you, but RSS might. It really depends on the situation.
I wouldn't take the advice in the article at face value. If you want to understand the impact a particular program has on the system's memory behavior, you need to understand how the virtual memory system actually behaves, and that's non-trivial.
Drepper's "What Every Programmer Should Know About Memory" is a little old and goes into perhaps unnecessary detail at times, but it is a great place to start.
Does anyone understand the story behind Process Explorer? For 10 years it's been an invaluable tool. But despite being developed by Microsoft you have to install it like third party software. Doesn't even come with an installer! How come it never got integrated into the main OS?
It originally started out as something from NTInternals. They were building small utils with capabilities similar to what they had become used to in their Unix background. Then they focused on all the low level tools which led to MS buying them.
Not having an installer is actually a bonus. Far too many "simple" Windows programs seem to need gigabytes of DLLs installed to the Windows folder and spray themselves all over the system; C:\Windows bloats massively after a couple of years of active use. An exe I can keep in a folder, or drop into the path, and simply delete if it's no longer useful.
Well, an installer doesn't mean it has to install something bloated. Make a folder for it in C:\Program Files\, copy over the .exe and .chm, done. I can do this manually easily enough; I just think it's notable that no one ever did it for this tool.
Depending on whom you ask, that is actually a good thing. There's already enough software which is completely portable by nature, i.e. just a directory or even a single executable you can put anywhere, but which still gets shipped as an installer only. And depending on who wrote that installer, it might or might not clean up its artefacts again.
Apart from that: just like many, many other software it's easy to acquire via PowerShell's package management in which case it will even be in your PATH. If you've got everything setup it's Install-Package sysinternals. You might need Install-PackageProvider ChocolateyGet and Import-PackageProvider ChocolateyGet before that.
> How come it never got integrated into the main OS?
I think the story goes something like 'Mark Russinovich created sysinternals, it was awesome enough for MS to embrace it, but deemed too technical and too much for power-users to be part of Windows'. A logic which is understandable in a way. Also if you opt for procexp to replace Task Manager it actually is integrated in the OS.
Nah, if something hangs I use the key combination that starts up Task Manager. Not Perfmon. There I instantly see what the problem is, kill it (or use a temporary pause), and go back to work.
This is the most common case where I look into memory, and it will probably be the case for most Windows users who know about Task Manager. There's no reason for them to "not use Task Manager" anymore.
I hate these generalizing clickbait headlines... they should at least come up with justification to match.
I am surprised by how little thought MS has always put into Task Manager compared to every other process manager for Windows, including MS's own Process Explorer.
Even on Windows Server 2016, it doesn't show the per-process memory correctly for processes using over 128 GB.
Had to learn that the hard way when overall RAM usage was pretty high but I couldn't find any individual process using that much. Then I opened Process Explorer and boom, SQL Server was using over 130 GB (cubes..).
One small tip to whoever designs these apps: count memory in MiB. That's way more intuitive; apps that consume less than a MiB are extremely rare.