More on Virtual Memory, Memory Fragmentation and Leaks, and WOW64
I found myself writing a very long response to a recent comment to this blog entry, so I decided to make it into a blog entry and link the two.
Question:
Thanks for the info. Sorry I was not clear.
I am recycling the memory based upon the virtual memory size. I presumed that virtual memory means memory stored on disk in the page file. Is this not the case?
There is still physical RAM free on the machine when I see the out of memory problems, leading me to suspect either a memory leak or memory fragmentation. I see the virtual bytes counter rise quickly to 1500 MB (2-3 times a day), whereas the physical RAM used by the w3wp process is only a few hundred MB with plenty of RAM left to spare.
I'm not sure why the virtual memory grows so high when there is enough physical RAM. I have tried using DebugDiag, but the report generated is too high-level for me to understand without more documentation. An example from the DebugDiag report on the w3wp process when it had high virtual memory usage:
Virtual Memory Summary
Size of largest free VM block 75.21 MBytes
Free memory fragmentation 88.13%
Free Memory 633.53 MBytes (30.93% of Total Memory)
Reserved Memory 1.03 GBytes (51.29% of Total Memory)
Committed Memory 364.06 MBytes (17.78% of Total Memory)
Total Memory 2.00 GBytes
Largest free block at 0x00000000`685ba000
This is the heap I identified as using the most memory
Heap Name
msvcrt!_crtheap
Heap Description
This heap is used by msvcrt
Reserved memory 868.25 MBytes
Committed memory 89.16 MBytes (10.27% of reserved)
Uncommitted memory 779.10 MBytes (89.73% of reserved)
Number of heap segments 12 segments
Number of uncommitted ranges 8207 range(s)
Size of largest uncommitted range 32.20 MBytes
Calculated heap fragmentation 95.87%
Top Allocation - Allocation Size 1048: 29.42 MBytes in 29435 allocation(s)
There are 12 heap segments, nearly all with extremely high fragmentation, and they have a lot of reserved memory. The committed size in each segment is very low, so I'm unsure why that is the case and where I go from here to debug the application further.
This has been driving me nuts for months now and there isn't anyone I know to ask other than you experts who write this stuff...
Going back to your article, you say that I should run the ASP on IIS in a 64-bit process and that it will automatically create a separate DLL host for all the 32-bit COM objects.
Do I need to set the flag Enable32BitAppOnWin64 for this to work?
How do I register the 32-bit components? Is it a case of simply using regsvr32?
Answer:
Ah, I see where things are going awry...
About Virtual Memory
Virtual Memory is not the same as memory stored on disk in the page file.
Virtual Memory is a concept of indirection introduced in memory addressing whereby applications see a contiguous "virtual memory" address space, but the OS transparently backs that address space with EITHER physical memory (i.e. RAM) or something else like disk (i.e. the pagefile).
From the application's perspective, it is simply using "memory". However, the OS knows whether that "memory" is real physical memory or has to be swapped in from the pagefile on disk into real memory. It is this swapping operation which makes virtual memory that is in the page file slow.
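To make the distinction concrete, here is a minimal sketch in C (just an illustration, nothing IIS-specific) that asks Windows for the physical memory counters of the machine alongside the virtual address space counters of the calling process, via GlobalMemoryStatusEx:
#include <windows.h>
#include <stdio.h>

/* Minimal sketch: system-wide physical RAM vs. this process's
   virtual address space, as reported by GlobalMemoryStatusEx. */
int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);

    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    printf("Physical RAM:          %I64u MB total, %I64u MB free\n",
           ms.ullTotalPhys / (1024 * 1024), ms.ullAvailPhys / (1024 * 1024));
    printf("Virtual address space: %I64u MB total, %I64u MB free\n",
           ms.ullTotalVirtual / (1024 * 1024), ms.ullAvailVirtual / (1024 * 1024));
    return 0;
}
On a 32bit process you will typically see a 2 GB virtual address space regardless of how much RAM the machine has - which is exactly why the two numbers must not be confused.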
If you want more information on the subject, I suggest reading books like "Inside Windows 2000" or "Microsoft Windows Internals" by Solomon and Russinovich, or classic books like "Computer Architecture: A Quantitative Approach". I know that it's probably way more information and detail than you are asking for, but those are the authoritative sources that I go with...
Memory Fragmentation vs Memory Leak
The classic sign of Memory Fragmentation is when you have lots of physical RAM free and low committed memory, yet you see Out of Memory errors.
How do you distinguish this from a Memory Leak? Well, a memory leak would consume your physical RAM such that very little is free, virtual memory would be maxed out, and you would see Out of Memory errors. When you see the error depends on when your virtual memory is depleted - a pagefile simply delays the inevitable and eventually grinds the system to a halt.
Your issue looks like memory fragmentation.
About Memory Fragmentation
Memory fragmentation is really an issue caused by user applications that are not designed for long-term usage, such as on a server. The best way to fix the issue is to architect the application correctly with regards to memory allocation.
When applications just malloc/free memory all the time, over time this will naturally fragment memory. How does this happen? Well, assume the application does the following sequence of memory operations (pardon the ASCII art attempt to visualize what's going on in the virtual memory address space...):
- Allocates 4K of memory |--4K--|
- Allocates 4K of memory |--4K--|--4K--|
- Deallocates the first 4K of memory |--??--|--4K--|
- Allocates 8K of memory |--??--|--4K--|--8K--|
Since each allocation must be contiguous in the address space, that 8K allocation CANNOT use any of the 4K of deallocated (??) memory. That 4K is technically "free memory", but if the application needs >4K of memory, it is not usable. Now, imagine this happening all over the virtual memory address space to the point that no big chunk of CONTIGUOUS memory remains, and then you ask for a big block of memory... this is when you get Out of Memory even though enough fragmented free memory exists.
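Here is a tiny sketch of that same sequence in C (just an illustration; the exact addresses depend on the heap implementation). Printing the pointers shows that the 8K block cannot land in the 4K hole left behind by the first free:
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch of the allocation sequence above. The 8K block cannot
   reuse the freed 4K hole, so the heap carves it out of other address
   space instead, leaving the hole behind. */
int main(void)
{
    char *a = malloc(4 * 1024);   /* |--4K--|               */
    char *b = malloc(4 * 1024);   /* |--4K--|--4K--|        */
    printf("a = %p\nb = %p\n", (void *)a, (void *)b);

    free(a);                      /* |--??--|--4K--|        */

    char *c = malloc(8 * 1024);   /* |--??--|--4K--|--8K--| */
    printf("c = %p  (not placed in the 4K hole left by a)\n", (void *)c);

    free(b);
    free(c);
    return 0;
}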
Here's a simplistic "worst case" example: assume you manage to allocate the first byte of every virtual memory page (let's say the page size is 4096 bytes) and the virtual memory address space is 4GB. Only about 1MB of data is actually in use, yet the entire virtual address space is consumed... and you get Out of Memory on the next allocation. In other words, over 99.99% of memory is "free" yet you get "Out of Memory". Crazy, right? But that's memory fragmentation for you...
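You can measure this in your own process. The following is a minimal sketch (not what DebugDiag actually does internally) that walks the address space with VirtualQuery and totals up free, reserved, and committed regions plus the largest free block - the same kind of numbers shown in the DebugDiag summary above:
#include <windows.h>
#include <stdio.h>

/* Minimal sketch: walk the process's virtual address space with
   VirtualQuery and total up the regions by state. */
int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;
    SIZE_T freeTotal = 0, reservedTotal = 0, committedTotal = 0, largestFree = 0;

    while (VirtualQuery(addr, &mbi, sizeof(mbi)) != 0)
    {
        if (mbi.State == MEM_FREE)
        {
            freeTotal += mbi.RegionSize;
            if (mbi.RegionSize > largestFree)
                largestFree = mbi.RegionSize;
        }
        else if (mbi.State == MEM_RESERVE)
            reservedTotal += mbi.RegionSize;
        else /* MEM_COMMIT */
            committedTotal += mbi.RegionSize;

        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }

    printf("Free:      %Iu MB (largest block %Iu MB)\n",
           freeTotal >> 20, largestFree >> 20);
    printf("Reserved:  %Iu MB\n", reservedTotal >> 20);
    printf("Committed: %Iu MB\n", committedTotal >> 20);
    return 0;
}
If total free memory is large but the largest free block is small, that is the signature of address space fragmentation.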
Now, software designed for server-side usage pre-creates and caches memory blocks for runtime usage to avoid hitting the heap and eventually causing memory fragmentation. For example, IIS is architected this way; it does not call malloc/free at runtime to handle requests... though we cannot control what user applications do...
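For illustration, here is a minimal sketch of that pre-allocate-and-reuse pattern (it is not IIS's actual implementation): one slab is allocated up front, carved into fixed-size blocks, and recycled through a free list, so steady-state request handling never touches the process heap:
#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE  4096
#define BLOCK_COUNT 256

typedef struct Block { struct Block *next; } Block;

static char  *g_arena;      /* one contiguous slab allocated at startup */
static Block *g_freeList;   /* singly linked list of available blocks   */

static void pool_init(void)
{
    g_arena = malloc((size_t)BLOCK_SIZE * BLOCK_COUNT);
    if (!g_arena) return;   /* out of memory: pool stays empty */
    for (int i = 0; i < BLOCK_COUNT; i++) {
        Block *b = (Block *)(g_arena + (size_t)i * BLOCK_SIZE);
        b->next = g_freeList;
        g_freeList = b;
    }
}

static void *pool_alloc(void)
{
    if (!g_freeList) return NULL;       /* pool exhausted */
    Block *b = g_freeList;
    g_freeList = b->next;
    return b;
}

static void pool_free(void *p)
{
    Block *b = p;
    b->next = g_freeList;
    g_freeList = b;
}

int main(void)
{
    pool_init();
    void *buf = pool_alloc();           /* reused block, no heap call */
    printf("got block at %p\n", buf);
    pool_free(buf);
    free(g_arena);
    return 0;
}
Because the blocks all come from one up-front reservation and are recycled, the address space layout stays stable no matter how many "requests" are handled.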
On Running 32bit COM objects on 64bit Windows
You do not need to change Enable32BitAppOnWin64 to run 32bit COM components. That property just makes IIS launch a 32bit w3wp.exe to execute user code. A 64bit process can use 32bit COM components just fine... just outside of its process instead of in-process, and this happens transparently. Of course, this automatic alteration may or may not work, depending on how your COM component functions.
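As an illustration of what that out-of-process activation looks like from the client side, here is a minimal sketch (the ProgID "MyVendor.MyComponent" is hypothetical, and the 32bit component must be registered for surrogate/out-of-process activation for this to work): the 64bit client asks for a local-server activation, COM hosts the 32bit component in a separate 32bit surrogate process, and the calls are marshaled across.
#define COBJMACROS
#include <windows.h>
#include <objbase.h>
#include <stdio.h>
/* link with ole32.lib and uuid.lib */

int main(void)
{
    HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);
    if (FAILED(hr)) return 1;

    CLSID clsid;
    hr = CLSIDFromProgID(L"MyVendor.MyComponent", &clsid);  /* hypothetical ProgID */
    if (SUCCEEDED(hr))
    {
        IUnknown *pUnk = NULL;
        /* CLSCTX_LOCAL_SERVER: activate out-of-process rather than in-process */
        hr = CoCreateInstance(&clsid, NULL, CLSCTX_LOCAL_SERVER,
                              &IID_IUnknown, (void **)&pUnk);
        printf("CoCreateInstance returned 0x%08lx\n", (unsigned long)hr);
        if (SUCCEEDED(hr))
            IUnknown_Release(pUnk);
    }

    CoUninitialize();
    return 0;
}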
Also, it is easiest to just use the 64bit version of REGSVR32.EXE (the default one on 64bit Windows). Please do not accidentally run a 32bit command console window and use the 32bit version of REGSVR32. REGSVR32 tries to put the ProgId into the necessary places depending on the bitness of the DLL, such that 32bit processes load 32bit DLLs and 64bit processes load 64bit DLLs.
Of course, whether this works or not depends on if the 32bit COM component actually works/registers correctly on 64bit Windows, and that is best determined by the people supporting that component.
//David
Comments
Anonymous
February 16, 2006
Hello David!
Although I know the concept of virtual memory, I'm not familiar with the technical details at all.
So I ask you: in your example, why is it impossible for the OS to split the new 8k request into two 4k virtual pages located somewhere in physical memory? The app would see a contiguous space of 8k but the CPU translates it. After the first 4k it should be possible to load just a new page descriptor to access the second 4k.
Maybe there would be a performance hit, but IMHO it's better to get slower mem than no mem.
Of course, the "normal" behaviour of the OS memory manager should be to return contiguous pages on the first try.
Can you tell us, why it is not implemented in this way? Some technical reasons?
Andreas
Anonymous
February 17, 2006
Andreas - We are talking about fragmentation of the Virtual Memory address space.
What you are describing is what a "Virtual" Virtual Memory address scheme would look like. It is a possible solution, but not the only one, and it is not without its drawbacks. The "FAT32" style of linking entries cannot be friendly to filling CPU cachelines, which is all-important in modern computing...
I mean, I'm not certain "slower mem" is better than no mem. I'm sure you love running XP with 128MB RAM and 1GB of PageFile on a primary LOB application that only uses 32MB but gets swapped in/out all the time. ;-) If it was me, I'd rather it fail up front and tell me "get more memory!" instead of struggling along.
Now, we are just talking about software - anything is possible. Microsoft can and does rewrite the Memory Manager as necessary, provide low-fragmentation heap implementations as default, etc, as "user scenarios" evolve over time. So, the question is only a matter of "when it gets addressed" and not "why it is not implemented in this way".
I think the reason "why it is not implemented in this way" is because humans are not clairvoyant.
//David
Anonymous
February 20, 2006
Oh, sorry, now I understand! You mean fragmentation in the virtual address space, and I thought you were talking about the physical memory.
IMHO everyone who was able to find your blog should already know about the problems of dynamically allocated space. There are many approaches for better memory managers and garbage collectors out there, but AFAIK all of them can only reduce the problems.
But I renew my question: Do you know if the Windows memory manager is able to split a request for, say, 8k into two 4k pages resident somewhere in physical RAM?
And I really think that a cache miss is better than an "Out of Memory" error when your physical RAM becomes fragmented.
Andreas
Anonymous
February 21, 2006
Andreas - I have no idea whether the Windows Memory Manager does what you are asking, but I am not going to find out. If it is public knowledge, it would already be in books and locatable; if it is not yet public knowledge, then I certainly cannot release it.
Personally:
1. I think that a program which causes memory fragmentation is the real problem. It is clearly not designed for long-term usage if it constantly hits the heap in this manner.
2. I do not believe the OS should go out of its way to automatically "fix" or compensate for bad user program design, especially if that same user program cannot deal with "out of memory" errors gracefully. Of course, others in Microsoft believe in the opposite... to have the OS try to compensate and fix as many such things automatically as possible.
3. I believe in keeping things simple. Fail Early and Fail Fast. It may make the user programming a little more laborious, but it is very deterministic and rewards good design.
//David
Anonymous
February 22, 2006
The Microsoft.com OPS guys have blogged about their migration to x64 and how it all works. Check it out...
http://blogs.technet.com/mscom/archive/2005/09/26/411568.aspx
//David
Anonymous
February 27, 2006
I ran into a similar problem, and did some testing using the IIS Debug Diag tool to see how virtual memory is being used by w3wp.exe.
The first thing I wanted to understand is how much virtual memory w3wp.exe allocates for just a simple hello-world aspx page, and the memory analysis report showed something interesting:
(edited to make it easier to read)
1. Virtual Memory Summary
Size of largest free VM block 1.04 GBytes
Free memory fragmentation 40.84%
Free Memory 1.76 GBytes (88.22%)
Reserved Memory 178.95 MBytes (8.74%)
Committed Memory 62.27 MBytes (3.04%)
Total Memory 2.00 GBytes
2. Virtual Allocation Summary
Reserved memory 158.76 MBytes
Committed memory 10.98 MBytes
Mapped memory 5.84 MBytes
Reserved block count 41 blocks
Committed block count 87 blocks
Mapped block count 33 blocks
3. Loaded Module Summary
Number of Modules 78 Modules
Total reserved memory 248.00 KBytes
Total committed memory 46.89 MBytes
Module Name Size
shell32 8.01MB
mscorlib_79990000 3.24MB
mscorlib 2.05MB
...
system_windows_forms 1.98MB
...
Question 1:
Even for such a simple hello-world aspx page, w3wp.exe will allocate about 252MB of virtual memory. What really beats me is this:
2. Virtual Allocation Summary
Reserved memory 158.76 MBytes
Why does w3wp.exe need to reserve (but not commit) over 158 MB of vmem upfront? Even after running heavy load testing on the website, this figure (reserved but uncommitted) is still well over 100 MB.
Question 2:
Why are there 2 mscorlib modules (of different sizes, the first one loaded at a different base address)?
Question 3:
Why does w3wp.exe need to load system.windows.forms?
Granted, it isn't using much vmem (just under 2MB), but if it's not really used, it would still be good to not load it.
Anonymous
February 27, 2006
William - actually, your question is more about the "hello world" footprint of ASP.Net... which does not have anything to do with IIS nor w3wp.exe (just a subtle FYI detail in the spirit of learning)...
w3wp.exe is simply an empty host process provided by IIS for other code (like ASP.Net) to run in. In the case of hello-world ASPX page, you are really talking about the memory footprint of ASP.Net as it loads up relevant parts of the .Net Framework, various memory pre-commitments, GC setup, etc...
I will check with the ASP.Net performance developer on some of your questions because the "Hello world ASPX" memory footprint is a standard perf test scenario.
What ASP.Net version are you talking about?
Though, I see no cause for alarm in your observations since it looks "normal" to me:
- Reserving virtual memory is nothing to worry about, as your heavy load testing has shown. It is the committed values that matter
- since w3wp.exe does not load System.Windows.Forms, it must have come in through some assembly dependency originating from System.Web (ildasm can quickly identify the cause)
- mscorlib relocation... are you using NGen?
//David
Anonymous
February 28, 2006
William - the ASP.Net performance developer tells me that:
- The Reserved virtual memory is nothing to worry about. You can view it as performance/caching prerequisite of the CLR. And heavy load testing shows that it is nothing to worry about.
- System.Windows.Forms - It's not pulled in by an empty hello world ASPX page. You can use Microsoft Debugging Tools and "sx e ld system.windows.forms" to identify what is actually pulling it in at runtime. Or you can use ildasm to find the dependency.
- mscorlib - make sure it is GAC'd and NGen'd properly.
//David
Anonymous
May 01, 2006
David,
Late getting to this blog, but I just found it searching for virtual memory fragmentation errors.
I agree with your points on the Feb 21 post, but have one difference, and a question.
I think that in the spirit of a flat memory model, the problem domain of virtual memory management is best hidden from end user applications. So I don't think addressing this issue in the OS is necessarily making up for deficiencies elsewhere.
Can you tell me what other functions a program can do that will cause memory fragmentation in the OS layer? I've seen this problem recently, and would like to inspect my code to see what I need to change to avoid the problem. I do manage memory allocation myself, so I've got that covered. I do open and close a lot of (index) files during the course of the process, so I'm suspicious of that as a potential problem.
Thanks,
JB
Anonymous
May 02, 2006
John - I believe in giving users a flat memory model because that is simply easier to use. Why should I have to worry about where this piece of memory came from and how it was mapped?
However, I believe "memory management" (which includes fragmentation) is something else altogether.
For example, you can use managed code to avoid fragmentation, but you have to pay the GC cost every so often. Or you can handle your own memory allocation to avoid frequently hitting the heap, which requires you to write that abstraction yourself but spares you the cost of GC or fragmentation. Or you can use the "Low Fragmentation Heap", but once again, it is an attempt; the underlying problem still remains. There is no free lunch in quality engineering. If it was free, then monkeys could do it.
The best engineered applications are still those that are written with a full understanding of the system, to take advantage of efficiencies and avoid inefficiencies. With abstractions that ease implementation come tradeoffs. Pick your tradeoffs.
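As a concrete illustration of the Low Fragmentation Heap option mentioned above, here is a minimal sketch (assuming Windows XP/2003 or later) that opts the process default heap into the LFH via HeapSetInformation; it mitigates fragmentation from many small allocations, but as noted, the underlying problem still remains:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    ULONG lfh = 2; /* 2 = enable the Low Fragmentation Heap */
    BOOL ok = HeapSetInformation(GetProcessHeap(),
                                 HeapCompatibilityInformation,
                                 &lfh, sizeof(lfh));
    printf("Low Fragmentation Heap %s\n",
           ok ? "enabled" : "not enabled (can fail under a debugger)");
    return 0;
}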
I will query the IIS core developers about other causes of memory fragmentation, but the main cause I know of is the memory allocation pattern itself. IIS opens and closes lots of files over time, too, but that has not been a problem.
//David
Anonymous
May 19, 2006
I found this blog searching for memory fragmentation as well.
However, at this point I have already discovered that the fragmentation is caused by the way WinXP (SP2) loads DLLs. There was no such problem with SP1. Now system DLLs are spread all over the address space.
I found one way around it: reserve a larger heap (linker option /HEAP), but then there is a problem using this (otherwise contiguous) space - malloc() type functions do not allocate large blocks from the heap; they call VirtualAlloc instead.
If there is a solution for this, I'm curious to know :)
Anonymous
June 12, 2006
Andrey - I can tell you one of the reasons why system DLLs are moving around in the process address space - security. It is not infallible, but it does raise the bar just a little more to foil some would-be hackers.
This was introduced in XPSP2 and WS03SP1.
//David
Anonymous
September 04, 2006
Hello David.
I saw that HeapDecommitFreeBlockThreshold is mentioned many times in relation to the VM fragmentation problem.
(see http://support.microsoft.com/kb/315407/)
How can this registry flag help (if at all)?
Thanks I.A
Tal.
Anonymous
May 26, 2008
Have you tested a simple .asp or .html page with DebugDiag? You'll see that the fragmentation issue still appears in the summary. Greetings.
Anonymous
May 26, 2008
<<However, at this point I have already discovered that the fragmentation is caused by the way WinXP (SP2) loads DLLs. There was no such problem with SP1. Now system DLLs are spread all over the address space.>> That's randomization of the address space (similar to PIE), GOOD POINT. Has anyone confirmed this issue? Maybe this affects the performance and memory fragmentation.
Anonymous
May 28, 2008
J - I do not think you are talking about memory fragmentation, which is a function of runtime behavior of a DLL's memory allocation pattern, not merely where the DLL loads.
//David
Anonymous
July 14, 2008
I have also hit a similar problem. Reserved memory in the virtual address space is around 1.2GB. My question is: why is the virtual address space in the reserved state, and why not free? What does the MEM_RESERVE virtual address space state mean when we are talking about memory fragmentation?
Vinay
Anonymous
October 16, 2008
Hello, this second article is devoted to memory fragmentation under IIS6. Note, however, that it can
Anonymous
May 06, 2009
Sorry, but I read very pathetic excuses for inadequacies in memory management. If a programmer allocates the entire memory space in 1Kb chunks and returns it all, they should be able to come back and request ALL of it in one contiguous chunk. If an OS cannot do that, it is NOT an OS, it is a toy, ergo Windows. The audacity is when the label "Server" is put on it. Please.... I have tried that on ANY Unix server and it works. On Windows, if you chunk up memory in 1kb chunks and return every bit back, then request the largest contiguous chunk, good luck. Any more pathetic excuses? TRY IT YOURSELF:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>

#define null 0
#define meg (1024 * 1024)
#define kay 1024

static char **heap_ptrs;
static int get_mem_size_in_kbs();
static int get_mem_cont_in_mbs();

int main(int argc, char *argv[])
{
    printf("Looking for memory limit....\n");

    // Support up to 64 Gig, each pointer holding 1kb.
    heap_ptrs = (char **) malloc(64 * 1024 * 1024 * sizeof(char *));
    for (int i = 0; i < 64 * meg; i++)
        heap_ptrs[i] = 0;

    int mem_cont_max = get_mem_cont_in_mbs();
    printf("Max contiguous mem = %d MBs\n", mem_cont_max);

    int mem_size_max = get_mem_size_in_kbs();
    printf("Max mem = %d kbs\n", mem_size_max);

    for (int x = 0; x < 3; x++) {
        mem_size_max = get_mem_size_in_kbs();
        printf("Max mem = %d kbs\n", mem_size_max);
    }

    mem_cont_max = get_mem_cont_in_mbs();
    printf("Max contiguous mem = %d MBs\n", mem_cont_max);

    mem_size_max = get_mem_size_in_kbs();
    printf("Max mem = %d kbs\n", mem_size_max);

    mem_cont_max = get_mem_cont_in_mbs();
    printf("Max contiguous mem = %d MBs\n", mem_cont_max);

    return 0;
}

int get_mem_cont_in_mbs()
{
    int i = 1;
    int got_at_least_one = false;
    void *ptr;

    while ((ptr = malloc(i * meg)) != null) {
        free(ptr);
        printf("%d MBs ", i);
        i++;
        got_at_least_one = true;
    }
    if (got_at_least_one)
        printf("\n");
    return got_at_least_one ? i : 0;
}

int get_mem_size_in_kbs()
{
    int i = 0;
    while ((heap_ptrs[i] = (char *) malloc(kay)) != null)
        i++;
    for (int j = i - 1; j >= 0; j--) {
        free(heap_ptrs[j]);
        heap_ptrs[j] = null;
    }
    return i;
}
- Anonymous