
Adventures in analyzing high memory use on a Duet Client

 

The Problem

Here in Duet Support, we sometimes see issues where a process consumes a large amount of memory, and there is usually a suspicion that Duet is doing bad things.  In this blog, I will explain how I found the root cause of Outlook consuming a lot of memory.  The reported problem was that when using Duet, Outlook memory use grows to 1.2GB or higher, and errors are reported to the user.  To complicate matters, this specific problem does not happen all the time, nor is there a known set of steps that reliably reproduces it.

 

The Setup

My approach under these conditions is to enable ‘user mode stack traces’ with gflags.  This tells the memory manager in the OS to keep track of the code that makes each memory allocation.  The OS does this by creating an in-memory database that saves the unique call stack for each allocation, so that later you can examine an allocation and determine what code path caused it.  Sounds simple, right?  It’s simple to enable :) 

Here is the command that was run on the client to enable this feature:

C:\>gflags /i outlook.exe +ust
Current Registry Settings for outlook.exe executable are: 00001000
ust - Create user mode stack trace database

The output shows that the executable flags for the image outlook.exe were updated; the 00001000 value is the FLG_USER_STACK_TRACE_DB bit, which gflags stores in the GlobalFlag registry value under outlook.exe’s Image File Execution Options key.  For this to take effect, the process needs to be restarted.  Now that we have this in place, when the problem happens we can take a process dump of outlook.exe and analyze the memory allocations.  This can be done many ways.  I prefer to start WinDbg, attach to the outlook.exe process, and then run the command:

.dump /ma c:\outlook.dmp

This produces a file on the file system that is a snapshot of the memory in use by outlook.exe (the /ma switch writes a minidump that includes the process’s full memory).  Now we wait for the user to say the problem has happened.

 

The Analysis

Luckily, the user reported seeing the problem, and a dump was collected.  Now, this is where the blog post starts to get long!  Let’s get to work!

 

The main tool for analyzing a dump is WinDbg.  To start, open the dump and examine all the heaps in the process with the !heap command.  Here is how to see a heap summary:

0:000> !heap -s
NtGlobalFlag enables following debugging aids for new heaps:
validate parameters
stack back traces
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
01160000 58000062 16384 8380 10104 820 96 9 0 79509 L
01260000 58001062 1088 84 84 3 1 1 0 0 L
01270000 58008060 64 12 12 10 1 1 0 0
01480000 58000062 64 8 8 0 0 1 0 0 L
00030000 58001062 1088 124 256 55 4 4 0 0 L
01360000 58001062 15424 12960 12968 61 7 2 0 a10f L
018a0000 58001062 64 20 20 3 1 1 0 0 L
01a70000 58001062 64 16 16 4 1 1 0 0 L
032f0000 58001062 64 64 64 8 2 0 0 0 L
03300000 58041062 256 92 92 2 1 1 0 0 L
03360000 58001062 1280 304 304 65 11 1 0 0 L
033b0000 58001062 256 76 80 37 2 2 0 12a L
033f0000 58000062 1024 24 24 0 0 1 0 0 L
03670000 58001062 1280 480 480 121 7 1 0 3e34 L
036b0000 58001062 256 48 48 6 1 1 0 1 L
036f0000 58001062 326912 289804 289804 51 1 1 0 10046 L <== I focused here.
045a0000 58001062 256 32 32 3 1 1 0 0 L
04610000 58001062 31808 26192 26192 356 1 1 0 0 L
06ee0000 58001062 1024 1024 1024 1016 2 0 0 0 L
07640000 58001062 64 24 24 3 1 1 0 0 L
079b0000 58001062 64 56 56 7 2 1 0 0 L
07d50000 58001063 1280 680 828 246 33 5 0 bad
07e60000 58001063 256 4 4 2 1 1 0 bad
07ea0000 58001063 256 4 4 2 1 1 0 bad
07ee0000 58001063 256 4 4 2 1 1 0 bad
07f20000 58001063 256 4 4 2 1 1 0 bad
07f60000 58001062 64 28 28 1 0 1 0 0 L
08b60000 58001062 3328 956 2068 355 24 20 0 1426 L
<snipped for brevity>

 

In this case, the client has not yet grown to the reported 1.2GB+, but it is using a lot of RAM.  Above I have marked the “big” memory usage with an arrow.  Let’s examine heap 036f0000.  We will start by grouping the allocations in that heap by size.

 

0:000> !heap -stat -h 036f0000 -grp S 20
heap @ 036f0000
group-by: TOTSIZE max-display: 32
size #blocks total ( %) (percent of total busy bytes)
    f8 442ed - 420d598 (32.40) <== This is where the majority is
b6 1067a - ba9abc (5.72)
20 4ddb4 - 9bb680 (4.77)
2a 372e0 - 90d8c0 (4.44)
51 196f8 - 80c478 (3.95)
4e 18d41 - 7909ce (3.71)
18 4ff0a - 77e8f0 (3.68)
5f 10665 - 615f7b (2.98)
5a f221 - 551f9a (2.61)
ba 7430 - 546ae0 (2.59)
5e 97bd - 37b766 (1.71)
31 ef87 - 2dd8d7 (1.41)
61 73e0 - 2be7e0 (1.35)
b8 3c1d - 2b34d8 (1.32)
12 1eb43 - 228ab6 (1.06)
<snipped>

The line marked above accounts for the largest share of the allocations: 0x442ed blocks of size 0xf8 (248 bytes each), or roughly 66 MB in total.  Now, let’s filter for all of these allocations.  Note that this next debugger command will search all heaps and display the matching allocations.  I have snipped the output for brevity.

 

0:000> !heap -flt s f8
<snip>

    _HEAP @ 36f0000
        037066f0 0022 0020  [07]   037066f8    000f8 - (busy)
         ? 3rdpartydll+1
        03707ff0 0022 0022  [07]   03707ff8    000f8 - (busy)
          ? 3rdpartydll+1 
        0372a2e0 0022 0023  [07]   0372a2e8    000f8 - (busy)
          ? 3rdpartydll+1
        0372c750 0022 0022  [07]   0372c758    000f8 - (busy)
          ? 3rdpartydll+1
        0372d068 0022 0022  [07]   0372d070    000f8 - (busy)
          ? 3rdpartydll+1
        0372e598 0022 0022  [07]   0372e5a0    000f8 - (busy)
          ? 3rdpartydll+1

<snip>

 

In this case, what I typically do is send the output to a file by opening a log file in the debugger with the command .logopen.  When the output has finished scrolling, I use .logclose.  I then use some PowerShell commands to trim the output so that I end up with just the allocation addresses (the first column above, such as 0372e598).  This way I can feed the file back into the debugger and see every allocation’s stack information.  Looking at the output, I hope you noticed the additional “? 3rdpartydll+1” line under each entry.  This output indicates that 3rdpartydll is involved in these allocations.  Hmm… Really?  Let’s pick one of the allocations and see if we can determine who made it.  Let’s pick 0372e598.

 

0:000> !heap -p -a 0372e598
address 0372e598 found in
_HEAP @ 36f0000
HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
0372e598 0022 0000 [07] 0372e5a0 000f8 - (busy)
? 3rdpartydll+1
Trace: 7dfe
7c96cf9a ntdll!RtlDebugAllocateHeap+0x000000e1
7c949564 ntdll!RtlAllocateHeapSlowly+0x00000044
7c918f01 ntdll!RtlAllocateHeap+0x00000e64
77e781f9 rpcrt4!AllocWrapper+0x0000001e
77e781d0 rpcrt4!operator new+0x0000000d
77e7e697 rpcrt4!DuplicateString+0x00000026
77e7e71a rpcrt4!DCE_BINDING::DCE_BINDING+0x0000006a
77e7ed6b rpcrt4!RpcStringBindingComposeW+0x0000004b
77e93fde rpcrt4!BindToEpMapper+0x000000d2
77e936d3 rpcrt4!EpResolveEndpoint+0x000001e3
77e9f3db rpcrt4!DCE_BINDING::ResolveEndpointIfNecessary+0x0000014a
77e800aa rpcrt4!OSF_BINDING_HANDLE::AllocateCCall+0x00000127
77e7fdbc rpcrt4!OSF_BINDING_HANDLE::NegotiateTransferSyntax+0x00000028
77e78a01 rpcrt4!I_RpcGetBufferWithObject+0x0000005b
77e78a38 rpcrt4!I_RpcGetBuffer+0x0000000f
77e7906d rpcrt4!NdrGetBuffer+0x00000028
77ef557d rpcrt4!NdrAsyncClientCall+0x000001b6
*** ERROR: Symbol file could not be found. Defaulted to export symbols for EMSMDB32.DLL -
38b8ef6a EMSMDB32!RXP_XPProviderInit+0x00000794
38b8ef26 EMSMDB32!RXP_XPProviderInit+0x00000750
38b4c7ca EMSMDB32!HrEmsuiInit+0x000096fd
38b4c70a EMSMDB32!HrEmsuiInit+0x0000963d
38b4965b EMSMDB32!HrEmsuiInit+0x0000658e
326172d1 MSO!Ordinal577+0x000001bc
3261710f MSO!Ordinal320+0x00000031
7c80b713 kernel32!BaseThreadStart+0x00000037

 

This command finds the allocation 0372e598 and obtains the call stack associated with it.  We were expecting to find 3rdpartydll somewhere on the stack, because both the previous output and this one still show “? 3rdpartydll+1“.  However, this stack implies that a call into RPC allocated it.  Looking closely at the stack, the allocating call is rpcrt4!DuplicateString.  Given that function name, I would expect the contents of the allocation to be a string. 

 

If you have read this far, we still have a long way to go! 

 

SO WHAT MADE THE ALLOCATION?

 

In most cases, the last command (!heap -p -a) would have pointed the finger at who made the allocation.  But in this case, that is not true.  There is more to this mystery!  Let’s begin this quest for the truth by looking at allocation 0372e598 manually.

 

0:000> dc 0372e598 0372e598+f8+14
0372e598 000d0022 001807d6 10000001 03707d50 "...........P}p.
0372e5a8 0fff0102 00000000 00000018 203d4590 .............E=
0372e5b8 001a001e 00000000 1f867038 00000000 ........8p......
0372e5c8 30070040 00000000 500075a0 01c8cfa5 @..0.....u.P....
0372e5d8 300b0102 00000000 00000010 14463e00 ...0.........>F.
0372e5e8 0037001f 00000000 204a2d48 00000000 ..7.....H-J ....
0372e5f8 0037001e 00000000 204a7170 00000000 ..7.....pqJ ....
0372e608 003d001f 00000000 204b6e00 00000000 ..=......nK ....
0372e618 003d001e 00000000 204b7848 00000000 ..=.....HxK ....
0372e628 0e060040 00000000 4a1c9600 01c8cfa5 @..........J....
0372e638 00390040 00000000 55de88e0 01c8cfa5 @.9........U....
0372e648 1035001e 00000000 18686170 00000000 ..5.....pah.....
0372e658 0c1f001f 00000000 193c8340 00000000 ........@.<.....
0372e668 0c1f001e 00000000 03707d58 00000000 ........X}p.....
0372e678 00170003 00000000 00000001 00000000 ................
0372e688 0e1b000b 00000000 00000001 00000000 ................
0372e698 abababab abababab 00007dfe 00000000 .........}......

 

Looking at the memory of this allocation, we can tell this is *NOT* a string; it is in fact MAPI properties!  For instance, the value 0fff0102 near the start of the block is the property tag PR_ENTRYID.  From this I know that this allocation was *NOT* made by RPC, because the data is not a string.  So, what is going on?
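
As a quick aside, a MAPI property tag packs a 16-bit property ID and a 16-bit property type into a single DWORD, which is why a value like 0fff0102 is recognizable on sight.  Here is a minimal sketch; the macros mirror the ones in mapidefs.h and are redefined so the snippet stands alone:

#include <stdio.h>

/* Stand-alone mirrors of the mapidefs.h tag macros. */
#define PROP_ID(tag)    ((tag) >> 16)
#define PROP_TYPE(tag)  ((tag) & 0xFFFF)
#define PT_BINARY       0x0102

int main(void)
{
    unsigned int tag = 0x0FFF0102;  /* first property tag in the dumped block */
    printf("id=0x%04X type=0x%04X binary=%d\n",
           PROP_ID(tag), PROP_TYPE(tag), PROP_TYPE(tag) == PT_BINARY);
    /* Prints: id=0x0FFF type=0x0102 binary=1 - property ID 0x0FFF with
       type PT_BINARY is PR_ENTRYID, so this is MAPI data, not a string. */
    return 0;
}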

 

Let’s turn to the first DWORD of the user data, 10000001.  Because I know that this allocation is not a string, but rather MAPI properties, I know that the first 8 bytes of the allocation are the flags that MAPI uses to keep track of its own allocations.  In this case, the 0x10000001 is the result of 2 flags combined together like this:  0x10000000 | 0x1.  What is very interesting is that the !heap output above has this in it:

? 3rdpartydll+1

What !heap is doing is checking whether the start of the allocation looks like a pointer to a virtual method table, or vtable.  The extension essentially does a “dps” on the data and symbolizes whatever it finds.  Since this is not an object with a vtable, but rather a MAPI allocation, the output is quite a coincidence!  If we look at the loaded modules, we find that the 3rd party’s DLL is loaded directly at 0x10000000!

0:000> lmm 3r*
start end module name
10000000 10294000 3rdpartydll (export symbols) 3rdpartydll.dll
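
To make the coincidence concrete, here is a small sketch of the kind of symbolization !heap performs.  The values are taken straight from the output above; treating the MAPI flags as a code address lands one byte inside the DLL:

#include <stdio.h>

int main(void)
{
    /* First DWORD of the MAPI allocation: two flags OR'd together. */
    unsigned int firstDword = 0x10000000 | 0x1;   /* = 0x10000001 */

    /* 3rdpartydll.dll's load range, from the lmm output. */
    unsigned int modStart = 0x10000000;
    unsigned int modEnd   = 0x10294000;

    /* The extension treats the value as a possible code address and
       resolves it against the loaded-module list - and it matches. */
    if (firstDword >= modStart && firstDword < modEnd)
        printf("? 3rdpartydll+%x\n", firstDword - modStart); /* "? 3rdpartydll+1" */
    return 0;
}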

 

SO DID RPC MAKE THIS ALLOCATION THEN?

I reviewed the code for the rpcrt4 module, checked the assembly, and found that this RPC code path does indeed *ONLY* allocate a string.  There is no indication that RPC could allocate a buffer of this sort.  I also attached a debugger on a test machine and investigated the types of allocations RPC makes in this code path.  The allocations are only as large as the string, nowhere near the size of f8.

This finding began a long search through the OS code to understand how the allocation is tracked.  I will spare you most of the details.  What I did find is that the OS stores a stack trace index with each allocation.  This is displayed in the following:

    _HEAP @ 36f0000
HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
0372e598 0022 0000 [07] 0372e5a0 000f8 - (busy)
? 3rdpartydll+1
Trace: 7dfe

Note the Trace: 7dfe line above.  This is an internal index into the process’s Stack Trace Database.  When gflags was run with the +ust option, it enabled the memory manager in the OS to create a Stack Trace Database in the process memory to store the call stacks of the individual allocations.  The database stores only unique call stacks and records the index of the matching stack with each allocation.  If I want to see that call stack manually, I can.  I won’t explain the details here, but the command would look like this:

0:000> dds poi(poi(poi(ntdll!RtlpStackTraceDataBase)+64)-4*7dfe)
00547c18 0054ecd0
00547c1c 00000002
00547c20 00197dfe
00547c24 7c96cf9a ntdll!RtlDebugAllocateHeap+0xe1
00547c28 7c949564 ntdll!RtlAllocateHeapSlowly+0x44
00547c2c 7c918f01 ntdll!RtlAllocateHeap+0xe64
00547c30 77e781f9 rpcrt4!AllocWrapper+0x1e
00547c34 77e781d0 rpcrt4!operator new+0xd
00547c38 77e7e697 rpcrt4!DuplicateString+0x26
00547c3c 77e7e71a rpcrt4!DCE_BINDING::DCE_BINDING+0x6a
00547c40 77e7ed6b rpcrt4!RpcStringBindingComposeW+0x4b
00547c44 77e93fde rpcrt4!BindToEpMapper+0xd2
00547c48 77e936d3 rpcrt4!EpResolveEndpoint+0x1e3
00547c4c 77e9f3db rpcrt4!DCE_BINDING::ResolveEndpointIfNecessary+0x14a
00547c50 77e800aa rpcrt4!OSF_BINDING_HANDLE::AllocateCCall+0x127
00547c54 77e7fdbc rpcrt4!OSF_BINDING_HANDLE::NegotiateTransferSyntax+0x28
00547c58 77e78a01 rpcrt4!I_RpcGetBufferWithObject+0x5b
00547c5c 77e78a38 rpcrt4!I_RpcGetBuffer+0xf
00547c60 77e7906d rpcrt4!NdrGetBuffer+0x28
00547c64 77ef557d rpcrt4!NdrAsyncClientCall+0x1b6
00547c68 38b8ef6a EMSMDB32!RXP_XPProviderInit+0x794
00547c6c 38b8ef26 EMSMDB32!RXP_XPProviderInit+0x750
00547c70 38b4c7ca EMSMDB32!HrEmsuiInit+0x96fd
00547c74 38b4c70a EMSMDB32!HrEmsuiInit+0x963d
00547c78 38b4965b EMSMDB32!HrEmsuiInit+0x658e
00547c7c 326172d1 MSO!Ordinal577+0x1bc
00547c80 3261710f MSO!Ordinal320+0x31
00547c84 7c80b713 kernel32!BaseThreadStart+0x37
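
For the curious, that nested poi expression is just walking the database by hand.  WinDbg’s default radix is hex, so the “+64” is offset 0x64, the EntryIndexArray field; the array of entry pointers is indexed downward from there.  A rough C equivalent, assuming the 32-bit structure layouts we will dump with dt later in this post (LookupTrace is just an illustrative name):

typedef struct _RTL_STACK_TRACE_ENTRY {
    struct _RTL_STACK_TRACE_ENTRY *HashChain;  /* +0x000 */
    unsigned long  TraceCount;                 /* +0x004 */
    unsigned short Index;                      /* +0x008 */
    unsigned short Depth;                      /* +0x00a */
    void          *BackTrace[32];              /* +0x00c - saved return addresses */
} RTL_STACK_TRACE_ENTRY;

/* poi(poi(poi(ntdll!RtlpStackTraceDataBase)+0x64)-4*index), in C:
   'database' is the already-dereferenced database pointer; read the
   EntryIndexArray field at offset 0x64 and index it downward by the tag. */
RTL_STACK_TRACE_ENTRY *LookupTrace(void *database, unsigned int index)
{
    RTL_STACK_TRACE_ENTRY **entryIndexArray =
        *(RTL_STACK_TRACE_ENTRY ***)((char *)database + 0x64);
    return *(entryIndexArray - index);
}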

This is essentially what the !heap extension is doing when you run the !heap -p -a command.  The !heap command is working correctly: if you scroll back up to the raw memory dump of the allocation, you can see that the OS stored 7dfe with the allocation (the 00007dfe following the abababab fill pattern).  That tag is wrong, and it is what points the finger at RPC.  RPC did not make this allocation.  We *KNOW* that MAPI made this allocation, not RPC.

Are you still reading this?  Good Job!!  We are getting closer.  Let’s take a look at some OS internals and see if we can figure this out.

 

OS INTERNALS

 

Let’s take a look at what we know. 

  • RPC did not make the allocation
  • MAPI properties are in the allocation
  • We have a tag stored with the allocation that is incorrect.

Let’s start by looking at the Stack Trace Database.  Here are the steps to find that information:

 

0:000> x ntdll!RtlpStackTraceDataBase
7c97b170 ntdll!RtlpStackTraceDataBase = <no type information>
0:000> dt poi( 7c97b170 ) ntdll!_Stack_Trace_DataBase
+0x000 Lock : __unnamed
+0x038 AcquireLockRoutine : 0x7c901000 long ntdll!RtlEnterCriticalSection+0
+0x03c ReleaseLockRoutine : 0x7c9010e0 long ntdll!RtlLeaveCriticalSection+0
+0x040 OkayToLockRoutine : 0x7c9518ea unsigned char ntdll!NtdllOkayToLockRoutine+0
+0x044 PreCommitted : 0 ''
+0x045 DumpInProgress : 0 ''
+0x048 CommitBase : 0x00160000
+0x04c CurrentLowerCommitLimit : 0x010e0000
+0x050 CurrentUpperCommitLimit : 0x010e0000
+0x054 NextFreeLowerMemory : 0x010dffec ""
+0x058 NextFreeUpperMemory : 0x010e0254 "???"
+0x05c NumberOfEntriesLookedUp : 0x33c97e8
+0x060 NumberOfEntriesAdded : 0x1ff6b
+0x064 EntryIndexArray : 0x01160000 -> 0x000000c8 _RTL_STACK_TRACE_ENTRY
+0x068 NumberOfBuckets : 0x89
+0x06c Buckets : [1] 0x00161c84 _RTL_STACK_TRACE_ENTRY

 

What I did above is use the x command to find where in memory the pointer to the Stack Trace Database is located.  Then I took that address (7c97b170), dereferenced it, and cast the result as the type _Stack_Trace_DataBase so the debugger would show me the field values.  Now, the interesting field to note here is NumberOfEntriesAdded.  This is the key that unlocks the whole mystery! 

 

MYSTERY SOLVED!

 

The Stack Trace Database is a collection of the unique call stacks recorded for all the allocations in the process.  When an allocation is made, the *INDEX* of its stack is stored with the allocation.  The problem is that the *INDEX* is of type USHORT, which means it can only address a range from 0 to 0xFFFF.  If you look above at the line NumberOfEntriesAdded : 0x1ff6b, you will notice that we have more entries than 0xFFFF!  The database holds more entries than the index can represent.  This means that a computed index of 0x17dfe would not be stored as-is; it would be truncated to 0x7dfe!  Now, going back to what we know, let’s plug in some new values!
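
The truncation is ordinary integer narrowing; a two-line C illustration:

#include <stdio.h>

int main(void)
{
    unsigned int   computedIndex = 0x17dfe;  /* entry count has passed 0xFFFF */
    unsigned short storedIndex   = (unsigned short)computedIndex; /* USHORT field */
    printf("0x%x\n", storedIndex);           /* prints 0x7dfe - the wrong trace tag */
    return 0;
}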

 

First, let’s look at the 0x7dfe entry.  From the EntryIndexArray field above, you can see that the Stack Trace Database contains entries of type _RTL_STACK_TRACE_ENTRY.  Using that information, run the following command:

0:000> dt _RTL_STACK_TRACE_ENTRY poi(poi(poi(ntdll!RtlpStackTraceDataBase)+64)-4*7dfe)
ntdll!_RTL_STACK_TRACE_ENTRY
+0x000 HashChain : 0x0054ecd0 _RTL_STACK_TRACE_ENTRY
+0x004 TraceCount : 2
+0x008 Index : 0x7dfe
+0x00a Depth : 0x19
+0x00c BackTrace : [32] 0x7c96cf9a

 

Now, get the stack from that manually:

0:000> ? poi(poi(poi(ntdll!RtlpStackTraceDataBase)+64)-4*7dfe)
Evaluate expression: 5536792 = 00547c18

0:000> dds 00547c18+c
00547c24 7c96cf9a ntdll!RtlDebugAllocateHeap+0xe1
00547c28 7c949564 ntdll!RtlAllocateHeapSlowly+0x44
00547c2c 7c918f01 ntdll!RtlAllocateHeap+0xe64
00547c30 77e781f9 rpcrt4!AllocWrapper+0x1e
00547c34 77e781d0 rpcrt4!operator new+0xd
00547c38 77e7e697 rpcrt4!DuplicateString+0x26
00547c3c 77e7e71a rpcrt4!DCE_BINDING::DCE_BINDING+0x6a
00547c40 77e7ed6b rpcrt4!RpcStringBindingComposeW+0x4b
00547c44 77e93fde rpcrt4!BindToEpMapper+0xd2
00547c48 77e936d3 rpcrt4!EpResolveEndpoint+0x1e3
00547c4c 77e9f3db rpcrt4!DCE_BINDING::ResolveEndpointIfNecessary+0x14a
00547c50 77e800aa rpcrt4!OSF_BINDING_HANDLE::AllocateCCall+0x127
00547c54 77e7fdbc rpcrt4!OSF_BINDING_HANDLE::NegotiateTransferSyntax+0x28
00547c58 77e78a01 rpcrt4!I_RpcGetBufferWithObject+0x5b
00547c5c 77e78a38 rpcrt4!I_RpcGetBuffer+0xf
00547c60 77e7906d rpcrt4!NdrGetBuffer+0x28
00547c64 77ef557d rpcrt4!NdrAsyncClientCall+0x1b6
00547c68 38b8ef6a EMSMDB32!RXP_XPProviderInit+0x794
00547c6c 38b8ef26 EMSMDB32!RXP_XPProviderInit+0x750
00547c70 38b4c7ca EMSMDB32!HrEmsuiInit+0x96fd
00547c74 38b4c70a EMSMDB32!HrEmsuiInit+0x963d
00547c78 38b4965b EMSMDB32!HrEmsuiInit+0x658e
00547c7c 326172d1 MSO!Ordinal577+0x1bc
00547c80 3261710f MSO!Ordinal320+0x31
00547c84 7c80b713 kernel32!BaseThreadStart+0x37

We know that is the same RPC stack as before, not what we are looking for.  We are looking for the entry at tag 0x17dfe:

 

0:000> dt _RTL_STACK_TRACE_ENTRY poi(poi(poi(ntdll!RtlpStackTraceDataBase)+64)-4*17dfe)
ntdll!_RTL_STACK_TRACE_ENTRY
+0x000 HashChain : 0x00ced18c _RTL_STACK_TRACE_ENTRY
+0x004 TraceCount : 0x442ac
+0x008 Index : 0x7dfe
+0x00a Depth : 0xc
+0x00c BackTrace : [32] 0x7c96cf9a

0:000> ? poi(poi(poi(ntdll!RtlpStackTraceDataBase)+64)-4*17dfe)
Evaluate expression: 13549032 = 00cebde8

0:000> dds 00cebde8+c
00cebdf4 7c96cf9a ntdll!RtlDebugAllocateHeap+0xe1
00cebdf8 7c949564 ntdll!RtlAllocateHeapSlowly+0x44
00cebdfc 7c918f01 ntdll!RtlAllocateHeap+0xe64
00cebe00 38eea271 OLMAPI32!MAPIAllocateBuffer+0x58
00cebe04 38eea23c OLMAPI32!MAPIAllocateBuffer+0x23
00cebe08 38ef0674 OLMAPI32!HrCopyString8Ex2+0x114
00cebe0c 38ef4783 OLMAPI32!HrGetMAPIMalloc2+0xe62
00cebe10 38f0b31e OLMAPI32!UNKOBJ_ScAllocateMore+0x486
00cebe14 38f0b208 OLMAPI32!UNKOBJ_ScAllocateMore+0x370
00cebe18 38ef82fa OLMAPI32!HrCopySPropTagArray+0x3ac
00cebe1c 38f2d7fd OLMAPI32!HrGetNamedPropsCRC+0x738
00cebe20 100c8141 3rdpartydll!3rdpartyclass::3rdpartyfunction+0x37901

 

Now we can clearly see that 3rdpartydll.dll made this allocation, by way of MAPI’s OLMAPI32!MAPIAllocateBuffer. 

 

TO SUMMARIZE

 

What an amazing coincidence!  The 3rd party’s DLL was loaded at 0x10000000, which happens to closely resemble a MAPI flag value.  That caused the !heap command to mistake the MAPI flags for a vtable pointer, which happened to point into the 3rd party’s DLL.  And yet, ultimately, the 3rd party’s DLL really was responsible!  HAH!

In this case, what has happened is that 3rdpartydll.dll made allocations and did not free them properly.  In talking with my teammate Steve Griffin, the suspected behavior is that the 3rd party’s code is calling MAPI’s QueryRows and later performing a MAPIFreeBuffer, but not freeing *ALL* of the rows.  Note the following comment in the QueryRows documentation:

Memory used for the SPropValue structures in the row set pointed to by the lppRows parameter must be separately allocated and freed for each row. Use MAPIFreeBuffer to free the property value structures and to free the row set. When a call to QueryRows returns zero, however, indicating the beginning or end of the table, only the SRowSet structure itself needs to be freed. For more information about how to allocate and free memory in an SRowSet structure, see Managing Memory for ADRLIST and SRowSet Structures.

 

If the 3rd party’s DLL doesn’t free all of the structures, they will be leaked.  Steve has a blog post on some of the common mistakes people make with MAPI memory management. 
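
For illustration, here is a minimal sketch of the per-row cleanup the documentation describes, assuming the Extended MAPI headers (FreeRowSetCompletely is a hypothetical helper name; the MAPI utility function FreeProws performs essentially the same job):

#include <mapix.h>   /* LPSRowSet, MAPIFreeBuffer */

/* Each row's property array is a separate MAPI allocation, and the
   SRowSet itself is another one - all of them must be freed. */
void FreeRowSetCompletely(LPSRowSet pRows)
{
    ULONG i;
    if (!pRows) return;
    for (i = 0; i < pRows->cRows; ++i)
        MAPIFreeBuffer(pRows->aRow[i].lpProps);  /* each row's properties */
    MAPIFreeBuffer(pRows);                       /* then the row set itself */
}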

Happy debugging!
