Two things to avoid for better memory usage

OK, I never give rules, because they always have exceptions, and I won't start today, but I'm gonna give a couple things that look like rules but aren't. I'll leave it as an exercise to the reader to decide when they should break these almost-rules :)

Almost-rule #1: Never call GC.Collect()

If you think you need to force a collection, something has gone very wrong. You have to ask yourself: what sort of collection do I think I need? Gen0, Gen1, Gen2? How do I know that the GC will decide to do the collection that I think I need? And what did I do so that now I need that kind of collection?

Let me break these down a bit more.

First case: "I think I need a Generation 0 collect right now"

Gen0 collects happen comparatively often anyway, and they're also comparatively cheap. The collector will have been using your usual allocation rate plus your processor's cache size to figure out how much temporary memory it should let you create before it's economical to collect gen0. If you force the collect before then, you may be giving it too small a time sample to do a good job predicting the right budget for the next collect, and you may end up with more gen0 collects than you need. Since gen0 never gets incredibly big, your best bet is to just leave it be; it will be cleaned up really soon, and with the best economy, if you just let the GC do it as scheduled.

(Random statistic: if I see about one gen0 collect per second I'm a happy guy)

Second case: "I think I need a Generation 1 collect right now"

Your first problem is that GC.Collect() doesn't promise a Gen1 collect. Your second problem is that to know how big gen1 even is you'd need to be looking at things like the survival rate in gen0, so it is pretty tricky to know if a gen1 collect would really be a good idea. The final problem is that gen1 is also comparatively small and cheap to collect (not quite as cheap as gen0), so the same problems I described for gen0 still apply... probably even more so with regard to the gen1 budget, because promotion to gen1 can have greater variance than the raw allocation rate.

So how could you reasonably know that there is stuff in gen1? Well, gen1 is sort of designed to handle objects that belong to a transaction in flight. If each transaction requires enough work, it's likely that many gen0 collects happen during the course of processing it (especially if many transactions are happening in parallel) to get rid of the temporary objects, but the transaction's state lives on until the transaction is completed, at which time it all dies. So if you just reached the end of a transaction, you might know that there are a lot of transaction-oriented objects that should be cleaned up, and that they've lived far too long for you to expect them to still be in gen0.

So if you're in this situation where you think that at the end of your transaction you have lots of objects that just died (and remember, many transactions are probably happening at once, so transactions are ending all the time), then rather than call GC.Collect(), which probably won't do what you want, go back to the basics of the design and see if you can't find some of your state that isn't needed towards the end of the transaction, and then change the code/algorithm so that those objects become unreachable as soon as possible. Any time you can change long-lived objects to medium, or medium to short, you're doing a good thing.
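To make that concrete, here's a minimal sketch (the Transaction shape and the helper names are invented for illustration). The whole trick is dropping the reference to the big state the moment the algorithm is done with it:

```csharp
class Customer
{
    public string Name;
}

class Transaction
{
    private byte[] _rawRequest;   // large; only needed while parsing
    private Customer _customer;   // small; needed until Commit

    public Transaction(byte[] rawRequest)
    {
        _rawRequest = rawRequest;
    }

    public void Process()
    {
        _customer = Parse(_rawRequest);

        // Done with the raw bytes: drop the reference now so the big
        // buffer can die young, instead of surviving gen0 collects for
        // the rest of the transaction and creeping into gen1.
        _rawRequest = null;

        DoWork(_customer);
        Commit(_customer);
    }

    private static Customer Parse(byte[] raw)
    {
        Customer c = new Customer();
        c.Name = "example";
        return c;
    }

    private static void DoWork(Customer c) { /* lots of temporary allocations */ }
    private static void Commit(Customer c) { }
}
```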

(Random statistic: if I see about one gen1 collect per 10 seconds I'm a happy guy)

Third case: "I think I need a Generation 2 collect right now"

Oh boy. If this is happening often enough that it's a problem for you, then you're in a world of pain. Generation 2 collects are full collects, so they are much more expensive than gen1 or gen0 collects. If your algorithm is regularly producing objects that live to gen2 and then die shortly thereafter, you're going to find that the percentage of time spent in GC goes way up. Forcing more of these collects is really the last thing you want to do. The advice I gave earlier about finding ways to have as much of your memory be reclaimable as soon as possible, before those objects live into gen2, applies doubly here. If you're using the performance counters to watch the GC, then you want the gen2 size to be growing VERY slowly after startup. Promoting lots of objects to gen2 means those objects will die an expensive death.
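If you'd rather watch those counters from code than from perfmon, the standard ".NET CLR Memory" counters are readable via System.Diagnostics; a small sketch (the instance name is assumed to be your process name):

```csharp
using System;
using System.Diagnostics;

static class GcCounters
{
    // Dumps a few ".NET CLR Memory" counters for the given process instance.
    // Note: the first NextValue() on a calculated counter can read as zero;
    // sample twice with a pause in between for a meaningful number.
    public static void Dump(string instanceName)
    {
        using (PerformanceCounter timeInGc =
                   new PerformanceCounter(".NET CLR Memory", "% Time in GC", instanceName))
        using (PerformanceCounter gen2Size =
                   new PerformanceCounter(".NET CLR Memory", "Gen 2 heap size", instanceName))
        using (PerformanceCounter gen2Collects =
                   new PerformanceCounter(".NET CLR Memory", "# Gen 2 Collections", instanceName))
        {
            Console.WriteLine("% time in GC:     {0}", timeInGc.NextValue());
            Console.WriteLine("gen2 heap size:   {0}", gen2Size.NextValue());
            Console.WriteLine("gen2 collections: {0}", gen2Collects.NextValue());
        }
    }
}
```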

A classic situation here is a web server that in turn calls a web service to get some of the results. If there is a lot of pending state during the call to the web service, it's likely that those objects will get aged all the way to gen2 -- badness. Minimize the amount of pending state and you'll be much happier.

(Random statistic: if I see about one gen2 collect per 100 seconds I'm a happy guy)
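To put all three of those statistics together: on runtimes that expose GC.CollectionCount (it arrived in .NET 2.0, so this is a forward-looking sketch), you can compare your collection rates against those happy-guy numbers directly:

```csharp
using System;
using System.Threading;

static class GcRateCheck
{
    // Samples the per-generation collection counts over an interval and
    // prints collections per second. Compare against roughly 1 per second
    // for gen0, 1 per 10 seconds for gen1, and 1 per 100 seconds for gen2.
    public static void Report(int sampleSeconds)
    {
        int[] before = new int[3];
        for (int gen = 0; gen < 3; gen++)
            before[gen] = GC.CollectionCount(gen);

        Thread.Sleep(sampleSeconds * 1000);

        for (int gen = 0; gen < 3; gen++)
        {
            double perSecond =
                (GC.CollectionCount(gen) - before[gen]) / (double)sampleSeconds;
            Console.WriteLine("gen{0} collects/sec: {1:F2}", gen, perSecond);
        }
    }
}
```

Run something like GcRateCheck.Report(60) while a representative workload is going to get a usable sample.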

"But Rico, my memory still isn't being reclaimed!"

Even if that's the case, calling GC.Collect() probably isn't going to help the overall goodness of your program. If your memory isn't getting reclaimed in a timely fashion, it most likely means that for one reason or another the memory has aged into a generation that collects less often than you need. Remember, age in the GC world is relative: aging is based on survived collections, and collections are triggered by allocations, so if you reduce the overall demand on the collector for memory (fewer ultra-temporary objects, for instance) then everything tends to stay "younger". Another way of thinking about this is that the aging rate of all your objects is relative to the aging rate of your most temporary objects. If you can find ways to have more objects die sooner, and to reduce the overall churn rate, then you get much better behavior from the collector. Using an allocation profiler to find objects that are getting relocated (implying that they survived a collect) will give you a good indication of where your surviving objects are coming from. Target those that you can for early eradication.
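Here's a tiny made-up example of cutting churn. The Formatter class is hypothetical, but the pattern of reusing one buffer instead of allocating fresh ones per call is the point (note the reused builder makes the class unsafe to share across threads):

```csharp
using System.Text;

class Formatter
{
    // Churny version: a fresh builder (plus intermediate garbage) on
    // every call drives up the allocation rate, which drives up the
    // collection rate, which ages everything else faster.
    public string FormatChurny(int id, string name)
    {
        return new StringBuilder().Append(id).Append(':').Append(name).ToString();
    }

    // Reuse version: one builder, reset between calls, so steady-state
    // calls allocate only the final string.
    private readonly StringBuilder _builder = new StringBuilder();

    public string FormatReused(int id, string name)
    {
        _builder.Length = 0;   // reset without allocating
        _builder.Append(id).Append(':').Append(name);
        return _builder.ToString();
    }
}
```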

(Random statistic: during intensive computation if I see memory allocation rates below about 2 megabytes per second I'm a happy guy)

There's another common reason that memory isn't reclaimed as fast as you would like, which brings me to my second almost-rule.

Almost-rule #2: Never have finalizers
(except on leaf objects which hold a single unmanaged resource and nothing else)

In C++ it's common to use destructors to release memory. It's tempting to do that with finalizers but you must not. The GC will collect the memory associated with the members of a dead object just fine without those members needing to be nulled. You only need to null the members if you want part of the state to go away early, while the object is still alive (see above). So really the only reason to finalize is if you're holding on to an unmanaged resource (like some kind of operating system handle) and you need to Close the handle or something like that.
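To spell out the habit to break, here's a hypothetical Node class doing exactly the wrong thing; the finalizer below buys you nothing and costs plenty:

```csharp
class Node
{
    private byte[] _data = new byte[4096];

    // Anti-pattern: the GC reclaims _data when this Node dies whether or
    // not it's nulled, and merely having a finalizer means every dead
    // Node is resurrected and queued for the finalizer thread before its
    // memory can finally be reclaimed.
    ~Node()
    {
        _data = null;
    }
}
```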

Why is this so important? Well, objects that need to be finalized don't die right away. When the GC discovers they are dead, they are temporarily brought back to life so that they can get queued up for finalization by the finalizer thread. Since the object is now alive, so is everything that it points to! So if you were to put a finalizer on all the tree nodes in an application, for instance, when you released the root of the tree no memory would be reclaimed at first, because the root of the tree holds on to everything else. Yuck! If those tree nodes need to be finalized because they might hold an unmanaged resource, it would be much better* to wrap that unmanaged resource in an object that does nothing else but hold the resource, and let that wrapper object be the finalized thing. Then your tree nodes are just normal objects, and the only thing that's pending finalization is the one wrapper object (a leaf), which doesn't keep any other objects alive.

*Note: Whenever I say "it would be much better" that's special performance-guy-techno-jargon, what I really mean is: "It probably would be much better but of course you have to measure to know for sure because I can never predict anything."
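Here's roughly what that wrapper might look like, assuming the resource is a Win32 handle closed with CloseHandle (in .NET 2.0 and later, SafeHandle formalizes exactly this idea):

```csharp
using System;
using System.Runtime.InteropServices;

// Leaf wrapper: holds exactly one unmanaged resource and nothing else,
// so when it's queued for finalization it keeps no other objects alive.
class HandleWrapper
{
    private IntPtr _handle;

    public HandleWrapper(IntPtr handle)
    {
        _handle = handle;
    }

    public IntPtr Handle
    {
        get { return _handle; }
    }

    public void Close()
    {
        if (_handle != IntPtr.Zero)
        {
            CloseHandle(_handle);
            _handle = IntPtr.Zero;
            GC.SuppressFinalize(this);   // already clean; skip finalization
        }
    }

    ~HandleWrapper()
    {
        if (_handle != IntPtr.Zero)
            CloseHandle(_handle);
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr hObject);
}

// The tree node has no finalizer, so a dead tree is reclaimed right away;
// only the tiny wrapper (a leaf) waits for the finalizer thread.
class TreeNode
{
    public TreeNode Left, Right;
    public HandleWrapper Resource;
}
```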

Actually, the situation is even worse than I made it out to be above: when an object has a finalizer it will necessarily survive at least one collection (because of being brought back to life), which means it might very well get promoted. If that happens, even the next collect won't reclaim the memory; you need the next collect of the next bigger generation to reclaim it, and if things are going well that higher level of collect will be happening only 1/10th as often, so it could be a long time. All the more reason to have as little memory as possible tied up in finalizable objects, and all the more reason to use the Dispose pattern whenever possible so that finalization is not necessary.
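For completeness, here's a sketch of the Dispose pattern I mean; ResourceHolder and ReleaseHandle are placeholders for your own resource type:

```csharp
using System;

class ResourceHolder : IDisposable
{
    private IntPtr _handle;    // the single unmanaged resource
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);

        // Cleanup already happened deterministically, so tell the GC this
        // object no longer needs finalization; it can now die in a single
        // collection without a trip through the finalization queue.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            ReleaseHandle(_handle);
            _disposed = true;
        }
    }

    ~ResourceHolder()
    {
        Dispose(false);   // backstop only; Dispose() is the normal path
    }

    private static void ReleaseHandle(IntPtr h)
    {
        /* close the OS handle here */
    }
}
```

Callers then write using (ResourceHolder r = new ResourceHolder()) { ... } and the finalizer never actually has to run.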

Of course, if you never have finalizers, you won't have to worry about these problems.

Comments

  • Anonymous
    April 21, 2004
    "Your first problem is that GC.Collect() doesn't promise a Gen1 collect."

    It does according to the documentation for GC.Collect():

    "Forces garbage collection of all generations."

    It doesn't guarantee that everything will be collected - objects with finalizers will quite probably get promoted etc - but I don't think an implementation which did a straight gen0 collect would be valid under the documentation above.

    What am I missing here?

  • Anonymous
    April 21, 2004
    GC.Collect() gives you a gen2 collect by default (i.e. all generations). However I find a lot of people seem to call this API without really knowing which generations will be collected at all which is sort of my point. You could of course do GC.Collect(1) but it leaves you with the bigger problem of trying to guess where it is that your objects have migrated, a game the GC is better suited to play on its own.

  • Anonymous
    November 15, 2006
Problem: Using the WebClient.Upload method for posting large files will eventually leave you stranded with OutOfMemoryExceptions. Cause: WebClient.Upload reads the entire file into memory by default. Resolution: Build your own uploader with just a few lines
