Need some advice on testing memory performance

I've recently converted a value type into a reference type; that is, I've converted a C# struct into a class. I've been able to write tests to measure the CPU performance overhead of using the reference version rather than the struct version: I performed some intensive mathematical operations and measured how long they took before and after the change. However, reference types carry an additional cost, since they're allocated on the heap and rely on the GC to clean them up, and I'm not sure how to measure the effect of that overhead. The test I was doing before doesn't really apply, because it only involved very short-lived objects, so you never really saw any memory increase, churn, page hits, or cache coherency issues. Can you think of good ways to test this impact? Some kind of realistic test that would tell me whether there are any downsides to using this class instead of the original struct.

I have experience with CPU profiling to find hotspots, but I don't have experience on the memory side of things. Tools would be preferred, but I'm certainly willing to take manual measurements.
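For instance, the kind of crude manual measurement I could imagine taking looks something like the sketch below (the Measure helper and the workload methods are hypothetical placeholders, not code we actually have): run the same allocation-heavy loop over the struct version and over the class version, and compare elapsed time, surviving bytes, and GC collection counts.

    using System;
    using System.Diagnostics;

    class MemoryChurnTest
    {
        static void Main()
        {
            // RunStructWorkload / RunClassWorkload stand in for the real
            // math-heavy loops over each implementation.
            // Measure("struct version", RunStructWorkload);
            // Measure("class version", RunClassWorkload);
        }

        // Runs a workload and reports elapsed time, surviving bytes, and how many
        // gen0/gen1/gen2 collections the workload triggered.
        static void Measure(string label, Action workload)
        {
            // Start from a clean heap so the numbers are comparable.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            long bytesBefore = GC.GetTotalMemory(true);
            int gen0Before = GC.CollectionCount(0);
            int gen1Before = GC.CollectionCount(1);
            int gen2Before = GC.CollectionCount(2);

            Stopwatch sw = Stopwatch.StartNew();
            workload();
            sw.Stop();

            long bytesAfter = GC.GetTotalMemory(false);

            Console.WriteLine("{0}: {1} ms, ~{2} bytes surviving, gen0 {3}, gen1 {4}, gen2 {5}",
                label,
                sw.ElapsedMilliseconds,
                bytesAfter - bytesBefore,
                GC.CollectionCount(0) - gen0Before,
                GC.CollectionCount(1) - gen1Before,
                GC.CollectionCount(2) - gen2Before);
        }
    }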

Specifically, I've been trying to measure the impact of using a class-based implementation of Nullable<A> over a struct-based one. I see a lot of compelling reasons to use the class-based version (specifically because the programming model and semantics become so much clearer); however, in order to convince anyone, we need to know all the tradeoffs involved. Since classes and structs behave differently in terms of local/heap storage and GC impact, that difference needs to be quantified, so that when all the factors are weighed we realistically know the cost of our choices.
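For reference, here is roughly the shape of the two implementations I'm comparing (a simplified, hypothetical sketch, not our actual Nullable<A> code): the struct version carries an explicit HasValue flag and lives wherever it's declared, while the class version is a separate heap object where "no value" is simply a null reference.

    // Simplified sketch only; names and details are placeholders.
    struct StructNullable<A>
    {
        private readonly A value;
        private readonly bool hasValue;

        public StructNullable(A value) { this.value = value; this.hasValue = true; }
        public bool HasValue { get { return hasValue; } }
        public A Value { get { return value; } }
    }

    sealed class ClassNullable<A>
    {
        private readonly A value;

        public ClassNullable(A value) { this.value = value; }
        public A Value { get { return value; } }
        // "No value" is represented by a null reference to the ClassNullable itself,
        // which is part of why the programming model feels cleaner -- but every
        // non-null instance is a heap allocation the GC has to track.
    }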

Comments

  • Anonymous
    June 19, 2004
    Thanks Josh. It was very very helpful!