

Back To Basics: Memory allocation, a walk down the history

[Photo: Star fish on the Suratkal beach]

This post is Part 1 in the series of posts on Garbage Collection (GC). Please see the index here.

Most languages we deal with today support a whole bunch of memory allocation options: static allocation, stack allocation and heap allocation. However, the same was not true in the pre-historic era. Life was simpler then: the earliest languages supported only static allocation, then stack allocation slowly came along, then heap allocation, and today we have come to expect automatic memory management from most new languages.

However, that didn’t mean there was no advantage in the simplistic approach taken by these early languages.

Static allocation

This is the simplest allocation option and was supported by the earliest languages like Fortran (late 50’s). All memory was allocated (reserved) at compile time, and variable names were bound to their target memory and frozen. This had some serious limitations, e.g. variable-sized memory could not be supported. In case memory usage might vary, the developer simply allocated the largest size he anticipated and used a part of it.

This also meant that the sizes of all data structures had to be known at compile time (e.g. fixed-length arrays, no linked lists). Since name binding happened statically, simple things we have come to expect over time, like recursion, were not possible: a second instantiation of a method would access the same variables as the first, and since names are bound to addresses statically, it would overwrite their data. A minimal sketch of this style in C-like code follows below.
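Here is a rough sketch of the "largest size anticipated" trick in C++. This is purely illustrative (the names readings, readingCount and addReading are invented, not from any real Fortran program): all storage has static duration and a fixed, compile-time size.

#include <cstdio>

// All storage is reserved up front with a fixed, compile-time size.
// The array is the "largest size anticipated", even if only a few slots are used.
static double readings[1000];
static int    readingCount = 0;

void addReading(double value)
{
    if (readingCount < 1000)              // cannot grow beyond the reserved block
        readings[readingCount++] = value;
}

int main()
{
    addReading(3.14);
    addReading(2.71);
    std::printf("stored %d readings\n", readingCount);
    return 0;
}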

Even though this seems pretty restrictive, it had some advantages. The most obvious are robustness and predictable behavior. Since there is no runtime memory allocation, there is no possibility of running out of memory (so no OutOfMemoryException :) ). Execution was also very fast because there is no memory to manage and no stack/heap to set up and tear down. Moreover, the compiler could emit direct memory access instructions without going through any indirection (as the symbol->address mapping doesn’t change).

Stack Allocation

The first language to bring in stack allocation was Algol in the late 50’s. It worked around some limitations of static allocation by supporting memory frames that got pushed onto the system stack as procedures were called. What this meant was that, based on the input parameters of a method, a different-sized frame could be pushed, and hence memory allocation became variable (at least for method-local variables). However, the return value had to be of a fixed size (fixed at compile time) because the caller needed to know how much the callee left behind on the stack.

Recursion was possible because every invocation would have its own exclusive frame; when a method returned, the frame would unwind (pop) and the memory allocated in it would no longer be available. A small sketch of this is shown below.
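A minimal C++ sketch of why per-invocation frames make recursion work (sumDigits is an invented example, not from the original post): each call gets its own copy of the locals, and the frame is popped as soon as the call returns.

#include <cstdio>

// Every invocation of sumDigits gets its own frame holding 'lastDigit',
// so recursive calls do not overwrite each other's data. The frame (and
// its memory) disappears as soon as the call returns.
int sumDigits(int n)
{
    if (n == 0)
        return 0;
    int lastDigit = n % 10;                  // lives only in this frame
    return lastDigit + sumDigits(n / 10);
}

int main()
{
    std::printf("%d\n", sumDigits(1234));    // prints 10
    return 0;
}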

Obviously, with this came reduced robustness: depending on the control flow, the program could run out of stack, and the additional stack management overhead reduced speed (though not significantly).

Even though it wasn’t much, it was still a huge improvement…

Heap Allocation

Funny to call it so, but this is the state of the art in memory allocation :) and has remained so for a long time (with some improvements).

With the heap, data could be allocated in chunks of variable size and later de-allocated. There is no ordering requirement between allocation and de-allocation, unlike stack frames where the callee’s memory gets deallocated before the caller’s. So now it was possible to allocate and pass variable-sized memory around (e.g. from callee to caller, as in the sketch below). It also became easier to manage memory better, e.g. by using growable non-contiguous lists instead of arrays, or by modeling data closer to its real-life shape, like a tree.
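A small sketch of that callee-to-caller pattern in C++ (makeSequence is an invented name, used only for illustration): the callee decides the size at runtime and hands the memory back; the caller frees it whenever it likes, something a stack frame cannot offer.

#include <cstdlib>

// The callee allocates a buffer whose size is only known at runtime and
// returns it to the caller; the caller controls when it is freed.
int* makeSequence(int count)
{
    int* data = static_cast<int*>(std::malloc(count * sizeof(int)));
    if (data == nullptr)
        return nullptr;
    for (int i = 0; i < count; ++i)
        data[i] = i;
    return data;                         // outlives this function's stack frame
}

int main()
{
    int* seq = makeSequence(10);         // size chosen at runtime
    // ... use seq ...
    std::free(seq);                      // de-allocation order is up to the caller
    return 0;
}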

With heap allocation, life seemed simpler at first but actually became harder. Robustness nose-dived. Dynamically allocated memory had to be managed carefully, because if allocated memory is not de-allocated after its use is over, it becomes garbage and unavailable to the program (a memory leak), and the program slowly runs out of space. In the end the dreaded out-of-memory scenario comes up, where there is no more memory left to hand to the requestor and the program fails to continue. Heap allocation is also a costly operation and can have severe perf implications.
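A small C++ sketch of the leak scenario described above (Record and processOnce are made-up names for illustration): the allocation outlives its usefulness because nobody frees it, and repeated calls slowly exhaust memory.

#include <cstdlib>

struct Record { int id; double value; };

void processOnce()
{
    // Heap allocation: size and lifetime decided at runtime.
    Record* r = static_cast<Record*>(std::malloc(sizeof(Record)));
    if (r == nullptr)
        return;                          // the dreaded out-of-memory case
    r->id = 42;
    r->value = 3.14;
    // ... use r ...
    // BUG: std::free(r) is missing, so this block becomes garbage (a leak).
}

int main()
{
    for (int i = 0; i < 1000000; ++i)
        processOnce();                   // leaks one Record per call
    return 0;
}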

Even with all the tooling support available, it still remains hard to locate memory leaks, and they constitute one of the harder classes of bugs to tackle.

Garbage Collection

LISP in 1959 pioneered a new technique called automatic memory management, known by the more popular term Garbage Collection (GC). It tries to address the memory leak issue by still requiring explicit memory allocation but removing the need to de-allocate memory. Instead, the runtime provides facilities that hunt down unused memory and automatically free it without user-code intervention.

As you might guess, even though GC increases robustness and reduces coding overhead significantly, it also has performance implications. This makes using GC hard in a whole bunch of scenarios. E.g. you typically do not expect to see a GC being used in a real-time system or a system driver (though there are exceptions to this as well); but you do expect to see it in a web-development platform that manages a huge amount of memory and can sometimes live with a bit of response delay when the GC is actually running.

Putting it all together

Most modern programming languages support all three forms of storage. E.g. both C++ and C# support all three.

Even though all of them might be available at the same time, choosing the right one is key to ensuring robustness and meeting the performance goals of a program. Unfortunately, a lot of developers take a casual approach to this and later land in trouble when the memory footprint becomes larger and harder to handle.

Consider the following pseudo-code

void foo()
{
    MyType m1;                                   // Option-1 stack allocation
    MyType* m2 = AllocateOnHeap(sizeof(MyType)); // Option-2 heap allocation
    ...
    if (SomeCond)
       foo();      // recursive call
    ...
    DeAllocate(m2); // free the heap allocation made for Option-2
}


Here foo is called recursively. If MyType is of significant size and we use Option-1 (stack allocation), then the depth of recursion the method supports will be significantly reduced, because each function call will need additional stack space to accommodate MyType. However, if we use the heap (Option-2), the stack needed will be significantly smaller (increasing the supported depth of recursion), but since heap allocation is much slower the speed will suffer. Based on the size of MyType and the needs of the program, the allocation has to be chosen carefully.
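To make the trade-off concrete, here is a rough back-of-the-envelope version in C++ (the 64 KB size and the names fooStack/fooHeap are invented for illustration): with a 1 MB thread stack, the stack variant supports only about 16 nested calls before the stack is exhausted, while the heap variant keeps each frame small at the cost of an allocation per call.

#include <cstdlib>

struct MyType { char buffer[64 * 1024]; };   // 64 KB, size invented for illustration

// Option-1: ~64 KB of stack per call; with a 1 MB stack, roughly 16 levels
// of recursion exhaust it.
void fooStack(int depth)
{
    MyType m1;                               // stack allocation
    (void)m1;
    if (depth > 0)
        fooStack(depth - 1);
}

// Option-2: only a pointer lives on the stack per call, so recursion can go
// much deeper, but every call pays for a heap allocation and deallocation.
void fooHeap(int depth)
{
    MyType* m2 = static_cast<MyType*>(std::malloc(sizeof(MyType)));
    if (depth > 0)
        fooHeap(depth - 1);
    std::free(m2);
}

int main()
{
    fooStack(10);    // safe: well under the stack limit
    fooHeap(1000);   // far deeper recursion, paid for with slower allocations
    return 0;
}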

Based on their target domain, different languages have chosen different combinations and defaults among these allocation strategies.

Comments



  • Anonymous
    January 25, 2009
    Just a small historical inaccuracy -- you state that Fortran was "early 50's". Yes, the language was under development for several years, but it was first delivered to customers in 1957, so I'd suggest "late 50's". While I'm here, another historical fact only tangentially related to memory, namely subscripting. The original Fortran language allowed an array to have a maximum of 3 subscripts. Why? Because the compiler produced code for the IBM 704, which had 3 index registers. And the compiler had to be an optimizing compiler (computers then being so expensive but slow, Fortran had to compete with hand-coded assembler code). But the compiler team didn't have time to work out an algorithm to stuff (say) 7 subscripts into 3 registers. (That came almost a decade later, from IBM's Les Belady, via LRU and related algorithms.) So the original Fortran language limited arrays to a maximum of 3 dimensions.



  • Anonymous
    November 22, 2009
You stated that "we use option-1 of using stack allocation then the depth of recursion supported by the method will be significantly reduced because each function call will need additional stack space to accommodate MyType." Why will the recursion supported by the method be reduced in Option-1 and not in Option-2? I think the memory requirement will be the same in both cases.

  • Anonymous
    November 22, 2009
Akhlesh, the memory requirement will be almost the same in both cases; the question is which memory. The stack is in general more constrained than the heap, and you cannot easily handle running out of it. In case of allocating on the stack, the stack will grow much faster, hence you'll run out of it sooner, and the depth of recursion that your code will support will be smaller. However, on using the heap, even though you'll use almost the same amount of memory, the code will support a larger depth of recursion as the data will be allocated on the larger heap (albeit more slowly).

  • Anonymous
    March 19, 2010
Fortran stands for FORmula TRANslator and was intended to be used in solving math and physics problems. Obviously 3 subscripts is just "what the doctor ordered" for the 3-dimensional case (who needs more? :) There was not too much room for recursion in numeric algorithms, and there were techniques for heap allocation via assembler.