The Design of the .Net Compact Framework CLR, Part II: JIT Compiler Design Considerations

This is the second in a series of posts describing the design of the .Net Compact Framework CLR. If you missed Part 1, you can find it here. This installment describes the basic design tenets of the .Net Compact Framework JIT compilers.

----

The primary design difference, at least with respect to memory usage, between the .Net Compact Framework JIT compilers and those in the full .Net Framework is the ability to free jitted code and return memory to the operating system during times of memory pressure. As you might expect, the motivation for this design decision comes from the fact that the heap used to store jitted code lives in the application's private 32 MB address space (see Part 1 for more background). In addition to being relatively small, the private address space is never paged, so reducing pressure on this space when needed is absolutely essential to running applications well on memory constrained devices.

As a program executes, the jit compiler allocates memory in its heap to store the native code for each method as it is compiled. Because compilation and memory allocation happen per-method on the fly, each allocation made to the heap is relatively small. That is, the jit heap typically grows in small increments. Left unchecked, the jit heap grows without bound as the program runs. In earlier releases of the Compact Framework, the size of the jit heap was capped at a fixed number. In version 2, this cap has been removed, so the heap grows as long as new methods need to be compiled.
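To make the per-method flow concrete, here is a minimal C++ sketch of the general idea: native code is produced and cached the first time a method is called, so the code heap grows in small, method-sized increments, and the whole cache can later be discarded. The names (JitHeap, CompileFromIL, and so on) are illustrative assumptions, not the Compact Framework's actual implementation.

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

using MethodId   = uint32_t;
using NativeCode = std::vector<uint8_t>;   // stands in for a block of machine code

class JitHeap {
public:
    // Return native code for a method, compiling it on first use. Each first
    // call adds one small, method-sized allocation to the heap.
    const NativeCode& GetOrCompile(MethodId id) {
        auto it = cache_.find(id);
        if (it != cache_.end())
            return it->second;                            // already jitted, no growth
        return cache_.emplace(id, CompileFromIL(id)).first->second;
    }

    // Code pitching: throw away the cached native code so the memory can be
    // reused. (The real CLR keeps code for methods that are still executing.)
    void Pitch() { cache_.clear(); }

private:
    // Placeholder for the real IL-to-native compiler.
    NativeCode CompileFromIL(MethodId) { return NativeCode(64, 0x90); }

    std::unordered_map<MethodId, NativeCode> cache_;
};
```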

There are three scenarios in which the majority of the jit heap will be freed and the memory returned to the OS (I say majority because the Compact Framework must always keep the jitted code for any method that is currently executing). First, the jit heap is shrunk when the CLR receives a failure from the operating system while trying to allocate more memory. The CLR takes this failure as an indication that available memory is scarce, and releases all the code it can from the jit heap. The act of releasing native code from the jit heap is termed code pitching. Second, code is pitched when an application switches to the background. On Windows Mobile, applications typically don't close, but are instead moved to the background. By pitching code when an application moves to the background, the CLR makes more memory available to the foreground application, thereby increasing the number of applications that can run on a device simultaneously. Finally, the CLR will pitch jitted code when a managed application receives a WM_HIBERNATE message from Windows CE. WM_HIBERNATE is sent when the OS detects that resources are running low. Code pitching in response to WM_HIBERNATE is part of the CLR's overall strategy of freeing memory and other resources when device resources become scarce.
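As a rough illustration of how the three triggers converge on the same behavior, the following sketch funnels each one into a single pitching routine. The enum names and the callback are assumptions made for illustration; in the real CLR, pitching runs as part of a full garbage collection and preserves code for currently executing methods.

```cpp
#include <functional>

// The three situations described above, modeled as a single trigger type.
enum class PitchTrigger {
    AllocationFailure,   // the OS refused a request to grow the jit heap
    MovedToBackground,   // the application lost the foreground
    Hibernate            // Windows CE broadcast WM_HIBERNATE (resources low)
};

// Every trigger ends up in the same place: release jitted code back to the OS.
void HandleMemoryPressure(PitchTrigger trigger,
                          const std::function<void()>& pitchJitCode) {
    switch (trigger) {
        case PitchTrigger::AllocationFailure:
        case PitchTrigger::MovedToBackground:
        case PitchTrigger::Hibernate:
            pitchJitCode();   // e.g. JitHeap::Pitch() from the sketch above
            break;
    }
}
```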

Code pitching happens as part of a full garbage collection as you’ll see when I discuss automatic memory management in a future post in this series.

Figure 3: The size of the JIT heap over the lifetime of an application.

A few things are worth noting about Figure 3. First, the two low points in the graph correspond to times when the application was switched to the background and the size of the heap was reduced due to code pitching. Also, notice that when the application is launched, much more code is jitted than when the application is brought back after being in the background. This is presumably because the application contained some initialization code that was only needed when the application started.

Because the CLR can throw away native code under memory pressure or when an application moves to the background, it is quite possible that the same IL code will need to be jit compiled again as the application continues running. This fact leads to our second major jit compiler design decision: compilation speed often takes precedence over the quality of the resulting native code. As with all good compilers, the Compact Framework jit compiler does some basic optimizations, but because code must be regenerated quickly for applications to remain responsive, more extensive optimizations generally take a back seat to sheer compilation speed.

The final key design tenet of the jit compiler is not related to memory usage at all, but is instead aimed at making the jit compiler itself more portable. As I described in Part 1, the environment in which the Compact Framework runs requires it not only to run well on memory constrained devices, but also to be portable across a number of processors. The .Net Compact Framework currently runs on several processor families, including x86, ARM, SH, and MIPS, and there are requirements to support more. Because of the need to span a wide range of devices, the jit compiler is architected to minimize the time it takes to support a new processor. One technique used to increase portability is to keep processor-specific optimizations to an absolute minimum.
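One common way to structure a compiler for that kind of portability, offered here only as a sketch and not as the Compact Framework's actual architecture, is to keep IL handling and the simple optimizations processor-neutral and push only a small instruction-emitting interface into per-CPU backends. The interface and function names below are assumptions for illustration.

```cpp
#include <cstdint>
#include <vector>

// The processor-specific surface is kept deliberately small: each new CPU
// only needs to implement this emitter interface.
class ICodeEmitter {
public:
    virtual ~ICodeEmitter() = default;
    virtual void EmitPrologue() = 0;
    virtual void EmitLoadConstant(int reg, int32_t value) = 0;
    virtual void EmitAdd(int dst, int lhs, int rhs) = 0;
    virtual void EmitReturn(int reg) = 0;
    virtual std::vector<uint8_t> Finish() = 0;
};

// Backends such as ArmEmitter, X86Emitter, ShEmitter, and MipsEmitter would
// each implement ICodeEmitter; the code below stays shared across all CPUs.
std::vector<uint8_t> CompileAddTwoConstants(ICodeEmitter& e, int32_t a, int32_t b) {
    e.EmitPrologue();
    e.EmitLoadConstant(/*reg=*/0, a);
    e.EmitLoadConstant(/*reg=*/1, b);
    e.EmitAdd(/*dst=*/0, /*lhs=*/0, /*rhs=*/1);
    e.EmitReturn(/*reg=*/0);
    return e.Finish();
}
```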

Why no Native Images?

The desktop version of the .Net Framework uses a technique called native images to dramatically reduce, and sometimes eliminate altogether, the amount of IL code that needs to be jit compiled when an application is loaded. By taking advantage of native images, applications can generally start faster. A native image is a file on disk that contains the native code produced by pre-compiling an assembly's IL. When the .Net Framework is installed, it invokes the jit compiler to generate the native CPU instructions for class libraries such as mscorlib, System.Windows.Forms, and so on. Using the previously generated native code stored in the native image eliminates the need to jit compile the IL code for these assemblies when an application starts. Customers can also generate native images for their own assemblies (see the documentation for the ngen.exe tool in the .Net Framework SDK).

The .Net Compact Framework does not use native images primarily because of their size. Depending on the underlying native instruction set, the native code generated when an assembly is jit compiled is between 3 and 4 times the size of the original IL file. Uncompressed, the .Net Compact Framework class libraries are approximately 4.5 MB, so the corresponding native images could be roughly 13.5 to 18 MB, which is a significant portion of the memory available on a typical device. One could also consider storing the native images on a removable storage card if one is present. However, reading a file from a storage card is relatively slow, so it's not clear that startup time would be any better than simply compiling the methods as they are needed.

The Compact Framework's JIT team revisits the issue of native images with each major release, so it's possible that native images, or a similar technique, will find their way into the CLR in a future release.

The next post in this series will look at the design considerations that affected how the Compact Framework's Garbage Collector was built.

This posting is provided "AS IS" with no warranties, and confers no rights.

Comments

  • Anonymous
    June 02, 2008
    Find out what's inside the .NET Compact Framework CLR. A series of articles from the developer.