The Design of the .Net Compact Framework CLR, Part 1: Overview and Background

In the last few weeks I've been working on a series of posts that describes why various design decisions were made when building the .Net Compact Framework CLR. In this first post, I describe the environmental factors that have influenced the design and provide an overview of how the CLR manages memory. Subsequent posts will follow with details on the main design tenets of the JIT compiler, garbage collector, and class loader, as well as information about how to analyze the memory usage of your Compact Framework application.

Throughout the series I'll be noting design decisions made when building the Compact Framework's CLR that are quite different from those made when building the CLR in the full .Net Framework.

---------

On the surface, the .Net Compact Framework appears to be a direct port of Microsoft’s .Net Framework runtime environment. At a high level, the similarities between the two products are intentional and provide many benefits. Both the Compact Framework and the full .Net Framework have the same programming model, use the same file format, share the same compilers, and so on. The primary benefit of having the two programming environments so similar is that developers who have learned to program in one environment can quickly become productive in the other. For example, it takes almost no time for a developer familiar with the .Net Framework to write his first device application using the .Net Compact Framework.

Despite these similarities on the surface, when you look under the covers you’ll see that the implementation of the Compact Framework, especially its CLR component, is drastically different from its desktop counterpart. Not surprisingly, the environment in which the Compact Framework runs has directly influenced the architecture of its key internal components. The two environmental factors that have most influenced the way the Compact Framework CLR is built are the requirement to run in small amounts of memory and the need to be portable across both processor types and operating systems.

This series of posts describes the internal workings of the CLR by looking at how the constraints under which it must run have influenced its design. Throughout the series I’ll point out where the design of the CLR has intentionally diverged from that of the desktop in order to run managed code in memory-constrained environments.

Understanding the internals of the CLR may seem like an esoteric topic, but a deeper knowledge of how the platform works underneath your application will give you a better understanding of how your application uses resources on the device and how to diagnose problems related to memory usage or performance when they occur.

The Compact Framework runs on a number of different operating systems, but the largest installed base runs on Windows CE. Let’s start by taking a look at the Windows CE memory architecture. Understanding the services that the underlying operating system provides to the Compact Framework establishes a basis for understanding why the Compact Framework team made the design decisions they did when building the CLR.

The Windows CE Memory Model

As a 32-bit operating system, Windows CE can address 4 GB of virtual address space just as the desktop versions of Windows can. However, the way in which this address space is partitioned has a direct effect on the architecture of Windows CE applications. The primary constraint, and the most common reason for Windows CE applications to experience memory problems, is that each application is granted just 32 MB of virtual address space. Memory can be allocated outside of this 32 MB space, but that memory is global to all applications on the device - it is not private to the application that allocates the memory. The description of the memory model presented here is an overview aimed at those aspects of Windows CE that we’ll need to understand as we explore how the Compact Framework uses memory. A more detailed description of the Windows CE memory model can be found in the Microsoft Press book “Programming Windows CE” by Doug Boling.

The following figure shows how Windows CE partitions the memory available for use by applications.

Figure 1
Memory available to Windows CE applications

As can be seen, there are 3 address space partitions that come into play when an application runs:
 

  • System Code Space. The read-only code pages for all system dlls, such as coredll.dll, are loaded into this space. There is one system code space per device so all applications share the code pages for the system dlls. Windows CE can page portions of this memory to storage and pull them back later if needed.
  • Per-Process Address Space. As described, each Windows CE process is allocated 32 MB of virtual memory. The stack for each thread in the application, the code pages for the application’s executable files, and any heaps allocated and used by the application are among the elements stored in this space.
  • High Memory Area. The 1 GB high memory area provides virtual address space from which requests for large amounts of virtual memory can be satisfied. Any calls to VirtualAlloc that request more than 2 MB of virtual memory will be satisfied out of this space. In addition, all memory-mapped files are stored in high memory. All data stored in the high memory area is visible to applications on the device. Windows CE can swap pages from the high memory area to storage and back if needed. (A sketch of such a large allocation follows this list.)
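
To make the high memory area less abstract, here is a minimal sketch that reserves a large block of virtual memory directly via P/Invoke. It is illustrative only and assumes you are running on Windows CE, where the Win32 memory APIs such as VirtualAlloc are exported from coredll.dll; per the description above, a reservation of this size would be expected to come from the shared high memory area rather than the process's private 32 MB slot.

using System;
using System.Runtime.InteropServices;

// Illustrative sketch: reserve and release a large block of virtual memory
// on Windows CE via P/Invoke. Per the description above, allocations larger
// than the 2 MB threshold are satisfied from the shared high memory area
// rather than the 32 MB per-process slot.
class LargeAllocationSketch
{
    const uint MEM_RESERVE   = 0x2000;
    const uint MEM_RELEASE   = 0x8000;
    const uint PAGE_NOACCESS = 0x01;

    // On Windows CE the Win32 memory APIs live in coredll.dll.
    [DllImport("coredll.dll", SetLastError = true)]
    static extern IntPtr VirtualAlloc(IntPtr lpAddress, uint dwSize,
                                      uint flAllocationType, uint flProtect);

    [DllImport("coredll.dll", SetLastError = true)]
    static extern bool VirtualFree(IntPtr lpAddress, uint dwSize, uint dwFreeType);

    public static void Run()
    {
        // Reserve 8 MB - well above the large-allocation threshold.
        IntPtr block = VirtualAlloc(IntPtr.Zero, 8 * 1024 * 1024,
                                    MEM_RESERVE, PAGE_NOACCESS);

        // The returned address gives a rough hint of where the reservation
        // landed relative to the 32 MB per-process slot.
        Console.WriteLine("Reserved at: 0x{0:X8}", block.ToInt32());

        if (block != IntPtr.Zero)
            VirtualFree(block, 0, MEM_RELEASE);
    }
}

Managed applications rarely need to call VirtualAlloc themselves; the CLR makes these calls on the application's behalf, which is exactly the behavior the rest of this series explores.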

The .Net Compact Framework uses memory from all three of these partitions when running an application. As we’ll see, it is the Compact Framework’s aggressive management of the per-process address space that provides the most benefit to developers of managed applications.

.NET Compact Framework Memory Management Basics

Increased developer productivity is one of the main reasons driving the broad adoption of both the .Net Framework and the .Net Compact Framework. Discussions about how the CLR contributes to developer productivity often focus on features like automatic memory management (garbage collection), processor independence and so on. While the Compact Framework definitely provides these benefits, it also provides additional features to help make developers more productive on devices. In particular, the .Net Compact Framework CLR manages the memory in the per-process 32 MB virtual address space on behalf of the developer. By insulating developers from having to worry about when to allocate and free memory in order to keep their application running well within the 32 MB limit, the Compact Framework makes it much easier to write applications that behave well on memory-constrained devices. As we’ll see throughout this series of posts, many of the key design decisions made when building the .Net Compact Framework CLR were made in order to manage the 32 MB per-process virtual memory limit efficiently. Said differently, the relatively small virtual address space that Windows CE grants each process led the Compact Framework team to design a platform that enables applications to run well within that constraint.
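
Since so much of what follows hinges on that 32 MB budget, it can be useful to see how much of it your application has consumed at any point in time. The following is a minimal sketch rather than a supported Compact Framework API: it P/Invokes GlobalMemoryStatus from coredll.dll, and on Windows CE the dwTotalVirtual and dwAvailVirtual fields it fills in should describe the calling process’s own virtual address slot.

using System;
using System.Runtime.InteropServices;

// Minimal sketch: query the per-process virtual memory budget on Windows CE.
// GlobalMemoryStatus is exported from coredll.dll; dwTotalVirtual and
// dwAvailVirtual describe the 32 MB per-process slot discussed above.
class VirtualMemoryCheck
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORYSTATUS
    {
        public uint dwLength;
        public uint dwMemoryLoad;
        public uint dwTotalPhys;
        public uint dwAvailPhys;
        public uint dwTotalPageFile;
        public uint dwAvailPageFile;
        public uint dwTotalVirtual;
        public uint dwAvailVirtual;
    }

    [DllImport("coredll.dll")]
    static extern void GlobalMemoryStatus(ref MEMORYSTATUS status);

    public static void Report()
    {
        MEMORYSTATUS status = new MEMORYSTATUS();
        status.dwLength = (uint)Marshal.SizeOf(typeof(MEMORYSTATUS));
        GlobalMemoryStatus(ref status);

        Console.WriteLine("Total virtual: {0} KB", status.dwTotalVirtual / 1024);
        Console.WriteLine("Avail virtual: {0} KB", status.dwAvailVirtual / 1024);
    }
}

Calling Report() before and after a memory-intensive operation gives a rough feel for how quickly the per-process slot fills up.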

Before describing the specific design decisions made to run on memory-constrained devices, we need to summarize all the runtime data the operating system and the CLR create when executing a managed application. After I’ve described the categories of data required to run an application, I’ll show where the CLR allocates the runtime data relative to the Windows CE memory partitions described above. Consider the categories of runtime data that will require memory when the following simple “Hello World” application is run.

using System;
using System.ComponentModel;
using System.Drawing;
using System.Text;
using System.Windows.Forms;

namespace HelloDevice
{
    public class Form1 : Form
    {
        private MainMenu mainMenu1;
        private Label label1;

        public Form1()
        {
            InitializeComponent();
        }

        private void InitializeComponent()
        {
            this.mainMenu1 = new System.Windows.Forms.MainMenu();
            this.label1 = new System.Windows.Forms.Label();

            // Position the label
            this.label1.Location = new System.Drawing.Point(64, 81);
            this.label1.Size = new System.Drawing.Size(100, 20);
            this.label1.Text = "Hello Device!";

            // Size the form
            this.AutoScaleDimensions = new System.Drawing.SizeF(96F, 96F);
            this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Dpi;
            this.ClientSize = new System.Drawing.Size(240, 268);
            this.Controls.Add(this.label1);
            this.Menu = this.mainMenu1;
            this.MinimizeBox = false;
            this.Text = "Simple App";
        }
    }

    static class Program
    {
        static void Main()
        {
            Application.Run(new Form1());
        }
    }
}

As you can see, this program creates a form with a single label containing the text “Hello Device!”. I’ve grouped the memory used at runtime into the following 6 categories:

  • Native code pages for the CLR dlls. The Compact Framework CLR consists of two dlls, mscoree.dll and mscoree2_0.dll. These dlls along with the native portion of the Compact Framework’s Windows Forms implementation, netcfagl2_0.dll, are considered system dlls. As a result, the code for these dlls is loaded into the system code space.
  • Application and Class Library assemblies. The CLR must load into memory all of the IL code for both the application and the class libraries it uses, so that the JIT compiler has access to the IL it needs when generating the corresponding native code instructions, and the class loader has access to the metadata it needs when laying out the data structures that describe the types. In addition to the assembly containing the “Hello World” code, the listing above also requires the IL for mscorlib, System, System.Windows.Forms and System.Drawing. The files containing these assemblies are memory-mapped into the high memory area as they are needed.
  • JIT-compiled native code. As an application executes, the JIT compiler is called upon to generate the native code for each method that is called. This native code is stored in a buffer in the per-process virtual address space. (A small timing sketch follows this list.)
  • Allocated reference types. The listing above allocates a number of reference types. In addition to the main form itself, instances of MainMenu and Label are created; the Point and Size values used to position the label are value types, so they don’t come from the GC heap themselves. More reference types are likely allocated within the implementation of the class libraries as well. The memory for all reference types comes from the garbage collector’s heap. The GC heap is a per-application heap stored in the address space specific to the application. (An allocation sketch follows this list.)
  • In-memory representation of type metadata. As classes and their methods are needed during the execution of a program, the CLR reads their metadata representation from the copy of the assembly mapped into the high memory area. The metadata is used to generate an in-memory representation of the classes and their methods. This representation is stored in a heap called the AppDomain heap. The AppDomain heap is stored in the per-process virtual address space.
  • Miscellaneous allocations. In addition to the categories of allocations described above, the CLR generates a small amount of additional data as it runs an application. The data in this category includes stubs that the JIT compiler uses to determine whether a method has been compiled and other short-lived data elements.
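
The lazy, per-method nature of JIT compilation mentioned in the JIT-compiled native code item above is easy to observe. The sketch below is illustrative only: it times the first call to a method against a second call using Environment.TickCount. The first call includes the one-time cost of compiling the method’s IL and storing the native code in the per-process buffer, although for a method this small the difference may fall below the timer’s resolution.

using System;

// Illustrative sketch: the first call to a method pays the one-time JIT cost
// of compiling its IL to native code; later calls reuse the native code
// already stored in the per-process code buffer.
static class JitTiming
{
    static int SumTo(int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += i;
        return total;
    }

    public static void Run()
    {
        int start = Environment.TickCount;
        SumTo(1000);                      // first call: SumTo is JIT-compiled here
        int firstCall = Environment.TickCount - start;

        start = Environment.TickCount;
        SumTo(1000);                      // second call: existing native code is reused
        int secondCall = Environment.TickCount - start;

        Console.WriteLine("First call: {0} ms, second call: {1} ms",
                          firstCall, secondCall);
    }
}

The point is not the absolute numbers, but that the compilation cost is paid once per method and the resulting native code then occupies space in the per-process slot.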

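To see the GC heap at work, as noted in the reference type item above, a simple experiment is to allocate a batch of reference types, drop the references, and ask the collector to run. This is a minimal sketch; GC.Collect is part of the Compact Framework, and I’m assuming GC.GetTotalMemory is available in the version you are targeting.

using System;

// Minimal sketch: reference types are allocated from the GC heap and are
// reclaimed by the collector once no references to them remain.
static class GcHeapSketch
{
    public static void Run()
    {
        // Assumes GC.GetTotalMemory is available in the targeted
        // Compact Framework version.
        long before = GC.GetTotalMemory(false);

        // Each byte array is a reference type allocated from the GC heap.
        byte[][] buffers = new byte[100][];
        for (int i = 0; i < buffers.Length; i++)
            buffers[i] = new byte[1024];

        long afterAlloc = GC.GetTotalMemory(false);

        // Drop the references and let the collector reclaim the space.
        buffers = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();

        long afterCollect = GC.GetTotalMemory(false);

        Console.WriteLine("Before: {0}, after alloc: {1}, after collect: {2}",
                          before, afterAlloc, afterCollect);
    }
}

How the GC heap grows and shrinks within the per-process slot is covered in more detail later in this series.
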
Now that we’ve seen the categories of data needed to run a managed application, let’s map those allocations back to the Windows CE memory model described earlier. Figure 2 shows which of the Windows CE memory partitions are used to store each category of runtime data.

Figure 2
The mapping between Compact Framework memory allocations and the Windows CE memory model.

In looking at Figure 2, it’s important to note which memory allocations have a per-process cost and which are shared among all processes. Recall from our discussion of the Windows CE memory model that the code pages in the system code space and all allocations made in the high memory area are shared among all applications, while all allocations made in the per-process space are private to that process. Because the per-process costs are not shared, it’s important to focus on making wise use of each process’s 32 MB of virtual address space. As a result, most of the design decisions we’ll look at throughout the rest of this series are those that affect the heaps that store jitted code, reference types, in-memory type representations and the various smaller allocations the CLR makes from the per-process area. For more information on the expected size of these per-process heaps, see Mike Zintel's post on Advanced Compact Framework Memory Management.

Now that we've covered the basics, the next post will describe some basic design tenets of the .Net Compact Framework's JIT compilers.

This posting is provided "AS IS" with no warranties, and confers no rights.
