

Chapter 8 — Improving Enterprise Services Performance

 

Retired Content

This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.

patterns & practices Developer Center

Improving .NET Application Performance and Scalability

J.D. Meier, Srinath Vasireddy, Ashish Babbar, and Alex Mackman
Microsoft Corporation

May 2004

Related Links

Home Page for Improving .NET Application Performance and Scalability

Chapter 5 — Improving Managed Code Performance

Checklist: Enterprise Services Performance

Send feedback to Scale@microsoft.com

patterns & practices Library

Summary: This chapter provides guidelines for designing and building scalable serviced components that take advantage of COM+ services provided by Enterprise Services. This chapter presents coding guidelines to efficiently manage state, object pooling, resources, transactions, thread synchronization, and much more.

Contents

Objectives
Overview
How to Use This Chapter
Component Services Provided By Enterprise Services
Architecture
Prescriptive Guidance for Choosing Web Services, Enterprise Services, and .NET Remoting
Performance and Scalability Issues
Design Considerations
Object Pooling
State Management
Resource Management
Queued Components
Loosely Coupled Events
Transactions
Security
Threading
Synchronization Attribute
Summary
Additional Resources

Objectives

  • Design serviced components for optimum performance.
  • Monitor and tune object pooling.
  • Combine object pooling and just-in-time (JIT) activation for optimum performance.
  • Use a trusted identity and avoid impersonation to improve scalability.
  • Manage resources efficiently.
  • Choose an appropriate transaction model.
  • Avoid threading bottlenecks.

Overview

The Microsoft .NET Framework provides access to COM+ services from managed code through Enterprise Services (ES). To use Enterprise Services, create components by deriving managed classes from the ServicedComponent base class. Enterprise Services provides a broad range of important infrastructure-level features for middle tier components, including distributed transaction management, object pooling, and role-based security.

If your application requires COM+ service features, you need to know how to use them efficiently. When used properly, features such as object pooling and JIT activation can improve your application's performance. When used improperly, your application's performance can suffer. This chapter describes how to optimize the performance of your application's Enterprise Service middle tier and how to develop efficient serviced components.

How to Use This Chapter

Use this chapter to apply proven strategies and best practices for designing and writing high-performance Enterprise Services code. To get the most out of this chapter:

  • Jump to topics or read from beginning to end. The main headings in this chapter help you locate the topics that interest you. Alternatively, you can read the chapter from beginning to end to gain a thorough appreciation of performance and scalability design issues.
  • Use the "Architecture" section of this chapter to understand how Enterprise Services works. By understanding the architecture, you can make better design and implementation choices.
  • Use the "Design Considerations" section of this chapter to understand the higher-level decisions that will affect implementation choices for Enterprise Services code.
  • Read Chapter 13, "Code Review: .NET Application Performance." See the "Enterprise Services" section for specific guidance.
  • Measure your application performance. Read the "Enterprise Services" and ".NET Framework Technologies" sections of Chapter 15, "Measuring .NET Application Performance" to learn about the key metrics that can be used to measure application performance. It is important that you be able to measure application performance so that performance issues can be accurately targeted.
  • Test your application performance. Read Chapter 16, "Testing .NET Application Performance" to learn how to apply performance testing to your application. It is important that you apply a coherent testing process and that you be able to analyze the results.
  • Tune your application performance. Read the "Enterprise Services" section of Chapter 17, "Tuning .NET Application Performance" to learn how to resolve performance issues identified through the use of tuning metrics.
  • Use the accompanying checklist in the "Checklists" section of this guide. Use the "Checklist: Enterprise Services Performance" checklist to quickly view and evaluate the guidelines presented in this chapter.

Component Services Provided By Enterprise Services

Table 8.1 summarizes the COM+ services that are available to managed classes that derive from ServicedComponent.

Table 8.1   COM+ Services

  • Automatic Transactions: Supports declarative transaction-processing features.
  • BYOT (Bring Your Own Transaction): Enables a form of transaction inheritance.
  • Compensating Resource Managers (CRMs): Applies atomicity and durability properties to nontransactional resources.
  • Just-In-Time Activation: Activates an object on a method call and deactivates it when the call returns.
  • Loosely Coupled Events (LCE): Provides a loosely coupled publisher/subscriber notification service.
  • Object Construction: Passes a persistent string value to a class instance on construction of the instance.
  • Object Pooling: Provides a pool of ready-made objects.
  • Queued Components: Provides component-based asynchronous message queuing.
  • Role-based Security: Applies security permissions based on role membership.
  • Shared Property Manager: Shares state among multiple objects within a server process.
  • SOAP Services: Publishes serviced components as Extensible Markup Language (XML) Web services to support Simple Object Access Protocol (SOAP)-based interaction over Hypertext Transfer Protocol (HTTP).
  • Synchronization (Activity): Manages concurrency using declarative attributes.
  • XA Interoperability: Supports the X/Open transaction-processing model.

More Information

For more information about the component services provided by Enterprise Services, see ".NET Enterprise Services and COM+ 1.5 Architecture" on MSDN at https://msdn.microsoft.com/en-us/library/ms973484.aspx.

Architecture

Serviced components run inside COM+ applications, which could be library applications or server applications. Library applications run inside the caller process address space, and server applications run inside a separate process (Dllhost.exe) on either the local or a remote computer.

Each call to a component in a server application requires an interprocess communication (IPC) call and marshaling, together with additional security checks. Server applications also use COM interop. A runtime callable wrapper (RCW) is created when calling unmanaged COM+ components. Calls are dispatched through the RCW to the remote object using DCOM. Library applications run inside the caller's process address space, so they do not incur cross-process marshaling overhead.

The general architecture of an Enterprise Services solution is shown in Figure 8.1.


Figure 8.1: Enterprise Services architecture

Boundary Considerations

A call to a serviced component crosses a number of boundaries. Each time a boundary is crossed, a performance hit occurs. Sometimes this is necessary and unavoidable. Table 8.2 shows common boundaries that a call traverses.

Table 8.2   Boundaries and Associated Performance Hits

  • Machine: marshaling, network latency, security.
  • Process: marshaling, IPC, security.
  • Apartment: thread switch and marshaling, security.
  • Application domain: marshaling, security.
  • Context: interception services provided by the lightweight proxy, security.

Reducing the number of boundaries that a call must traverse can optimize the performance of calling your components. Within an Enterprise Services application, the main boundary performance hit occurs when a call needs to be marshaled from one apartment to another, because this can entail a thread switch, marshaling, and serialization. Crossing application domains may require less overhead than crossing apartments because application domains do not require a thread switch.

By using a consistent threading model for all components, you can avoid cross-apartment calls and the associated overhead. Cross-apartment calls use a heavyweight proxy that performs the thread switch and marshaling, while cross-context (intra-apartment) calls use a lightweight proxy that does not perform a thread switch. The purpose of the lightweight proxy is to provide interception services, add services to the component, and handle the marshaling of interface pointers. If you avoid cross-apartment thread switches, you avoid the overhead.

Figure 8.2 summarizes the main boundaries. The thread in Process A initially calls a serviced component inside a library application (at point (A)), and then a call is made to a serviced component inside a server application (at point (B)).


Figure 8.2: Enterprise Services architecture showing boundaries

Figure 8.2 shows that two security checks occur when a call enters a server application running in Dllhost.exe. The first check occurs when the COM service control manager (SCM) determines whether to launch the process. If the process is already running, the only part of this check that still occurs is a process boundary security check. Whether the call succeeds and passes the process boundary check is determined by whether the caller is a member of any role defined within the server application. If the caller is a member of any role, the process can be launched (if necessary) and the call can proceed to a component inside the server application. The second security check occurs when the call enters the server application. If component-level access checks are enabled, the caller must be a member of at least one role assigned to the target method, or its interface or class. Note that this second level of checking also applies to library applications. Within an application, no further security checks are performed.

Prescriptive Guidance for Web Services, Enterprise Services, and .NET Remoting

Services are the preferred communication technique to use across application boundaries, including platform, deployment, and trust boundaries. You can implement services today by using Web services or Web Services Enhancements (WSE). Although WSE provides a rich set of features, you should evaluate whether or not you can accept the WSE support policy. Enterprise Services provides component services such as object pooling, queued components, a role-based security model, and distributed transactions, and should be used as an implementation detail within your service when you need those features. .NET remoting is preferred for cross-application domain communication within the same process.

Object Orientation and Service Orientation

When you design distributed applications, use the services approach whenever possible. Although object orientation provides a pure view of what a system should look like and is effective for producing logical models, an object-based approach can fail to consider real-world factors, such as physical distribution, trust boundaries, and network communication, as well as nonfunctional requirements, such as performance and security.

Table 8.3 summarizes some key differences between object orientation and service orientation:

Table 8.3   Object Orientation vs. Service Orientation

Object orientation: Assumes a homogeneous platform and execution environment.
Service orientation: Assumes a heterogeneous platform and execution environment.

Object orientation: Shares types, not schemas.
Service orientation: Shares schemas, not types.

Object orientation: Assumes cheap, transparent communication.
Service orientation: Assumes variable-cost, explicit communication.

Object orientation: Objects are linked; object identity and lifetime are maintained by the infrastructure.
Service orientation: Services are autonomous; security and failure isolation are a must.

Object orientation: Typically requires synchronized deployment of both client and server.
Service orientation: Allows continuous, separate deployment of client and server.

Object orientation: Is easy to conceptualize and thus provides a natural model to follow.
Service orientation: Builds on ideas from component software and distributed objects; the dominant theme is to manage and reduce sharing between services.

Object orientation: Provides no explicit guidelines for state management and ownership.
Service orientation: Owns and maintains state, or uses reference state.

Object orientation: Assumes a predictable sequence, timeframe, and outcome of invocations.
Service orientation: Assumes message-oriented, potentially asynchronous, and long-running communications.

Object orientation: Goal is to transparently use functions and types remotely.
Service orientation: Goal is to provide inter-service isolation and wire interoperability based on standards.

Application Boundaries

Common application boundaries include platform, deployment, trust, and evolution. (Evolution refers to whether or not you develop and upgrade applications together.) When you evaluate architecture and design decisions that affect your application boundaries, consider the following:

  • Objects and remote procedure calls (RPC) are appropriate within boundaries.
  • Services are appropriate across and within boundaries.

Recommendations for Web Services, Enterprise Services, and .NET Remoting

When you are working with ASP.NET Web services, Enterprise Services, and .NET remoting, Microsoft recommends that you:

  • Build services using ASP.NET Web services.
  • Enhance your ASP.NET Web services with WSE if you need the WSE feature set and you can accept the support policy.
  • Use object technology, such as Enterprise Services or .NET remoting, within a service implementation.
  • Use Enterprise Services inside your service boundaries for the following scenarios:
    • You need the Enterprise Services feature set (such as object pooling; declarative, distributed transactions; role-based security; and queued components).
    • You are communicating between components on a local server and you have performance issues with ASP.NET Web services or WSE.
  • Use .NET remoting inside your service boundaries in the following scenarios:
    • You need in-process, cross-application domain communication. Remoting has been optimized to pass calls between application domains extremely efficiently.
    • You need to support custom wire protocols. Understand, however, that this customization will not port cleanly to future Microsoft implementations.

Caveats

When you work with ASP.NET Web services, Enterprise Services, or .NET remoting, consider the following caveats:

  • If you use ASP.NET Web services, avoid or abstract your use of low-level extensibility features such as the HTTP Context object.
  • If you use .NET remoting, avoid or abstract your use of low-level extensibility such as .NET remoting sinks and custom channels.
  • If you use Enterprise Services, avoid passing object references inside Enterprise Services. Also, do not use COM+ APIs. Instead, use types from the System.EnterpriseServices namespace.

Performance and Scalability Issues

This section lists high-level factors that can affect the performance and scalability of your applications. Details about how to overcome these issues are provided later in the chapter.

  • Impersonating clients. If you impersonate the original caller to access a backend database, a connection pool is created per unique user identity. This consumes resources and reduces scalability. Connection pooling is most effective if you use a trusted subsystem model and access the database using a fixed service account such as the application's process identity. For more information, see the "Security" section in this chapter.
  • Calling single-threaded apartment (STA) components. All calls to and from the STA component can only be serviced by the thread that created or instantiated it. All callers sharing an STA object instance are serialized onto the same thread; there is also a thread switch from the calling thread to the apartment's single thread. For more information, see "Avoid STA Components" later in this chapter.
  • Performing long running transactions. Long running transactions retain locks and hold expensive resources such as database connections for prolonged periods. This reduces throughput and impacts scalability. Alternative approaches such as compensating transactions can be appropriate for scenarios where you cannot avoid long running transactions. For more information see "Transactions" later in this chapter.
  • Using inappropriate isolation levels. High isolation levels increase database integrity but reduce concurrency. Using inappropriate isolation levels can unnecessarily hinder performance. Choose an appropriate isolation level for your components depending on the type of create, read, update, and delete operation you need to perform. For more information, see "Transactions" later in this chapter.
  • Using stateful components. Stateful components limit application scalability and increase the likelihood of data inconsistency. Use a stateless programming model with Enterprise Services.
  • Using encryption unnecessarily. Encrypting your data twice is unnecessary from a security standpoint and needlessly impacts performance. For example, there is no point using packet privacy authentication to encrypt communication to and from serviced components if your application is deployed inside a secure data center that already protects its inter-server communication channels, for example by using Internet Protocol Security (IPSec) encryption. For more information, see "Security" later in this chapter.
  • Failing to release resources quickly enough. Failing to release shared resources such as database connections and unmanaged COM objects promptly impacts application scalability. For more information, see "Resource Management" later in this chapter.
  • Failing to pool resources. If you do not use pooling for objects that take a long time to initialize, for example because they need to acquire resources such as network or database connections, these objects are destroyed and re-created for each request. This reduces application performance. For more information, see "Object Pooling" later in this chapter.
  • Specifying too large a minimum pool size. If you set the minimum pool size to a large number, the initial call request can take a long time to populate the pool with the minimum number of objects. Set the pool size based on the type of resource that your objects maintain. Also consider manually starting the application to initialize the pool prior to the first live request.
  • Using inappropriate synchronization techniques. If you are building a high-performance multithreaded application to access your serviced components, deadlocks and race conditions can cause significant problems. Use the declarative COM+ synchronization attribute to manage concurrency and threading complexities. For more information, see "Synchronization Attribute" later in this chapter.
  • Using unneeded services. Each additional service your component is configured for affects performance. Make sure each component is configured only for those specific services it requires.
  • Clients failing to release references quickly enough. Clients that bind early and release late increase server resource utilization and can quickly create performance and scalability problems.
  • Clients failing to call Dispose. Clients that do not call Dispose on serviced components create significant performance bottlenecks.

Design Considerations

To help ensure your Enterprise Services applications are optimized for performance, there are a number of issues that you must consider and a number of decisions that you must make at design time.

This section summarizes the major considerations:

  • Use Enterprise Services only if you need to.
  • Use library applications if possible.
  • Consider DLL and class relationships.
  • Use distributed transactions only if you need to.
  • Use object pooling to reduce object creation overhead.
  • Design pooled objects based on calling patterns.
  • Use explicit interfaces.
  • Design less chatty interfaces.
  • Design stateless components.

Use Enterprise Services Only if You Need To

Use Enterprise Services inside your service implementation when you need the component services that Enterprise Services provides. Enterprise Services provides a broad range of important infrastructure-level features for middle tier components, including distributed transaction management, object pooling, and role-based security.

Each additional service means more infrastructure code to execute, so there is a performance overhead to using Enterprise Services. Build serviced components and host them in Enterprise Services only if you specifically need the features it provides. If you do need those services, the equivalent code would have to execute anyway, so using Enterprise Services is the right choice.

Use Library Applications if Possible

Enterprise Services provides server and library applications. Server applications run in their own process (Dllhost.exe) and use a process identity that you configure. Library applications run in their creator's process using the client's identity. Library applications offer performance benefits because they do not incur the significant marshaling overhead associated with an IPC (or cross network) call to a server application.

As such, you should use library applications whenever possible. Use server applications only if you need your components to run under a different security context from the client, or if you need them to be isolated from the client to provide additional fault tolerance.

The following code shows how to declaratively specify the activation type using an assembly level attribute.

using System.EnterpriseServices;

[assembly: ApplicationActivation(ActivationOption.Library)]
public class Account : ServicedComponent
{
   public void DoSomeWork() {}
}

Consider DLL and Class Relationships

If your solution includes serviced components in multiple DLLs, and there is heavy interaction between two components in separate DLLs, make sure they are located in the same Enterprise Services application. This minimizes the marshaling and security overhead associated with crossing application boundaries.
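One way to do this declaratively is to give every assembly the same application name, so that their serviced components are installed into a single COM+ application. A minimal sketch (the application and class names are illustrative):

```csharp
using System.EnterpriseServices;

// Place this attribute in each assembly whose serviced components
// should share one Enterprise Services application.
[assembly: ApplicationName("OrderProcessing")]

public class OrderEntry : ServicedComponent
{
   // Calls to a component in another DLL registered under the same
   // application name stay inside one application boundary.
   public void SubmitOrder() {}
}
```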

Use Distributed Transactions Only if You Need To

Enterprise Services and COM+ transactions use the services of the Microsoft Distributed Transaction Coordinator (DTC). Use DTC-based transactions if you need your transaction to span multiple remote databases or a mixture of resource manager types, such as a Microsoft SQL Server database and a Windows message queue. Also, if you need to flow transactions in a distributed application scenario, for example, across components even against a single database, consider Enterprise Services transactions.
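Declaratively, a component opts into a DTC-based transaction with the Transaction attribute. A sketch, with illustrative class and method names:

```csharp
using System.EnterpriseServices;

[Transaction(TransactionOption.Required)]
public class TransferComponent : ServicedComponent
{
   // AutoComplete votes to commit if the method returns normally
   // and to abort if it throws an exception.
   [AutoComplete]
   public void Transfer(string fromAccount, string toAccount, decimal amount)
   {
      // Updates against two resource managers (for example, two SQL Server
      // databases) commit or roll back together under the DTC.
   }
}
```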

Use Object Pooling to Reduce Object Creation Overhead

Object pooling helps minimize component activations and disposal, which can be costly compared to method calls. Consider the following recommendations:

  • Use object pooling if callers use a component briefly and in rapid succession, and a significant portion of the object's initialization time is spent acquiring resources or performing other initialization before it can perform work for the caller.
  • Use object pooling to control the maximum number of objects running at any given time. This allows you to throttle server resource usage: once you set an appropriate maximum value (best determined by testing various values against your application scenario), object pooling ensures that server resources are not exhausted.
  • Avoid object pooling if you need only one object in your pool. Instead, investigate the singleton object model supported by .NET remoting.
  • Note that object pooling is less beneficial for objects that take a very small amount of time to initialize.

For more information, see "Object Pooling" later in this chapter.

Design Pooled Objects Based on Calling Patterns

If you adopt a stateless component design and use JIT activation, you minimize the number of active objects on the server at any given time, which means you use the least resources possible at any given moment. However, this comes at the expense of many activations and deactivations.

For objects that are expensive to initialize, it is best to initialize them as rarely as possible and let clients hold references to them between calls. For objects that retain limited, shared resources, such as database connections, it is best to free those resources as soon as possible by using a stateless model with JIT activation.

Therefore, for some objects it is worth the cost of repeated activation and deactivation to release their resources as quickly as possible, while for other objects it is better to limit activations and deactivations and keep the objects alive between calls.

Pooling provides a compromise for objects that are expensive to create/destroy, or for objects whose resources are expensive to acquire/release.

Use Explicit Interfaces

You should implement explicit interfaces for any serviced component that is hosted in a server application and is called from another process or computer. This allows clients to call methods on these interfaces instead of calling class methods on the default class interface. Consider the following example:

// When you create a class that is a serviced component,
// it has a default class interface.
public class CustomClass : ServicedComponent
{
  public void DoSomething() {}
}

// Instead, explicitly create and implement an interface.
public interface ICustomInterface
{
  void DoSomething();
}
public class CustomClass : ServicedComponent, ICustomInterface
{
  public void DoSomething() {}
}

Explicit interfaces result in improved performance because there are fewer serialization costs involved. When you call an explicit interface, DCOM serialization occurs. When you call a class member directly, an additional .NET remoting serialization cost is incurred.

Design Less Chatty Interfaces

When you design interfaces, provide methods that batch arguments together to help minimize round trips. Reduce round trips by avoiding properties: each property access is marshaled to the remote object and intercepted to provide services, which is relatively slow compared to passing the same values in a single method call. For more information, see "Design Chunky Interfaces" in Chapter 3, "Design Guidelines for Application Performance."
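For example, the same data can be exposed either through chatty properties or through one chunky method (the interface and member names are illustrative):

```csharp
public interface IAccount
{
   // Chatty: each property access is a separate marshaled, intercepted call.
   string Name { get; set; }
   decimal Balance { get; set; }

   // Chunky: one method call batches the same data into a single round trip.
   void Update(string name, decimal balance);
}
```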

Design Stateless Components

A design that avoids holding state inside components is easy to scale. Conversely, components that hold user-specific state across caller method calls cause server affinity, which limits your scalability options.

Even if your serviced component resides on the same server as your presentation layer, consider using a stateless design and take advantage of services such as JIT activation and object pooling to strike a balance between performance and scalability. This also helps if your future workload requires you to scale out.

When used correctly, object pooling can help maintain some state (such as a connection) and scale as well. Use object pooling for objects that retain connections so they can be shared by multiple clients.

Object Pooling

To reduce the performance overhead of creating and destroying objects on a per-method-call basis, you can use object pooling. Components whose initialization code contains resource-intensive operations (for example, creating multiple subobjects that aggregate data from multiple database tables) are well suited to object pooling. When a caller creates an instance of a pooled object, a previously constructed object is retrieved from the object pool, if one is available. Otherwise, a new object is created within the pool (subject to the maximum pool size) and the new instance is used. This minimizes the number of new objects that must be created and initialized, and it can significantly improve performance.

To configure an object for pooling, use either a declarative attribute, as shown in the following code sample, or use the Component Services administration tool to directly manipulate the object's configuration in the COM+ catalog.

[ObjectPooling(Enabled=true, MinPoolSize=2, MaxPoolSize=10)]
public class YourClass : ServicedComponent
{
   // your other methods

   // Called when the object is constructed, if object construction
   // is enabled for the component.
   protected override void Construct(string constructString)
   {
      // your resource-intensive or long-running initialization code
   }

   protected override void Activate()
   {
      // your activate code
   }

   protected override void Deactivate()
   {
      // your deactivate code
   }

   protected override bool CanBePooled()
   {
      return true;
   }
}

Object Pooling Explained

Figure 8.3 illustrates object pooling mechanics.


Figure 8.3: Object pooling

The sequence of events shown in Figure 8.3 is as follows:

  1. When the application starts, COM+ populates the object pool with enough objects to reach the configured minimum pool size. At this point, objects are created and their language-specific constructors are called. Pooled objects typically acquire expensive resources, such as network or database connections, at this point and perform any other time-consuming initialization.
  2. The client requests the creation of a new object by calling new.
  3. Rather than creating a new object, COM+ takes an existing object from the pool and places it in a context. If there are no more available objects in the pool and the configured maximum pool size has been reached, the object creation request is queued and the caller blocks. When an object is released by another client and becomes available, the queued creation request can be satisfied.
  4. The client makes a method call.
  5. When the method call returns and the client no longer requires the object, the client must call Dispose to ensure the object is swiftly returned to the pool.
  6. The object returns to the pool and is available for subsequent requests from the same or different clients.

Object Pooling with JIT Activation

Object pooling is often used with JIT activation. This has the advantage of completely disassociating the lifetime of the object from the client. You can also ensure that objects are returned to the pool promptly, and with JIT activation you are no longer reliant on the client calling Dispose.

Note: Client code should always call Dispose on any object that implements IDisposable, including serviced components.
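Because ServicedComponent implements IDisposable, a client can wrap the component in a using statement so that Dispose is called even if an exception is thrown. A sketch, reusing the Account component shown earlier:

```csharp
using (Account account = new Account())
{
   // call methods on the component
}  // Dispose runs here, promptly releasing the object
```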

Figure 8.4 shows the sequence of events that occur for an object configured for object pooling and JIT activation.


Figure 8.4: Object pooling with JIT activation

Note that Figure 8.4 does not show a pre-started COM+ application, so the pool is not initialized until after the first call to new.

The sequence of events in Figure 8.4 is as follows:

  1. The client calls new.

  2. An object is retrieved from the pool and placed in a context. At this point the context's Done flag is set to false.

    Note: Two important flags maintained by the object context are the Done flag and the Consistent flag. The Done flag is used by COM+ to detect when to deactivate an object. The Consistent flag determines transaction outcome for transactional components. If this flag is set to false (for example, by the object calling SetAbort), the transaction rolls back. If the flag is set to true (for example, with SetComplete), the transaction commits.

  3. The client calls a method.

  4. COM+ calls the object's Activate method to allow it to perform second phase initialization. For example, it might need to reassociate a database connection obtained during first phase initialization (when the object was constructed) with a transaction. If you need to perform specific second phase initialization, override the virtual Activate method exposed by the ServicedComponent base class.

  5. The method executes and performs work for the client.

  6. At the end of the method, the object should set the Done flag in its context to true to make sure the object is swiftly returned to the pool. There are a number of ways to set the Done flag. For more information, see "Return Objects to the Pool Promptly" later in this chapter.

  7. When the object is deactivated, COM+ calls the object's Deactivate method.

  8. COM+ finally calls the object's CanBePooled method. You can override this method to provide a custom implementation. If you override it and return false, the object is not returned to the pool and instead awaits garbage collection (appropriate when the object's retained resources have been lost or irreparably corrupted).

  9. If CanBePooled returns true (or you do not override this method and rely on the base class implementation), the object is returned to the pool at this point, provided the maximum pool size has not been reached.
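Steps 4 through 8 above can be sketched in code. The following is a minimal illustration of a pooled, JIT-activated serviced component that overrides the pooling-related virtual methods; the class name, pool sizes, and the resource flag are illustrative, not from the chapter.

```csharp
using System.EnterpriseServices;

[ObjectPooling(MinPoolSize=2, MaxPoolSize=10)]
[JustInTimeActivation]
public class PooledWorker : ServicedComponent
{
   private bool resourcesValid = true; // illustrative health flag

   protected override void Activate()
   {
      // Second phase initialization: for example, reassociate a
      // connection obtained in the constructor with the current context.
   }

   protected override void Deactivate()
   {
      // Undo any per-activation state before the object leaves the context.
   }

   protected override bool CanBePooled()
   {
      // Return false if retained resources were lost or corrupted;
      // the object is then garbage collected instead of pooled.
      return resourcesValid;
   }
}
```

COM+ calls Activate after the object is placed in a context, Deactivate when the Done flag causes deactivation, and CanBePooled last to decide whether the instance goes back to the pool.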

To use object pooling efficiently, follow these guidelines:

  • Return objects to the pool promptly.
  • Monitor and tune pool size.
  • Preload applications that have large minimum pool sizes.

Return Objects to the Pool Promptly

Unmanaged COM+ objects return to the pool when their reference counts return to zero. Managed objects return to the pool only when garbage collected. There are several ways to ensure that an object is immediately returned to the pool; the approach varies depending on whether the object is configured for JIT activation:

  • With JIT activation, use ASAP deactivation.
  • Without JIT activation, the caller controls lifetime.

With JIT Activation, Use ASAP Deactivation

The context-maintained Done flag is initialized to false each time COM+ creates a new object in a context. If the Done flag is set to true when a method returns, COM+ deactivates and either destroys the object, or if object pooling is enabled, returns the object to the pool.

You can set the Done flag to true and force an object back to the pool in the following three ways:

  • Use the AutoComplete attribute.

    Upon completion of a method marked with this attribute, the COM+ runtime either calls SetComplete or SetAbort, depending on whether the method generates an exception. Both methods set the Done flag in the object's context to true, which ensures that the object is returned to the pool. Use of the AutoComplete attribute is shown in the following code sample.

    [ObjectPooling(MinPoolSize=0, MaxPoolSize=1)]
    [JustInTimeActivation()]
    public class YourClass : ServicedComponent
    {
       [AutoComplete]
       public void SomeMethod()
       {
          ...
       }
       ...
    }
    

    **Note   **Using this attribute is equivalent to selecting the Automatically deactivate this object when this method returns check box on a method's Properties dialog box in Component Services.

  • Set ContextUtil.DeactivateOnReturn=true at the end of your method as shown in the following code sample.

    public void SomeMethod()
    {
      // Do some work
      . . .
  // Make sure the object returns to the pool by setting DeactivateOnReturn
      ContextUtil.DeactivateOnReturn = true;
    }
    
  • Call ContextUtil.SetComplete or ContextUtil.SetAbort at the end of your method. Both methods set the Done flag to true. Transactional components also use these methods to vote for the outcome of a transaction. SetComplete represents a vote for the transaction to commit while SetAbort votes for a transaction rollback.

    **Note   **Transaction outcome is dependent on the voting of all objects participating in the current transaction.

    public void SomeMethod()
    {
      // Do some work
      . . .
      // Calling SetComplete (or SetAbort) sets the Done flag to true
      // which ensures the object is returned to the pool.
      ContextUtil.SetComplete();
    }
    

Without JIT Activation, the Caller Controls Lifetime

If your pooled object is not configured for JIT activation, the object's caller must call Dispose and therefore controls the lifetime of the object. This is the only way Enterprise Services can know when it is safe to return the object to the pool.

**Note   **Clients should always call Dispose on a disposable object regardless of the JIT activation setting. For more information, see "Resource Management" later in this chapter.

In C#, you can use the using keyword to ensure that Dispose is called.

// your pooled object's client code
public void ClientMethodCallingPooledObject()
{  
   using (YourPooledType pooledObject = new YourPooledType())
   {
        pooledObject.SomeMethod();
   } // Dispose is automatically called here
}

Monitor and Tune Pool Size

COM+ automatically adjusts the pool size to meet changing client loads. This behavior is automatic, but you can fine tune the behavior to optimize performance for your particular application. If the pool size is too large, you incur the overhead of populating the pool with an initialized set of objects, many of which remain redundant. Depending on the nature of the object, these objects might unnecessarily consume resources. Also, unless you manually start the application before the first client request is received, the first client takes the associated performance hit as the pool is populated with objects.

For more information about how to monitor object pooling, see Chapter 15, "Measuring .NET Application Performance."

Preload Applications That Have Large Minimum Pool Sizes

When an application is started, it initializes the object pool and creates enough objects to satisfy the configured minimum pool size. By manually starting an application before the first client request is received, you eliminate the initial performance hit that the initial request would otherwise entail.

To automate application startup, you can use the following script code.

Dim oApplications 'As COMAdminCatalogCollection
Dim oCatalog 'As COMAdminCatalog
Dim oApp 'As COMAdminCatalogObject

Set oCatalog = CreateObject("COMAdmin.COMAdminCatalog")
Set oApplications = oCatalog.GetCollection("Applications")
oApplications.Populate

For Each oApp In oApplications
  If oApp.Name = "<Provide Your Server Application Name>" Then
    Call oCatalog.StartApplication(oApp.Name)
    Wscript.Echo oApp.Name & " Started..."
  End If
Next

**Note   **The automation script code applies only for server applications and not for library applications.

More Information

For more information about object pooling, see Microsoft Knowledge Base article 317336, "HOW TO: Use Enterprise Services Object Pooling in Visual Basic .NET," at https://support.microsoft.com/default.aspx?scid=kb;en-us;317336.

State Management

Improper state management results in poor application scalability. For improved scalability, COM+ components should be stateless or they should store and retrieve state from a common store. Consider the following guidelines:

  • Prefer stateless objects. Ideally, you should avoid holding state to maximize scalability. If state is needed, store and retrieve the state information from a common store like a database.
  • Avoid using the Shared Property Manager (SPM). The SPM is designed for storing small pieces of information (simple strings, integers), not complex objects or large amounts of data. It uses a reader-writer lock to synchronize single-writer, multiple-reader access, so storing large amounts of data can cause throughput bottlenecks and high CPU utilization. The SPM also introduces server affinity, which prevents its use in applications deployed in a Web farm or application cluster. Even in single-machine scenarios, do not use it as a cache or as a placeholder for complex data.

More Information

For more information, see "Design Stateless Components" and "Object Pooling" in this chapter. Also see "State Management" in Chapter 3, "Design Guidelines for Application Performance."

Resource Management

Inefficient resource management is a common cause of performance and scalability issues in Enterprise Services applications. The most common types of resources you need to manage in Enterprise Services applications are database connections, memory, and COM objects (although Enterprise Services hides the fact that there can be unmanaged COM objects beneath the managed components that you usually deal with).

For more information, see "Resource Management" in Chapter 3, "Design Guidelines for Application Performance." To ensure that your serviced components manage resources as efficiently as possible, use the following guidelines:

  • Optimize idle time management for server applications.
  • Always call Dispose.
  • If you call COM components, consider calling ReleaseComObject.

Optimize Idle Time Management for Server Applications

COM+ shuts down the host process (Dllhost.exe) after a configured period of inactivity (the idle time) from any client. By default, the process stays in memory for three minutes if there are no clients using the application. To optimize idle time management:

  • Consider increasing the idle time if clients tend to access components in short, sharp intervals in between lengthy periods of idle time. This will reduce the number of process restarts.
  • If your application contains a pool of objects, leave the process running idle to avoid having to repopulate the object pool. If you expect your component to be called every ten minutes, increase the idle time to a slightly longer time. For example, set it to twelve minutes.

To configure the idle time, use the Advanced page of the application's Properties dialog box in Component Services. Values in the range 1–1440 minutes are supported.

Always Call Dispose

Client code that calls serviced components must always call the object's Dispose method as soon as it is finished using it. Setting the object reference to null or Nothing is not adequate. If you do not call Dispose, unmanaged resources must go through finalization which is less efficient and more resource intensive. Clients that do not call Dispose can cause activity deadlock in multithreaded applications due to the asynchronous cleanup of object references. If you do not call Dispose on pooled objects that do not use JIT activation, the pooled objects are not returned to the pool until they go through finalization and garbage collection. By calling Dispose, you efficiently release the unmanaged resources (such as COM objects) used by the serviced component and reduce memory utilization.

Calling Dispose

For class methods, you can simply call Dispose as shown in the following sample.

ComPlusLibrary comLib = new ComPlusLibrary();
comLib.Dispose();

For interface methods, you need to cast to IDisposable as shown in the following sample:

ServicedComp.ICom comLib = new ServicedComp.ComPlusLibrary();
// comLib.Dispose();  // Dispose is not available through the interface
((IDisposable)comLib).Dispose(); // Cast to IDisposable

If your client code does not call Dispose, one workaround is to use the DisableAsyncFinalization registry setting, but with negative consequences as described later in this chapter.

More Information

For more information about calling Dispose and releasing serviced components, see the following Knowledge Base articles:

DisableAsyncFinalization Registry Setting

If your managed client code does not call Dispose to release managed serviced components and you cannot change the client source code, as a last resort you can consider using the DisableAsyncFinalization registry key. This key prevents the serviced component infrastructure from co-opting user threads to help out with cleanup (leaving all the work to the finalizer thread).

To enable this feature, create the following registry key.

HKLM\Software\Microsoft\COM3\System.EnterpriseServices
DisableAsyncFinalization = DWORD(0x1)

If You Call COM Components, Consider Calling ReleaseComObject

Consider calling ReleaseComObject if you call COM components. Examples include hosting COM objects in COM+ and calling them from Enterprise Services or calling them directly from a managed client, such as ASP.NET. Marshal.ReleaseComObject helps release the COM object as soon as possible. Under load, garbage collection (and finalization) might not occur soon enough and performance might suffer.

ReleaseComObject decrements the reference count of the RCW, which itself maintains a reference count on the underlying COM object. When the RCW's internal reference count goes to zero, the underlying COM object is released.

Calling ReleaseComObject

Consider the following scenarios where you might need to call ReleaseComObject:

  • ASP.NET calling a COM component hosted in unmanaged COM+. The ASP.NET code should call ReleaseComObject when it has finished using the component.
  • ASP.NET calling a serviced component that wraps and internally calls a COM component. In this case, you should implement Dispose in your serviced component and your Dispose method should call ReleaseComObject. The ASP.NET code should call your serviced component's Dispose method.
  • Using a Queued Component recorder proxy or an LCE event class. In both cases, you are invoking unmanaged COM+ code.

**Note   **If you call ReleaseComObject before all clients have finished using the COM object, an exception will be generated, if the object is subsequently accessed.
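For the second scenario above, the serviced component's Dispose path can release its inner COM object. The following is a hedged sketch; LegacyComType stands in for a hypothetical interop type wrapping the COM component.

```csharp
using System;
using System.EnterpriseServices;
using System.Runtime.InteropServices;

public class ComWrapper : ServicedComponent
{
   private LegacyComType inner = new LegacyComType(); // hypothetical interop type

   protected override void Dispose(bool disposing)
   {
      if (disposing && inner != null)
      {
         // Release the RCW's reference to the underlying COM object
         // now, rather than waiting for finalization under load.
         Marshal.ReleaseComObject(inner);
         inner = null;
      }
      base.Dispose(disposing);
   }
}
```

The ASP.NET client then only needs to call Dispose on the serviced component; the COM release happens inside the wrapper.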

Marshal.Release

Calling the Marshal.Release method is unnecessary unless you manually manage object lifetime using Marshal.AddRef. It is also applicable when you call Marshal.GetComInterfaceForObject, Marshal.GetIUnknownForObject, or Marshal.GetIDispatchForObject to obtain an IUnknown interface pointer.
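For example, a pointer obtained from Marshal.GetIUnknownForObject is AddRef'd on your behalf and must be balanced with a call to Marshal.Release, as this sketch shows.

```csharp
using System;
using System.Runtime.InteropServices;

object someObject = new object(); // any COM-visible managed object

// GetIUnknownForObject performs an AddRef on the returned pointer,
// so the caller must balance it with Marshal.Release.
IntPtr pUnk = Marshal.GetIUnknownForObject(someObject);
try
{
   // ... pass pUnk to unmanaged code ...
}
finally
{
   Marshal.Release(pUnk);
}
```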

More Information

For more information about calling ReleaseComObject when you reference regular COM components (nonserviced) through COM interop, see "Marshal.ReleaseComObject" in Chapter 7, "Improving Interop Performance."

Summary of Dispose, ReleaseComObject, and Release Guidelines

You should only call ReleaseComObject where your managed code references an unmanaged COM+ component. In this instance, the unmanaged COM+ component will not provide a Dispose method. In cases where managed client code references a managed serviced component, the client code can and should call Dispose to force the release of unmanaged resources because all managed serviced components implement IDisposable.

Table 8.4 summarizes when you need to call ReleaseComObject.

Table 8.4   When to Call Dispose, ReleaseComObject, and IUnknown.Release

Client     Server                           Call Dispose   Call ReleaseComObject   Call IUnknown.Release
Managed    Managed component using ES       Yes            No                      No
Managed    Unmanaged component using COM+   No             Yes                     No
Unmanaged  Managed component using ES       Yes            No                      Yes
Unmanaged  Unmanaged component using COM+   No             No                      Yes

Note that unmanaged code should always call IUnknown.Release. If unmanaged code references a managed component using Enterprise Services, it should also first call Dispose on the COM Callable Wrapper (CCW) through which it communicates with the managed object. If unmanaged code references an unmanaged COM+ component, it simply calls IUnknown.Release.

The following are general guidelines:

  • Cast to IDisposable. If the cast succeeds, call Dispose.
  • Call Marshal.IsComObject to determine whether the reference is a runtime callable wrapper. If it returns true, call ReleaseComObject.

Queued Components

Enterprise Services provides a queuing feature for applications that require asynchronous and offline processing. Components that support this feature are referred to as Queued Components (QC). When your code calls a method on a queued component, the method calls are not directly executed; they are "recorded" on the client and then dispatched transparently by Microsoft Windows Message Queuing (also known as MSMQ) to the server. Subsequently on the server, they are "replayed" to the target object and the appropriate method implementation is executed.

Queued Components completely abstract the underlying Message Queuing details. The basic QC architecture is shown in Figure 8.5.

Ff647809.ch08-qc(en-us,PandP.10).gif

Figure 8.5: Basic queued component architecture

The core elements of the QC architecture are as follows:

  • Recorder proxy. This object provides an implementation of those interfaces that are marked in the COM+ catalog as queued interfaces. The recorder uses the Message Queuing API to send a message containing recorded method calls to the server.
  • Message Queuing. This is used to provide a reliable delivery mechanism to transport the recorded method calls to the server application. It also supports transactions.
  • Listener. The listener is an extension of Dllhost.exe. It uses the Message Queuing API to receive messages from the process's public message queue.
  • Player. The system-provided player component creates the target object instance and forwards method calls to the target object, using the unpackaged contents of the message.

If you plan to or are using Queued Components, consider the following guidelines:

  • Use Queued Components to decouple client and server lifetimes.
  • Do not wait for a response from a queued component.

Use Queued Components to Decouple Client and Server Lifetimes

Queued Components enable you to decouple your application's front end from back-end systems. This has a number of key benefits:

  • Improves performance. Clients become more responsive because they are not awaiting back-end system processing. Synchronous communications force the client to wait for a server response whether or not one is required. This can cause significant delays on slow networks.
  • Improves availability. In a synchronous system, no part of a business transaction can complete successfully unless all components are available. In a queued message-based system, the user interaction portion of the transaction can be separated from the availability of the back-end system. Later, when the back-end system becomes available, messages are moved for processing and subsequent transactions complete the business process.
  • Facilitates server scheduling. An application using asynchronous messaging is well-suited to deferring noncritical work to an off-peak period. Messages can be queued and processed in batch mode during off-peak periods to reduce demands on servers and CPUs.

Do Not Wait for a Response from a Queued Component

Method calls made to a queued component return immediately. QC is a "fire and forget" model and COM+ does not allow you to return values from a queued component. One of the ways to address this issue is to send a response back from the server using a separate message to a queued component that resides in the client process. However, the client should not wait for a response before proceeding because it cannot guarantee when the server will read and process the call from its queue. The target server may be offline or unreachable due to network issues, or the client might be disconnected.

If you need to ensure that a dispatched message is processed in a particular amount of time, include an expiration time in the message. The target component can check this before processing the message. If the message expires, it can log the message details. Even with this solution, synchronizing time between disparate systems is challenging. If the client absolutely has to have a response from the server before moving on to its next operation, do not use Queued Components.
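As an illustration of the expiration-time check described above, the server-side queued component can inspect a timestamp carried with the recorded call before doing real work. The interface, class, and parameter names here are hypothetical, not from the chapter.

```csharp
using System;
using System.Diagnostics;
using System.EnterpriseServices;

[InterfaceQueuing]
public interface IOrderProcessor
{
   void ProcessOrder(string orderId, DateTime expiresUtc);
}

public class OrderProcessor : ServicedComponent, IOrderProcessor
{
   public void ProcessOrder(string orderId, DateTime expiresUtc)
   {
      if (DateTime.UtcNow > expiresUtc)
      {
         // The message arrived too late; log and skip rather than
         // process stale work. (Clock skew between machines still applies.)
         EventLog.WriteEntry("OrderProcessor", "Expired message: " + orderId);
         return;
      }
      // ... process the order ...
   }
}
```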

Loosely Coupled Events

The COM+ loosely coupled event (LCE) service provides a distributed publisher-subscriber model. You define and register an "event" class that implements an interface that you also define. Subscriber components implement this interface and register themselves with COM+. When a publisher calls a method on the event class, the method call is forwarded by COM+ to all registered subscriber objects. You can add subscribers administratively or at run time, and the lifetime of the publisher and subscriber can be completely decoupled by combining queued components with the LCE service. The basic LCE service architecture is shown in Figure 8.6.

Ff647809.ch08-lce(en-us,PandP.10).gif

Figure 8.6: LCE service architecture

For more information about the architecture of LCE, see "COM+ Technical Series: Loosely Coupled Events," on MSDN at https://msdn.microsoft.com/en-us/library/ms809247.aspx.

The .NET Framework provides a number of event models including delegate-based events for in-process event notification. The advantage of COM+ LCE is that it works cross-process and cross-machine in a distributed environment. Some of the benefits of LCE are the following:

  • You benefit from a loosely coupled system design.
  • Server resources are not blocked and therefore concurrency and synchronization issues are avoided.
  • Server and client lifetimes are decoupled.
  • Scalability is high, especially when used along with Queued Components.

If you need to implement a distributed publisher-subscriber model, consider the following guidelines:

  • Consider the fire in parallel option.
  • Avoid LCE for multicast scenarios.
  • Use Queued Components with LCE from ASP.NET.
  • Do not subscribe to LCE events from ASP.NET.

Consider the Fire in Parallel Option

When a publisher raises an event, the method call does not return until all subscriber components are activated and contacted. With large numbers of subscribers, this can severely affect performance. You can use the FireInParallel property to instruct the event system to use multiple threads to deliver events to subscribers.

public interface ICustomInterface
{
   void OnEventA();
   void OnEventB();
}

[EventClass(FireInParallel = true)]
public class CustomClass : ServicedComponent, ICustomInterface
{
   // The event class method bodies are empty because COM+ intercepts
   // the calls and forwards them to subscribers; the bodies never run.
   public void OnEventA() { }
   public void OnEventB() { }
}

This approach can increase performance in certain circumstances, particularly when one or more of the subscribers take a long time to process the notification.

**Note   **Selecting Fire in parallel does not guarantee that the event is delivered at the same time to multiple subscribers, but it instructs COM+ to permit it to happen.

You can set the Fire in parallel option on the Advanced tab of the event class component's Properties dialog box.

Potential Pitfalls

Fire in parallel might mean that the subscribers gain concurrent access to the same objects. For example, if you use a DataSet as a parameter, you might end up with many threads accessing it. As a result, observe the following:

  • Do not use STA objects as parameters to LCE events.
  • Make your LCE subscribers read only the data. Otherwise, you might run into synchronization issues where subscriber A writes to the same object at the same time as subscriber B reads from it.

Avoid LCE for Multicast Scenarios

Evaluate whether you have too many subscribers for an event, because LCE is not designed for large multicast scenarios where large numbers of subscribers need to be notified. For this scenario, you usually do not know or do not care whether notifications are received, and you do not want to block awaiting a response from each subscriber.

When you have large numbers of subscribers, a good alternative is to use User Datagram Protocol (UDP) to deliver messages over the network; for example, by using the Socket class.

Use Queued Components with LCE from ASP.NET

If you want to publish events from an ASP.NET application, configure the event class as a queued component. This causes the event to be published asynchronously and does not block the main thread servicing the current ASP.NET request.

Do Not Subscribe to LCE Events from ASP.NET

The transient nature of the page class makes it difficult to subscribe to loosely coupled events from an ASP.NET application without blocking and waiting for the event to occur. This approach is not recommended.

More Information

For more information, see Microsoft Knowledge Base article 318185, "HOW TO: Use Loosely Coupled Events from Visual Studio .NET," at https://support.microsoft.com/default.aspx?scid=kb;en-us;318185.

Transactions

Transactions enable you to perform multiple tasks together or fail as a unit. You can perform transactions on a single resource or span multiple resources with distributed transactions. Enterprise Services and COM+ use the Microsoft Distributed Transaction Coordinator (DTC) to manage distributed transactions. You can quickly and easily add transaction support to your components by configuring the necessary attributes and adding a few lines of code. However, before you configure your components to use transactions, consider the following guidelines:

  • Choose the right transaction mechanism.
  • Choose the right isolation level.
  • Use compensating transactions to reduce lock times.

Choose the Right Transaction Mechanism

Avoid configuring your components to use transactions unless you really need them. If your component reads data from a database only to display a report, there is no need for any type of transaction. If you do need transactions because you are performing update operations, choose the right transaction mechanism.

Use Enterprise Services transactions for the following:

  • You need to flow a transaction in a distributed application scenario. For example, you need to flow transactions across components.
  • You require a single transaction to span multiple remote databases.
  • You require a single transaction to encompass multiple resource managers; for example, a database and Message Queuing resource manager.

Choose the Right Isolation Level

Transaction isolation levels determine the degree to which transactions are protected from the effects of other concurrent transactions in a multiuser system. A fully isolated transaction offers complete isolation and guarantees data consistency; however, it does so at the expense of server resources and performance. When choosing an isolation level, consider the following guidelines:

  • Use Serializable if data read by your current transaction cannot be changed by another transaction until the current transaction is finished. This also prevents insertion of new records that would affect the outcome of the current transaction. This level offers the highest data consistency and the least concurrency of all the isolation levels.
  • Use Repeatable Read if data read by your current transaction cannot be changed by another transaction until the current transaction is finished, but insertion of new data is acceptable.
  • Use Read Committed if you do not want to read data that is modified and uncommitted by another transaction. This is the default isolation level of SQL Server.
  • Use Read Uncommitted if you do not care about reading data modified by others (dirty reads) which could be committed or uncommitted by another transaction. Choose this when you need highest concurrency and do not care about dirty reads.
  • Use Any for downstream components that need to use the same isolation level as an upstream component (transactions flowing across components). If the root component uses Any, the isolation level used is Serializable.

When you flow transactions across components, ensure that the isolation level for downstream components is set to Any, the same value as the upstream component or a lower isolation level. Otherwise, a run-time error occurs and the transaction is canceled.

Configuring the Isolation Level

On Microsoft Windows 2000 Server, it is not possible to change the isolation level when you use automated transactions. Consider using manual transactions such as ADO.NET transactions, using T-SQL hints, or adding the following line to your stored procedures.

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
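A manual ADO.NET transaction also lets you specify the isolation level directly in code. The following sketch assumes a SQL Server connection; the connection string and SQL text are illustrative placeholders.

```csharp
using System.Data;
using System.Data.SqlClient;

SqlConnection conn = new SqlConnection(
   "Data Source=(local);Initial Catalog=Sales;Integrated Security=SSPI"); // illustrative
conn.Open();
// The isolation level is chosen explicitly when the transaction begins.
SqlTransaction tx = conn.BeginTransaction(IsolationLevel.ReadCommitted);
try
{
   SqlCommand cmd = new SqlCommand(
      "UPDATE Orders SET Status = 'Shipped' WHERE OrderID = 1", conn, tx); // illustrative
   cmd.ExecuteNonQuery();
   tx.Commit();
}
catch
{
   tx.Rollback();
   throw;
}
finally
{
   conn.Close();
}
```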

On Microsoft Windows Server 2003, you can configure the isolation level either administratively, by using Component Services, or programmatically, by setting the Transaction attribute for your component as shown in the following code sample.

[Transaction(Isolation=TransactionIsolationLevel.ReadCommitted)]

Use Compensating Transactions to Reduce Lock Times

A compensating transaction is a separate transaction that undoes the work of a previous transaction. Compensating transactions are a great way to reduce lock times and to avoid long running synchronous transactions. To reduce the length of a transaction, consider the following:

  • Do only work directly related to the transaction in the scope of the transaction.
  • Reduce the number of participants in the transaction by breaking the transaction into smaller transactions.

Consider an example where a Web application has to update three different databases when processing a request. When the system is under load, transactions might begin to time out frequently. The problem here is that all three databases have to hold locks until all three complete the work and report back to the transaction coordinator. By using compensating transactions, you can break the work into three logical pieces — each of which can complete faster, releasing locks sooner — and therefore increase concurrency. The trade-off here is that you will have to create code that coordinates a "logical" transaction and deal with failure conditions if one of the updates fails. In this event, you need to execute a compensating transaction on the other two databases to keep data consistent across all three.
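The coordination logic for the three-database example can be sketched as follows. The UpdateDbN and CompensateDbN helpers are hypothetical; each represents a short, independent transaction against one database.

```csharp
// Sketch of a "logical" transaction with compensation. Each update commits
// its own short transaction, releasing locks sooner than one distributed
// transaction spanning all three databases would.
bool db1Done = false, db2Done = false;
try
{
   UpdateDb1(); db1Done = true;
   UpdateDb2(); db2Done = true;
   UpdateDb3();
}
catch
{
   // Undo the completed steps, in reverse order, to keep the
   // three databases consistent with one another.
   if (db2Done) CompensateDb2();
   if (db1Done) CompensateDb1();
   throw;
}
```

The trade-off is visible here: the application code, not the DTC, owns failure handling and must guarantee the compensating steps succeed (or are retried).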

More Information

For more information about performing distributed transactions with a .NET Framework data provider, see the following Knowledge Base articles:

Security

Security and performance are often a trade-off. The challenge is to develop high-performance systems that are still secure. A common pitfall is to reduce security measures to improve performance. The following recommendations help you to build secure solutions while maximizing performance and scalability:

  • Use a trusted server model if possible.
  • Avoid impersonation in the middle tier.
  • Use packet privacy authentication only if you need encryption.

Use a Trusted Server Model if Possible

With the trusted server model, a serviced component uses its fixed process identity to access downstream resources instead of flowing the security context of the original caller with impersonation. Because all database calls from the middle tier use the same process identity, you gain the maximum benefit from connection pooling. For a server application, you configure the process's run-as identity by using the Component Services tool. For a library application, the identity is determined by the account used to run the client process. With the trusted server model, the downstream resources authenticate and authorize the process identity.

More Information

For more information, see "The Trusted Subsystem Model" in Chapter 3, "Authentication and Authorization," of Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication on MSDN at https://msdn.microsoft.com/en-us/library/aa302383.aspx.

Avoid Impersonation in the Middle Tier

Middle tier impersonation is generally performed to flow the original caller's identity to the back-end resource. It allows the back-end resource to authorize the caller directly because the caller's identity is used for access. You should generally avoid this approach because it prevents the efficient use of connection pooling and it does not scale.

If you need to audit the caller at the back end, pass the original caller's identity through a stored procedure parameter. Authorize the original caller in the application's middle tier using Enterprise Service roles.
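The role check and caller auditing described above can be sketched with SecurityCallContext. The role name and method are illustrative; the original caller's account name is what you would pass to an audit-aware stored procedure.

```csharp
using System;
using System.EnterpriseServices;

public class AccountManager : ServicedComponent
{
   public void UpdateAccount(decimal amount)
   {
      SecurityCallContext ctx = SecurityCallContext.CurrentCall;

      // Authorize in the middle tier with Enterprise Services roles
      // ("AccountManagers" is an illustrative role name).
      if (!ctx.IsCallerInRole("AccountManagers"))
         throw new UnauthorizedAccessException("Caller is not in the required role.");

      // Capture the original caller for back-end auditing; pass this as
      // a parameter to the stored procedure instead of impersonating.
      string originalCaller = ctx.OriginalCaller.AccountName;
      // ... call the stored procedure with amount and originalCaller ...
   }
}
```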

More Information

For more information, see "Choosing a Resource Access Model" in Chapter 3, "Authentication and Authorization," of Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication on MSDN at https://msdn.microsoft.com/en-us/library/aa302383.aspx.

Use Packet Privacy Authentication Only if You Need Encryption

If you need to ensure that packets have not been tampered with in transit between the caller and serviced component, and you do not need encryption, then use AuthenticationOption.Integrity. If you need to ensure the privacy of data sent to and from a serviced component, you should consider using AuthenticationOption.Privacy.

However, do not use this option if your application is located in a secure network that uses IPSec encryption to protect the communication channels between servers. You can configure the packet privacy authentication level using the following assembly-level attribute.

[assembly: ApplicationAccessControl(Authentication =
                                    AuthenticationOption.Privacy)]
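Correspondingly, if you need only tamper detection and not encryption, you can configure the integrity option with the same attribute.

[assembly: ApplicationAccessControl(Authentication =
                                    AuthenticationOption.Integrity)]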

More Information

For more information, see the following resources:

Threading

Enterprise Services serviced components built using Microsoft Visual C# or Microsoft Visual Basic .NET do not exhibit thread affinity because their threading model is set to Both. This setting indicates that the component can be activated in a single-threaded apartment (STA) or a multithreaded apartment (MTA), depending on the caller. The component is created in the same apartment as its caller.

Avoid STA Components

STA components, such as Visual Basic 6.0 components that use the Single threading model, serialize all callers onto a single STA thread. As a result, an expensive thread switch and cross-apartment marshaling occur every time the object is called.

In addition to the costly thread switches, STA components cause contention because multiple requests for STA objects are queued until the thread that services the STA is free to handle them. The required STA thread might already be busy or blocked servicing a request for another component in the same apartment. This blocks the caller and creates a significant bottleneck.

More Information

For more information about reducing threading bottlenecks, see "Reduce or Avoid Cross-Apartment Calls" in Chapter 7, "Improving Interop Performance."

For more information about threads and apartments, see "Marshaling and COM Apartments" in "Interop Marshaling Overview" of the .NET Framework Developer's Guide on MSDN at https://msdn.microsoft.com/en-us/library/eaw10et3.aspx.

Synchronization Attribute

You use the Synchronization attribute to synchronize access to a class and guarantee that it can be called by only one caller at a time. Serviced components that are configured for transactions and JIT activation are automatically synchronized. Generally, in a server application, you do not need to worry about synchronizing access to a serviced component's class members because by default each component services a single client request and concurrent access does not occur.

If you use a serviced component in a library application from a multithreaded client, you might need synchronized access due to the potential of multiple threads accessing components simultaneously. Also, global variables require separate synchronization.

Use Locks or Mutexes for Granular Synchronization

You can use the Synchronization attribute only at the class level. This means that all access to an object instance is synchronized. Consider the following example.

public interface ICustomInterface
{
   void DoSomething();
}
[Transaction(TransactionOption.Required)]
[Synchronization(SynchronizationOption.Required)]
[JustInTimeActivation(true)]
public class CustomClass : ServicedComponent, ICustomInterface
{
   public void DoSomething()
   {
      // All calls to this instance are serialized because
      // synchronization is applied at the class level.
   }
}

If you need to synchronize only a small part of your object's code, for example to ensure that a file or global variable is not accessed concurrently, use the C# lock keyword (or a mutex, for cross-process synchronization) instead.
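For example, a component can protect just a shared static member while leaving the rest of its methods unsynchronized. The class and member names below are hypothetical.

using System.EnterpriseServices;

public class Tracker : ServicedComponent
{
   private static int requestCount;                   // shared across instances
   private static readonly object countLock = new object();

   public void RecordRequest()
   {
      // Only this small critical section is synchronized; the class
      // does not need the class-level Synchronization attribute.
      lock (countLock)
      {
         requestCount++;
      }
      // Unsynchronized work continues here.
   }
}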

More Information

For more information about .NET Framework synchronization classes, see "Locking and Synchronization Explained," in Chapter 5, "Improving Managed Code Performance."

Summary

Enterprise Services (COM+) provides a broad range of important infrastructure-level features for middle tier components, including distributed transaction management, object pooling, and role-based security.

Start by considering whether you need services. If you do, consider whether you can use a highly efficient library application or whether you need the added fault tolerance and security benefits provided by server applications. Physical deployment considerations might determine that you need server applications. Remember that server applications incur the added overhead of IPC, marshaling, and additional security checks.

After you decide to use Enterprise Services, use the guidance in this chapter to ensure that you use each service as efficiently as possible.

Additional Resources

For more information about Enterprise Services performance, see the following resources in this guide:

For related resources, see the following Microsoft Knowledge Base articles:


© Microsoft Corporation. All rights reserved.