Threading and Synchronization

This chapter is excerpted from C# 3.0 Cookbook, Third Edition: More than 250 solutions for C# 3.0 programmers by Jay Hilyard, Stephen Teilhet, published by O'Reilly Media


Introduction

A thread represents a single flow of execution logic in a program. Some programs never need more than the single main thread to execute efficiently, but many do, and that is what this chapter is about. Threading in .NET allows you to build responsive and efficient applications. Many applications need to perform multiple actions at the same time (such as handling user interface interaction while processing data), and threading provides the capability to achieve this. Being able to have your application perform multiple tasks is a liberating, yet complicating, factor in your application design. Once you have multiple threads of execution in your application, you need to start thinking about what data needs to be protected from concurrent access, what data could create interdependencies between threads that could lead to deadlocks (Thread A holds a resource that Thread B is waiting for, while Thread B holds a resource that Thread A is waiting for), and how to store data that you want to associate with individual threads. You will explore some of these issues to help you take advantage of this capability of the .NET Framework. You will also see the areas where you need to be careful, and items to keep in mind, while designing and creating your multithreaded application.

Creating Per-Thread Static Fields

Problem

Static fields, by default, are shared between threads within an application domain. You need to allow each thread to have its own nonshared copy of a static field, so that this static field can be updated on a per-thread basis.

Solution

Use ThreadStaticAttribute to mark any static fields as not shareable between threads:

    using System;
    using System.Threading;

    public class Foo
    {
        [ThreadStaticAttribute( )]
        public static string bar = "Initialized string";
    }

Discussion

By default, static fields are shared between all threads that access these fields in the same application domain. To see this, you'll create a class with a static field called bar and a static method to access and display the value contained in this field:

    using System;
    using System.Threading;

    public class ThreadStaticField
    {
        public static string bar = "Initialized string";

        public static void DisplayStaticFieldValue( )
        {
            string msg =
                string.Format("{0} contains static field value of: {1}",
                    Thread.CurrentThread.GetHashCode( ),
                    ThreadStaticField.bar);
            Console.WriteLine(msg);
        }
    }

Next, create a test method that accesses this static field both on the current thread and on a newly spawned thread:

    public static void TestStaticField( )
    {
        ThreadStaticField.DisplayStaticFieldValue( );

        Thread newStaticFieldThread =
            new Thread(ThreadStaticField.DisplayStaticFieldValue);

        newStaticFieldThread.Start( );

        ThreadStaticField.DisplayStaticFieldValue( );
    }

This code displays output that resembles the following:

    9 contains static field value of: Initialized string
    10 contains static field value of: Initialized string
    9 contains static field value of: Initialized string

In the preceding example, the current thread's hash value is 9, and the new thread's hash value is 10. These values will vary from system to system. Notice that both threads are accessing the same static bar field. Next, add the ThreadStaticAttribute to the static field:

    public class ThreadStaticField
    {
        [ThreadStaticAttribute( )]
        public static string bar = "Initialized string";

        public static void DisplayStaticFieldValue( )
        {
            string msg =
                string.Format("{0} contains static field value of: {1}",
                    Thread.CurrentThread.GetHashCode( ),
                    ThreadStaticField.bar);
            Console.WriteLine(msg);
        }
    }

Now, output resembling the following is displayed:

    9 contains static field value of: Initialized string
    10 contains static field value of:
    9 contains static field value of: Initialized string

Notice that the new thread returns a null for the value of the static bar field. This is the expected behavior. The bar field is initialized only in the first thread that accesses it. In all other threads, this field is initialized to null. Therefore, it is imperative that you initialize the bar field in all threads before it is used.

Tip

Remember to initialize any static field that is marked with ThreadStaticAttribute before it is used in any thread. That is, this field should be initialized in the method passed in to the ThreadStart delegate. You should make sure not to initialize the static field using a field initializer, as shown in the prior code, since only the first thread to run the static initialization sees that initial value.
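Following this tip, here is a minimal sketch of per-thread initialization: each thread checks and initializes its own copy of the field at the top of the thread method, rather than relying on the field initializer (the class mirrors the ThreadStaticField example above):

```csharp
using System;
using System.Threading;

public class ThreadStaticField
{
    [ThreadStatic]
    public static string bar;   // No field initializer: each thread sets its own copy.

    public static void DisplayStaticFieldValue( )
    {
        // Initialize this thread's copy before first use.
        if (bar == null)
            bar = "Initialized string";

        Console.WriteLine("{0} contains static field value of: {1}",
            Thread.CurrentThread.ManagedThreadId, bar);
    }
}
```

With this pattern, every thread that calls DisplayStaticFieldValue sees "Initialized string" rather than null.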

The bar field is initialized to the "Initialized string" string literal before it is used in the first thread that accesses this field. In the previous test code, the bar field was accessed first, and, therefore, it was initialized in the current thread. Suppose you were to remove the first line of the TestStaticField method, as shown here:

    public static void TestStaticField( )
    {
        // ThreadStaticField.DisplayStaticFieldValue( );

        Thread newStaticFieldThread =
            new Thread(ThreadStaticField.DisplayStaticFieldValue);
        newStaticFieldThread.Start( );

        ThreadStaticField.DisplayStaticFieldValue( );
    }

This code now displays similar output to the following:

    10 contains static field value of: Initialized string
    9 contains static field value of:

The current thread does not access the bar field first and therefore does not initialize it. However, when the new thread accesses it first, it does initialize it.

Note that adding a static constructor to initialize the static field marked with this attribute will still follow the same behavior. Static constructors are executed only one time per application domain.

See Also

The "ThreadStaticAttribute Attribute" and "Static Modifier (C#)" topics in the MSDN documentation.

Providing Thread-Safe Access to Class Members

Problem

You need to provide thread-safe access through accessor functions to an internal member variable.

The following NoSafeMemberAccess class shows three methods: ReadNumericField, IncrementNumericField, and ModifyNumericField. While all of these methods access the internal numericField member, the access is currently not safe for multithreaded access:

    public static class NoSafeMemberAccess
    {
        private static int numericField = 1;

        public static void IncrementNumericField( )
        {
            ++numericField;
        }

        public static void ModifyNumericField(int newValue)
        {
            numericField = newValue;
        }

        public static int ReadNumericField( )
        {
            return (numericField);
        }
    }

Solution

NoSafeMemberAccess could be used in a multithreaded application, and therefore it must be made thread-safe. Consider what would occur if multiple threads were calling the IncrementNumericField method at the same time: two calls to IncrementNumericField could overlap so that numericField is updated only once. To protect against this, you will modify this class by creating an object to lock against in critical sections of the code:

    public static class SaferMemberAccess
    {
        private static int numericField = 1;
        private static object syncObj = new object( );

        public static void IncrementNumericField( )
        {
            lock (syncObj)
            {
                ++numericField;
            }
        }

        public static void ModifyNumericField(int newValue)
        {
            lock (syncObj)
            {
                numericField = newValue;
            }
        }

        public static int ReadNumericField( )
        {
            lock (syncObj)
            {
                return (numericField);
            }
        }
    }

Using the lock statement on the syncObj object lets you synchronize access to the numericField member. This now makes all three methods safe for multithreaded access.
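For the special case of a single numeric field like this one, the System.Threading.Interlocked class offers a lighter-weight alternative to taking a lock. This is a sketch, not part of the recipe, and it applies only when each operation is a single atomic step; multi-statement invariants still require the lock approach:

```csharp
using System.Threading;

public static class InterlockedMemberAccess
{
    private static int numericField = 1;

    public static void IncrementNumericField( )
    {
        // Atomic increment; no lock object needed.
        Interlocked.Increment(ref numericField);
    }

    public static void ModifyNumericField(int newValue)
    {
        // Atomic write.
        Interlocked.Exchange(ref numericField, newValue);
    }

    public static int ReadNumericField( )
    {
        // Atomic read (a compare-exchange with no-op arguments).
        return Interlocked.CompareExchange(ref numericField, 0, 0);
    }
}
```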

Discussion

Marking a block of code as a critical section is done using the lock keyword. The lock keyword should not be used on a public type or on an instance outside the control of the program, as this can contribute to deadlocks. Examples of this are locking on the this pointer, on the type object for a class (typeof(MyClass)), or on a string literal ("MyLock"). If you are attempting to protect code in only public static methods, the System.Runtime.CompilerServices.MethodImpl attribute could also be used for this purpose with the MethodImplOptions.Synchronized value:

    [MethodImpl(MethodImplOptions.Synchronized)]
    public static void MySynchronizedMethod( )
    {
    }

There is a problem with synchronization using an object such as syncObj in the SaferMemberAccess example. If you lock an object or type that can be accessed by other objects within the application, other objects may also attempt to lock this same object. This will manifest itself in poorly written code that locks itself, such as the following code:

    public class DeadLock
    {
        public void Method1( )
        {
            lock(this)
            {
                // Do something.
            }
        }
    }

When Method1 is called, it locks the current DeadLock object. Unfortunately, any object that has access to the DeadLock class may also lock it. This is shown here:

    using System;
    using System.Threading;

    public class AnotherCls
    {
        public void DoSomething( )
        {
            DeadLock deadLock = new DeadLock( );
            lock(deadLock)
            {
                Thread thread = new Thread(deadLock.Method1);
                thread.Start( );

                // Do some time-consuming task here.
            }
        }
    }

The DoSomething method obtains a lock on the deadLock object and then attempts to call the Method1 method of the deadLock object on another thread, after which a very long task is executed. While the long task is executing, the lock on the deadLock object prevents Method1 from being called on the other thread. Only when this long task ends, and execution exits the critical section of the DoSomething method, will Method1 be able to acquire its lock on the same DeadLock object (via this). As you can see, this can become a major headache to track down in a much larger application.

Jeffrey Richter has come up with a relatively simple method to remedy this situation, which he details quite clearly in the article "Safe Thread Synchronization" in the January 2003 issue of MSDN Magazine. His solution is to create a private field within the class on which to synchronize. Only the object itself can acquire this private field; no outside object or type may acquire it. This solution is also now the recommended practice in the MSDN documentation for the lock keyword. The DeadLock class can be rewritten, as follows to fix this problem:

    public class DeadLock
    {
        private object syncObj = new object( );

        public void Method1( )
        {
            lock(syncObj)
            {
                // Do something.
            }
        }
    }

Now in the DeadLock class, you are locking on the internal syncObj, while the DoSomething method locks on the DeadLock class instance. This resolves the deadlock condition, but the DoSomething method still should not lock on a public type. Therefore, change the AnotherCls class like so:

    public class AnotherCls
    {
        private object deadLockSyncObj = new object( );

        public void DoSomething( )
        {
            DeadLock deadLock = new DeadLock( );
            lock(deadLockSyncObj)
            {
                Thread thread = new Thread(deadLock.Method1);
                thread.Start( );

                // Do some time-consuming task here.
            }
        }
    }

Now the AnotherCls class has an object of its own to protect access to the DeadLock class instance in DoSomething instead of locking on the public type.

To clean up your code, you should stop locking any objects or types except for the synchronization objects that are private to your type or object, such as the syncObj in the fixed DeadLock class. This recipe makes use of this pattern by creating a static syncObj object within the SaferMemberAccess class. The IncrementNumericField, ModifyNumericField, and ReadNumericField methods use this syncObj to synchronize access to the numericField field. Note that if you do not need a lock while the numericField is being read in the ReadNumericField method, you can remove this lock block and simply return the value contained in the numericField field.

Tip

Minimizing the number of critical sections within your code can significantly improve performance. Use what you need to secure resource access, but no more.

If you require more control over locking and unlocking of critical sections, you might want to try using the overloaded static Monitor.TryEnter methods. These methods allow more flexibility by introducing a timeout value. The lock keyword will attempt to acquire a lock on a critical section indefinitely. However, with the TryEnter method, you can specify a timeout value in milliseconds (as an integer) or as a TimeSpan structure. The TryEnter methods return true if a lock was acquired and false if it was not. Note that the overload of the TryEnter method that accepts only a single parameter does not block for any amount of time. This method returns immediately, regardless of whether the lock was acquired.

The updated class using the Monitor methods is shown in Example 18-1, "Using Monitor methods".

Example 18-1. Using Monitor methods

using System;
using System.Threading;

public static class MonitorMethodAccess
{
    private static int numericField = 1;
    private static object syncObj = new object( );

    public static object SyncRoot
    {
        get { return syncObj; }
    }

    public static void IncrementNumericField( )
    {
        if (Monitor.TryEnter(syncObj, 250))
        {
            try
            {
                ++numericField;
            }
            finally
            {
                Monitor.Exit(syncObj);
            }
        }
    }

    public static void ModifyNumericField(int newValue)
    {
        if (Monitor.TryEnter(syncObj, 250))
        {
            try
            {
                numericField = newValue;
            }
            finally
            {
                Monitor.Exit(syncObj);
            }
        }
    }

    public static int ReadNumericField( )
    {
        if (Monitor.TryEnter(syncObj, 250))
        {
            try
            {
                return (numericField);
            }
            finally
            {
                Monitor.Exit(syncObj);
            }
        }

        return (-1);
    }
}

Note that with the TryEnter methods, you should always check to see whether the lock was in fact acquired. If not, your code should wait and try again or return to the caller.
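One way to structure that wait-and-retry logic is a bounded retry loop around TryEnter. The following is a self-contained sketch with illustrative values (the 250-millisecond timeout and retryCount parameter are not from the recipe):

```csharp
using System;
using System.Threading;

public static class RetryingAccess
{
    private static int numericField = 1;
    private static object syncObj = new object( );

    // Attempt the increment up to retryCount times before giving up.
    public static bool TryIncrement(int retryCount)
    {
        for (int attempt = 0; attempt < retryCount; attempt++)
        {
            if (Monitor.TryEnter(syncObj, 250))
            {
                try
                {
                    ++numericField;
                    return true;
                }
                finally
                {
                    Monitor.Exit(syncObj);
                }
            }
            // Lock not acquired; yield the time slice before trying again.
            Thread.Sleep(0);
        }
        return false; // The caller decides what failure means.
    }

    public static int Read( )
    {
        lock (syncObj)
        {
            return numericField;
        }
    }
}
```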

You might think at this point that all of the methods are thread-safe. Individually, they are, but what if you are trying to call them and you expect synchronized access between two of the methods? If ModifyNumericField and ReadNumericField are used one after the other by Class 1 on Thread 1 at the same time Class 2 is using these methods on Thread 2, locking or Monitor calls will not prevent Class 2 from modifying the value before Thread 1 reads it. Here is a series of actions that demonstrates this:

  • Class 1 Thread 1
    Calls ModifyNumericField with 10.

  • Class 2 Thread 2
    Calls ModifyNumericField with 15.

  • Class 1 Thread 1
    Calls ReadNumericField and gets 15, not 10.

  • Class 2 Thread 2
    Calls ReadNumericField and gets 15, which it expected.

In order to solve this problem of synchronizing reads and writes, the calling class needs to manage the interaction. The external class can accomplish this by using the Monitor class to establish a lock on the exposed synchronization object SyncRoot from MonitorMethodAccess, as shown here:

    int num = 0;
    if (Monitor.TryEnter(MonitorMethodAccess.SyncRoot, 250))
    {
        MonitorMethodAccess.ModifyNumericField(10);
        num = MonitorMethodAccess.ReadNumericField( );
        Monitor.Exit(MonitorMethodAccess.SyncRoot);
    }
    Console.WriteLine(num);

See Also

The "Lock Statement," "Thread Class," and "Monitor Class" topics in the MSDN documentation; see the "Safe Thread Synchronization" article in the January 2003 issue of MSDN Magazine.

Preventing Silent Thread Termination

Problem

An exception thrown in a spawned worker thread will cause this thread to be silently terminated if the exception is unhandled. You need to make sure all exceptions are handled in all threads. If an exception happens in this new thread, you want to handle it and be notified of its occurrence.

Solution

You must add exception handling to the method that you pass to the ThreadStart delegate with a try-catch, try-finally, or try-catch-finally block. The code to do this is shown in Example 18-2.

Example 18-2. Preventing silent thread termination

using System;
using System.Threading;

public class MainThread
{
    public void CreateNewThread( )
    {
        // Spawn new thread to do concurrent work.
        Thread newWorkerThread = new Thread(Worker.DoWork);
        newWorkerThread.Start( );
    }

}

public class Worker
{
    // Method called by ThreadStart delegate to do concurrent work
    public static void DoWork( )
    {
        try
        {
            // Do thread work here.
        }
        catch
        {
            // Handle thread exception here.
            // Do not re-throw exception.
        }
        finally
        {
            // Do thread cleanup here.
        }
    }
}

Discussion

If an unhandled exception occurs in the main thread of an application, the main thread terminates, along with your entire application. An unhandled exception in a spawned worker thread, however, will terminate only that thread. This will happen without any visible warnings, and your application will continue to run as if nothing happened.

Simply wrapping an exception handler around the Start method of the Thread class will not catch the exception on the newly created thread. The Start method is called within the context of the current thread, not the newly created thread. It also returns immediately once the thread is launched, so it isn't going to wait around for the thread to finish. Therefore, the exception thrown in the new thread will not be caught since it is not visible to any other threads.

If the exception is rethrown from the catch block, the finally block of this structured exception handler will still execute. After the finally block finishes, however, the rethrown exception propagates out of the thread method; since no handler above it can catch the exception, the thread terminates. Any code after the finally block will not be executed, since an unhandled exception has occurred.

Tip

Never rethrow an exception at the highest point in the exception-handling hierarchy within a thread. Since no exception handlers can catch this rethrown exception, it will be considered unhandled, and the thread will terminate after all finally blocks have been executed.

What if you use the ThreadPool and QueueUserWorkItem? This approach still protects you, because the handling code you add executes inside the worker thread. Just make sure you have the finally block set up so that you can notify yourself of exceptions in other threads, as shown earlier.
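As a sketch of that idea (the simulated failure and the completion event are illustrative, not part of the recipe), the handler lives inside the work item queued to the pool, and the finally block runs regardless of outcome:

```csharp
using System;
using System.Threading;

public class PoolWorker
{
    // Queue work whose exceptions are handled inside the work item itself.
    public static ManualResetEvent QueueWork( )
    {
        ManualResetEvent done = new ManualResetEvent(false);
        ThreadPool.QueueUserWorkItem(state =>
        {
            try
            {
                // Do thread-pool work here (this one throws to show handling).
                throw new InvalidOperationException("simulated failure");
            }
            catch (Exception ex)
            {
                // Handle and record the exception; do not rethrow.
                Console.WriteLine("Worker failed: " + ex.Message);
            }
            finally
            {
                // Notification/cleanup runs whether or not the work succeeded.
                done.Set( );
            }
        });
        return done;
    }
}
```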

In order to provide a last-chance exception handler for your WinForms application, you need to hook up to two separate events. The first is the System.AppDomain.CurrentDomain.UnhandledException event, which will catch all unhandled exceptions in the current AppDomain on worker threads; it will not catch exceptions that occur on the main UI thread of a WinForms application. In order to catch those, you also need to hook up to the System.Windows.Forms.Application.ThreadException event, which will catch unhandled exceptions in the main UI thread. See the section called "Dealing with Unhandled Exceptions in WinForms Applications" for more information on both events.
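A minimal sketch of hooking both events at application startup (LogFatal is a hypothetical helper; substitute your own reporting):

```csharp
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main( )
    {
        // Catches unhandled exceptions on worker threads in this AppDomain.
        AppDomain.CurrentDomain.UnhandledException +=
            (sender, e) => LogFatal(e.ExceptionObject as Exception);

        // Catches unhandled exceptions on the main UI thread.
        Application.ThreadException +=
            (sender, e) => LogFatal(e.Exception);

        Application.Run(new Form( ));
    }

    public static void LogFatal(Exception ex)
    {
        // Hypothetical last-chance reporting; replace with real logging.
        Console.WriteLine("Unhandled: " + (ex == null ? "(unknown)" : ex.Message));
    }
}
```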

See Also

The "Thread Class" and "Exception Class" topics in the MSDN documentation.

Being Notified of the Completion of an Asynchronous Delegate

Problem

You need a way of receiving notification from an asynchronously invoked delegate that it has finished. This scheme must allow your code to continue processing without having to constantly call IsCompleted in a loop or to rely on the WaitOne method. Since the asynchronous delegate will return a value, you must be able to pass this return value back to the invoking thread.

Solution

Use the BeginInvoke method to start the asynchronous delegate, but use the first parameter to pass a callback delegate to the asynchronous delegate, as shown in Example 18-3, "Getting notification on completion of an asynchronous delegate".

Example 18-3. Getting notification on completion of an asynchronous delegate

using System;
using System.Threading;
using System.Runtime.Remoting.Messaging;   // For the AsyncResult class.

public class AsyncAction2
{
    public void CallbackAsyncDelegate( )
    {
        AsyncCallback callBack = DelegateCallback;

        AsyncInvoke method1 = TestAsyncInvoke.Method1;
        Console.WriteLine("Calling BeginInvoke on Thread {0}",
            Thread.CurrentThread.ManagedThreadId);
        IAsyncResult asyncResult = method1.BeginInvoke(callBack, method1);

        // No need to poll or use the WaitOne method here, so return to the
        // calling method.
        return;
    }

    private static void DelegateCallback(IAsyncResult iresult)
    {
        Console.WriteLine("Getting callback on Thread {0}",
            Thread.CurrentThread.ManagedThreadId);
        AsyncResult asyncResult = (AsyncResult)iresult;
        AsyncInvoke method1 = (AsyncInvoke)asyncResult.AsyncDelegate;

        int retVal = method1.EndInvoke(asyncResult);
        Console.WriteLine("retVal (Callback): " + retVal);
    }
}

When the asynchronous delegate finishes processing, this callback delegate invokes the DelegateCallback method; the callback runs on the thread that executed the delegate, not on the thread that called BeginInvoke.

The following code defines the AsyncInvoke delegate and the asynchronously invoked static method TestAsyncInvoke.Method1:

    public delegate int AsyncInvoke( );

    public class TestAsyncInvoke
    {
        public static int Method1( )
        {
            Console.WriteLine("Invoked Method1 on Thread {0}",
                Thread.CurrentThread.ManagedThreadId);
            return (1);
        }
    }

To run the asynchronous invocation, create an instance of the AsyncAction2 class and call the CallbackAsyncDelegate method like so:

    AsyncAction2 aa2 = new AsyncAction2( );
    aa2.CallbackAsyncDelegate( );

The output for this code is shown next. Note that Method1 and the callback run on a different thread (10) than the one that called BeginInvoke (9):

    Calling BeginInvoke on Thread 9
    Invoked Method1 on Thread 10
    Getting callback on Thread 10
    retVal (Callback): 1

Discussion

The asynchronous delegates in this recipe are created and invoked in the same fashion as the asynchronous delegate in the section called "Preventing Silent Thread Termination". Instead of using the IsCompleted property to determine when the asynchronous delegate is finished processing (or the WaitOne method to block for a specified time while the asynchronous delegate continues processing), this recipe uses a callback to indicate to the calling thread that the asynchronous delegate has finished processing and that its return value, ref parameter values, and out parameter values are available.

Invoking a delegate in this manner is much more flexible and efficient than simply polling the IsCompleted property to determine when a delegate finishes processing. When polling this property in a loop, the polling method cannot return and allow the application to continue processing. A callback is also better than using a WaitOne method, since the WaitOne method will block the calling thread and allow no processing to occur.

The CallbackAsyncDelegate method in this recipe makes use of the first parameter to the BeginInvoke method of the asynchronous delegate to pass in another delegate. This contains a callback method to be called when the asynchronous delegate finishes processing. After calling BeginInvoke, this method can now return, and the application can continue processing; it does not have to wait in a polling loop or be blocked while the asynchronous delegate is running.

The AsyncCallback delegate that is passed into the first parameter of the BeginInvoke method is defined as follows:

    public delegate void AsyncCallback(IAsyncResult ar);

When this delegate is created, as shown here, the callback method passed in, DelegateCallback, will be called as soon as the asynchronous delegate completes:

 AsyncCallback callBack = new AsyncCallback(DelegateCallback);

DelegateCallback will not run on the same thread as BeginInvoke but rather on a Thread from the ThreadPool. This callback method accepts a parameter of type IAsyncResult. You can cast this parameter to an AsyncResult object within the method and use it to obtain information about the completed asynchronous delegate, such as its return value, any ref parameter values, and any out parameter values. If the delegate instance that was used to call BeginInvoke is still in scope, you can just pass the IAsyncResult to the EndInvoke method. In addition, this object can obtain any state information passed into the second parameter of the BeginInvoke method. This state information can be any object type.

The DelegateCallback method casts the IAsyncResult parameter to an AsyncResult object and obtains the asynchronous delegate that was originally called. The EndInvoke method of this asynchronous delegate is called to process any return value, ref parameters, or out parameters. If any state object was passed in to the BeginInvoke method's second parameter, it can be obtained here through the following line of code:

   object state = asyncResult.AsyncState;

See Also

The "AsyncCallback Delegate" topic in the MSDN documentation.

Storing Thread-Specific Data Privately

Problem

You want to store thread-specific data discovered at runtime. This data should be accessible only to code running within that thread.

Solution

Use the AllocateDataSlot, AllocateNamedDataSlot, or GetNamedDataSlot method on the Thread class to reserve a thread local storage (TLS) slot. Using TLS, a large object can be stored in a data slot on a thread and used in many different methods. This can be done without having to pass the structure as a parameter.

For this example, a class called ApplicationData here represents a set of data that can grow to be very large in size:

   public class ApplicationData
    {
        // Application data is stored here.
    }

Before using this structure, a data slot has to be created in TLS to store the class. GetNamedDataSlot is called to get the appDataSlot. Since that doesn't exist, the default behavior for GetNamedDataSlot is to just create it. The following code creates an instance of the ApplicationData class and stores it in the data slot named appDataSlot:

    ApplicationData appData = new ApplicationData();
    Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"), appData);

Whenever this class is needed, it can be retrieved with a call to Thread.GetData. The following line of code gets the appData structure from the data slot named appDataSlot:

    ApplicationData storedAppData =
            (ApplicationData)Thread.GetData(Thread.GetNamedDataSlot("appDataSlot"));

At this point, the storedAppData structure can be read or modified. After the action has been performed on storedAppData, it must be placed back into the data slot named appDataSlot:

   Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"), storedAppData);

Once the application is finished using this data, the data slot can be released from memory using the following method call:

   Thread.FreeNamedDataSlot("appDataSlot");

The HandleClass class in Example 18-4, "Using TLS to store a structure" shows how TLS can be used to store a structure.

Example 18-4. Using TLS to store a structure

using System;
using System.Threading;

public class HandleClass
{
    public static void Main( )
    {
        // Create structure instance and store it in the named data slot.
        ApplicationData appData = new ApplicationData( );
        Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"), appData);

        // Call another method that will use this structure.
        HandleClass.MethodB( );

        // When done, free this data slot.
        Thread.FreeNamedDataSlot("appDataSlot");
    }

    public static void MethodB( )
    {
        // Get the structure instance from the named data slot.
        ApplicationData storedAppData =
           (ApplicationData)Thread.GetData(Thread.GetNamedDataSlot("appDataSlot"));

        // Modify the ApplicationData.

        // When finished modifying this data, store the changes back
        // into the named data slot.
        Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"),
                        storedAppData);
        // Call another method that will use this data.
        HandleClass.MethodC( );
    }

    public static void MethodC( )
    {
        // Get the instance from the named data slot.
        ApplicationData storedAppData =
            (ApplicationData)Thread.GetData(Thread.GetNamedDataSlot("appDataSlot"));

        // Modify the data.

        // When finished modifying this data, store the changes back into
        // the named data slot.
        Thread.SetData(Thread.GetNamedDataSlot("appDataSlot"), storedAppData);
    }
}

Discussion

Thread local storage is a convenient way to store data that is usable across method calls without having to pass the structure to the method or even without knowledge about where the structure was actually created.

Data stored in a named TLS data slot is available only to that thread; no other thread can access a named data slot of another thread. The data stored in this data slot is accessible from anywhere within the thread. This setup essentially makes this data global to the thread.

To create a named data slot, use the static Thread.GetNamedDataSlot method. This method accepts a single parameter, name, that defines the name of the data slot. This name should be unique; if a data slot with the same name already exists, the existing slot is returned and a new one is not created. This action occurs silently; no exception is thrown and no error code is available to inform you that you are using a data slot someone else created. To be sure that you are using a unique data slot, use the Thread.AllocateNamedDataSlot method. This method throws a System.ArgumentException if a data slot already exists with the same name. Otherwise, it operates similarly to the GetNamedDataSlot method.
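A short sketch of using AllocateNamedDataSlot defensively follows; the fallback to GetNamedDataSlot on collision is one possible policy (you might instead treat the collision as a fatal error):

```csharp
using System;
using System.Threading;

public static class SlotAllocator
{
    public static LocalDataStoreSlot GetUniqueSlot(string name)
    {
        try
        {
            // Throws ArgumentException if a slot with this name already exists.
            return Thread.AllocateNamedDataSlot(name);
        }
        catch (ArgumentException)
        {
            // Some other code already allocated this name; share its slot.
            return Thread.GetNamedDataSlot(name);
        }
    }
}
```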

It is interesting to note that this named data slot is created on every thread in the process, not just the thread that called this method. This fact should not be much more than an inconvenience to you, though, since the data in each data slot can be accessed only by the thread that contains it. In addition, if a data slot with the same name was created on a separate thread and you call GetNamedDataSlot on the current thread with this name, none of the data in any data slot on any thread will be destroyed.

GetNamedDataSlot returns a LocalDataStoreSlot object that is used to access the data slot. Note that this class is not creatable through the use of the new keyword. It must be created through one of the AllocateDataSlot or AllocateNamedDataSlot methods on the Thread class.

To store data in this data slot, use the static Thread.SetData method. This method takes the object passed in to the data parameter and stores it in the data slot defined by the dataSlot parameter.

The static Thread.GetData method retrieves the object stored in a data slot. This method retrieves a LocalDataStoreSlot object that is created through the Thread.GetNamedDataSlot method. The GetData method then returns the object that was stored in that particular data slot. Note that the object returned might have to be cast to its original type before it can be used.
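Put together, the store-and-retrieve round trip looks like this minimal sketch (the slot name and string payload are placeholders for your own data):

```csharp
using System;
using System.Threading;

static class TlsRoundTrip
{
    public static string StoreAndFetch(string value)
    {
        // SetData stores any object in the named slot, visible only
        // to the current thread.
        Thread.SetData(Thread.GetNamedDataSlot("demoSlot"), value);

        // GetData returns Object, so cast back to the original type.
        return (string)Thread.GetData(Thread.GetNamedDataSlot("demoSlot"));
    }

    static void Main()
    {
        Console.WriteLine(StoreAndFetch("per-thread data")); // prints "per-thread data"
    }
}
```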

The static method Thread.FreeNamedDataSlot will free the memory associated with a named data slot. This method accepts the name of the data slot as a string and, in turn, frees the memory associated with that data slot. Remember that when a data slot is created with GetNamedDataSlot, a named data slot is also created on all of the other threads running in that process. This is not really a problem when creating data slots with the GetNamedDataSlot method because, if a data slot exists with this name, a LocalDataStoreSlot object that refers to that data slot is returned, a new data slot is not created, and the original data in that data slot is not destroyed.

This situation becomes more of a problem when using the FreeNamedDataSlot method. This method will free the memory associated with the data slot name passed in to it for all threads, not just the thread that it was called on. Freeing a data slot before all threads have finished using the data within that data slot can be disastrous to your application.

A way to work around this problem is to not call the FreeNamedDataSlot method at all. When a thread terminates, all of its data slots in TLS are freed automatically. The side effect of not calling FreeNamedDataSlot is that the slot is taken up until the garbage collector determines that the thread the slot was created on has finished and the slot can be freed.

If you know the number of TLS slots you need for your code at compile time, consider using the ThreadStaticAttribute on a static field of your class to set up TLS-like storage.
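For instance, the following sketch (the field and method names are illustrative) shows that every thread gets its own fresh copy of a field marked with ThreadStaticAttribute:

```csharp
using System;
using System.Threading;

static class PerThreadCounter
{
    // Each thread sees its own copy of this field; without the
    // attribute all threads would share (and race on) one value.
    [ThreadStatic]
    static int _count;

    static int BumpThreeTimes()
    {
        for (int i = 0; i < 3; i++) _count++;
        return _count;
    }

    // Runs the increments on a brand-new thread and reports its total.
    public static int BumpThreeTimesOnNewThread()
    {
        int result = 0;
        Thread t = new Thread(() => result = BumpThreeTimes());
        t.Start();
        t.Join();
        return result;
    }

    static void Main()
    {
        // Every fresh thread starts its copy of _count at zero.
        Console.WriteLine(BumpThreeTimesOnNewThread()); // prints 3
        Console.WriteLine(BumpThreeTimesOnNewThread()); // prints 3 again
    }
}
```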

See Also

The "Thread Local Storage and Thread Relative Static Fields," "ThreadStaticAttribute Attribute," and "Thread Class" topics in the MSDN documentation.

Granting Multiple Access to Resources with a Semaphore

Problem

You have a resource you want only a certain number of clients to access at a given time.

Solution

Use a semaphore to enable resource-counted access to the resource. For example, if you have an Xbox 360 and a copy of Halo3 (the resource) and a development staff eager to blow off some steam (the clients), you have to synchronize access to the Xbox 360. Since the Xbox 360 has four controllers, up to four clients can be playing at any given time. The rules of the house are that when you die, you give up your controller.

To accomplish this, create a class called Halo3Session with a Semaphore called _Xbox360 like this:

 public class Halo3Session
    {
        // A semaphore that simulates a limited resource pool.
        private static Semaphore _Xbox360;

In order to get things rolling, you need to call the Play method, as shown in Example 18-5, "Play method", on the Halo3Session class.

Example 18-5. Play method

public static void Play( )
{
    // An Xbox360 has 4 controller ports so 4 people can play at a time
    // We use 4 as the max and zero to start with, as we want Players
    // to queue up at first until the Xbox360 boots and loads the game
    //
    using (_Xbox360 = new Semaphore(0, 4, "Xbox360"))
    {
        using (ManualResetEvent GameOver =
            new ManualResetEvent(false))
        {
           //
           // 9 Players log in to play
           //
           List<Xbox360Player.PlayerInfo> players =
               new List<Xbox360Player.PlayerInfo>( ) {
                   new Xbox360Player.PlayerInfo { Name="Igor", Dead=GameOver},
                   new Xbox360Player.PlayerInfo { Name="AxeMan", Dead=GameOver},
                   new Xbox360Player.PlayerInfo { Name="Dr. Death",Dead=GameOver},
                   new Xbox360Player.PlayerInfo { Name="HaPpyCaMpEr",Dead=GameOver},
                   new Xbox360Player.PlayerInfo { Name="Executioner",Dead=GameOver},
                   new Xbox360Player.PlayerInfo { Name="FragMan",Dead=GameOver},
                   new Xbox360Player.PlayerInfo { Name="Beatdown",Dead=GameOver},
                   new Xbox360Player.PlayerInfo { Name="Stoney",Dead=GameOver},
                   new Xbox360Player.PlayerInfo { Name="Pwned",Dead=GameOver}
                   };

           foreach (Xbox360Player.PlayerInfo player in players)
           {
               Thread t = new Thread(Xbox360Player.JoinIn);

               // put a name on the thread
               t.Name = player.Name;
              // fire up the player
               t.Start(player);
           }

           // Wait for the Xbox360 to spin up and load Halo3 (3 seconds)
           Console.WriteLine("Xbox360 initializing...");
           Thread.Sleep(3000);
           Console.WriteLine(
               "Halo3 loaded & ready, allowing 4 players in now...");

           // The Xbox360 has the whole semaphore count. We call
           // Release(4) to open up 4 slots and
           // allow the waiting players to enter the Xbox360(semaphore)
           // up to four at a time.
           //
           _Xbox360.Release(4);

           // wait for the game to end...
           GameOver.WaitOne( );
        }
    }
}

The first thing the Play method does is create a new semaphore with a maximum resource count of 4 and the system-wide name "Xbox360", storing it in the _Xbox360 field. This is the semaphore all of the player threads will use to gain access to the game. A ManualResetEvent called GameOver is created to track when the game has ended:

   public class Xbox360Player
    {
        public class PlayerInfo
        {
            public ManualResetEvent Dead {get; set;}
            public string Name {get; set;}
        }

        //... more class
    }

To simulate the developers, you create a thread for each player, with its own Xbox360Player.PlayerInfo instance containing the player's name and, in its Dead property, a reference to the original GameOver ManualResetEvent, which is signaled when the game ends. The threads are created with the ParameterizedThreadStart delegate, which takes the method to execute on the new thread in the Thread constructor and also allows you to pass a data object directly to the Thread.Start(object) overload.

Once the players are in motion, the Xbox 360 "initializes" and then calls Release(4) on the semaphore to open four slots for player threads to grab, and finally waits on the GameOver event to detect that the game has ended.

The players initialize on separate threads and run the JoinIn method, as shown in Example 18-6, "JoinIn method". Each player first opens the Xbox360 semaphore by name and retrieves the data object that was passed to its thread. Once it has the semaphore, it calls WaitOne to queue up to play. When one of the initial four slots opens, or another player "dies," the call to WaitOne unblocks and the player "plays" for a random amount of time and then dies. On dying, each player calls Release on the semaphore to indicate that its slot is now open. When the semaphore returns to its maximum resource count, everyone has played and the GameOver event is set.

Example 18-6. JoinIn method

public static void JoinIn(object info)
{
    // open up the semaphore by name so we can act on it
    using (Semaphore Xbox360 = Semaphore.OpenExisting("Xbox360"))
    {

        // get the data object
        PlayerInfo player = (PlayerInfo)info;

        // Each player notifies the Xbox360 they want to play
        Console.WriteLine("{0} is waiting to play!", player.Name);

        // they wait on the Xbox360 (semaphore) until it lets them
        // have a controller
        Xbox360.WaitOne( );

        // The Xbox360 has chosen the player! (or the semaphore has
        // allowed access to the resource...)
        Console.WriteLine("{0} has been chosen to play. " +
            "Welcome to your doom {0}. >:)", player.Name);

        // figure out a random value for how long the player lasts
        System.Random rand = new Random(500);
        int timeTillDeath = rand.Next(100, 1000);

        // simulate the player is busy playing till they die
        Thread.Sleep(timeTillDeath);

        // figure out how they died
        rand = new Random( );
        int deathIndex = rand.Next(6);

        // notify of the player's passing
        Console.WriteLine("{0} has {1} and gives way to another player",
            player.Name, _deaths[deathIndex]);

        // if all ports are open, everyone has played and the game is over
        int semaphoreCount = Xbox360.Release( );
        if (semaphoreCount == 3)
        {
            Console.WriteLine("Thank you for playing, the game has ended.");
            // set the Dead event for the player
            player.Dead.Set( );
            // close out the semaphore
            Xbox360.Close( );
        }
    }
}

When the Play method is run, output similar to the following is generated:

  Igor is waiting to play!
    AxeMan is waiting to play!
    Dr. Death is waiting to play!
    HaPpyCaMpEr is waiting to play!
    Executioner is waiting to play!
    FragMan is waiting to play!
    Beatdown is waiting to play!
    Xbox360 initializing...
    Stoney is waiting to play!
    Pwned is waiting to play!
    Halo3 loaded & ready, allowing 4 players in now...
    Igor has been chosen to play. Welcome to your doom Igor. >:)
    Dr. Death has been chosen to play. Welcome to your doom Dr. Death. >:)
    AxeMan has been chosen to play. Welcome to your doom AxeMan. >:)
    Executioner has been chosen to play. Welcome to your doom Executioner. >:)
    Dr. Death has was captured and gives way to another player
    AxeMan has was captured and gives way to another player
    Executioner has was captured and gives way to another player
    Pwned has been chosen to play. Welcome to your doom Pwned. >:)
    HaPpyCaMpEr has been chosen to play. Welcome to your doom HaPpyCaMpEr. >:)
    Beatdown has been chosen to play. Welcome to your doom Beatdown. >:)
    Igor has was captured and gives way to another player
    FragMan has been chosen to play. Welcome to your doom FragMan. >:)
    Beatdown has shot their own foot and gives way to another player
    Stoney has been chosen to play. Welcome to your doom Stoney. >:)
    HaPpyCaMpEr has shot their own foot and gives way to another player
    Pwned has shot their own foot and gives way to another player
    FragMan has shot their own foot and gives way to another player
    Stoney has choked on a rocket and gives way to another player
    Thank you for playing, the game has ended.

Discussion

Semaphores are primarily used for resource counting and are available cross-process when named (as they are based on the underlying kernel semaphore object). Cross-process may not sound too exciting to many .NET developers until they realize that cross-process also means cross-AppDomain. Say you are creating additional AppDomains to hold assemblies you are loading dynamically that you don't want to stick around for the whole life of your main AppDomain; the semaphore can help you keep track of how many are loaded at a time. Being able to control access up to a certain number of users can be useful in many scenarios (socket programming, custom thread pools, etc.).
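The resource-counting idea boils down to a few lines. The sketch below (names and counts are illustrative, and it uses an unnamed, in-process semaphore to stay self-contained; pass a name string to the constructor for the cross-process case) caps a critical region at two concurrent workers and records the peak concurrency actually observed:

```csharp
using System;
using System.Threading;

static class LimitedPool
{
    // Allow at most two workers in the critical region at once.
    static readonly Semaphore _pool = new Semaphore(2, 2);
    static int _active;
    static int _maxObserved;

    static void Work()
    {
        _pool.WaitOne();                  // block until a slot is free
        try
        {
            int now = Interlocked.Increment(ref _active);
            // Record the highest concurrency seen so far.
            int seen;
            do { seen = _maxObserved; }
            while (now > seen &&
                   Interlocked.CompareExchange(ref _maxObserved, now, seen) != seen);
            Thread.Sleep(50);             // simulate using the resource
        }
        finally
        {
            Interlocked.Decrement(ref _active);
            _pool.Release();              // free the slot for a waiter
        }
    }

    public static int RunWorkers(int count)
    {
        Thread[] threads = new Thread[count];
        for (int i = 0; i < count; i++)
        {
            threads[i] = new Thread(Work);
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();
        return _maxObserved;
    }

    static void Main()
    {
        // Six workers contend, but no more than two run at once.
        Console.WriteLine("Peak concurrency: {0}", RunWorkers(6));
    }
}
```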

See Also

The "Semaphore," "ManualResetEvent," and "ParameterizedThreadStart" topics in the MSDN documentation.

Synchronizing Multiple Processes with the Mutex

Problem

You have two processes or AppDomains that are running code with actions that you need to coordinate.

Solution

Use a named Mutex as a common signaling mechanism to do the coordination. A named Mutex can be accessed from both pieces of code even when running in different processes or AppDomains.

One situation in which this can be useful is when you are using shared memory to communicate between processes. The SharedMemoryManager class presented in this recipe shows the named Mutex in action by setting up a section of shared memory that can be used to pass serializable objects between processes. The "server" process creates a SharedMemoryManager instance, which sets up the shared memory and creates the Mutex as the initial owner. The "client" process then creates its own SharedMemoryManager instance, which finds the existing shared memory and hooks up to it. Once this connection is established, the "client" waits on the Mutex for an object to arrive. The "server" serializes an object into the shared memory and releases the Mutex, then immediately waits on it again. The "client," which was blocked on the Mutex, deserializes the object from the shared memory and releases the Mutex, handing control back to the "server."

In the example, you will send the Contact structure, which looks like this:

 [StructLayout(LayoutKind.Sequential)]
    [Serializable( )]
    public struct Contact
    {
        public string _name;
        public int _age;
    }

The "server" process code to send the Contact looks like this:

     // Create the initial shared memory manager to get things set up.
       using(SharedMemoryManager<Contact> sm =
           new SharedMemoryManager<Contact>("Contacts",8092))
       {
           // This is the sender process.

           // Launch the second process to get going.
           string processName = Process.GetCurrentProcess( ).MainModule.FileName;
           // Strip the "vshost." infix (7 characters, including the dot)
           // that the Visual Studio hosting process adds to the name.
           int index = processName.IndexOf("vshost");
           if (index != -1)
           {

               string first = processName.Substring(0, index);
               int numChars = processName.Length - (index + 7);
               string second = processName.Substring(index + 7, numChars);

               processName = first + second;
           }
           Process receiver = Process.Start(
               new ProcessStartInfo(
                   processName,
                   "Receiver"));

           // Give it 5 seconds to spin up.
           Thread.Sleep(5000);

           // Make up a contact.
           Contact man;
           man._age = 23;
           man._name = "Dirk Daring";

           // Send it to the other process via shared memory.
           sm.SendObject(man);
    }

The "client" process code to receive the Contact looks like this:

       // Create the initial shared memory manager to get things set up.
        using(SharedMemoryManager<Contact> sm =
            new SharedMemoryManager<Contact>("Contacts",8092))
        {

            // Get the contact once it has been sent.
            Contact c = (Contact)sm.ReceiveObject( );

            // Write it out (or to a database...)
            Console.WriteLine("Contact {0} is {1} years old.",
                                c._name, c._age);
            // Show for 5 seconds.
            Thread.Sleep(5000);
    }

The way this usually works is that one process creates a section of shared memory backed by the paging file using the unmanaged Win32 APIs CreateFileMapping and MapViewOfFile. Currently there is no purely managed way to do this, so you have to use P/Invoke, as you can see in Example 18-7, "Constructor and SetupSharedMemory private method" in the constructor code for the SharedMemoryManager and the private SetupSharedMemory method. The constructor takes a name to use as part of the name of the shared memory and the base size of the shared memory block to allocate. It is the base size because the SharedMemoryManager has to allocate a bit extra for keeping track of the data moving through the buffer.

Example 18-7. Constructor and SetupSharedMemory private method

public SharedMemoryManager(string name,int sharedMemoryBaseSize)
{
    if (string.IsNullOrEmpty(name))
        throw new ArgumentNullException("name");

    if (sharedMemoryBaseSize <= 0)
        throw new ArgumentOutOfRangeException("sharedMemoryBaseSize",
            "Shared Memory Base Size must be a value greater than zero");

    // Set name of the region.
    _memoryRegionName = name;
    // Save base size.
    _sharedMemoryBaseSize = sharedMemoryBaseSize;
    // Set up the memory region size.
    _memRegionSize = (uint)(_sharedMemoryBaseSize + sizeof(int));
    // Set up the shared memory section.
    SetupSharedMemory( );
}

private void SetupSharedMemory( )
{
    // Grab some storage from the page file.
    _handleFileMapping =
        PInvoke.CreateFileMapping((IntPtr)INVALID_HANDLE_VALUE,
                            IntPtr.Zero,
                            PInvoke.PageProtection.ReadWrite,
                            0,
                            _memRegionSize,
                            _memoryRegionName);
    if (_handleFileMapping == IntPtr.Zero)
    {
        throw new Win32Exception(
            "Could not create file mapping");
    }

    // Check the error status.
    int retVal = Marshal.GetLastWin32Error( );
    if (retVal == ERROR_ALREADY_EXISTS)
    {

        // We opened one that already existed.
        // Make the mutex not the initial owner
        // of the mutex since we are connecting
        // to an existing one.
        _mtxSharedMem = new Mutex(false,
            string.Format("{0}mtx{1}",
                typeof(TransferItemType), _memoryRegionName));
    }
    else if (retVal == 0)
    {
         // We opened a new one.
         // Make the mutex the initial owner.
         _mtxSharedMem = new Mutex(true,
             string.Format("{0}mtx{1}",
                 typeof(TransferItemType), _memoryRegionName));
    }
    else
    {
         // Something else went wrong.
         throw new Win32Exception(retVal, "Error creating file mapping");
    }

    // Map the shared memory.
    _ptrToMemory = PInvoke.MapViewOfFile(_handleFileMapping,
                                    FILE_MAP_WRITE,
                                    0, 0, IntPtr.Zero);
    if (_ptrToMemory == IntPtr.Zero)
    {
        retVal = Marshal.GetLastWin32Error( );
        throw new Win32Exception(retVal, "Could not map file view");
    }

    retVal = Marshal.GetLastWin32Error( );
    if (retVal != 0 && retVal != ERROR_ALREADY_EXISTS)
    {
        // Something else went wrong.
        throw new Win32Exception(retVal, "Error mapping file view");
    }
}

The code to send an object through the shared memory is contained in the SendObject method, as shown in Example 18-8, "SendObject method". First, it checks to see if the object being sent is indeed serializable by checking the IsSerializable property on the type of the object. If the object is serializable, an integer with the size of the serialized object and the serialized object content are written out to the shared memory section. Then, the Mutex is released to indicate that there is an object in the shared memory. It then waits on the Mutex again to wait until the "client" has received the object.

Example 18-8. SendObject method

public void SendObject(TransferItemType transferObject)
{
    // Can send only serializable objects.
    if (!transferObject.GetType( ).IsSerializable)
        throw new ArgumentException(
            string.Format("Object {0} is not serializable.",
                transferObject));
    // Create a memory stream, initialize size.
    using (MemoryStream ms = new MemoryStream( ))
    {
        // Get a formatter to serialize with.
        BinaryFormatter formatter = new BinaryFormatter( );
        try
        {
            // Serialize the object to the stream.
            formatter.Serialize(ms, transferObject);

            // Get the bytes for the serialized object. ToArray copies
            // only the bytes written, unlike GetBuffer, which returns
            // the whole (possibly larger) internal buffer.
            byte[] bytes = ms.ToArray( );

            // Check that this object will fit.
            if(bytes.Length + sizeof(int) > _memRegionSize)
            {
                string fmt =
                    "{0} object instance serialized to {1} bytes " +
                    "which is too large for the shared memory region";

                string msg =
                    string.Format(fmt,
                        typeof(TransferItemType),bytes.Length);

                throw new ArgumentException(msg, "transferObject");
            }

            // Write out how long this object is.
            Marshal.WriteInt32(this._ptrToMemory, bytes.Length);

            // Write out the bytes just past the length prefix so the
            // length is not overwritten.
            IntPtr dataPtr =
                new IntPtr(this._ptrToMemory.ToInt64( ) + sizeof(int));
            Marshal.Copy(bytes, 0, dataPtr, bytes.Length);
        }
        finally
        {
            // Signal the other process using the mutex to tell it
            // to do receive processing.
            _mtxSharedMem.ReleaseMutex( );

            // Wait for the other process to signal it has received
            // and we can move on.
            _mtxSharedMem.WaitOne( );
        }
    }
}

The ReceiveObject method shown in Example 18-9, "ReceiveObject method" allows the client to wait until there is an object in the shared memory section and then reads the size of the serialized object and deserializes it to a managed object. It then releases the Mutex to let the sender know to continue.

Example 18-9. ReceiveObject method

public TransferItemType ReceiveObject( )
{
    // Wait on the mutex for an object to be queued by the sender.
    _mtxSharedMem.WaitOne( );

    // Get the count of what is in the shared memory.
    int count = Marshal.ReadInt32(_ptrToMemory);
    if (count <= 0)
    {
        throw new InvalidDataException("No object to read");
    }

    // Make an array to hold the bytes.
    byte[] bytes = new byte[count];

    // Read out the bytes for the object from just past the length prefix.
    IntPtr dataPtr = new IntPtr(_ptrToMemory.ToInt64( ) + sizeof(int));
    Marshal.Copy(dataPtr, bytes, 0, count);

    // Set up the memory stream with the object bytes.
    using (MemoryStream ms = new MemoryStream(bytes))
    {

        // Set up a binary formatter.
        BinaryFormatter formatter = new BinaryFormatter( );

        // Get the object to return.
        TransferItemType item;
        try
        {
            item = (TransferItemType)formatter.Deserialize(ms);
        }
        finally
        {
            // Signal that we received the object using the mutex.
            _mtxSharedMem.ReleaseMutex( );
        }
        // Give them the object.
        return item;
    }
}

Discussion

A Mutex is designed to give mutually exclusive (thus the name) access to a single resource. A Mutex can be thought of as a cross-process named Monitor, where the Mutex is "entered" by waiting on it and becoming the owner, then "exited" by releasing the Mutex for the next thread that is waiting on it. If a thread that owns a Mutex ends, the Mutex is released automatically.

Using a Mutex is slower than using a Monitor as a Monitor is a purely managed construct, whereas a Mutex is based on the Mutex kernel object. A Mutex cannot be "pulsed" as can a Monitor, but it can be used across processes which a Monitor cannot. Finally, the Mutex is based on WaitHandle, so it can be waited on with other objects derived from WaitHandle, like Semaphore and the event classes.
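The enter/exit ownership pattern described above can be sketched in-process as follows (identifiers are illustrative; passing a name string to the Mutex constructor is what extends the same pattern across processes, as the recipe's SharedMemoryManager does):

```csharp
using System;
using System.Threading;

static class MutexCounter
{
    // Unnamed, so local to this process; pass a name to share it.
    static readonly Mutex _gate = new Mutex(false);
    static int _total;

    static void AddOneThousand()
    {
        for (int i = 0; i < 1000; i++)
        {
            _gate.WaitOne();                       // "enter": become the owner
            try { _total++; }                      // protected increment
            finally { _gate.ReleaseMutex(); }      // "exit": let a waiter in
        }
    }

    public static int Run(int threadCount)
    {
        _total = 0;
        Thread[] threads = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++)
        {
            threads[i] = new Thread(AddOneThousand);
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();
        return _total;
    }

    static void Main()
    {
        Console.WriteLine(Run(4)); // prints 4000: no lost increments
    }
}
```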

The SharedMemoryManager and PInvoke classes are listed in their entirety in Example 18-10, "SharedMemoryManager and PInvoke classes".

Example 18-10. SharedMemoryManager and PInvoke classes

/// <summary>
/// Class for sending objects through shared memory using a mutex
/// to synchronize access to the shared memory
/// </summary>
public class SharedMemoryManager<TransferItemType> : IDisposable
{
    #region Consts
    const int INVALID_HANDLE_VALUE = -1;
    const int FILE_MAP_WRITE = 0x0002;
    /// <summary>
    /// Define from Win32 API.
    /// </summary>
    const int ERROR_ALREADY_EXISTS = 183;
    #endregion

    #region Private members
    IntPtr _handleFileMapping = IntPtr.Zero;
    IntPtr _ptrToMemory = IntPtr.Zero;
    uint _memRegionSize = 0;
    string _memoryRegionName;
    bool disposed = false;
    int _sharedMemoryBaseSize = 0;
    Mutex _mtxSharedMem = null;
    #endregion

    #region Construction / Cleanup
    public SharedMemoryManager(string name,int sharedMemoryBaseSize)
    {
        // Can be built only for serializable objects.
        if (!typeof(TransferItemType).IsSerializable)
            throw new ArgumentException(
                string.Format("Object {0} is not serializable.",
                    typeof(TransferItemType)));

        if (string.IsNullOrEmpty(name))
            throw new ArgumentNullException("name");

        if (sharedMemoryBaseSize <= 0)
            throw new ArgumentOutOfRangeException("sharedMemoryBaseSize",
                "Shared Memory Base Size must be a value greater than zero");

        // Set name of the region.
        _memoryRegionName = name;
        // Save base size.
        _sharedMemoryBaseSize = sharedMemoryBaseSize;
        // Set up the memory region size.
        _memRegionSize = (uint)(_sharedMemoryBaseSize + sizeof(int));
        // Set up the shared memory section.
        SetupSharedMemory( );
    }

    private void SetupSharedMemory( )
    {
        // Grab some storage from the page file.
        _handleFileMapping =
            PInvoke.CreateFileMapping((IntPtr)INVALID_HANDLE_VALUE,
                            IntPtr.Zero,
                            PInvoke.PageProtection.ReadWrite,
                            0,
                            _memRegionSize,
                            _memoryRegionName);

        if (_handleFileMapping == IntPtr.Zero)
        {
            throw new Win32Exception(
                "Could not create file mapping");
        }

        // Check the error status.
        int retVal = Marshal.GetLastWin32Error( );
        if (retVal == ERROR_ALREADY_EXISTS)
        {
            // We opened one that already existed.
            // Make the mutex not the initial owner
            // of the mutex since we are connecting
            // to an existing one.
            _mtxSharedMem = new Mutex(false,
                string.Format("{0}mtx{1}",
                    typeof(TransferItemType), _memoryRegionName));
        }
        else if (retVal == 0)
        {
            // We opened a new one.
            // Make the mutex the initial owner.
            _mtxSharedMem = new Mutex(true,
                string.Format("{0}mtx{1}",
                    typeof(TransferItemType), _memoryRegionName));
        }
        else
        {
            // Something else went wrong.
            throw new Win32Exception(retVal, "Error creating file mapping");
        }

        // Map the shared memory.
        _ptrToMemory = PInvoke.MapViewOfFile(_handleFileMapping,
                                        FILE_MAP_WRITE,
                                        0, 0, IntPtr.Zero);

        if (_ptrToMemory == IntPtr.Zero)
        {
            retVal = Marshal.GetLastWin32Error( );
            throw new Win32Exception(retVal, "Could not map file view");
        }

        retVal = Marshal.GetLastWin32Error( );
        if (retVal != 0 && retVal != ERROR_ALREADY_EXISTS)
        {
            // Something else went wrong.
            throw new Win32Exception(retVal, "Error mapping file view");
        }
    }

    ~SharedMemoryManager( )
    {
         // Make sure we close.
         Dispose(false);
    }

    public void Dispose( )
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        // Check to see if Dispose has already been called.
        if (!this.disposed)
        {
            CloseSharedMemory( );
        }
        disposed = true;
    }

    private void CloseSharedMemory( )
    {
        if (_ptrToMemory != IntPtr.Zero)
    {
        // Close map for shared memory.
        PInvoke.UnmapViewOfFile(_ptrToMemory);
        _ptrToMemory = IntPtr.Zero;
    }
    if (_handleFileMapping != IntPtr.Zero)
    {
        // Close handle.
        PInvoke.CloseHandle(_handleFileMapping);
        _handleFileMapping = IntPtr.Zero;
    }
}
public void Close( )
{
    CloseSharedMemory( );
}
#endregion

#region Properties
public int SharedMemoryBaseSize
{
    get { return _sharedMemoryBaseSize; }
}
#endregion

#region Public Methods
/// <summary>
/// Send a serializable object through the shared memory
/// and wait for it to be picked up.
/// </summary>
/// <param name="transferObject"> </param>
public void SendObject(TransferItemType transferObject)
{
    // Create a memory stream, initialize size.
    using (MemoryStream ms = new MemoryStream( ))
    {
        // Get a formatter to serialize with.
        BinaryFormatter formatter = new BinaryFormatter( );
        try
        {
            // Serialize the object to the stream.
            formatter.Serialize(ms, transferObject);

            // Get the bytes for the serialized object.
            byte[] bytes = ms.ToArray( );

            // Check that this object will fit.
            if(bytes.Length + sizeof(int) > _memRegionSize)
            {

                string fmt =
                    "{0} object instance serialized to {1} bytes " +
                    "which is too large for the shared memory region";
                string msg =
                    string.Format(fmt,
                        typeof(TransferItemType),bytes.Length);

                throw new ArgumentException(msg, "transferObject");
            }

            // Write out how long this object is.
            Marshal.WriteInt32(this._ptrToMemory, bytes.Length);

            // Write out the bytes just past the length prefix so the
            // length is not overwritten.
            IntPtr dataPtr =
                new IntPtr(this._ptrToMemory.ToInt64( ) + sizeof(int));
            Marshal.Copy(bytes, 0, dataPtr, bytes.Length);
        }
        finally
        {
            // Signal the other process using the mutex to tell it
            // to do receive processing.
            _mtxSharedMem.ReleaseMutex( );

           // Wait for the other process to signal it has received
           // and we can move on.
           _mtxSharedMem.WaitOne( );
        }
     }
}

/// <summary>
/// Wait for an object to hit the shared memory and then deserialize it.
/// </summary>
/// <returns>object passed</returns>
public TransferItemType ReceiveObject( )
{
    // Wait on the mutex for an object to be queued by the sender.
    _mtxSharedMem.WaitOne( );

    // Get the count of what is in the shared memory.
    int count = Marshal.ReadInt32(_ptrToMemory);
    if (count <= 0)
    {
         throw new InvalidDataException("No object to read");
    }

    // Make an array to hold the bytes.
    byte[] bytes = new byte[count];

    // Read out the bytes for the object from just past the length prefix.
    IntPtr dataPtr = new IntPtr(_ptrToMemory.ToInt64( ) + sizeof(int));
    Marshal.Copy(dataPtr, bytes, 0, count);

    // Set up the memory stream with the object bytes.

       using (MemoryStream ms = new MemoryStream(bytes))
       {
           // Set up a binary formatter.
           BinaryFormatter formatter = new BinaryFormatter( );

           // Get the object to return.
           TransferItemType item;
           try
           {
               item = (TransferItemType)formatter.Deserialize(ms);
           }
           finally
           {
               // Signal that we received the object using the mutex.
               _mtxSharedMem.ReleaseMutex( );
           }
           // Give them the object.
           return item;
       }
    }
    #endregion
}

public class PInvoke
{
    #region PInvoke defines
    [Flags]
    public enum PageProtection : uint
    {
        NoAccess = 0x01,
        Readonly = 0x02,
        ReadWrite = 0x04,
        WriteCopy = 0x08,
        Execute = 0x10,
        ExecuteRead = 0x20,
        ExecuteReadWrite = 0x40,
        ExecuteWriteCopy = 0x80,
        Guard = 0x100,
        NoCache = 0x200,
        WriteCombine = 0x400,
     }
     [DllImport("kernel32.dll", SetLastError = true)]
     public static extern IntPtr CreateFileMapping(IntPtr hFile,
         IntPtr lpFileMappingAttributes, PageProtection flProtect,
         uint dwMaximumSizeHigh,
         uint dwMaximumSizeLow, string lpName);

     [DllImport("kernel32.dll", SetLastError = true)]
     public static extern IntPtr MapViewOfFile(IntPtr hFileMappingObject, uint
         dwDesiredAccess, uint dwFileOffsetHigh, uint dwFileOffsetLow,
         IntPtr dwNumberOfBytesToMap);

      [DllImport("kernel32.dll", SetLastError = true)]
      public static extern bool UnmapViewOfFile(IntPtr lpBaseAddress);

      [DllImport("kernel32.dll", SetLastError = true)]
      public static extern bool CloseHandle(IntPtr hObject);
      #endregion
}

See Also

The "Mutex" and "Mutex Class" topics in the MSDN documentation and Programming Applications for Microsoft Windows, Fourth Edition, by Jeffrey Richter (Microsoft Press).

Using Events to Make Threads Cooperate

Problem

You have multiple threads that need to be served by a server, but only one can be served at a time.

Solution

Use an AutoResetEvent to notify each thread when it is going to be served. For example, a diner has a cook and multiple waitresses. The waitresses can keep bringing in orders, but the cook can serve up only one at a time. You can simulate this with the Cook class shown in Example 18-11.

Example 18-11. Using events to make threads cooperate

public class Cook
{
    public static AutoResetEvent OrderReady = new AutoResetEvent(false);

    public void CallWaitress( )
    {
        // We call Set on the AutoResetEvent and don't have to
        // call Reset like we would with ManualResetEvent to fire it
        // off again. This sets the event that the waitress is waiting for
        // in PlaceOrder.
        OrderReady.Set( );
    }
}

The Cook class has an AutoResetEvent called OrderReady that the cook will use to tell the waiting waitresses that an order is ready. Since there is only one order ready at a time, and this is an equal-opportunity diner, the waitress who has been waiting longest gets her order first. The AutoResetEvent signals just one waiting thread each time Set is called on the OrderReady event.

The Waitress class has the PlaceOrder method that is executed by the thread. PlaceOrder takes an object parameter, which is passed in from the call to t.Start in the next code block. The Start method uses a ParameterizedThreadStart delegate, which takes an object parameter. PlaceOrder has been set up to be compatible with it. It takes the AutoResetEvent passed in and calls WaitOne to wait until the order is ready. Once the Cook fires the event enough times that this waitress is at the head of the line, the code finishes:

  public class Waitress
    {
        public static void PlaceOrder(object signal)
        {
            // Cast the AutoResetEvent so the waitress can wait for the
            // order to be ready.
            AutoResetEvent OrderReady = (AutoResetEvent)signal;
            // Wait for the order...
            OrderReady.WaitOne();
            // Order is ready....
            Console.WriteLine("Waitress got order!");
        }
    }

The code to run the "diner" creates a Cook and spins off the Waitress threads, and then calls all waitresses when their orders are ready by calling Set on the AutoResetEvent:

  // We have a diner with a cook who can serve up only one meal at a time.
    Cook Mel = new Cook( );

    // Make up five waitresses and tell them to get orders.
    for (int i = 0; i < 5; i++)
    {

        Thread t = new Thread(Waitress.PlaceOrder);
        // The Waitress places the order and then waits for the order.
        t.Start(Cook.OrderReady);
    }

    // Now we can go through and let people in.
    for (int i = 0; i < 5; i++)
    {
        // Make the waitresses wait...
        Thread.Sleep(2000);
        // OK, next waitress, pickup!
        Mel.CallWaitress( );
    }

Discussion

There are two types of events: AutoResetEvent and ManualResetEvent. They differ in two main ways. First, when Set is called, an AutoResetEvent releases only one of the threads waiting on the event, while a ManualResetEvent releases all of them. Second, an AutoResetEvent automatically returns to the nonsignaled state after releasing a thread, while a ManualResetEvent remains signaled until its Reset method is called.
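The difference is easy to demonstrate with a small sketch (not from the recipe; the class name is hypothetical): one Set on a ManualResetEvent releases all three waiters, while an AutoResetEvent hands out its signal once and resets itself.

```csharp
using System;
using System.Threading;

// Hypothetical demo class (not from the recipe).
class ResetEventDemo
{
    static void Main()
    {
        // ManualResetEvent: one Set releases ALL waiters and stays signaled.
        ManualResetEvent manual = new ManualResetEvent(false);
        int released = 0;
        Thread[] threads = new Thread[3];
        for (int i = 0; i < 3; i++)
        {
            threads[i] = new Thread(delegate()
            {
                manual.WaitOne();
                Interlocked.Increment(ref released);
            });
            threads[i].Start();
        }
        manual.Set();                     // releases all three waiters
        foreach (Thread t in threads) t.Join();
        Console.WriteLine(released);      // 3

        // AutoResetEvent: Set releases exactly one waiter, then the
        // event automatically returns to the nonsignaled state.
        AutoResetEvent auto = new AutoResetEvent(true); // initially signaled
        Console.WriteLine(auto.WaitOne(0, false));      // True: consumed the signal
        Console.WriteLine(auto.WaitOne(0, false));      // False: already auto-reset
    }
}
```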

See Also

The "AutoResetEvent" and "ManualResetEvent" topics in the MSDN documentation and Programming Applications for Microsoft Windows, Fourth Edition, by Jeffrey Richter (Microsoft Press).

Get the Naming Rights for Your Events

Problem

You want code running in worker threads, or in other processes or AppDomains, to be able to wait on an event.

Solution

Use the EventWaitHandle class. With it, you can create a named event that allows any code running on the local machine to find and wait on the event. AutoResetEvent and ManualResetEvent are excellent for signaling events in threaded code, and even between AppDomains if you are willing to go through the hassle of passing the event reference around. Why bother? Both of them derive from EventWaitHandle, but neither exposes the naming facility. EventWaitHandle not only can take the name of the event, but can also take an EventResetMode parameter to indicate whether it should act like a ManualResetEvent (EventResetMode.ManualReset) or an AutoResetEvent (EventResetMode.AutoReset). Named events have been available to Windows developers for a long time, and the EventWaitHandle class can serve as a named version of either an AutoResetEvent or a ManualResetEvent.

To set up a named EventWaitHandle that operates as a ManualResetEvent, do this:

   // Make a named manual reset event.
    EventWaitHandle ewhSuperBowl =
        new EventWaitHandle(false, // Not initially signaled
                            EventResetMode.ManualReset,
                            @"Champs");
    // Spin up three threads to listen for the event.
    for (int i = 0; i < 3; i++)
    {
         Thread t = new Thread(ManualFan);
         // The fans wait anxiously...
         t.Name = "Fan " + i;
         t.Start( );
    }
    // Play the game.
    Thread.Sleep(10000);
    // Notify people.
    Console.WriteLine("Patriots win the SuperBowl!");
    // Signal all fans.
    ewhSuperBowl.Set( );
    // Close the event.
    ewhSuperBowl.Close( );

The ManualFan method is listed here:

   public static void ManualFan( )
    {
        // Open the event by name.
        EventWaitHandle ewhSuperBowl =
            new EventWaitHandle(false,
                                EventResetMode.ManualReset,
                                @"Champs");
        // Wait for the signal.
        ewhSuperBowl.WaitOne( );
        // Shout out.
        Console.WriteLine("\"They're great!\" says {0}",Thread.CurrentThread.Name);
        // Close the event.
        ewhSuperBowl.Close( );
    }

The output from the manual event code will resemble the listing here (the ManualFan threads might be in a different order):

 Patriots win the SuperBowl!
    "They're great!" says Fan 2
    "They're great!" says Fan 1
    "They're great!" says Fan 0

To set up a named EventWaitHandle to operate as an AutoResetEvent, do this:

   // Make a named auto reset event.
    EventWaitHandle ewhSuperBowl =
        new EventWaitHandle(false, // Not initially signaled
                            EventResetMode.AutoReset,
                            @"Champs");
    // Spin up three threads to listen for the event.
    for (int i = 0; i < 3; i++)
    {
        Thread t = new Thread(AutoFan);
        // The fans wait anxiously...
        t.Name = "Fan " + i;
        t.Start( );
    }
    // Play the game.
    Thread.Sleep(10000);
    // Notify people.
    Console.WriteLine("Patriots win the SuperBowl!");
    // Signal one fan at a time.
    for (int i = 0; i < 3; i++)
    {
        Console.WriteLine("Notify fans");
        ewhSuperBowl.Set( );
    }
    // Close the event.
    ewhSuperBowl.Close( );

The AutoFan method is listed here:

 public static void AutoFan( )
    {
        // Open the event by name.
        EventWaitHandle ewhSuperBowl =
            new EventWaitHandle(false,
                                EventResetMode.AutoReset,
                                @"Champs");
        // Wait for the signal.
        ewhSuperBowl.WaitOne( );
        // Shout out.
        Console.WriteLine("\"Yahoo!\" says {0}", Thread.CurrentThread.Name);
        // Close the event.
        ewhSuperBowl.Close( );
    }

The output from the automatic event code will resemble the listing here (the AutoFan threads might be in a different order):

    Patriots win the SuperBowl!
    Notify fans
    "Yahoo!" says Fan 0
    Notify fans
    "Yahoo!" says Fan 2
    Notify fans
    "Yahoo!" says Fan 1

Discussion

EventWaitHandle is defined as deriving from WaitHandle, which in turn derives from MarshalByRefObject. EventWaitHandle implements the IDisposable interface:

   public class EventWaitHandle : WaitHandle

    public abstract class WaitHandle : MarshalByRefObject, IDisposable

WaitHandle derives from MarshalByRefObject so you can use it across AppDomains, and it implements IDisposable to make sure the event handle gets released properly.

The EventWaitHandle class can also open an existing named event by calling the OpenExisting method and get the event's access-control security from GetAccessControl.
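A minimal sketch of OpenExisting (hypothetical demo class; it reuses the "Champs" name from this recipe and assumes Windows named-event support) might look like this:

```csharp
using System;
using System.Threading;

// Hypothetical demo (reuses the "Champs" name from the recipe); Windows only.
class OpenExistingDemo
{
    static void Main()
    {
        // Create the named event, as the "owning" code would.
        EventWaitHandle created =
            new EventWaitHandle(false, EventResetMode.AutoReset, "Champs");

        // Any other code on the machine can now attach to it by name.
        // OpenExisting throws WaitHandleCannotBeOpenedException if no
        // event with that name exists.
        EventWaitHandle opened = EventWaitHandle.OpenExisting("Champs");

        // Both handles refer to the same kernel object: signal through
        // one and observe through the other.
        created.Set();
        Console.WriteLine(opened.WaitOne(0, false)); // True

        opened.Close();
        created.Close();
    }
}
```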

When naming events, one consideration is how the event will behave in the presence of terminal sessions. Terminal sessions are the underlying technology behind Fast User Switching and Remote Desktop, as well as Terminal Services. The consideration arises from how kernel objects (such as events) are created with respect to terminal sessions. If a kernel object is created with a name and no prefix, it belongs to the Global namespace for named objects and is visible across terminal sessions. By default, EventWaitHandle creates the event in the Global namespace. A kernel object can also be created in the Local namespace for a given terminal session, in which case the named object belongs to that specific terminal session's namespace. For events that should be visible from only one terminal session, pass the Local namespace prefix (Local\[EventName]) and the event will be created in the local session:

  // Open the event by local name.
    EventWaitHandle ewhSuperBowl =
        new EventWaitHandle(false,
                            EventResetMode.ManualReset,
                            @"Local\Champs");

Named events can be quite useful not only when communicating between processes, AppDomains, or threads, but also when debugging code that uses events, as the name will help you identify which event you are looking at if you have a number of them.

See Also

The "EventWaitHandle," "AutoResetEvent," "ManualResetEvent," and "Kernel Object Namespaces (Platform SDK Help)" topics in the MSDN documentation.

Performing Atomic Operations Among Threads

Problem

You are operating on data from multiple threads and want to ensure that each operation is carried out fully before performing the next operation from a different thread.

Solution

Use the Interlocked family of functions to ensure atomic access. Interlocked has methods to increment and decrement values, add a specific amount to a given value, exchange an original value for a new value, and exchange a value for a new value only if it is still equal to an expected original value (compare-and-exchange).

To increment or decrement an integer value, use the Increment or Decrement methods, respectively:

  int i = 0;
    long l = 0;
    Interlocked.Increment(ref i); // i = 1
    Interlocked.Decrement(ref i); // i = 0
    Interlocked.Increment(ref l); // l = 1
    Interlocked.Decrement(ref l); // l = 0

To add a specific amount to a given integer value, use the Add method:

 Interlocked.Add(ref i, 10); // i = 10;
    Interlocked.Add(ref l, 100); // l = 100;

To replace an existing value, use the Exchange method:

   string name = "Mr. Ed";
    Interlocked.Exchange(ref name, "Barney");

To check if another thread has changed a value out from under the existing code before replacing the existing value, use the CompareExchange method:

    int i = 0;
    double runningTotal = 0.0;
    double startingTotal = 0.0;
    double calc = 0.0;
    for (i = 0; i < 10; i++)
    {
        do
        {
            // Store the original total.
            startingTotal = runningTotal;

            // Do an intense calculation.
            calc = runningTotal + i * Math.PI * 2 / Math.PI;
        }
        // Check to make sure runningTotal wasn't modified
        // and replace it with calc if not. If it was,
        // run through the loop until we get it current.
        while (startingTotal !=
            Interlocked.CompareExchange(
                ref runningTotal, calc, startingTotal));
    }
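The retry pattern above can be factored into a small helper. This sketch (not from the book; the class and method names are hypothetical) applies an arbitrary update to a shared double atomically:

```csharp
using System;
using System.Threading;

// Hypothetical helper (not from the book).
class AtomicUpdateDemo
{
    // Retry until no other thread changed the value between our
    // read and our compare-and-exchange.
    static double AtomicApply(ref double target, Func<double, double> f)
    {
        double original, updated;
        do
        {
            original = target;     // snapshot the current value
            updated = f(original); // compute the replacement from it
        }
        // CompareExchange stores 'updated' only if target still holds
        // 'original'; it returns whatever target held beforehand.
        while (original != Interlocked.CompareExchange(ref target, updated, original));
        return updated;
    }

    static double _total = 0.0;

    static void Main()
    {
        // Ten threads each add 1.5 a hundred times; the retry loop
        // guarantees no addition is lost.
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(delegate()
            {
                for (int j = 0; j < 100; j++)
                    AtomicApply(ref _total, delegate(double v) { return v + 1.5; });
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();
        Console.WriteLine(_total); // 1500
    }
}
```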

Discussion

In an operating system like Microsoft Windows, with its ability to perform preemptive multitasking, certain considerations must be given to data integrity when working with multiple threads. There are many synchronization primitives to help secure sections of code, as well as signal when data is available to be modified. To this list is added the capability to perform operations that are guaranteed to be atomic in nature.

If there has not been much threading or assembly language in your past, you might wonder what the big deal is and why you need these atomic functions at all. The basic reason is that the line of code written in C# ultimately has to be translated down to a machine instruction, and along the way, the one line of code written in C# can turn into multiple instructions for the machine to execute. If the machine has to execute multiple instructions to perform a task and the operating system allows for preemption, it is possible that these instructions may not be executed as a block. They could be interrupted by other code that modifies the value being changed by the original line of C# code in the middle of the C# code being executed. As you can imagine, this could lead to some pretty spectacular errors, or it might just round off the lottery number that keeps a certain C# programmer from winning the big one.
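A quick way to see the problem is to race a plain increment against Interlocked.Increment. This sketch (hypothetical; not from the book) usually loses updates on the unprotected counter, because `++` compiles down to separate read, add, and write steps:

```csharp
using System;
using System.Threading;

// Hypothetical demo; the exact shortfall of the plain counter varies per run.
class LostUpdateDemo
{
    static int _plainCount = 0;
    static int _safeCount = 0;

    static void Main()
    {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(delegate()
            {
                for (int j = 0; j < 100000; j++)
                {
                    _plainCount++;                          // read, add, write: three steps
                    Interlocked.Increment(ref _safeCount);  // one indivisible step
                }
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();

        // The interlocked counter is always exact; the plain one
        // usually comes up short because increments interleave.
        Console.WriteLine(_safeCount);  // 400000
        Console.WriteLine(_plainCount); // often less than 400000
    }
}
```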

Threading is a powerful tool, but like most "power" tools, you have to understand its operation to use it effectively and safely. Threading bugs are notorious for being some of the most difficult to debug, as the runtime behavior is not constant, and trying to reproduce them can be a nightmare. Working in a multithreaded environment demands a certain amount of forethought about protecting data access, and understanding when to use the Interlocked class will go a long way toward preventing long, frustrating evenings with the debugger.

See Also

The "Interlocked" and "Interlocked Class" topics in the MSDN documentation.

Optimizing Read-Mostly Access

Problem

You are operating on data that is mostly read with occasional updates and want to perform these actions in a thread-safe but efficient manner.

Solution

Use the ReaderWriterLockSlim to give multiple read/single write access with the capacity to upgrade the lock from read to write. The example we use to show this is that of a Developer starting a new project. Unfortunately, the project is understaffed, so the Developer has to respond alone to tasks from many other individuals on the team. Each of the other team members will also ask for status updates on their tasks, and some can even change the priority of the tasks the Developer is assigned.

The act of adding a task to the Developer using the AddTask method is protected with a write lock using the ReaderWriterLockSlim by calling EnterWriteLock and ExitWriteLock when complete:

 public void AddTask(Task newTask)
    {
        try
        {
            _rwlSlim.EnterWriteLock( );
            // If we already have this task (unique by name),
            // don't add it again; sometimes people give you
            // the same task more than once :)

            var taskQuery = from t in _tasks
                            where t == newTask
                            select t;
            if (taskQuery.Count<Task>( ) == 0)
            {
                Console.WriteLine("Task " + newTask.Name + " was added to developer");
                _tasks.Add(newTask);
            }
        }
        finally
        {
            _rwlSlim.ExitWriteLock( );
        }
    }

When a project team member needs to know about the status of a task, they call the IsTaskDone method, which uses a read lock on the ReaderWriterLockSlim by calling EnterReadLock and ExitReadLock:

 public bool IsTaskDone(string taskName)
    {
        try
        {
            _rwlSlim.EnterReadLock( );
            var taskQuery = from t in _tasks
                            where t.Name == taskName
                            select t;
            if (taskQuery.Count<Task>( ) > 0)
            {
                Task task = taskQuery.First<Task>( );
                Console.WriteLine("Task " + task.Name + " status was reported.");
                return task.Status;
            }
        }
        finally
        {
            _rwlSlim.ExitReadLock( );
        }
        return false;
    }

There are certain managerial members of the team who have the right to increase the priority of the tasks they assigned to the Developer. This is accomplished by calling the IncreasePriority method on the Developer. IncreasePriority uses an upgradable lock on the ReaderWriterLockSlim by first calling the EnterUpgradeableReadLock method to acquire a read lock, and then, if the task is in the queue, it upgrades to a write lock in order to adjust the priority of the task. Once the priority is adjusted, the write lock is released, which downgrades the lock back to a read lock, and that lock is released by calling ExitUpgradeableReadLock:

    public void IncreasePriority(string taskName)
    {
        try
        {
             _rwlSlim.EnterUpgradeableReadLock( );
             var taskQuery = from t in _tasks
                             where t.Name == taskName
                             select t;
             if(taskQuery.Count<Task>( )>0)
             {
                Task task = taskQuery.First<Task>( );
                _rwlSlim.EnterWriteLock( );
                task.Priority++;
                Console.WriteLine("Task " + task.Name +
                    " priority was increased to " + task.Priority +
                    " for developer");
                _rwlSlim.ExitWriteLock( );
             }
        }
        finally
        {
            _rwlSlim.ExitUpgradeableReadLock( );
        }
    }

Discussion

The ReaderWriterLockSlim was created to replace the existing ReaderWriterLock for a number of reasons:

  • Performance: ReaderWriterLock was more than 5 times slower than using a Monitor.

  • Recursion semantics of ReaderWriterLock were not standard and were broken in some thread reentrancy cases.

  • The upgrade lock method is nonatomic in ReaderWriterLock.

While the ReaderWriterLockSlim is only about two times slower than the Monitor, it is more flexible and prioritizes writes, so in scenarios with few writes and many reads, it is more scalable than the Monitor. There are also methods to determine what type of lock is held as well as how many threads are waiting to acquire it.

By default, lock acquisition recursion is disallowed. If you call EnterReadLock twice, you get a LockRecursionException. Lock recursion can be enabled by passing a LockRecursionPolicy.SupportsRecursion enumeration value to the constructor overload of ReaderWriterLockSlim that accepts it. Even though it is possible to enable lock recursion, it is generally discouraged, as it complicates things to no small degree, and these are not fun issues to debug.
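Both policies can be seen in a short sketch (hypothetical demo class): the default constructor throws on re-entry, while the SupportsRecursion overload permits it as long as each enter is matched by an exit.

```csharp
using System;
using System.Threading;

// Hypothetical demo class.
class RecursionPolicyDemo
{
    static void Main()
    {
        // Default policy (NoRecursion): re-entering on the same thread throws.
        ReaderWriterLockSlim strict = new ReaderWriterLockSlim();
        strict.EnterReadLock();
        try
        {
            strict.EnterReadLock(); // second enter on the same thread
        }
        catch (LockRecursionException)
        {
            Console.WriteLine("LockRecursionException");
        }
        finally
        {
            strict.ExitReadLock();
        }

        // Opt-in recursion: the same thread may re-enter, but must
        // exit once per enter.
        ReaderWriterLockSlim recursive =
            new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
        recursive.EnterReadLock();
        recursive.EnterReadLock();
        Console.WriteLine(recursive.RecursiveReadCount); // 2
        recursive.ExitReadLock();
        recursive.ExitReadLock();
    }
}
```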

There are some scenarios where the ReaderWriterLockSlim is not appropriate for use, although most of these are not applicable to everyday development:

  • SQLCLR: Due to the incompatible HostProtection attributes, ReaderWriterLockSlim is precluded from use in SQL Server CLR scenarios.

  • Hosts using thread aborts: Because ReaderWriterLockSlim doesn't mark critical regions, a host that uses thread aborts can't tell that aborting a thread holding the lock will harm the hosted AppDomain, so such aborts can cause issues there.

  • It cannot handle asynchronous exceptions (thread aborts, out of memory, etc.) and could end up with corrupt lock state, which could cause deadlocks or other issues.

The entire code base for the example is listed here:

    static Developer _dev = new Developer(15);
    static bool _end = false;

    /// <summary>
    /// Exercise the ReaderWriterLockSlim by simulating the project team.
    /// </summary>
    public static void TestReaderWriterLockSlim( )
    {
        LaunchTeam(_dev);
        Thread.Sleep(10000);
    }

    private static void LaunchTeam(Developer dev)
    {
        LaunchManager("CTO", dev);
        LaunchManager("Director", dev);
        LaunchManager("Project Manager", dev);
        LaunchDependent("Product Manager", dev);
        LaunchDependent("Test Engineer", dev);
        LaunchDependent("Technical Communications Professional", dev);
        LaunchDependent("Operations Staff", dev);
        LaunchDependent("Support Staff", dev);
    }

    public class TaskInfo
    {
        private Developer _dev;
        public string Name { get; set; }
        public Developer Developer
        {
            get { return _dev; }
            set { _dev = value; }
        }
    }
    private static void LaunchManager(string name, Developer dev)
    {
        ThreadPool.QueueUserWorkItem(
            new WaitCallback(CreateManagerOnThread),
            new TaskInfo( ) { Name = name, Developer = dev });
    }

    private static void LaunchDependent(string name, Developer dev)
    {
        ThreadPool.QueueUserWorkItem(
            new WaitCallback(CreateDependentOnThread),
            new TaskInfo( ) { Name = name, Developer = dev });
    }

    private static void CreateManagerOnThread(object objInfo)
    {
        TaskInfo taskInfo = (TaskInfo)objInfo;
        Console.WriteLine("Added " + taskInfo.Name + " to the project...");
        TaskManager mgr = new TaskManager(taskInfo.Name, taskInfo.Developer);
    }

    private static void CreateDependentOnThread(object objInfo)
    {
        TaskInfo taskInfo = (TaskInfo)objInfo;
        Console.WriteLine("Added " + taskInfo.Name + " to the project...");
        TaskDependent dep = new TaskDependent(taskInfo.Name, taskInfo.Developer);
    }

    public class Task
    {

        public Task(string name)
        {
            Name = name;
        }
        public string Name { get; set; }
        public int Priority { get; set; }
        public bool Status { get; set; }

        public override string ToString( )
        {
            return this.Name;
        }

        public override bool Equals(object obj)
        {
            Task task = obj as Task;
            if(task != null)
                return this.Name == task.Name;
            return false;
        }

        public override int GetHashCode( )
        {
            return this.Name.GetHashCode( );
        }
    }

    public class Developer
    {
        /// <summary>
        /// Dictionary for the tasks
        /// </summary>
        private List<Task> _tasks = new List<Task>( );
        private ReaderWriterLockSlim _rwlSlim = new ReaderWriterLockSlim( );
        private System.Threading.Timer _timer;
        private int _maxTasks;

        public Developer(int maxTasks)
        {
            // the maximum number of tasks before the developer quits
            _maxTasks = maxTasks;
            // do some work every 1/4 second
            _timer = new Timer(new TimerCallback(DoWork), null, 1000, 250);
        }

        // Execute a task
        protected void DoWork(Object stateInfo)
        {
            ExecuteTask( );
            try
            {
                _rwlSlim.EnterWriteLock( );
                // if we finished all tasks, go on vacation!
                if (_tasks.Count == 0)
                {
                    _end = true;
                    Console.WriteLine("Developer finished all tasks, go on vacation!");
                    return;
                }

                if (!_end)
                {
                    // if we have too many tasks quit
                    if (_tasks.Count > _maxTasks)
                    {
                        // get the number of unfinished tasks
                        var query = from t in _tasks
                                    where t.Status == false
                                    select t;
                        int unfinishedTaskCount = query.Count<Task>( );

                        _end = true;
                        Console.WriteLine("Developer has too many tasks, quitting! " +
                            unfinishedTaskCount + " tasks left unfinished.");
                    }
                }
                else
                    _timer.Dispose( );
            }
            finally
            {
                _rwlSlim.ExitWriteLock( );
            }
        }

    public void AddTask(Task newTask)
    {
        try
        {
            _rwlSlim.EnterWriteLock( );
            // If we already have this task (unique by name),
            // don't add it again; sometimes people give you
            // the same task more than once :)
            var taskQuery = from t in _tasks
                            where t == newTask
                            select t;
            if (taskQuery.Count<Task>( ) == 0)
            {
                Console.WriteLine("Task " + newTask.Name + " was added to developer");
                _tasks.Add(newTask);
            }
        }
        finally
        {
            _rwlSlim.ExitWriteLock( );
        }
    }

    /// <summary>
    /// Increase the priority of the task
    /// </summary>
    /// <param name="taskName">name of the task</param>
    public void IncreasePriority(string taskName)
    {
        try
        {
            _rwlSlim.EnterUpgradeableReadLock( );
            var taskQuery = from t in _tasks
                            where t.Name == taskName
                            select t;
            if(taskQuery.Count<Task>( )>0)
            {
                Task task = taskQuery.First<Task>( );
                _rwlSlim.EnterWriteLock( );
                task.Priority++;
                Console.WriteLine("Task " + task.Name +
                    " priority was increased to " + task.Priority +
                    " for developer");
                _rwlSlim.ExitWriteLock( );
            }
        }
        finally
        {
                _rwlSlim.ExitUpgradeableReadLock( );
        }
    }

    /// <summary>
    /// Allows people to check if the task is done
    /// </summary>
    /// <param name="taskName">name of the task</param>
    /// <returns>False if the task is not done or not in the list; true if done</returns>
    public bool IsTaskDone(string taskName)
    {
        try
        {
            _rwlSlim.EnterReadLock( );
            var taskQuery = from t in _tasks
                            where t.Name == taskName
                            select t;
            if (taskQuery.Count<Task>( ) > 0)
            {
                Task task = taskQuery.First<Task>( );
                Console.WriteLine("Task " + task.Name + " status was reported.");
                return task.Status;
            }
        }
        finally
        {
            _rwlSlim.ExitReadLock( );
        }
        return false;
    }

    private void ExecuteTask( )
    {
        // look over the tasks and do the highest priority
        var queryResult = from t in _tasks
                          where t.Status == false
                          orderby t.Priority descending
                          select t;
        if (queryResult.Count<Task>( ) > 0)
        {
            // do the task
            Task task = queryResult.First<Task>( );
            task.Status = true;
            task.Priority = -1;
            Console.WriteLine("Task " + task.Name + " executed by developer.");
        }
    }
    }

    public class TaskManager : TaskDependent
    {
        private System.Threading.Timer _mgrTimer;

        public TaskManager(string name, Developer taskExecutor) :
            base(name, taskExecutor)
        {
            // intervene every 2 seconds
            _mgrTimer = new Timer(new TimerCallback(Intervene), null, 0, 2000);
        }

        // Intervene in the plan
        protected void Intervene(Object stateInfo)
        {
            ChangePriority( );
            // developer ended, kill timer
            if (_end)
            {
                _mgrTimer.Dispose( );
                _developer = null;
            }
        }

        public void ChangePriority( )
        {
            if (_tasks.Count > 0)
            {
                int taskIndex = _rnd.Next(0, _tasks.Count - 1);
                Task checkTask = _tasks[taskIndex];
                // make those developers work faster on some random task!
                if (_developer != null)
                {
                    _developer.IncreasePriority(checkTask.Name);
                    Console.WriteLine(Name + " intervened and changed priority for task " +
                                      checkTask.Name);
                }
            }
        }
    }

    public class TaskDependent
    {
        protected List<Task> _tasks = new List<Task>( );
        protected Developer _developer;
        protected Random _rnd = new Random( );
        private Timer _taskTimer;
        private Timer _statusTimer;

        public TaskDependent(string name, Developer taskExecutor)
        {
            Name = name;
            _developer = taskExecutor;
            // add work every 1 second
            _taskTimer = new Timer(new TimerCallback(AddWork), null, 0, 1000);
            // check status every 3 seconds
            _statusTimer = new Timer(new TimerCallback(CheckStatus), null, 0, 3000);
        }
        // Add more work to the developer
        protected void AddWork(Object stateInfo)
        {
            SubmitTask( );
            // developer ended, kill timer
            if (_end)
            {
                _taskTimer.Dispose( );
                _developer = null;
            }
        }

        // Check Status of work with the developer
        protected void CheckStatus(Object stateInfo)
        {
            CheckTaskStatus( );
            // developer ended, kill timer
            if (_end)
            {
                _statusTimer.Dispose( );
                _developer = null;
            }
        }

        public string Name { get; set; }

        public void SubmitTask( )
        {
            int taskId = _rnd.Next(10000);
            string taskName = "(" + taskId + " for " + Name + ")";
            Task newTask = new Task(taskName);
            if (_developer != null)
            {
                _developer.AddTask(newTask);
                _tasks.Add(newTask);
            }
        }

        public void CheckTaskStatus( )
        {
            if (_tasks.Count > 0)
            {
                int taskIndex = _rnd.Next(0, _tasks.Count - 1);
                Task checkTask = _tasks[taskIndex];
                if (_developer != null &&
                    _developer.IsTaskDone(checkTask.Name))
                {
                    Console.WriteLine("Task " + checkTask.Name + " is done for " + Name);
                    // remove it from the todo list
                    _tasks.Remove(checkTask);
                }
            }
        }
    }

You can see the series of events in the project in the output. The point at which the Developer has had enough is highlighted:

    Added CTO to the project...
    Added Director to the project...
    Added Project Manager to the project...
    Added Product Manager to the project...
    Added Test Engineer to the project...
    Added Technical Communications Professional to the project...
    Added Operations Staff to the project...
    Added Support Staff to the project...
    Task (6267 for CTO) was added to developer
    Task (6267 for CTO) status was reported.
    Task (6267 for CTO) priority was increased to 1 for developer
    CTO intervened and changed priority for task (6267 for CTO)
    Task (6267 for Director) was added to developer
    Task (6267 for Director) status was reported.
    Task (6267 for Director) priority was increased to 1 for developer
    Director intervened and changed priority for task (6267 for Director)
    Task (6267 for Project Manager) was added to developer
    Task (6267 for Project Manager) status was reported.
    Task (6267 for Project Manager) priority was increased to 1 for developer
    Project Manager intervened and changed priority for task (6267 for Project
    Manager)
    Task (6267 for Product Manager) was added to developer
    Task (6267 for Product Manager) status was reported.
    Task (6267 for Technical Communications Professional) was added to developer
    Task (6267 for Technical Communications Professional) status was reported.
    Task (6267 for Operations Staff) was added to developer
    Task (6267 for Operations Staff) status was reported.
    Task (6267 for Support Staff) was added to developer
    Task (6267 for Support Staff) status was reported.
    Task (6267 for Test Engineer) was added to developer
    Task (5368 for CTO) was added to developer
    Task (5368 for Director) was added to developer
    Task (5368 for Project Manager) was added to developer
    Task (6153 for Product Manager) was added to developer
    Task (913 for Test Engineer) was added to developer 
    Task (6153 for Technical Communications Professional) was added to developer
    Task (6153 for Operations Staff) was added to developer
    Task (6153 for Support Staff) was added to developer
    Task (6267 for Product Manager) executed by developer.
    Task (6267 for Technical Communications Professional) executed by developer.
    Task (6267 for Operations Staff) executed by developer.
    Task (6267 for Support Staff) executed by developer.
    Task (6267 for CTO) priority was increased to 2 for developer
    CTO intervened and changed priority for task (6267 for CTO)
    Task (6267 for Director) priority was increased to 2 for developer
    Director intervened and changed priority for task (6267 for Director)
    Task (6267 for Project Manager) priority was increased to 2 for developer
    Project Manager intervened and changed priority for task (6267 for Project
    Manager)
    Task (6267 for Test Engineer) executed by developer.
    Task (7167 for CTO) was added to developer
    Task (7167 for Director) was added to developer
    Task (7167 for Project Manager) was added to developer
    Task (5368 for Product Manager) was added to developer
    Task (6153 for Test Engineer) was added to developer
    Task (5368 for Technical Communications Professional) was added to developer
    Task (5368 for Operations Staff) was added to developer
    Task (5368 for Support Staff) was added to developer
    Task (5368 for CTO) executed by developer.
    Task (5368 for Director) executed by developer.
    Task (5368 for Project Manager) executed by developer.
    Task (6267 for CTO) status was reported.
    Task (6267 for Director) status was reported.
    Task (6267 for Project Manager) status was reported.
    Task (913 for Test Engineer) status was reported.
    Task (6267 for Technical Communications Professional) status was reported.
    Task (6267 for Technical Communications Professional) is done for Technical
    Communications Professional
    Task (6267 for Product Manager) status was reported.
    Task (6267 for Product Manager) is done for Product Manager
    Task (6267 for Operations Staff) status was reported.
    Task (6267 for Operations Staff) is done for Operations Staff
    Task (6267 for Support Staff) status was reported.
    Task (6267 for Support Staff) is done for Support Staff
    Task (6153 for Product Manager) executed by developer.
    Task (2987 for CTO) was added to developer
    Task (2987 for Director) was added to developer
    Task (2987 for Project Manager) was added to developer
    Task (7167 for Product Manager) was added to developer
    Task (4126 for Test Engineer) was added to developer
    Task (7167 for Technical Communications Professional) was added to developer
    Task (7167 for Support Staff) was added to developer
    Task (7167 for Operations Staff) was added to developer
    Task (913 for Test Engineer) executed by developer.
    Task (6153 for Technical Communications Professional) executed by developer.
    Developer has too many tasks, quitting! 21 tasks left unfinished.
    Task (6153 for Operations Staff) executed by developer.
    Task (5368 for CTO) priority was increased to 0 for developer
    CTO intervened and changed priority for task (5368 for CTO)
    Task (5368 for Director) priority was increased to 0 for developer
    Director intervened and changed priority for task (5368 for Director)
    Task (5368 for Project Manager) priority was increased to 0 for developer
    Project Manager intervened and changed priority for task (5368 for Project
    Manager)
    Task (6153 for Support Staff) executed by developer.
    Task (4906 for Product Manager) was added to developer
    Task (7167 for Test Engineer) was added to developer
    Task (4906 for Technical Communications Professional) was added to developer
    Task (4906 for Operations Staff) was added to developer
    Task (4906 for Support Staff) was added to developer
    Task (7167 for CTO) executed by developer.
    Task (7167 for Director) executed by developer.
    Task (7167 for Project Manager) executed by developer.
    Task (5368 for Product Manager) executed by developer.
    Task (6153 for Test Engineer) executed by developer.
    Task (5368 for Technical Communications Professional) executed by developer.
    Task (5368 for Operations Staff) executed by developer.
    Task (5368 for Support Staff) executed by developer.
    Task (2987 for CTO) executed by developer.
    Task (2987 for Director) executed by developer.
    Task (2987 for Project Manager) executed by developer.
    Task (7167 for Product Manager) executed by developer.
    Task (4126 for Test Engineer) executed by developer.

See Also

The "ReaderWriterLockSlim" and "SQL Server Programming and Host Attributes" topics in the MSDN documentation.