Synchronization Primitives

The .NET Framework provides a range of synchronization primitives for controlling the interactions of threads and avoiding race conditions. These can be roughly divided into three categories: locking, signaling, and interlocked operations.

The categories are not tidy and clearly defined: Some synchronization mechanisms have characteristics of multiple categories; events that release a single thread at a time are functionally like locks; the release of any lock can be thought of as a signal; and interlocked operations can be used to construct locks. However, the categories are still useful.

It is important to remember that thread synchronization is cooperative. If even one thread bypasses a synchronization mechanism and accesses the protected resource directly, that synchronization mechanism cannot be effective.

Note:

In addition to explicit synchronization, Silverlight-based applications can use classes like BackgroundWorker and Dispatcher that provide implicit synchronization of method calls and events. For more information about implicit synchronization, see Synchronizing Data for Multithreading.

Locking

Locks give control of a resource to one thread at a time, or to a specified number of threads. A thread that requests an exclusive lock when the lock is in use blocks until the lock becomes available.

Exclusive Locks

The simplest form of locking is the C# lock statement (SyncLock in Visual Basic), which controls access to a block of code. Such a block is frequently referred to as a critical section. The lock statement is implemented by using the Enter and Exit methods of the Monitor class, and it uses try…finally to ensure that the lock is released.

In general, using the lock statement to protect small blocks of code, never spanning more than a single method, is the best way to use the Monitor class. Although powerful, the Monitor class is prone to orphan locks and deadlocks.
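For example, here is a minimal sketch of protecting a small critical section with the lock statement. The Counter class and its fields are hypothetical, introduced only for illustration; the lock object is a private field, as recommended in the note that follows.

public class Counter
{
    // Private object used only for locking.
    private readonly object _syncRoot = new object();
    private int _count;

    public void Increment()
    {
        // Only one thread at a time can execute this critical section.
        lock (_syncRoot)
        {
            _count++;
        }
    }

    public int Count
    {
        get { lock (_syncRoot) { return _count; } }
    }
}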

Important:

Do not lock the type — that is, typeof(MyType) in C# or GetType(MyType) in Visual Basic — to protect code that accesses static data (Shared in Visual Basic). Use an object in a private static variable instead. Similarly, do not lock this in C# (Me in Visual Basic) to protect code that accesses instance data. Use a private object instead. A class or instance can be locked by code other than your own, potentially causing deadlocks or performance problems.
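A minimal sketch of the recommended pattern for static data, assuming a hypothetical Cache class. Because the lock object itself is private and static, no code outside the class can lock on it.

public class Cache
{
    // Private static lock object; never lock on typeof(Cache).
    private static readonly object s_lock = new object();
    private static int s_hits;

    public static void RecordHit()
    {
        lock (s_lock)
        {
            s_hits++;   // static (Shared) data protected by the private lock
        }
    }
}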

Decorating a method with a MethodImplAttribute that specifies MethodImplOptions.Synchronized has the same effect as using Monitor or one of the compiler keywords to lock the entire body of the method. However, this is not recommended, because the attribute locks on the instance for instance methods and on the type for static methods, which is the same pattern the preceding note warns against.
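For reference, a sketch of what that looks like; LegacyWorker is a hypothetical class used only for illustration.

using System.Runtime.CompilerServices;

public class LegacyWorker
{
    // Equivalent to locking the entire method body on the instance
    // (or on the type, for a static method) -- the very pattern the
    // note above advises against.
    [MethodImpl(MethodImplOptions.Synchronized)]
    public void DoWork()
    {
        // ... method body runs under the implicit lock ...
    }
}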

Monitor Class

The Monitor class provides additional functionality, which can be used with the lock statement:

  • The TryEnter method allows a thread that is blocked to give up after a specified interval. It returns a Boolean value indicating success or failure, which can be used to detect and avoid potential deadlocks.

  • The Wait method is called by a thread in a critical section. It gives up control of the resource and blocks until the resource is available again.

  • The Pulse and PulseAll methods let a thread that is about to release the lock, or to call Wait, move one or more waiting threads into the ready queue so that they can acquire the lock.

Time-outs on Wait method overloads allow waiting threads to escape to the ready queue.
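A minimal sketch of using TryEnter with a time-out, assuming a hypothetical Account class. If the lock cannot be acquired within the interval, the method backs off instead of blocking indefinitely.

using System;
using System.Threading;

public class Account
{
    private readonly object _syncRoot = new object();
    private decimal _balance;

    public bool TryWithdraw(decimal amount)
    {
        // Give up after 500 milliseconds rather than risk a deadlock.
        if (!Monitor.TryEnter(_syncRoot, TimeSpan.FromMilliseconds(500)))
        {
            return false;   // lock not acquired; caller can retry or report
        }
        try
        {
            if (_balance < amount) return false;
            _balance -= amount;
            return true;
        }
        finally
        {
            Monitor.Exit(_syncRoot);
        }
    }
}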

Monitor has thread affinity. That is, the thread that entered the monitor must be the same thread that exits it, by calling Exit or Wait.

The Monitor class is not instantiable. Its methods are static (Shared in Visual Basic), and act on an instantiable lock object.
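A minimal producer-consumer sketch using Wait and Pulse; BlockingQueue<T> is a hypothetical type, not a framework class. The consumer calls Wait inside the critical section, which releases the lock and blocks until a producer calls Pulse.

using System.Collections.Generic;
using System.Threading;

public class BlockingQueue<T>
{
    private readonly Queue<T> _items = new Queue<T>();
    private readonly object _syncRoot = new object();

    public void Enqueue(T item)
    {
        lock (_syncRoot)
        {
            _items.Enqueue(item);
            Monitor.Pulse(_syncRoot);   // move one waiting thread to the ready queue
        }
    }

    public T Dequeue()
    {
        lock (_syncRoot)
        {
            while (_items.Count == 0)
            {
                Monitor.Wait(_syncRoot); // release the lock and block until pulsed
            }
            return _items.Dequeue();
        }
    }
}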

For a conceptual overview, see Monitors.

Signaling

The simplest way to wait for a signal from another thread is to call the Join method, which blocks until the other thread is finished. Join has two overloads that allow the blocked thread to break out of the wait after a specified interval has elapsed.
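A minimal sketch of Join with a time-out; the worker delegate is a stand-in for real work.

using System;
using System.Threading;

public static class JoinExample
{
    public static void Main()
    {
        Thread worker = new Thread(() => Thread.Sleep(1000));   // simulated work
        worker.Start();

        // Block until the worker finishes, or give up after two seconds.
        bool finished = worker.Join(TimeSpan.FromSeconds(2));
        Console.WriteLine(finished ? "Worker finished." : "Timed out.");
    }
}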

Wait handles provide a much richer set of waiting and signaling capabilities.

Wait Handles

Wait handles derive from the WaitHandle class. Threads block on wait handles by calling the instance method WaitOne or one of the static methods WaitAll or WaitAny. How they are released depends on which method was called and on the kind of wait handle.
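A minimal sketch of blocking on several wait handles at once with WaitAny, using the event wait handles described in the next subsection.

using System;
using System.Threading;

public static class WaitAnyExample
{
    public static void Main()
    {
        AutoResetEvent dataReady = new AutoResetEvent(false);
        AutoResetEvent cancelled = new AutoResetEvent(false);

        // Another thread signals one of the handles.
        new Thread(() => dataReady.Set()).Start();

        // Block until either handle is signaled, or until five seconds elapse.
        int index = WaitHandle.WaitAny(
            new WaitHandle[] { dataReady, cancelled },
            TimeSpan.FromSeconds(5));

        Console.WriteLine(index == WaitHandle.WaitTimeout
            ? "Timed out."
            : "Handle " + index + " was signaled.");
    }
}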

For a conceptual overview, see Wait Handles.

Event Wait Handles

Event wait handles include the EventWaitHandle class and its derived classes, AutoResetEvent and ManualResetEvent. Threads are released from an event wait handle when the event wait handle is signaled by calling its Set method.

Event wait handles either reset themselves automatically, like a turnstile that allows only one thread through each time it is signaled, or must be reset manually, like a gate that is closed until signaled and then open until someone closes it. As their names imply, AutoResetEvent and ManualResetEvent represent the former and latter, respectively.

An EventWaitHandle can represent either type of event, and can be either local or global. The derived classes AutoResetEvent and ManualResetEvent are always local.

Event wait handles do not have thread affinity. Any thread can signal an event wait handle.
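A minimal sketch contrasting the two behaviors; the thread body is a placeholder.

using System;
using System.Threading;

public static class EventWaitHandleExample
{
    public static void Main()
    {
        AutoResetEvent turnstile = new AutoResetEvent(false);   // lets one thread through per Set
        ManualResetEvent gate = new ManualResetEvent(false);    // stays open once Set, until Reset

        Thread worker = new Thread(() =>
        {
            turnstile.WaitOne();   // blocks until Set; the handle resets itself
            gate.WaitOne();        // blocks until Set; the handle stays signaled
            Console.WriteLine("Worker released.");
        });
        worker.Start();

        turnstile.Set();   // releases exactly one waiting thread
        gate.Set();        // releases all current and future waiters until Reset is called
        worker.Join();
    }
}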

For a conceptual overview, see EventWaitHandle, AutoResetEvent, and ManualResetEvent.

Interlocked Operations

Interlocked operations are simple atomic operations performed on a memory location by static methods of the Interlocked class. Those atomic operations include addition, increment and decrement, exchange, conditional exchange depending on a comparison, and read operations for 64-bit values on 32-bit platforms.

Note:

The guarantee of atomicity is limited to individual operations; when multiple operations must be performed as a unit, a more coarse-grained synchronization mechanism must be used.
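A minimal sketch, assuming a hypothetical RequestCounter class, of the kinds of single atomic operations the Interlocked class provides.

using System.Threading;

public class RequestCounter
{
    private int _count;     // shared 32-bit counter
    private long _lastId;   // shared 64-bit value

    public int NextCount()
    {
        // Atomic increment: no lock is needed for this single operation.
        return Interlocked.Increment(ref _count);
    }

    public void SetLastId(long id)
    {
        // Atomic exchange of a 64-bit value.
        Interlocked.Exchange(ref _lastId, id);
    }

    public long ReadLastId()
    {
        // Atomic read of a 64-bit value, even on 32-bit platforms.
        return Interlocked.Read(ref _lastId);
    }
}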

Although none of these operations are locks or signals, they can be used to construct locks and signals. Because they are native to the Windows operating system, interlocked operations are extremely fast.
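As an illustration of constructing a lock from interlocked operations, here is a minimal spin-lock sketch. SpinLockSketch is hypothetical; production code should prefer the built-in locking primitives.

using System.Threading;

public class SpinLockSketch
{
    private int _taken;   // 0 = free, 1 = held

    public void Enter()
    {
        // Atomically set _taken to 1, but only if it is currently 0.
        while (Interlocked.CompareExchange(ref _taken, 1, 0) != 0)
        {
            Thread.SpinWait(1);   // busy-wait briefly, then try again
        }
    }

    public void Exit()
    {
        // Atomically release the lock so another thread can acquire it.
        Interlocked.Exchange(ref _taken, 0);
    }
}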

Interlocked operations can be used with volatile memory guarantees to write applications that exhibit powerful non-blocking concurrency. However, they require sophisticated, low-level programming, and for most purposes simple locks are a better choice.

For a conceptual overview, see Interlocked Operations.