Data Structures for Parallel Programming

The .NET Framework version 4 introduces several new types that are useful in parallel programming, including a set of concurrent collection classes, lightweight synchronization primitives, and types for lazy initialization. You can use these types with any multithreaded application code, including code that uses the Task Parallel Library and PLINQ.

Concurrent Collection Classes

The collection classes in the System.Collections.Concurrent namespace provide thread-safe add and remove operations that avoid locks wherever possible and use fine-grained locking where locks are necessary. Unlike collections that were introduced in the .NET Framework versions 1.0 and 2.0, a concurrent collection class does not require user code to take any locks when it accesses items. The concurrent collection classes can significantly improve performance over types such as System.Collections.ArrayList and System.Collections.Generic.List<T> (with user-implemented locking) in scenarios where multiple threads add and remove items from a collection.

The following table lists the new concurrent collection classes:

System.Collections.Concurrent.BlockingCollection<T>

Provides blocking and bounding capabilities for thread-safe collections that implement System.Collections.Concurrent.IProducerConsumerCollection<T>. Producer threads block if the collection is full, and consumer threads block if the collection is empty. This type also supports non-blocking access by consumers and producers. BlockingCollection<T> can be used as a base class or backing store to provide blocking and bounding for any collection class that supports IEnumerable<T>. A producer/consumer sketch appears after this table.

System.Collections.Concurrent.ConcurrentBag<T>

A thread-safe bag implementation that provides scalable add and get operations.

System.Collections.Concurrent.ConcurrentDictionary<TKey, TValue>

A concurrent and scalable dictionary type.

System.Collections.Concurrent.ConcurrentQueue<T>

A concurrent and scalable FIFO queue.

System.Collections.Concurrent.ConcurrentStack<T>

A concurrent and scalable LIFO stack.

For more information, see Thread-Safe Collections.
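
To make the blocking and bounding behavior concrete, the following minimal sketch (not taken from the original article) bounds a BlockingCollection<int> to five items and runs one producer task against one consumer task; the capacity, item count, and console output are arbitrary choices made for the illustration.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BlockingCollectionSketch
{
    static void Main()
    {
        // Bounded to five items: Add blocks when the collection is full,
        // and GetConsumingEnumerable blocks when it is empty.
        var queue = new BlockingCollection<int>(5);

        Task producer = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 20; i++)
            {
                queue.Add(i);            // Blocks while the collection is full.
            }
            queue.CompleteAdding();      // Signals that no more items will arrive.
        });

        Task consumer = Task.Factory.StartNew(() =>
        {
            // Blocks until an item is available; the loop ends after
            // CompleteAdding is called and the collection is empty.
            foreach (int item in queue.GetConsumingEnumerable())
            {
                Console.WriteLine("Consumed {0}", item);
            }
        });

        Task.WaitAll(producer, consumer);
    }
}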
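
A second illustrative sketch, under the same caveat, uses ConcurrentDictionary<TKey, TValue> and ConcurrentQueue<T> from the table above to tally and drain a handful of sample words without any user-supplied locks; the word list is an assumption made for the example.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ConcurrentCollectionsSketch
{
    static void Main()
    {
        string[] words = { "alpha", "beta", "alpha", "gamma", "beta", "alpha" };

        var counts = new ConcurrentDictionary<string, int>();
        var queue = new ConcurrentQueue<string>();

        // Iterations may run on different threads; no user locking is required.
        Parallel.ForEach(words, word =>
        {
            counts.AddOrUpdate(word, 1, (key, current) => current + 1);
            queue.Enqueue(word);
        });

        foreach (var pair in counts)
        {
            Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
        }

        // TryDequeue returns false when the queue is empty instead of throwing.
        string dequeued;
        while (queue.TryDequeue(out dequeued))
        {
            Console.WriteLine("Dequeued {0}", dequeued);
        }
    }
}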

Synchronization Primitives

The new synchronization primitives in the System.Threading namespace enable fine-grained concurrency and faster performance by avoiding expensive locking mechanisms found in legacy multithreading code. Some of the new types, such as System.Threading.Barrier and System.Threading.CountdownEvent, have no counterparts in earlier releases of the .NET Framework.

The following table lists the new synchronization types:

System.Threading.Barrier

Enables multiple threads to work on an algorithm in parallel by providing a point at which each participant can signal its arrival and then block until some or all participants have arrived; a phased-work sketch follows this table. For more information, see Barrier (.NET Framework).

System.Threading.CountdownEvent

Simplifies fork and join scenarios by providing an easy rendezvous mechanism; a fork/join sketch also follows this table. For more information, see CountdownEvent.

System.Threading.ManualResetEventSlim

A synchronization primitive similar to System.Threading.ManualResetEvent. ManualResetEventSlim is lighter-weight but can only be used for intra-process communication. For more information, see ManualResetEvent and ManualResetEventSlim.

System.Threading.SemaphoreSlim

A synchronization primitive that limits the number of threads that can concurrently access a resource or a pool of resources. For more information, see Semaphore and SemaphoreSlim.

System.Threading.SpinLock

A mutual exclusion lock primitive that causes the thread that is trying to acquire the lock to wait in a loop, or spin, for a period of time before yielding its quantum. In scenarios where the wait for the lock is expected to be short, SpinLock offers better performance than other forms of locking. For more information, see SpinLock.

System.Threading.SpinWait

A small, lightweight type that will spin for a specified time and eventually put the thread into a wait state if the spin count is exceeded. For more information, see SpinWait.

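To illustrate Barrier, the hedged sketch below has three participants each perform two phases of work, with no participant starting a phase until all of them have finished the previous one; the participant count, phase count, and console messages are assumptions made for the example.

using System;
using System.Threading;
using System.Threading.Tasks;

class BarrierSketch
{
    static void Main()
    {
        // The post-phase action runs once per phase, after all three participants arrive.
        var barrier = new Barrier(3,
            b => Console.WriteLine("Phase {0} complete", b.CurrentPhaseNumber));

        Action work = () =>
        {
            for (int phase = 0; phase < 2; phase++)
            {
                Console.WriteLine("Thread {0} working in phase {1}",
                    Thread.CurrentThread.ManagedThreadId, phase);
                barrier.SignalAndWait();   // Block until all participants arrive.
            }
        };

        // LongRunning requests dedicated threads, since each worker blocks at the barrier.
        Task.WaitAll(
            Task.Factory.StartNew(work, TaskCreationOptions.LongRunning),
            Task.Factory.StartNew(work, TaskCreationOptions.LongRunning),
            Task.Factory.StartNew(work, TaskCreationOptions.LongRunning));

        barrier.Dispose();
    }
}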
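
The next sketch, likewise illustrative, uses CountdownEvent for a simple fork/join rendezvous: the main thread queues four work items and waits until each one has signaled; the work item count is an arbitrary choice for the example.

using System;
using System.Threading;

class CountdownEventSketch
{
    static void Main()
    {
        const int workItems = 4;

        using (var countdown = new CountdownEvent(workItems))
        {
            for (int i = 0; i < workItems; i++)
            {
                int id = i;
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    Console.WriteLine("Work item {0} finished", id);
                    countdown.Signal();    // Decrement the count by one.
                });
            }

            countdown.Wait();              // Block until the count reaches zero.
            Console.WriteLine("All work items have signaled.");
        }
    }
}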

Lazy Initialization Classes

With lazy initialization, the memory for an object is not allocated until it is needed. Lazy initialization can improve performance by spreading object allocations evenly across the lifetime of a program. You can enable lazy initialization for any custom type by wrapping the type in Lazy<T>.

The following table lists the lazy initialization types:

System.Lazy<T>

Provides lightweight, thread-safe lazy initialization; a sketch appears after this table.

System.Threading.ThreadLocal<T>

Provides a lazily initialized value on a per-thread basis, with each thread lazily invoking the initialization function; see the second sketch after this table.

System.Threading.LazyInitializer

Provides static methods that avoid the need to allocate a dedicated, lazy-initialization instance. Instead, they use references to ensure targets have been initialized as they are accessed.

For more information, see Lazy Initialization.
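
As a minimal sketch of Lazy<T>, the snippet below wraps a hypothetical ExpensiveObject type (a stand-in introduced only for this example); with the default constructor overload, Lazy<T> is thread safe and the value factory runs only once even when several threads read Value concurrently.

using System;
using System.Threading.Tasks;

class LazySketch
{
    // Hypothetical type standing in for anything expensive to construct.
    class ExpensiveObject
    {
        public ExpensiveObject() { Console.WriteLine("Constructed once"); }
    }

    static void Main()
    {
        // Only the Lazy<T> wrapper is allocated here, not the ExpensiveObject.
        var lazy = new Lazy<ExpensiveObject>(() => new ExpensiveObject());

        Console.WriteLine("Created yet? {0}", lazy.IsValueCreated);   // False

        // Even with concurrent access, the value factory runs exactly once.
        Parallel.For(0, 4, i =>
        {
            var value = lazy.Value;
        });

        Console.WriteLine("Created yet? {0}", lazy.IsValueCreated);   // True
    }
}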
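
A companion sketch shows ThreadLocal<T> giving each thread in a Parallel.For loop its own lazily created List<int>, and LazyInitializer.EnsureInitialized publishing a shared list the first time any thread needs it; the list payloads and iteration count are assumptions made for the example.

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

class ThreadLocalSketch
{
    // Shared field initialized on first access via LazyInitializer.
    static List<int> shared;

    static void Main()
    {
        // Each thread that touches perThread.Value gets its own List<int>,
        // created lazily the first time that thread asks for it.
        using (var perThread = new ThreadLocal<List<int>>(() => new List<int>()))
        {
            Parallel.For(0, 100, i =>
            {
                perThread.Value.Add(i);   // No locking needed; the list is thread-local.

                // Publishes a single shared list the first time it is needed.
                LazyInitializer.EnsureInitialized(ref shared, () => new List<int>());
            });

            Console.WriteLine("The calling thread's local list holds {0} items.",
                perThread.Value.Count);
        }
    }
}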

Aggregate Exceptions

The System.AggregateException type can be used to capture multiple exceptions that are thrown concurrently on separate threads, and return them to the joining thread as a single exception. The System.Threading.Tasks.Task and System.Threading.Tasks.Parallel types and PLINQ use AggregateException extensively for this purpose. For more information, see How to: Handle Exceptions Thrown by Tasks and How to: Handle Exceptions in a PLINQ Query.
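
The following sketch, assembled for illustration rather than taken from the linked topics, shows the usual pattern: two tasks fail with different exceptions, Task.WaitAll surfaces them on the joining thread as one AggregateException, and Handle marks the expected exception types as observed.

using System;
using System.Threading.Tasks;

class AggregateExceptionSketch
{
    static void Main()
    {
        Task first = Task.Factory.StartNew(() =>
        {
            throw new InvalidOperationException("first failure");
        });
        Task second = Task.Factory.StartNew(() =>
        {
            throw new ArgumentException("second failure");
        });

        try
        {
            Task.WaitAll(first, second);
        }
        catch (AggregateException ae)
        {
            // Handle returns normally only if every inner exception is handled;
            // any unhandled exceptions are rethrown in a new AggregateException.
            ae.Handle(ex =>
            {
                Console.WriteLine("Handled: {0}", ex.Message);
                return ex is InvalidOperationException || ex is ArgumentException;
            });
        }
    }
}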

See Also

Reference

System.Collections.Concurrent

System.Threading

Other Resources

Parallel Programming in the .NET Framework