How to: Use Locks and Prevent Deadlocks
Multi-threading is used in almost all real-life applications. I have summed up my thoughts on the use of locks and deadlock prevention under the following related topics:
Thread Safety
From a thread-safety perspective, resources (memory) are classified as thread-exclusive, read-only, or lock-protected.
Unsafe use
- Accessing static variables or heap-allocated memory after it has been published (made accessible to other threads)
- (Re-)allocating/freeing resources that have global scope (e.g., files)
- Indirect accesses through handles, pointers, or references (see the example below in the Guidelines section)
Safe use
- Accessing local variables, or heap-allocated memory before it is published (made accessible to other threads)
- Constants and read-only memory (note that this differs from the C# readonly modifier, which prevents reassigning the reference but still allows the referenced object's data to be modified)
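As a sketch of the distinction above (the class and field names are hypothetical), the following shows memory that is safe to touch without a lock while it is still thread-exclusive, and unsafe to touch once it has been published:

```csharp
using System.Collections.Generic;
using System.Threading;

public class Publisher
{
    private static List<int> _shared; // Static: reachable by all threads once assigned

    public void Run()
    {
        // Safe: the list is still thread-exclusive; no other thread can see it
        var list = new List<int> { 1, 2, 3 };
        list.Add(4);

        // Publication: the list becomes reachable from other threads
        _shared = list;

        new Thread(() => _shared.Add(5)).Start();

        // Unsafe: the list is now shared; this write races with the new thread
        list.Add(6);
    }
}
```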
Locking Using Monitors
Every object can serve as a monitor; the usual pattern is to create a private readonly object and lock on it. Only one thread is allowed in the critical section at a time:
Monitor.Enter(_lock);

try
{
    DoWork();
}
finally
{
    Monitor.Exit(_lock);
}
Or use the shorthand:
lock (_lock)
{
    DoWork();
}
A monitor can also ensure that a job is done only once; the first thread to enter is the one that does the job, while the others wait:
private void Initialize()
{
    if (_isInitialized)
    {
        return;
    }

    lock (_lock)
    {
        if (!_isInitialized)
        {
            DoWork();
            _isInitialized = true;
        }
    }
}

// volatile so that the unsynchronized first check reads the latest value
private volatile bool _isInitialized = false;
Locking is essential during lazy initialization:
public HeavyObject LazyInitialized
{
    get
    {
        if (_data == null)
        {
            lock (_lock)
            {
                if (_data == null)
                {
                    _data = new HeavyObject();
                }
            }
        }

        return _data;
    }
}

// volatile so that the unsynchronized first check reads the latest value
private volatile HeavyObject _data = null;
Guidelines
- A resource is mutually exclusive if and only if no thread writes to it without holding the lock
- Locking is expensive; it can take up to hundreds of cycles. Do NOT use it when it's not needed
- Associate resources with locks; group resources that are written to together as one logical resource (preferably a single one, to prevent deadlocks)
- Document what each lock object is protecting and what each critical section is doing
- The fewer locks you have, the less complex your design is; a single lock is good enough if it meets your throughput goals without contention
- Lock only the critical section's block, not the rest of the method if it does not need protection; this provides more concurrency and less contention
- Avoid overlapping locks; it is never useful for resources associated with two different locks to overlap, and it's error-prone
- If you randomly enter one of the locks: the block is no longer mutually exclusive
- If you enter both locks: mutually exclusive but twice as expensive, and no added value (just use one lock and treat these related resources as one)
- If you always enter one of the locks: use that one and get rid of the other; treat these related resources as one
- Release the lock as soon as you don't need it anymore
- Override the add and remove accessors for events; according to the C# spec (10.7.1), the compiler auto-generates lock(this) around the accessors for instance members and lock(typeof(TypeName)) for static members, so use your own lock instead
- Avoid the [MethodImpl(MethodImplOptions.Synchronized)] method attribute; it wraps the method's code in lock(this) for instance methods and lock(typeof(TypeName)) for static methods
- Only lock on private readonly members (of type System.Object)
- Do NOT lock on an object that someone else could possibly get to
- Do NOT lock on this; it makes the lock as visible as the type of the object
- Do NOT lock on a value type (int, bool, etc.); it gets boxed, so a new object is created on every access and no two threads ever lock the same object
- Do NOT return a reference to the shared resource, return a copy instead:
// This is WRONG! It returns a reference to the shared resource.
// After the lock is released, the client code can still change the object's content
public Example SharedResource
{
    get
    {
        lock (_lock)
        {
            return _data;
        }
    }
}

// Deep-copy the data first and return the copy;
// returns a snapshot of the shared data at a certain point in time
public Example SharedResource
{
    get
    {
        lock (_lock)
        {
            return new Example(_data); // Copy constructor that performs a deep copy
        }
    }
}
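The event guideline above can be sketched as follows (a minimal example; the class, event, and lock names are hypothetical). The custom accessors serialize subscription on a private lock instead of the compiler-generated one:

```csharp
using System;

public class Worker
{
    private readonly object _eventLock = new object();
    private EventHandler _completed; // Backing delegate field

    public event EventHandler Completed
    {
        add
        {
            lock (_eventLock) // Private lock instead of the auto-generated lock(this)
            {
                _completed += value;
            }
        }
        remove
        {
            lock (_eventLock)
            {
                _completed -= value;
            }
        }
    }
}
```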
Pros
- Easy to use
- Reentrant; works well with recursion: a thread that has entered the monitor can re-enter it without blocking. This allows calling other methods that require the same lock (or the same method recursively) without causing a deadlock:
public class Example
{
    public void DecrementFoo(int delta)
    {
        lock (_lock)
        {
            _foo -= delta;
        }
    }

    public void IncrementBar(int delta)
    {
        lock (_lock)
        {
            _bar += delta;
        }
    }

    public void DecrementFooAndIncrementBar(int delta)
    {
        lock (_lock)
        {
            // Lock is already acquired; these calls will not block
            DecrementFoo(delta);
            IncrementBar(delta);
        }
    }

    private int _foo = 0;
    private int _bar = 0;
    private readonly object _lock = new object();
}
Cons
- Not helpful when multiple locks need to be acquired at the same time
- Exclusivity; the lock does NOT allow multiple readers to enter the critical section concurrently (see Locking Using Reader/Writer Locks)
- No means of cleanup if an exception is thrown; you need an inner try statement for that
- Debug assertions can't be used to ensure that the lock is held by the current thread
- Low concurrency; throughput can be affected if threads are often waiting to acquire the lock (contention)
Locking Using Reader/Writer Locks
Access operations are classified as either reads or writes.
Guidelines
- Multiple readers can hold the lock concurrently, thus higher throughput for read-intensive operations
- Writer locks are exclusive (held by a maximum of one thread at a time)
- A thread's request to acquire a reader lock is granted unless a writer lock is being requested or held by another thread
- A thread's request to acquire a writer lock blocks other threads' reader locks requests
- The writer lock has to be relinquished by the holding thread before new reader locks are granted to other threads
- All reader locks have to be relinquished by the holding threads before a new writer lock is granted
- Reader/Writer locks can be implemented using one of these two classes:

|              | ReaderWriterLock                              | ReaderWriterLockSlim                            |
|--------------|-----------------------------------------------|-------------------------------------------------|
| Supported in | .NET 1.0+                                     | .NET 3.5+                                       |
| Robustness   | Resilient to thread aborts and OOM exceptions |                                                 |
| Performance  |                                               | 3x to 6x faster                                 |
| Reentrance   | Supported but not advised                     | Supported (LockRecursionPolicy) but not advised |
- Use Debug assertions to assert that you can enter the lock
- Helps prevent regressions; synchronization defects should be caught by assertions
- Helps detect deadlocks early (in debug builds)
- Acts as a contract for callers
- Assertion messages document the code
- Refactor code so that shared functionality and helper methods are not public and do not enter locks
- Lock "high" at the beginning of public APIs
- Assert that the current thread is NOT already holding the lock at the beginning of public APIs
- Assert that the required lock is still being held by the current thread in helper methods (the caller entered the lock)
- Reentrance (lock recursion) is not advised; a thread that holds a reader lock and then waits on the writer lock (a lock upgrade) will deadlock itself:
public void WriteAll()
{
    Debug.Assert(!_lock.IsReadLockHeld && !_lock.IsWriteLockHeld,
        "This thread already holds the lock.");

    _lock.EnterWriteLock();

    try
    {
        WriteX();
        WriteY();
    }
    finally
    {
        _lock.ExitWriteLock();
    }
}

public void WriteX()
{
    // This assertion will catch the defect
    Debug.Assert(!_lock.IsReadLockHeld && !_lock.IsWriteLockHeld,
        "This thread already holds the lock.");

    // This call causes the deadlock:
    // it waits for the lock, held by WriteAll(), to be relinquished
    _lock.EnterWriteLock();

    try
    {
        _x = _a + _b;
    }
    finally
    {
        _lock.ExitWriteLock();
    }
}
One possible workaround is to set a timeout using TryEnterWriteLock(timeout) and TryEnterReadLock(timeout), but this is not recommended: the timeout merely masks the deadlock, and when the attempt fails the protected work silently never happens.
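As a sketch of why the timeout approach is fragile (assuming the same _lock and fields as in the example above): when TryEnterWriteLock times out, the caller is left holding work that never happened.

```csharp
public void WriteX()
{
    // Workaround, not a fix: if WriteAll() already holds the write lock,
    // this call simply times out instead of deadlocking
    if (!_lock.TryEnterWriteLock(TimeSpan.FromMilliseconds(100)))
    {
        // The write is silently skipped; the caller has no way to know
        return;
    }

    try
    {
        _x = _a + _b;
    }
    finally
    {
        _lock.ExitWriteLock();
    }
}
```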
The fix is to refactor the actual work done by WriteX() into a private helper method and have both WriteAll() and WriteX() call it:
public void WriteAll()
{
    Debug.Assert(!_lock.IsReadLockHeld && !_lock.IsWriteLockHeld,
        "This thread already holds the lock.");

    _lock.EnterWriteLock();

    try
    {
        DoWriteX();
        DoWriteY();
    }
    finally
    {
        _lock.ExitWriteLock();
    }
}

public void WriteX()
{
    Debug.Assert(!_lock.IsReadLockHeld && !_lock.IsWriteLockHeld,
        "This thread already holds the lock.");

    _lock.EnterWriteLock(); // No problem :)

    try
    {
        DoWriteX();
    }
    finally
    {
        _lock.ExitWriteLock();
    }
}

private void DoWriteX()
{
    Debug.Assert(_lock.IsWriteLockHeld,
        "The required write lock is NOT held by this thread.");

    _x = _a + _b;
}
Here's an example of locking using ReaderWriterLockSlim that also shows the atomicity of 32-bit reads and the use of the volatile modifier:
public class Example
{
    public int Foo
    {
        // Atomic;
        // no lock is required to read a 32-bit volatile value
        get { return _foo; }
    }

    public int Bar
    {
        get
        {
            Debug.Assert(!_lock.IsWriteLockHeld,
                "This thread already holds the write lock.");

            _lock.EnterReadLock();

            try
            {
                // Atomic, but _bar is NOT volatile; the read lock is needed
                return _bar; // Value type, returned by value; no copy needed
            }
            finally
            {
                _lock.ExitReadLock();
            }
        }
    }

    public void DecrementFoo(int delta)
    {
        Debug.Assert(!_lock.IsReadLockHeld && !_lock.IsWriteLockHeld,
            "This thread already holds the lock.");

        _lock.EnterWriteLock();

        try
        {
            DoDecrementFoo(delta);
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }

    public void IncrementBar(int delta)
    {
        Debug.Assert(!_lock.IsReadLockHeld && !_lock.IsWriteLockHeld,
            "This thread already holds the lock.");

        _lock.EnterWriteLock();

        try
        {
            DoIncrementBar(delta);
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }

    private void DoDecrementFoo(int delta)
    {
        Debug.Assert(_lock.IsWriteLockHeld,
            "The required write lock is NOT held by this thread.");

        _foo -= delta;
    }

    private void DoIncrementBar(int delta)
    {
        Debug.Assert(_lock.IsWriteLockHeld,
            "The required write lock is NOT held by this thread.");

        _bar += delta;
    }

    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private volatile int _foo = 0;
    private int _bar = 0;
}
Pros
- Higher throughput and less contention for read-intensive operations (reads usually outnumber writes)
- The lock object knows whether the current thread is holding it; hence debug assertions can be used to check that
Cons
- No reentrancy, which forces you to isolate the business logic in non-public methods and call them in the public methods (surrounded by the lock)
Interlocked Operations
See also: https://msdn.microsoft.com/en-us/library/sbhbke0y.aspx
We already know that some operations are guaranteed to be atomic (like reading a 32-bit value).
.Net has the Interlocked class which provides some common functionality that can be called in an atomic manner. Consider the following example:
public void IncrementFooBy1()
{
    lock (_lock)
    {
        _foo++;
    }
}
The statement _foo++; inside the critical section is compiled into three assembly instructions that look similar to the following:
MOV EAX, [_foo] // Load
INC EAX         // Increment
MOV [_foo], EAX // Save
The instructions above are not guaranteed to be atomic. However, the same functionality can be accomplished using the following code instead:
public void IncrementFooBy1()
{
    Interlocked.Increment(ref _foo);
}
In this case, the CLR guarantees that it's an atomic operation, which looks similar to the following assembly instruction:
LOCK INC DWORD PTR [_foo]
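Beyond Increment, the Interlocked class offers a few more atomic primitives. The following sketch (the class and field names are hypothetical) shows Add, Exchange, and CompareExchange, the last being the usual building block for lock-free updates:

```csharp
using System.Threading;

public class Counters
{
    private int _total = 0;
    private int _state = 0;

    public void Update()
    {
        // Atomically add a delta and read back the new value
        int newTotal = Interlocked.Add(ref _total, 5);

        // Atomically swap in a new value and get the old one back
        int oldState = Interlocked.Exchange(ref _state, 1);

        // Atomically set _state to 2 only if it is still 1 (compare-and-swap)
        int observed = Interlocked.CompareExchange(ref _state, 2, 1);
    }
}
```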
Deadlock Prevention
- Have as few locks as possible (preferably just one); if you have two locks A and B, one thread is holding A while waiting on B, and another thread is holding B while waiting on A, that's a deadlock
- Break the chain by enforcing a lock acquisition order such that no circular waiting can occur; deadlocks happen when each thread is waiting on a lock already held by the next thread in line (the dining philosophers problem)
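The ordering rule can be sketched as follows (the class and lock names are hypothetical): every code path that needs both locks acquires _lockA strictly before _lockB, so no circular wait can form:

```csharp
public class Transfer
{
    private readonly object _lockA = new object();
    private readonly object _lockB = new object();

    // Every method that needs both locks takes them in the same order:
    // _lockA first, then _lockB. Two threads can no longer each hold
    // one lock while waiting on the other.
    public void MoveForward()
    {
        lock (_lockA)
        {
            lock (_lockB)
            {
                // ... touch the resources guarded by A and B ...
            }
        }
    }

    public void MoveBackward()
    {
        lock (_lockA) // Same order, even though B's resource is touched first
        {
            lock (_lockB)
            {
                // ... touch the resources guarded by B and A ...
            }
        }
    }
}
```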
See also: my post on debugging deadlocks.
I know that it’s a long read, but I hope it was worth it. I’d like to thank Vance Morrison and Philip Kelley for sharing their knowledge about this topic.