Why threading is hard

Anybody who says "I can write correct multi-threaded code" probably should be saying "I don't test my multi-threaded code".

It is very difficult to write correct multi-threaded code.

One way to appreciate this is through the various "find-the-bug" pop quizzes that occasionally come up.

Another approach that I'll go after here is to look at the state space.

If single-threaded apps have O[n] execution paths where n = number of atomic operations, just two threads have O[n!] paths or worse. (It's not O[n*n], or even polynomial - see below). And n is usually a very large number.

Each of those execution paths is an angle to hit your program and find a bug. Maybe you can test against O[n], but you're delusional if you think you can protect against O[n!] or worse.

15 is a small number, and 15 factorial is 1,307,674,368,000 = ~1.3 trillion. Testing a million cases is hard enough. "Trillion" may sound like an unrealistic number, but listen to some of the ship-stopper stress bugs that show up in a product's endgame, followed by the "what are the odds that could ever actually have happened?" comments, and it's not so unrealistic.

Practically, that means multi-threaded apps have so many states that you can't possibly keep track of them all in your head. And even if you do, the next guy who changes a line of code has just ruined your model.

 

Counting the execution paths

Suppose you have a function that executes a sequence of atomic operations: A0, A1, A2, A3.

I want to just compare the single-threaded and multi-threaded cases, so for now, let's ignore loops, conditionals, etc., because those are common to both and just complicate the comparison. (This simplification actually downplays the complexity of adding additional threads.) In this model, there's only 1 code-path through this sequence of N states: A0, A1, A2, A3.

Now suppose you have a second function B0, B1, B2, B3 executing on another thread.

The threads can interleave these operations in any order. The convenient case would be (A0, A1, A2, A3, ... B0, B1, B2, B3). But maybe it's something goofy like: (A0, A1, B0, A2, B1, B2, A3, B3). If the bug only occurs when the pattern (A1, B0, A2, B1) is executed, good luck finding it.

It turns out that for 2 sequences of lengths N and M, there are choose(N+M, N) ways to interleave them. That's (N+M)! / (N! M!) execution paths (see quiz here). Hence O[n!] for 2 threads.
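To make those numbers concrete, here's a minimal C++ sketch (mine, not from the original post; the function name is made up) that evaluates choose(N+M, N) for a few sequence lengths:

    #include <cstdint>
    #include <cstdio>

    // Number of ways to interleave two sequences of lengths n and m:
    // choose(n + m, n) = (n + m)! / (n! m!).
    // Computed iteratively so the raw factorials never overflow.
    static uint64_t Interleavings(unsigned n, unsigned m)
    {
        uint64_t result = 1;
        for (unsigned i = 1; i <= m; ++i)
        {
            result = result * (n + i) / i;  // stays an exact integer at every step
        }
        return result;
    }

    int main()
    {
        printf("4 ops vs 4 ops:   %llu paths\n", (unsigned long long)Interleavings(4, 4));    // 70
        printf("10 ops vs 10 ops: %llu paths\n", (unsigned long long)Interleavings(10, 10));  // 184,756
        printf("15 ops vs 15 ops: %llu paths\n", (unsigned long long)Interleavings(15, 15));  // 155,117,520
        return 0;
    }

Even the toy example above (A0...A3 against B0...B3) already has 70 distinct execution paths, and the count explodes from there.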

You can see how much more complex just simple execution gets with threads. Now imagine how much more complicated threads make loops, conditionals, etc.

 

How many states?

Locks don't make your code safe. They just reduce the number of states and hedge your bets. At one extreme, 1 giant lock around everything can reduce you to 1 state per thread. If you put 4 states under 1 big lock (e.g., A0...A3), and you put that same lock around every other state (e.g., B0...B3), then you've reduced those 4 states to 1 state, thus eliminating 3 states.
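As a rough illustration (my sketch, not code from the post), here's what that one-big-lock extreme looks like with std::mutex: while A holds the lock, B can only observe the world before A0 or after A3, never in between.

    #include <cstdio>
    #include <mutex>
    #include <thread>

    std::mutex g_bigLock;    // the one lock shared by every thread
    int g_sharedState = 0;   // stand-in for whatever A0..A3 and B0..B3 touch

    // A0..A3 all run while g_bigLock is held, so from B's point of view the
    // whole sequence is one indivisible state.
    void DoA()
    {
        std::lock_guard<std::mutex> hold(g_bigLock);
        g_sharedState += 1;  // A0
        g_sharedState += 1;  // A1
        g_sharedState += 1;  // A2
        g_sharedState += 1;  // A3
    }

    // Same lock around B0..B3.
    void DoB()
    {
        std::lock_guard<std::mutex> hold(g_bigLock);
        g_sharedState *= 2;  // B0
        g_sharedState *= 2;  // B1
        g_sharedState *= 2;  // B2
        g_sharedState *= 2;  // B3
    }

    int main()
    {
        std::thread a(DoA), b(DoB);
        a.join();
        b.join();
        // Only two outcomes survive: A-then-B prints 64, B-then-A prints 4.
        // Without the lock there would be 70 interleavings to reason about.
        printf("final state = %d\n", g_sharedState);
        return 0;
    }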

You likely don't control the number of states. An operation that appears atomic (like a memory read) may actually be multiple states. Maybe you're looking at a single source line, but the compiler split it out into multiple states. Or maybe there are windows where you communicate with the file-system, or other processes - and so they can effectively inject states into your state space. Or maybe the hardware has multiple states (usually around memory models). That's why there's InterlockedIncrement(&i) instead of just doing i++.
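For instance (a sketch of my own, not from the post): i++ on a shared int is really a load, an add, and a store, so two threads can both read the same old value and one increment gets lost. An interlocked increment collapses that back into a single state. InterlockedIncrement is the Win32 call; std::atomic is the portable C++ equivalent used here.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    int g_plain = 0;               // deliberately racy, for illustration only
    std::atomic<int> g_atomic{0};  // incremented atomically

    void Worker()
    {
        for (int n = 0; n < 100000; ++n)
        {
            g_plain++;              // really load / add / store: multiple states
            g_atomic.fetch_add(1);  // one indivisible state
        }
    }

    int main()
    {
        std::thread t1(Worker), t2(Worker);
        t1.join();
        t2.join();
        // g_atomic is always 200000; g_plain usually comes up short because
        // increments get lost in the race windows.
        printf("plain = %d, atomic = %d\n", g_plain, g_atomic.load());
        return 0;
    }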

Threading vs. Reentrancy

Now threading models aren't the only way to rack up states. Single-threaded models (like UIs) that queue messages can create a ton of states too, and those tend to create nasty reentrancy bugs. And there will be times when another thread would actually produce fewer overall states than a single-threaded model. Just be wary.
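Here's a toy single-threaded sketch of that kind of reentrancy (hypothetical code, just to illustrate): a handler that pumps the message queue in the middle of its own update lets another handler observe half-updated state, no second thread required.

    #include <cstdio>
    #include <functional>
    #include <queue>

    // A toy message queue, standing in for a UI message pump.
    std::queue<std::function<void()>> g_queue;
    int g_items = 0;

    void PumpOne()  // dispatch a single queued message
    {
        if (g_queue.empty()) return;
        auto msg = g_queue.front();
        g_queue.pop();
        msg();
    }

    void OnUpdate()
    {
        g_items++;   // state is only half updated here...
        PumpOne();   // ...and pumping re-enters another handler that sees it
        g_items++;
    }

    int main()
    {
        g_queue.push([] { printf("observer sees g_items = %d\n", g_items); });
        OnUpdate();  // prints 1: neither the old value (0) nor the new one (2)
        return 0;
    }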

 

What if the states are independent of each other?

Granted, many of the states may appear to be independent of each other. So one may claim that O[N!] is way too pessimistic. But unless you have a very thorough isolation model, it's tough to prove the states can't impact each other.

For example, states A0 and B1 may not directly interact, but maybe A0 impacts A1 which impacts B0 which impacts B1. Maybe A0 impacts how much memory A1 allocates, and that impacts whether an allocation in B0 fails (especially if there's a process-wide memory allocator), and then the bug is in B1 handling the out-of-memory failure case.
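A contrived sketch of that chain (my own; all names are hypothetical, and the "pool" is just a stand-in for a process-wide allocator):

    #include <cstdio>
    #include <cstdlib>

    // A tiny process-wide pool: the only thing A and B share.
    static size_t g_poolRemaining = 100;

    void* PoolAlloc(size_t bytes)
    {
        if (bytes > g_poolRemaining) return nullptr;  // "out of memory"
        g_poolRemaining -= bytes;
        return malloc(bytes);
    }

    size_t g_requestSize = 0;   // set by A0, consumed by A1
    void* g_bBuffer = nullptr;  // set by B0, checked by B1

    void A0() { g_requestSize = 90; }         // A0 decides how big A1's buffer is
    void A1() { PoolAlloc(g_requestSize); }   // A1 drains most of the pool (leaked; it's a toy)
    void B0() { g_bBuffer = PoolAlloc(64); }  // B0 fails only if A1 already ran
    void B1()
    {
        if (g_bBuffer == nullptr)             // B1: the bug hides in this rarely-hit path
        {
            printf("handling out-of-memory... a path nobody ever tested\n");
        }
    }

    int main()
    {
        // The unlucky interleaving: A0, A1 land before B0, so B0's allocation
        // fails and B1's untested failure path finally executes.
        A0(); A1(); B0(); B1();
        return 0;
    }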

And even if most of your states actually are isolated, factorial growth is so fast that you only need a small number of interacting states before you're doomed.

See also: Measuring how difficult a threading bug is.

Comments

  • Anonymous
    January 30, 2008
    The comment has been removed

  • Anonymous
    January 30, 2008
    when you first start coding, you end up writing bad code.  over time you improve.  after a while it becomes second nature.  it's the same thing with threading - people just need to stop being afraid and take some time to learn.

  • Anonymous
    January 30, 2008
    How is a possible scenario "contrived"?  The difficulty that Mike's talking about is keeping the data separated amongst the threads.  It's when you don't do that that you get the increase in possible states.  If you've got more than a trivial number of states, you're likely not testing all of them. By the same token, it's a bit of a leap to assume everyone has shared state in multi-threaded code and to conclude "Anybody who says 'I can write correct multi-threaded code' probably should be saying 'I don't test my multi-threaded code'"

  • Anonymous
    January 31, 2008
    You are correct of course that there is great opportunity for problems when adding threading to an application.  However, this is considerably mitigated by well-designed code that avoids obvious pitfalls such as global variables and static state.   I am not a threading expert.  But I have had the opposite experience to you - I found multi-threaded code to be exceptionally easy! For example, late last year, I had a large Windows application, and decided to introduce some background data loading on separate threads.  Essentially, I quickly made a significant portion of the application multi-threaded.  The number of resulting bugs was minimal and easy to resolve.  The only real point of contention was dealing with collections of items - one thread adding or removing items while another was iterating over them.  

  • Anonymous
    January 31, 2008
    I'm not talking about just putting a big lock around some collection class you wrote. Maybe you've got a thread-safe hashtable, but the logic above the hashtable still has to be thread-safe. Multi-threaded code must have some shared state: if the worker thread can't impact the main thread, you wouldn't even have the worker thread doing any work at all. Now maybe the shared state is small. But even a few small shared states can lead to a ton of combinations which could be hiding some bugs. Many people I've met who think they can write multi-threaded code either just apply it to trivial systems (a single collection class) or may be able to write code that's correct enough to stay one step ahead of their testers, but would still break down under stress testing.

  • Anonymous
    January 31, 2008
    I agree that sometimes threading is the way to go (it may be the only way to achieve performance or scalability, you may need to be consumable by a multi-threaded client, it may actually reduce the overall number of states, etc.) Let me take a softer approach. Beware that:

  • multiple threads introduce a whole new class of bugs that can be dormant for a while. It's easy for the app to appear good.
  • One way to appreciate the potential surface area for bugs is to count the new state space. (that's the primary focus of this entry). It grows N!, not N or even N^2.
  • this creates a maintenance problem. Even if you actually figured out all N! paths, the next guy to own the code probably won't.
  • good design (eliminating statics, etc) can greatly reduce N.
  • It's much harder to prove that your multi-threaded code is correct. You may be able to stay a step ahead of your testing, but that's hardly the bar for correct.
  • Anonymous
    February 03, 2008
    The comment has been removed
  • Anonymous
    February 16, 2008
    The comment has been removed