
Synchronization with the Concurrency Runtime - Part 1

In a concurrent world, multiple entities work together to achieve a common goal, and a common way for them to interact and coordinate is through shared data. Shared data, however, must be accessed carefully, and that is where synchronization comes in. Synchronization is primarily achieved using:

i) Blocking methods such as locks and mutexes.

ii) Non-blocking methods such as lock-free programming techniques.

In this series I will talk about synchronization using blocking methods within a process, using constructs provided as part of the concurrency runtime and exposed through the Parallel Patterns Library (PPL). This post addresses the concurrency runtime’s critical section; reader writer lock and events will be covered in subsequent blog posts.

For a general picture of the native concurrency runtime and the high-level roles of each of its components, please refer to this post.


Motivation

Goals of the concurrency runtime’s synchronization primitives:

1. Simple APIs

Unlike their Win32 equivalents, the concurrency runtime’s synchronization primitives don’t require C-style initialization and release/destroy calls for resource management. The exposed interfaces are simple and conform to the C++0x standard, and the synchronization objects throw meaningful exceptions on certain illegal operations (see the sketch after this list).

2. Block in a cooperative manner

The synchronization objects are cooperative in nature, in that when blocked they yield to other cooperative tasks in the runtime in addition to preemptive blocking. For an illustration of this scenario, refer to this post.
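
As a quick illustration of the first goal, here is a minimal sketch using critical_section::scoped_lock, the RAII helper nested inside the critical_section type discussed below; it acquires the lock in its constructor and releases it in its destructor, so there are no explicit initialize or destroy calls anywhere:

// scoped_lock_sketch.cpp
// compile with: /EHsc
#include <ppl.h>
#include <stdio.h>

using namespace Concurrency;

//shared state protected by the critical section
static critical_section g_cs;   //no explicit initialization call required
static int g_counter = 0;

void Increment()
{
    //scoped_lock acquires g_cs here and releases it when guard goes out of scope,
    //even if an exception is thrown in between
    critical_section::scoped_lock guard(g_cs);
    ++g_counter;
    printf_s("counter = %d\n", g_counter);
}

int main()
{
    //run the two increments in parallel
    parallel_invoke(
        [] { Increment(); },
        [] { Increment(); }
    );
    return 0;
}   //g_cs is cleaned up by its own destructor, no explicit destroy call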

Critical Section

critical_section represents a non-reentrant, cooperative mutual exclusion object that uses the concurrency runtime’s facilities to cooperatively schedule other work while blocked. The class satisfies all of the Mutex requirements specified in the C++0x standard, and it provides a C++ façade over its C-style Win32 equivalent, the Windows CRITICAL_SECTION.
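
Since it satisfies the Mutex requirements, critical_section exposes lock(), try_lock() and unlock(). Below is a minimal sketch of try_lock, which returns immediately with a bool instead of blocking; the surrounding code is purely illustrative:

// try_lock_sketch.cpp
// compile with: /EHsc
#include <ppl.h>
#include <stdio.h>

using namespace Concurrency;

int main()
{
    critical_section cs;

    //try_lock does not block: it returns true if the lock was acquired, false otherwise
    if (cs.try_lock())
    {
        printf_s("acquired the critical section without blocking\n");
        //do the protected work here
        cs.unlock();
    }
    else
    {
        printf_s("critical section is held elsewhere, do something else\n");
    }
    return 0;
}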

Similarity to the Win32 CRITICAL_SECTION:

- Can be used only by threads of a single process.

- The critical section object can only be owned by one thread at a time.

Differences with Win32 CRITICAL_SECTION:

- The concurrency runtime’s critical sections are non-recursive. Exceptions are thrown upon recursive calls.

- The concurrency runtime’s critical section object guarantees that threads waiting on a critical section acquire it on a first-come, first-served basis.

- There is no need to explicitly initialize or allocate resources before using the concurrency runtime’s critical section, or to release resources after use.

- A spin count cannot be specified for the concurrency runtime’s critical section object.

- The concurrency runtime’s critical section blocks cooperatively: when blocked, it yields to other cooperative tasks in the runtime.

- Exceptions are thrown by the concurrency runtime’s critical section object on unlock calls when the lock is not held, or if a lock is destroyed while being held (a sketch of the exception behavior follows this list).
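
Here is a minimal sketch of that exception behavior, assuming the improper_lock and improper_unlock exception types declared in concrt.h:

// critical_section_exceptions.cpp
// compile with: /EHsc
#include <ppl.h>
#include <concrt.h>
#include <stdio.h>

using namespace Concurrency;

int main()
{
    critical_section cs;

    try
    {
        cs.lock();
        cs.lock();   //recursive acquisition: the critical section is non-reentrant
    }
    catch (const improper_lock&)
    {
        printf_s("improper_lock thrown on the recursive lock call\n");
    }
    cs.unlock();     //release the lock acquired by the first lock() call

    try
    {
        cs.unlock(); //unlock without holding the lock
    }
    catch (const improper_unlock&)
    {
        printf_s("improper_unlock thrown on unlock without ownership\n");
    }
    return 0;
}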

 

Example:

The sample below alternates between printing to standard output from FunctionA and FunctionB.

 

// critical_section.cpp
// compile with: /EHsc
#include <ppl.h>
#include <stdio.h>
#include <windows.h>

using namespace std;
using namespace Concurrency;

//number of iterations each thread performs
static const int NUM_ITERATIONS = 5;

//Demonstrates use of critical section
void FunctionA(critical_section* pMutex)
{
    for( int i = 0; i < NUM_ITERATIONS; ++i)
    {
        //use exclusive lock
        pMutex->lock();
        printf_s("A %d\n", i);
        //Sleep for some time, this is to simulate potential work done while holding the lock
        Sleep(100);
        //release exclusive lock
        pMutex->unlock();
    }
}

//Demonstrates use of critical section
void FunctionB(critical_section* pMutex)
{
    for( int i = 0; i < NUM_ITERATIONS; ++i)
    {
        //use exclusive lock
        pMutex->lock();
        printf_s("\tB %d\n", i);
        //Sleep for some time, this is to simulate potential work done while holding the lock
        Sleep(100);
        //release exclusive lock
        pMutex->unlock();
    }
}

int main()
{
    critical_section mutex;
    //call FunctionA and FunctionB in parallel
    parallel_invoke(
        [&] { FunctionA(&mutex); },
        [&] { FunctionB(&mutex); }
    );
    return 0;
}

 

Sample output:

A 0
        B 0
A 1
        B 1
A 2
        B 2
A 3
        B 3
A 4
        B 4

Note: There is a possibility that the order may be swapped, with B getting the lock before A. Since the lock is handed out on a first-come, first-served basis, it’s a race to acquire it at the beginning. One way of guaranteeing a consistent ordering is to use events.
