Parallel Patterns Library (PPL)

The Parallel Patterns Library (PPL) provides an imperative programming model that promotes scalability and ease-of-use for developing concurrent applications. The PPL builds on the scheduling and resource management components of the Concurrency Runtime. It raises the level of abstraction between your application code and the underlying threading mechanism by providing generic, type-safe algorithms and containers that act on data in parallel. The PPL also lets you develop applications that scale by providing alternatives to shared state.
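
One such alternative to shared state is the concurrency::combinable class, which gives each thread its own copy of a value so that parallel iterations can accumulate results without locks and merge them afterward. The following is a minimal sketch of that idea; it is an illustration, not code taken from this article.

// combinable-sketch.cpp
// compile with: /EHsc
#include <ppl.h>
#include <functional>
#include <iostream>

using namespace concurrency;
using namespace std;

int wmain()
{
   // Each thread that participates in the loop gets its own copy of sum,
   // so the iterations can accumulate into it without a lock.
   combinable<int> sum;

   parallel_for(1, 1001, [&](int i) {
      sum.local() += i;
   });

   // Merge the per-thread copies into a single result (here, 500500).
   wcout << L"sum: " << sum.combine(plus<int>()) << endl;
}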

The PPL provides the following features:

  • Task Parallelism: a mechanism that works on top of the Windows ThreadPool to execute several work items (tasks) in parallel (see the sketch after this list)

  • Parallel algorithms: generic algorithms that work on top of the Concurrency Runtime to act on collections of data in parallel

  • Parallel containers and objects: generic container types that provide safe concurrent access to their elements
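
The parallel algorithms and parallel containers are demonstrated by the example in the next section. For task parallelism, the following minimal sketch (an illustration, not code taken from this article) uses a concurrency::task_group to schedule two independent work items and then waits for both to finish.

// task-group-sketch.cpp
// compile with: /EHsc
#include <ppl.h>
#include <iostream>

using namespace concurrency;
using namespace std;

// Placeholder work function; any independent computation could go here.
int square(int n)
{
   return n * n;
}

int wmain()
{
   int a = 0;
   int b = 0;
   task_group tasks;

   // Schedule two independent work items; each runs as a separate task.
   tasks.run([&] { a = square(6); });
   tasks.run([&] { b = square(7); });

   // Block until both tasks complete.
   tasks.wait();

   wcout << a << L" " << b << endl;
}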

Example

The PPL provides a programming model that resembles the Standard Template Library (STL). The following example demonstrates many features of the PPL. It computes several Fibonacci numbers serially and in parallel. Both computations act on a std::array object. The example also prints to the console the time that is required to perform both computations.

The serial version uses the STL std::for_each algorithm to traverse the array and stores the results in a std::vector object. The parallel version performs the same task, but uses the PPL concurrency::parallel_for_each algorithm and stores the results in a concurrency::concurrent_vector object. The concurrent_vector class enables each loop iteration to concurrently add elements without the requirement to synchronize write access to the container.

Because parallel_for_each acts concurrently, the parallel version of this example must sort the concurrent_vector object to produce the same results as the serial version.

Note that the example uses a naïve method to compute the Fibonacci numbers; however, this method illustrates how the Concurrency Runtime can improve the performance of long computations.

// parallel-fibonacci.cpp
// compile with: /EHsc
#include <windows.h>
#include <ppl.h>
#include <concurrent_vector.h>
#include <array>
#include <vector>
#include <tuple>
#include <algorithm>
#include <iostream>

using namespace concurrency;
using namespace std;

// Calls the provided work function and returns the number of milliseconds 
// that it takes to call that function.
template <class Function>
__int64 time_call(Function&& f)
{
   __int64 begin = GetTickCount();
   f();
   return GetTickCount() - begin;
}

// Computes the nth Fibonacci number.
int fibonacci(int n)
{
   if(n < 2)
      return n;
   return fibonacci(n-1) + fibonacci(n-2);
}

int wmain()
{
   __int64 elapsed;

   // An array of Fibonacci numbers to compute.
   array<int, 4> a = { 24, 26, 41, 42 };

   // The results of the serial computation.
   vector<tuple<int,int>> results1;

   // The results of the parallel computation.
   concurrent_vector<tuple<int,int>> results2;

   // Use the for_each algorithm to compute the results serially.
   elapsed = time_call([&] 
   {
      for_each (begin(a), end(a), [&](int n) {
         results1.push_back(make_tuple(n, fibonacci(n)));
      });
   });   
   wcout << L"serial time: " << elapsed << L" ms" << endl;
   
   // Use the parallel_for_each algorithm to perform the same task.
   elapsed = time_call([&] 
   {
      parallel_for_each (begin(a), end(a), [&](int n) {
         results2.push_back(make_tuple(n, fibonacci(n)));
      });

      // Because parallel_for_each acts concurrently, the results do not 
      // have a pre-determined order. Sort the concurrent_vector object
      // so that the results match the serial version.
      sort(begin(results2), end(results2));
   });   
   wcout << L"parallel time: " << elapsed << L" ms" << endl << endl;

   // Print the results.
   for_each (begin(results2), end(results2), [](tuple<int,int>& pair) {
      wcout << L"fib(" << get<0>(pair) << L"): " << get<1>(pair) << endl;
   });
}

The following sample output is for a computer that has four processors.

serial time: 9250 ms
parallel time: 5726 ms

fib(24): 46368
fib(26): 121393
fib(41): 165580141
fib(42): 267914296

Each iteration of the loop requires a different amount of time to finish. The performance of parallel_for_each is bounded by the operation that finishes last. Therefore, you should not expect linear performance improvements between the serial and parallel versions of this example.
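
In this example, the naïve fib(42) computation alone accounts for most of the serial time, so the parallel run cannot finish sooner than that single call. One way to see this is to time each input separately, as in the sketch below, which reuses the time_call and fibonacci helpers from the example; this timing loop is not part of the original sample.

// per-item-timing.cpp
// compile with: /EHsc
#include <windows.h>
#include <array>
#include <iostream>

using namespace std;

// Calls the provided work function and returns the number of milliseconds
// that it takes to call that function.
template <class Function>
__int64 time_call(Function&& f)
{
   __int64 begin = GetTickCount();
   f();
   return GetTickCount() - begin;
}

// Computes the nth Fibonacci number with the same naive method as above.
int fibonacci(int n)
{
   if(n < 2)
      return n;
   return fibonacci(n-1) + fibonacci(n-2);
}

int wmain()
{
   array<int, 4> a = { 24, 26, 41, 42 };

   // Time each computation on its own; the largest input dominates.
   for (int n : a)
   {
      int result = 0;
      __int64 elapsed = time_call([n, &result] { result = fibonacci(n); });
      wcout << L"fib(" << n << L"): " << result
            << L" took " << elapsed << L" ms" << endl;
   }
}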

Related Topics

  • Task Parallelism: Describes the role of tasks and task groups in the PPL.

  • Parallel Algorithms: Describes how to use parallel algorithms such as parallel_for and parallel_for_each.

  • Parallel Containers and Objects: Describes the various parallel containers and objects that are provided by the PPL.

  • Cancellation: Explains how to cancel the work that is being performed by a parallel algorithm.

  • Concurrency Runtime: Describes the Concurrency Runtime, which simplifies parallel programming, and contains links to related topics.