Preface

This book describes patterns for parallel programming, with code examples, that use the new parallel programming support in the Microsoft® .NET Framework 4. This support is commonly referred to as the Parallel Extensions. You can use the patterns described in this book to improve your application's performance on multicore computers. Adopting the patterns in your code makes your application run faster today and also helps prepare for future hardware environments, which are expected to have an increasingly parallel computing architecture.

Who This Book Is For

The book is intended for programmers who write managed code for the .NET Framework on the Microsoft Windows® operating system. This includes programmers who write in the Microsoft Visual C#® development tool, the Microsoft Visual Basic® development system, or Microsoft Visual F#. No prior knowledge of parallel programming techniques is assumed. However, readers need to be familiar with features of C# such as delegates, lambda expressions, generic types, and Language Integrated Query (LINQ) expressions. Readers should also have at least a basic familiarity with the concepts of processes and threads of execution.

Note

The examples in this book are written in C# and use the features of the .NET Framework 4, including the Task Parallel Library (TPL) and Parallel LINQ (PLINQ). However, you can use the concepts presented here with other frameworks and libraries and with other languages.
Complete code solutions are posted on CodePlex at http://parallelpatterns.codeplex.com/. There is a C# version for every example, and there are also Visual Basic and F# versions of the examples.

Why This Book Is Pertinent Now

The advanced parallel programming features that are delivered with the Visual Studio® 2010 development system make it easier than ever to get started with parallel programming.

The Task Parallel Library (TPL) is for .NET programmers who want to write parallel programs. It simplifies the process of adding parallelism and concurrency to applications. The TPL dynamically scales the degree of parallelism to most efficiently use all the processors that are available. In addition, the TPL assists in the partitioning of work and the scheduling of tasks in the .NET thread pool. The library provides cancellation support, state management, and other services.
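
As a small taste of the library (a minimal sketch, not one of the book's samples), the following code starts two tasks on the .NET thread pool and waits for both; DoLeftWork and DoRightWork are hypothetical work methods, and the cancellation token illustrates the cooperative cancellation support the TPL provides.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class TplSketch
    {
        static void Main()
        {
            var cts = new CancellationTokenSource();

            // The TPL schedules these tasks on the .NET thread pool.
            Task left = Task.Factory.StartNew(() => DoLeftWork(cts.Token), cts.Token);
            Task right = Task.Factory.StartNew(() => DoRightWork(cts.Token), cts.Token);

            Task.WaitAll(left, right);
        }

        // DoLeftWork and DoRightWork stand in for real computations.
        static void DoLeftWork(CancellationToken token)
        {
            for (int i = 0; i < 1000; i++)
            {
                token.ThrowIfCancellationRequested(); // cooperative cancellation
                // ... compute ...
            }
        }

        static void DoRightWork(CancellationToken token)
        {
            // ... compute, checking the token periodically ...
        }
    }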

Parallel LINQ (PLINQ) is a parallel implementation of LINQ to Objects. PLINQ implements the full set of LINQ standard query operators as extension methods for the System.Linq namespace and has additional operators for parallel operations. PLINQ is a declarative, high-level interface with query capabilities for operations such as filtering, projection, and aggregation.
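
For example (a minimal sketch, not taken from the book's samples), adding AsParallel to an ordinary LINQ-to-Objects query asks PLINQ to run the filtering and projection on multiple cores; IsExpensive is a hypothetical, computationally intensive predicate.

    using System;
    using System.Linq;

    class PlinqSketch
    {
        static void Main()
        {
            int[] values = Enumerable.Range(1, 1000000).ToArray();

            // AsParallel turns the LINQ-to-Objects query into a PLINQ query.
            var results = values.AsParallel()
                                .Where(v => IsExpensive(v))
                                .Select(v => v * v)
                                .ToArray();

            Console.WriteLine(results.Length);
        }

        // Stand-in for a computationally intensive test.
        static bool IsExpensive(int v)
        {
            return v % 7 == 0;
        }
    }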

Visual Studio 2010 includes tools for debugging parallel applications. The Parallel Stacks window shows call stack information for all the threads in your application. It lets you navigate between threads and stack frames on those threads. The Parallel Tasks window resembles the Threads window, except that it shows information about each task instead of each thread. The Concurrency Visualizer views in the Visual Studio profiler enable you to see how your application interacts with the hardware, the operating system, and other processes on the computer. You can use the Concurrency Visualizer to locate performance bottlenecks, processor underutilization, thread contention, cross-core thread migration, synchronization delays, areas of overlapped I/O, and other information.

For a complete overview of the parallel technologies available from Microsoft, see Appendix C, "Technology Overview."

What You Need to Use the Code

The code that is used as examples in this book is at http://parallelpatterns.codeplex.com/. These are the system requirements:

  • Microsoft Windows Vista® SP1, Windows 7, Microsoft Windows Server® 2008, or Windows XP SP3 (32-bit or 64-bit) operating system
  • Microsoft Visual Studio 2010 (Ultimate or Premium edition is required for the Concurrency Visualizer, which allows you to analyze the performance of your application); this includes the .NET Framework 4, which is required to run the samples

How to Use This Book

This book presents parallel programming techniques in terms of particular patterns. Figure 1 shows the different patterns and their relationships to each other. The numbers refer to the chapters in this book where the patterns are described.


Figure 1

Parallel programming patterns

After the introduction, the book has one branch that discusses data parallelism and another that discusses task parallelism.

Both parallel loops and parallel tasks use only the program's control flow as the means to coordinate and order tasks. The other patterns use both control flow and data flow for coordination. Control flow refers to the steps of an algorithm. Data flow refers to the availability of inputs and outputs.

Introduction

Chapter 1 introduces the common problems faced by developers who want to use parallelism to make their applications run faster. It explains basic concepts and prepares you for the remaining chapters. There is a table in the "Design Approaches" section of Chapter 1 that can help you select the right patterns for your application.

Parallelism with Control Dependencies Only

Chapters 2 and 3 deal with cases where asynchronous operations are ordered only by control flow constraints:

  • Chapter 2, "Parallel Loops." Use parallel loops when you want to perform the same calculation on each member of a collection or for a range of indices, and where there are no dependencies between the members of the collection. For loops with dependencies, see Chapter 4, "Parallel Aggregation."
  • Chapter 3, "Parallel Tasks." Use parallel tasks when you have several distinct asynchronous operations to perform. This chapter explains why tasks and threads serve two distinct purposes. A minimal sketch of both patterns appears after this list.
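
The following sketch (an illustration only, not one of the book's samples) contrasts the two patterns: a parallel loop applies the same independent calculation to every index, while Parallel.Invoke runs several distinct operations concurrently. ComputeAverage and FindMaximum are hypothetical read-only operations.

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class LoopAndTaskSketch
    {
        static void Main()
        {
            double[] data = new double[10000];

            // Parallel loop (Chapter 2): no dependency between iterations.
            Parallel.For(0, data.Length, i =>
            {
                data[i] = Math.Sqrt(i);
            });

            // Parallel tasks (Chapter 3): distinct operations run concurrently.
            Parallel.Invoke(
                () => Console.WriteLine(ComputeAverage(data)),
                () => Console.WriteLine(FindMaximum(data)));
        }

        // Hypothetical read-only operations on the shared data.
        static double ComputeAverage(double[] data) { return data.Average(); }
        static double FindMaximum(double[] data) { return data.Max(); }
    }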

Parallelism with Control and Data Dependencies

Chapters 4 and 5 show patterns for concurrent operations that are constrained by both control flow and data flow:

  • Chapter 4, "Parallel Aggregation." Patterns for parallel aggregation are appropriate when the body of a parallel loop includes data dependencies, such as when calculating a sum or searching a collection for a maximum value.
  • Chapter 5, "Futures." The Futures pattern occurs when operations produce some outputs that are needed as inputs to other operations. The order of operations is constrained by a directed graph of data dependencies. Some operations are performed in parallel and some serially, depending on when inputs become available. A brief sketch of both patterns appears after this list.
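
As a brief illustration of both patterns (a sketch only, not from the book's samples), the following code computes a sum with per-task subtotals that are combined under a lock, and then uses a Task<long> as a future whose Result is read when the value is needed.

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class AggregationAndFutureSketch
    {
        static void Main()
        {
            int[] values = Enumerable.Range(1, 100000).ToArray();
            object lockObject = new object();
            long total = 0;

            // Parallel aggregation (Chapter 4): each task accumulates a private
            // subtotal; only the final combine step takes the lock.
            Parallel.ForEach(
                values,
                () => 0L,                                     // per-task initial value
                (item, loopState, subtotal) => subtotal + item,
                subtotal => { lock (lockObject) { total += subtotal; } });

            Console.WriteLine(total);

            // A future (Chapter 5): Task<long> represents a value that becomes
            // available later; reading Result waits for it if necessary.
            Task<long> futureSum = Task.Factory.StartNew(
                () => values.AsParallel().Sum(x => (long)x));
            Console.WriteLine(futureSum.Result);
        }
    }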

Dynamic Task Parallelism and Pipelines

Chapters 6 and 7 discuss some more advanced scenarios:

  • Chapter 6, "Dynamic Task Parallelism." In some cases, operations are dynamically added to the backlog of work as the computation proceeds. This pattern applies to several domains, including graph algorithms and sorting.
  • Chapter 7, "Pipelines." Use pipelines to feed successive outputs of one component to the input queue of another component, in the style of an assembly line. Parallelism results when the pipeline fills, and when more than one component is simultaneously active. A minimal pipeline sketch appears after this list.
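
As a minimal illustration of the pipeline idea (a sketch under simplifying assumptions, not one of the book's samples), each stage below reads from one BlockingCollection and writes to the next, so the stages run concurrently once the pipeline fills. The squaring transform is a hypothetical stand-in for real work.

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class PipelineSketch
    {
        static void Main()
        {
            var stage1Output = new BlockingCollection<int>(boundedCapacity: 32);
            var stage2Output = new BlockingCollection<int>(boundedCapacity: 32);

            // Stage 1: produce values.
            var produce = Task.Factory.StartNew(() =>
            {
                for (int i = 0; i < 100; i++) stage1Output.Add(i);
                stage1Output.CompleteAdding();
            });

            // Stage 2: transform each value.
            var transform = Task.Factory.StartNew(() =>
            {
                foreach (int item in stage1Output.GetConsumingEnumerable())
                    stage2Output.Add(item * item);
                stage2Output.CompleteAdding();
            });

            // Stage 3: consume the results.
            var consume = Task.Factory.StartNew(() =>
            {
                foreach (int item in stage2Output.GetConsumingEnumerable())
                    Console.WriteLine(item);
            });

            Task.WaitAll(produce, transform, consume);
        }
    }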

Supporting Material

In addition to the patterns, there are several appendices:

  • Appendix A, "Adapting Object-Oriented Patterns." This appendix gives tips for adapting some of the common object-oriented patterns, such as facades, decorators, and repositories, to multicore architectures.
  • Appendix B, "Debugging and Profiling Parallel Applications." This appendix gives you an overview of how to debug and profile parallel applications in Visual Studio 2010.
  • Appendix C, "Technology Overview." This appendix describes the various Microsoft technologies and frameworks for parallel programming.
  • Glossary. The glossary contains definitions of the terms used in this book.
  • References. The references cite the works mentioned in this book.

Everyone should read Chapters 1, 2, and 3 for an introduction and overview of the basic principles. Although the succeeding material is presented in a logical order, each chapter, from Chapter 4 on, can be read independently.

It's very tempting to take a new tool or technology and try to use it to solve whatever problem is confronting you, regardless of the tool's applicability. As the saying goes, "when all you have is a hammer, everything looks like a nail." The "everything's a nail" mentality can lead to very unfortunate results, which one hopes the bunny in Figure 2 will be able to avoid.

Note

Don't apply the patterns in this book blindly to your applications.

You also want to avoid unfortunate results in your parallel programs. Adding parallelism to your application costs time and adds complexity. For good results, you should only parallelize the parts of your application where the benefits outweigh the costs.


Figure 2

"When all you have is a hammer, everything looks like a nail."

What Is Not Covered

This book focuses more on processor-bound workloads than on I/O-bound workloads. The goal is to make computationally intensive applications run faster by making better use of the computer's available cores. As a result, the book does not focus as much on the issue of I/O latency. Nonetheless, there is some discussion of balanced workloads that are both processor intensive and have large amounts of I/O (see Chapter 7, "Pipelines"). There is also an important example for user interfaces in Chapter 5, "Futures," that illustrates concurrency for tasks with I/O.

The book describes parallelism within a single multicore node with shared memory, rather than the cluster-based High Performance Computing (HPC) Server approach that uses networked nodes with distributed memory. However, cluster programmers who want to take advantage of parallelism within a node may find the examples in this book helpful, because each node of a cluster can have multiple processing units.

Goals

After reading this book, you should be able to:

  • Answer the questions at the end of each chapter.
  • Figure out if your application fits one of the book's patterns and, if it does, judge whether a straightforward parallel implementation is likely to succeed.
  • Understand when your application doesn't fit one of these patterns. At that point, you either have to do more reading and research, or enlist the help of an expert.
  • Identify the likely causes, such as conflicting dependencies or unintentionally shared data between tasks, if your implementation of a pattern doesn't work.
  • Use the "Further Reading" sections to find more material.
