Contents
1 Why Parallel Computing?
1.1 Why We Need Ever-Increasing Performance
1.2 Why We’re Building Parallel Systems
1.3 Why We Need to Write Parallel Programs
1.4 How Do We Write Parallel Programs?
1.5 What We’ll Be Doing
1.6 Concurrent, Parallel, Distributed
1.7 The Rest of the Book
1.8 A Word of Warning
1.9 Typographical Conventions
1.10 Summary
1.11 Exercises
2 Parallel Hardware and Parallel Software
2.1 Some Background
2.2 Modifications to the von Neumann Model
2.3 Parallel Hardware
2.4 Parallel Software
2.5 Input and Output
2.6 Performance
2.7 Parallel Program Design
2.8 Writing and Running Parallel Programs
2.9 Assumptions
2.10 Summary
2.11 Exercises
3 Distributed-Memory Programming with MPI
3.1 Getting Started
3.2 The Trapezoidal Rule in MPI
3.3 Dealing with I/O
3.4 Collective Communication
3.5 MPI Derived Datatypes
3.6 Performance Evaluation of MPI Programs
3.7 A Parallel Sorting Algorithm
3.8 Summary
3.9 Exercises
3.10 Programming Assignments
4 Shared-Memory Programming with Pthreads
4.1 Processes, Threads, and Pthreads
4.2 Hello, World
4.3 Matrix-Vector Multiplication
4.4 Critical Sections
4.5 Busy-Waiting
4.6 Mutexes
4.7 Producer-Consumer Synchronization and Semaphores
4.8 Barriers and Condition Variables
4.9 Read-Write Locks
4.10 Caches, Cache-Coherence, and False Sharing
4.11 Thread-Safety
4.12 Summary
4.13 Exercises
4.14 Programming Assignments
5 Shared-Memory Programming with OpenMP
5.1 Getting Started
5.2 The Trapezoidal Rule
5.3 Scope of Variables
5.4 The Reduction Clause
5.5 The Parallel For Directive
5.6 More About Loops in OpenMP: Sorting
5.7 Scheduling Loops
5.8 Producers and Consumers
5.9 Caches, Cache-Coherence, and False Sharing
5.10 Thread-Safety
5.11 Summary
5.12 Exercises
5.13 Programming Assignments
6 Parallel Program Development
6.1 Two N-Body Solvers
6.2 Tree Search
6.3 A Word of Caution
6.4 Which API?
6.5 Summary
6.6 Exercises
6.7 Programming Assignments
7 Where to Go from Here