Terminated in academic year 2015/2016

Parallel Programming I

Type of study Follow-up Master
Language of instruction Czech
Code 460-4040/01
Abbreviation PPA I
Course title Parallel Programming I
Credits 4
Coordinating department Department of Computer Science
Course coordinator prof. Ing. Pavel Krömer, Ph.D.

Subject syllabus

Lectures:

Introduction.
Motivation, historical remarks, actual trends. Relation to High Performance Computing. Area of interest. Assumed knowledge.

Parallel computer systems.
Flynn's classification and memory-related categories. Used terminology. Review of architectures. Examples of supercomputers. Clusters.

Interconnection subsystems.
Technical solutions (bus, hypercube, multistage network, etc.). Consequences for the design of parallel algorithms.

Introduction to parallel programming.
Parallel programming models (overview): message passing, shared variables, data parallel approach. Examples. Implementation tools. Building a parallel application.

Message passing model.
Principles and characteristics. Types of communication, basic concepts. Implementation conditions. Message passing systems, major representatives.

Parallel Virtual Machine.
Introduction to PVM. Historical remarks. Components and user's interface. Available implementations (Quad, Ultra). Illustrative example.

Parallel Virtual Machine (continued).
Overview of constructs and library routines: process control, information retrieval, message passing, etc. Collective communication. Advanced techniques.
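The routines above combine into the classic PVM master/worker pattern. The sketch below (assuming a running pvmd; the executable name "doubler" and the doubling task are illustrative, not from the course) uses pvm_parent() to distinguish the two roles in a single program, and shows the pack/send and receive/unpack sequence.

```c
/* Sketch of the PVM master/worker pattern in one executable; requires
 * a running PVM daemon and this binary installed under the name
 * "doubler" (an illustrative name). */
#include <stdio.h>
#include "pvm3.h"

#define TAG_WORK   1
#define TAG_RESULT 2

int main(void) {
    int parent = pvm_parent();        /* enrolls this task in PVM */

    if (parent == PvmNoParent) {      /* master branch */
        int wtid, n = 21, result;
        pvm_spawn("doubler", NULL, PvmTaskDefault, "", 1, &wtid);
        pvm_initsend(PvmDataDefault); /* pack and send the work item */
        pvm_pkint(&n, 1, 1);
        pvm_send(wtid, TAG_WORK);
        pvm_recv(wtid, TAG_RESULT);   /* blocking receive of the answer */
        pvm_upkint(&result, 1, 1);
        printf("worker %x returned %d\n", wtid, result);
    } else {                          /* worker branch */
        int n;
        pvm_recv(parent, TAG_WORK);
        pvm_upkint(&n, 1, 1);
        n *= 2;
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&n, 1, 1);
        pvm_send(parent, TAG_RESULT);
    }
    pvm_exit();
    return 0;
}
```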

Development of parallel algorithms.
Domain, functional decomposition. Communication analysis. Agglomeration. Mapping onto processors. Load-balancing techniques. Examples, perfectly parallel problems.
Application in semester projects.

Message Passing Interface.
Development of the MPI standard. Comparison with PVM, specific features (derived datatypes, virtual topologies, etc.).
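For comparison with the PVM interface, the same master/worker exchange looks as follows in MPI. This is a sketch assuming an MPI implementation such as MPICH and a launch with at least two processes (e.g. mpirun -np 2); the doubling task is again illustrative.

```c
/* The master/worker exchange expressed with MPI point-to-point calls;
 * run with two or more processes, e.g. via mpirun -np 2. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, n;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                  /* master: send work, await result */
        n = 21;
        MPI_Send(&n, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        MPI_Recv(&n, 1, MPI_INT, 1, 2, MPI_COMM_WORLD, &st);
        printf("rank 1 returned %d\n", n);
    } else if (rank == 1) {           /* worker: double and send back */
        MPI_Recv(&n, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &st);
        n *= 2;
        MPI_Send(&n, 1, MPI_INT, 0, 2, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
```

Note that MPI names an explicit communicator (MPI_COMM_WORLD) and typed buffers, where PVM uses task identifiers and separate pack/unpack calls.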

Analysis of parallel algorithms.
Performance models, metrics. Parallel execution time. Speedup, efficiency, cost. Superlinear speedup. Amdahl's law. Experimental studies.

Analysis of parallel algorithms (continued).
Scalability, fixed and variable problem size. Asymptotic analysis.

Programming of symmetric processors.
Technical conditions. Low-level tools, operating system bindings. Thread-based programming.

The OpenMP standard.
Motivation and historical remarks. Constructs (parallel regions, load-balancing directives, synchronization, locking...). Example.

Selected parallel algorithms.
Numerical algorithms, sorting, graph algorithms and other applications. Summary and trends. Alternative tools for the development of parallel applications (High Performance Fortran, Parallel Matlab, etc.). Developments in hardware. Parallel programming on personal computers.


Computer labs:

Tutorial organization. "Parallel thinking".

Working environment.
The Quad multiprocessor and the Ultra network of workstations.

Building parallel codes in PVM.

Simple programs under PVM.

Domain decomposition.
Project consultations (throughout all labs).

Domain decomposition (cont.).

Functional decomposition.

Collective communication in PVM. XPVM.

Analysis of algorithms.

MPI. The MPICH and LAM implementations.

MPI (cont.).
Collective communication.

Advanced techniques.

Presentations.

Projects.

Literature

I. Foster: Designing and Building Parallel Programs. Addison-Wesley, 1995.
Al Geist et al.: PVM: Parallel Virtual Machine. A User's Guide and Tutorial for Networked Parallel Computing. The MIT Press, 1994.
MPI: A Message-Passing Interface Standard. Message Passing Interface Forum, University of Tennessee, June 1995.

Advised literature

K. Ježek et al.: Paralelní architektury a programy. ZČU Plzeň, 1997. (In Czech)
B. Wilkinson, M. Allen: Parallel Programming. Prentice Hall, 1999.
R. Chandra et al.: Parallel Programming in OpenMP. Morgan Kaufmann Publishers, 2001.