
Parallel Programming I

Course aims

Upon successful completion of the course, students will be able to:
Analyze an algorithm and design a suitable decomposition for its parallelization
Analyze the efficiency of the partitioning design
Implement and optimize an algorithm using instruction-level parallelism, the combined computational power of many cores (multi-core CPU and many-core MIC) through OpenMP directives, and the combined computational power of many computational nodes through MPI inter-process communication (see the sketch below)
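
The aims above name OpenMP directives and MPI inter-process communication. The following is a minimal sketch of how the two are typically combined in a hybrid program: each MPI process works on one block of the problem and uses an OpenMP parallel loop within the node. The harmonic-sum kernel, the problem size N, and the block decomposition are illustrative assumptions, not material taken from the course.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long N = 100000000L;          /* total number of terms (assumed) */
    long begin = rank * (N / size);     /* simple block decomposition      */
    long end   = (rank == size - 1) ? N : begin + N / size;

    /* Thread-level parallelism within one node: OpenMP splits the loop
       across cores and the reduction clause combines per-thread sums.   */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (long i = begin; i < end; ++i)
        local += 1.0 / ((double)i + 1.0);

    /* Inter-process communication across nodes: combine partial sums
       from all MPI processes on rank 0.                                 */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("harmonic sum of first %ld terms = %f\n", N, global);

    MPI_Finalize();
    return 0;
}

Such a program would typically be built with an MPI compiler wrapper and OpenMP enabled (e.g. mpicc -fopenmp) and launched with mpirun across several nodes.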

Literature

1. Michael McCool, James Reinders, Arch Robison: Structured Parallel Programming: Patterns for Efficient Computation, 2012
2. http://mpi-forum.org, MPI: A Message-Passing Interface Standard
3. http://openmp.org, OpenMP Application Program Interface

Advised literature

1. http://software.intel.com, Optimization and Performance Tuning for Intel® Xeon Phi™ Coprocessors – Part 1: Optimization Essentials
2. Intel® 64 and IA-32 Architectures Optimization Reference Manual
3. James Reinders, James Jeffers: High Performance Parallelism Pearls: Multicore and Many-core Programming Approaches, 2014


Language of instruction: Czech, English
Code: 9600-1007
Abbreviation: PP1
Course title: Parallel Programming I
Coordinating department: IT4Innovations
Course coordinator: Mgr. Branislav Jansík, Ph.D.