Lectures:
Introduction.
Motivation, historical remarks, current trends. Relation to High Performance Computing. Area of interest. Assumed knowledge.
Parallel computer systems.
Flynn's classification and memory-related categories. Terminology. Review of architectures. Examples of supercomputers. Clusters.
Interconnection subsystems.
Technical solutions (bus, hypercube, multistage network, etc.). Consequences for the design of parallel algorithms.
Introduction to parallel programming.
Parallel programming models (overview): message passing, shared variables, data parallel approach. Examples. Implementation tools. Building a parallel application.
Message passing model.
Principles and characteristics. Types of communication, basic concepts. Implementation conditions. Message passing systems, major representatives.
Parallel Virtual Machine.
Introduction to PVM. Historical remarks. Components and user's interface. Available implementations (Quad, Ultra). Illustrative example.
Parallel Virtual Machine (continued).
Overview of constructs and library routines: process control, information retrieval, message passing, etc. Collective communication. Advanced techniques.
Development of parallel algorithms.
Domain, functional decomposition. Communication analysis. Agglomeration. Mapping onto processors. Load-balancing techniques. Examples, perfectly parallel problems.
Application in semester projects.
Message Passing Interface.
Development of the MPI standard. Comparison with PVM, specific features (derived datatypes, virtual topologies, etc.).
Analysis of parallel algorithms.
Performance models, metrics. Parallel execution time. Speedup, efficiency, cost. Superlinear speedup. Amdahl's law. Experimental studies.
Analysis of parallel algorithms (continued).
Scalability, fixed and variable problem size. Asymptotic analysis.
Programming of symmetric processors.
Technical conditions. Low-level tools, operating system bindings. Thread-based programming.
The OpenMP standard.
Motivation and historical remarks. Constructs (parallel regions, load-balancing directives, synchronization, locking, etc.). Example.
Selected parallel algorithms.
Numerical algorithms, sorting, graph algorithms and other applications. Summary and trends. Alternative tools for the development of parallel applications (High Performance Fortran, Parallel Matlab, etc.). Developments in hardware. Parallel programming on personal computers.
Computer labs:
Tutorial organization. "Parallel thinking".
Working environment.
Multiprocessor Quad and the Ultra network of workstations.
Building parallel codes in PVM.
Simple programs under PVM.
Domain decomposition.
Project consultations (throughout all labs).
Domain decomposition (cont.).
Functional decomposition.
Collective communication in PVM. XPVM.
Analysis of algorithms.
MPI. The MPICH and LAM implementations.
MPI (cont.).
Collective communication.
Advanced techniques.
Presentations.
Projects.