Advanced Parallel Programming with MPI and OpenMP (training course, online)

Start
29 November 2021, 07:45
End
1 December 2021, 16:30
Location
online via Zoom

(Course no. 732021 in the training programme 2021 of Forschungszentrum Jülich)

This course will be provided as an ONLINE course (using Zoom). The link to the streaming platform will be provided to registrants only.

 

Contents:

The focus is on advanced programming with MPI and OpenMP. The course addresses participants who already have some experience with C/C++ or Fortran and with MPI and OpenMP, the most popular programming models in high-performance computing (HPC).

The course will teach the newest methods in MPI-3.0/3.1 and OpenMP-4.5 and 5.0, which were developed for the efficient use of current HPC hardware. MPI topics are the group and communicator concept, process topologies, derived data types, the new MPI-3.0 Fortran language binding, one-sided communication, and the new MPI-3.0 shared memory programming model within MPI. OpenMP topics are the OpenMP-4.0 extensions, such as the vectorization directives, thread affinity and OpenMP places. (GPU programming with OpenMP-4.0 directives is not part of this course.) The course also covers performance and best-practice considerations.
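
To give an impression of one of the advanced topics, the following is a minimal, hypothetical C sketch (not part of the course material) of the MPI-3.0 shared memory programming model: ranks on the same node allocate one shared window with MPI_Win_allocate_shared and then access each other's portions directly.

    /* Hypothetical sketch: MPI-3.0 shared-memory window within one node */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm node_comm;
        MPI_Win  win;
        double  *my_part;                /* start of this rank's portion of the window */
        MPI_Aint qsize;
        int      disp_unit, node_rank, node_size;

        MPI_Init(&argc, &argv);
        /* group the ranks that can share memory, i.e., those on the same node */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &node_rank);
        MPI_Comm_size(node_comm, &node_size);

        /* each rank contributes 100 doubles to one shared window */
        MPI_Win_allocate_shared(100 * sizeof(double), sizeof(double),
                                MPI_INFO_NULL, node_comm, &my_part, &win);

        my_part[0] = (double) node_rank;   /* store into the local portion */
        MPI_Win_fence(0, win);             /* synchronize before reading remote data */

        if (node_rank == 0 && node_size > 1) {
            double *part1;                 /* direct load/store access to rank 1's part */
            MPI_Win_shared_query(win, 1, &qsize, &disp_unit, (void *) &part1);
            printf("rank 1 stored %f\n", part1[0]);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }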

Hands-on sessions (in C and Fortran) will allow participants to immediately test and understand the taught constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course provides scientific training in Computational Science and also fosters scientific exchange among the participants. It is organized by JSC in cooperation with HLRS.

Contents level                  in hours    in %

Beginner's contents             2 h         10 %
Intermediate contents           8 h         40 %
Advanced contents               10.5 h      50 %
Community-targeted contents     0 h          0 %

Agenda:

preliminary agenda

Prerequisites:

Unix; C or Fortran; familiarity with the principles of MPI, e.g., to the extent of the introductory course "MPI and OpenMP", i.e., at least the MPI process model, blocking point-to-point message passing and collective communication, and the single-program concept of parallelizing applications. For the afternoon session of the last day, participants should also be familiar with OpenMP 3.0.
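
For calibration, the expected prerequisite level corresponds roughly to being able to read and write a hypothetical fragment like the following (blocking point-to-point message passing between two ranks):

    /* Illustrative prerequisite level: blocking point-to-point message passing */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);            /* blocking send */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);   /* blocking receive */
            printf("rank 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }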

To be able to do the hands-on exercises of this course, you need a computer with an OpenMP-capable C/C++ or Fortran compiler and a corresponding, up-to-date MPI library (in the case of Fortran, the mpi_f08 module is required). Please note that the course organizers will not grant you access to an HPC system or any other compute environment. Therefore, please make sure you have a functioning working environment or access to an HPC cluster prior to the course.
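
As a quick, informal sanity check of such an environment (independent of the official TEST archive described below), a minimal hybrid MPI + OpenMP program like the following sketch can be compiled and run; the mpicc wrapper name and the -fopenmp flag are typical but compiler-dependent assumptions:

    /* Hypothetical sanity check for a hybrid MPI + OpenMP installation */
    /* build, e.g.:  mpicc -fopenmp hello_hybrid.c -o hello_hybrid      */
    /* run,   e.g.:  OMP_NUM_THREADS=4 mpirun -np 2 ./hello_hybrid      */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided, rank;

        /* MPI_THREAD_FUNNELED: only the master thread will make MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        printf("MPI rank %d, OpenMP thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }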

Please download and unpack the test archive:

tar -xvzf TEST.tar.gz

from https://fs.hlrs.de/projects/par/events/TEST.tar.gz, or

unzip TEST.zip

from https://fs.hlrs.de/projects/par/events/TEST.zip, and verify your MPI and OpenMP installation with the tests described in TEST/README.txt within the archive.

The exercise on race-condition detection (at the end of the course) is optional. It requires an installation of a race-condition detection tool, e.g., the Intel Inspector together with the Intel compiler.
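
As an illustration of what such a tool reports, the following hypothetical OpenMP fragment contains a classic data race on the shared variable sum; the race disappears once a reduction(+:sum) clause is added:

    /* Illustrative data race: all threads update 'sum' without synchronization */
    #include <stdio.h>

    int main(void)
    {
        double sum = 0.0;
        #pragma omp parallel for          /* racy: should carry reduction(+:sum) */
        for (int i = 0; i < 1000000; i++)
            sum += 1.0;                   /* unsynchronized read-modify-write */
        printf("sum = %f (expected 1000000)\n", sum);
        return 0;
    }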

Slides and exercises:

A few days before the course starts, you will receive PDF files of the slides and tar/zip files for installing the exercises on your system.

Target audience:

Supercomputer users who want to optimize their programs with MPI or OpenMP and already have experience in parallel programming

Language:

This course is given in English.

Duration:

3 days

Date:

29 November - 1 December 2021, 08:45-17:30

Venue:

online via Zoom

Number of Participants:

maximum 40

Instructor:

Dr. Rolf Rabenseifner, HLRS Stuttgart

Contact / Course organizer:

Thomas Breuer


Phone: +49 2461 61-96742


E-mail: t.breuer@fz-juelich.de

Registration:

Please register via the registration form by 31 October 2021.

Last Modified: 20.05.2022