High Performance Computing Lecture 40: Message Passing Interface (MPI), Slides of Computer Science

A lecture on high performance computing focusing on the Message Passing Interface (MPI). It covers the standard API, key functions and constants, writing MPI programs, MPI communicators, message tags, and synchronous vs. asynchronous message passing. It also introduces MPI group communication with examples of broadcast, scatter, gather, and reduce.

Typology: Slides

2012/2013

Uploaded on 04/28/2013

dewaan 🇮🇳


Partial preview of the text

High Performance Computing, Lecture 40

Slide 3: Message Passing Interface (MPI)
- Standard API
  - Hides software/hardware details
  - Portable, flexible
- Implemented as a library
[Diagram: the application sits on top of the MPI library, which runs either over custom software on custom hardware or over standard TCP/IP on standard network hardware.]

Slide 6: MPI Communicators
- Defines the communication domain of a communication operation: the set of processes that are allowed to communicate among themselves
- Initially all processes are in the communicator MPI_COMM_WORLD
- Processes have unique ranks associated with a communicator, numbered from 0 to n-1
- Other communicators can be established for groups of processes

Slide 7: Example

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        ...
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        if (myrank == 0)
            master();
        else
            slave();
        ...
        MPI_Finalize();
    }

Slide 8: Example

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        int x;
        MPI_Send(&x, 1, MPI_INT, 1, msgtag, MPI_COMM_WORLD);
    } else if (myrank == 1) {
        int x;
        MPI_Recv(&x, 1, MPI_INT, 0, msgtag, MPI_COMM_WORLD, status);
    }

Slide 11: MPI Message Tag
- Cooperating processes may need to send several messages between each other
- Message tag: used to differentiate between different types of messages being sent
- The message tag is carried within the message and is used in both send and receive calls
- If special matching is not required, the wild-card message tag MPI_ANY_TAG is used so that the receive will match any send

Slide 12: MPI: Matching Sends and Recvs
- The sender always specifies destination and tag
- The receiver can specify an exact match or use the wild cards MPI_ANY_SOURCE and MPI_ANY_TAG
- Flavours of sends/receives: synchronous and asynchronous

Slide 13: Synchronous Message Passing
- Send/receive routines that return when the message transfer has completed
- Synchronous send: waits until the complete message can be accepted by the receiving process before sending the message
- Synchronous receive: waits until the message it is expecting arrives
- Synchronous routines perform two actions: they transfer data and they synchronize processes

Slide 16: MPI Blocking and Non-blocking
- Blocking: return after local actions complete, though the message transfer may not have completed
- Non-blocking: return immediately
  - Assumes that the data storage used for the transfer is not modified by subsequent statements before the transfer uses it
  - Implementation-dependent local buffer space is used for keeping the message temporarily

Slide 17: Non-blocking Routines
- MPI_Isend(buf, count, datatype, dest, tag, comm, request)
- MPI_Irecv(buf, count, datatype, source, tag, comm, request)
- Completion is detected by MPI_Wait() and MPI_Test()
  - MPI_Wait() waits until the operation has completed and then returns
  - MPI_Test() returns with a flag set indicating whether or not the operation has completed
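The slides list the non-blocking routines only by their arguments. As a minimal sketch (not from the lecture; the variable names, tag value, and printed message are illustrative), process 0 could send one integer to process 1 like this, overlapping the transfer with other work before waiting for completion:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int myrank, x = 42, y = 0, tag = 0;
        MPI_Request request;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

        if (myrank == 0) {
            /* Non-blocking send: returns immediately; x must not be modified
               until MPI_Wait() reports completion. */
            MPI_Isend(&x, 1, MPI_INT, 1, tag, MPI_COMM_WORLD, &request);
            /* ... computation can overlap the transfer here ... */
            MPI_Wait(&request, &status);
        } else if (myrank == 1) {
            /* Post the receive, then wait for the message to arrive. */
            MPI_Irecv(&y, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &request);
            MPI_Wait(&request, &status);
            printf("Process 1 received %d\n", y);
        }

        MPI_Finalize();
        return 0;
    }

On the receiving side, MPI_Test() could be called in a loop instead of MPI_Wait() if process 1 wanted to poll for completion while doing other work.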
Slide 18: MPI Group Communication
- Until now we have looked at what are called point-to-point messages
- MPI also provides routines that send messages to a group of processes or receive messages from a group of processes
- Not absolutely necessary for programming, but more efficient than separate point-to-point routines
- Examples: broadcast, gather, scatter, reduce, barrier
- MPI_Bcast, MPI_Reduce, MPI_Allreduce, MPI_Alltoall, MPI_Scatter, MPI_Gather, MPI_Barrier

Slide 21: Scatter
[Diagram: the root's send buffer is divided into pieces; processes 0 to n-1 each call MPI_Scatter and receive one piece into their own data buffer.]

Slide 22: Gather
[Diagram: the reverse of scatter; processes 0 to n-1 each call MPI_Gather and the root collects one item from every process into its buffer.]

Slide 23: Reduce
[Diagram: processes 0 to n-1 each call MPI_Reduce; their contributions are combined element-wise with an operation such as + and the result is placed in the root's buffer.]
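The scatter, gather, and reduce diagrams map directly onto the collective routines named on slide 18. As a minimal sketch (not from the lecture; the one-integer-per-process layout and the sum-of-squares computation are illustrative), process 0 could scatter an array, let every process work on its own piece, and combine the results with a reduction:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[]) {
        int myrank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Root fills a send buffer with one integer per process. */
        int *sendbuf = NULL;
        if (myrank == 0) {
            sendbuf = malloc(nprocs * sizeof(int));
            for (int i = 0; i < nprocs; i++)
                sendbuf[i] = i + 1;
        }

        /* Scatter: every process receives one element of the root's buffer. */
        int piece = 0, sum = 0;
        MPI_Scatter(sendbuf, 1, MPI_INT, &piece, 1, MPI_INT, 0, MPI_COMM_WORLD);

        piece *= piece;  /* Each process works on its own piece. */

        /* Reduce: combine every process's piece with + at process 0. */
        MPI_Reduce(&piece, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (myrank == 0) {
            printf("Sum of squares 1..%d = %d\n", nprocs, sum);
            free(sendbuf);
        }
        MPI_Finalize();
        return 0;
    }

Replacing the MPI_Reduce call with MPI_Gather(&piece, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD) would instead collect the individual pieces at the root, matching the gather diagram; recvbuf here would be a root-side array of nprocs integers.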