MPI: Understanding the Message Passing Interface for Parallel Programming, Slides of Parallel Computing and Programming

MPI, or Message Passing Interface, is a standard library for message passing that enables the development of portable message-passing programs in C or Fortran. These slides give an overview of the MPI standard, its core routines, and their usage for starting and terminating the MPI library, querying information, and sending and receiving messages. An example MPI program is also included.

Typology: Slides

2011/2012

Uploaded on 07/23/2012

paramita 🇮🇳

Partial preview of the text

MPI: the Message Passing Interface

• MPI defines a standard library for message passing that can be used to develop portable message-passing programs in either C or Fortran.
• The MPI standard defines both the syntax and the semantics of a core set of library routines.
• Vendor implementations of MPI are available on almost all commercial parallel computers.
• It is possible to write fully functional message-passing programs using only six routines.

MPI: the Message Passing Interface

The minimal set of MPI routines:

MPI_Init        Initializes MPI.
MPI_Finalize    Terminates MPI.
MPI_Comm_size   Determines the number of processes.
MPI_Comm_rank   Determines the label (rank) of the calling process.
MPI_Send        Sends a message.
MPI_Recv        Receives a message.

Querying Information

• The MPI_Comm_size and MPI_Comm_rank functions are used to determine the number of processes and the label of the calling process, respectively.
• The calling sequences of these routines are as follows:

  int MPI_Comm_size(MPI_Comm comm, int *size)
  int MPI_Comm_rank(MPI_Comm comm, int *rank)

• The rank of a process is an integer that ranges from zero up to the size of the communicator minus one.
Our First MPI Program

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int npes, myrank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    printf("From process %d out of %d, Hello World!\n", myrank, npes);
    MPI_Finalize();
    return 0;
}

MPI Datatypes

MPI Datatype         C Datatype
MPI_CHAR             signed char
MPI_SHORT            signed short int
MPI_INT              signed int
MPI_LONG             signed long int
MPI_UNSIGNED_CHAR    unsigned char
MPI_UNSIGNED_SHORT   unsigned short int
MPI_UNSIGNED         unsigned int
MPI_UNSIGNED_LONG    unsigned long int
MPI_FLOAT            float
MPI_DOUBLE           double
MPI_LONG_DOUBLE      long double
MPI_BYTE             (a single byte)
MPI_PACKED           (packed non-contiguous data)

MPI provides equivalent datatypes for all C datatypes, for portability reasons. The datatype MPI_BYTE corresponds to a byte (8 bits), and MPI_PACKED corresponds to a collection of data items that has been created by packing non-contiguous data.

Sending and Receiving Messages

• On the receiving end, the status variable can be used to get information about the MPI_Recv operation.
• The corresponding data structure contains:

  typedef struct MPI_Status {
      int MPI_SOURCE;
      int MPI_TAG;
      int MPI_ERROR;
  };

• The MPI_Get_count function returns the precise count of data items received:

  int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)

Example MPI Program: Adding N Numbers Using p Processors (N > p, N mod p == 0)

if (my_rank == 0) {
    printf("Enter N: ");
    scanf("%d", &N);
    for (i = 1; i < group_size; i++)
        MPI_Send(&N, 1, MPI_INT, i, i, MPI_COMM_WORLD);
    for (i = my_rank; i < N; i = i + group_size)
        sum = sum + x[i];
    for (i = 1; i < group_size; i++) {
        MPI_Recv(&tmp, 1, MPI_INT, i, i, MPI_COMM_WORLD, &status);
        sum = sum + tmp;
    }
    printf("The result = %d", sum);
}