MPI: Message Passing Interface for Parallel Computing (Computer Science study notes)

MPI (Message Passing Interface) is a library for message-passing communication between processes in parallel computing. It is more language-neutral than OpenMP, and a run-time manager helps launch the processes. These notes cover the MPI program model and architecture, give "Hello World" examples in C and Java, and discuss MPI communicators and basic messages, including sending and receiving messages, MPI send modes, and blocking.

Typology: Study notes (pre-2010)

Uploaded on 08/30/2009 by koofers-user-3ur

MPI

MPI = Message Passing Interface
• No shared memory
• More language-neutral than OpenMP
• Library (no new compiler) ⇒ essentially a grown-up bmsg.c
• Biased toward C and Fortran, but also implemented in other languages
• Run-time manager helps launch processes

Latest version is 2.0, but 1.3 is enough for our purposes.

MPI Program Model

Write one program...
• Run-time manager runs it P times
• Each process discovers its rank ⇒ role
• Processes coordinate through explicit messages

MPI Architecture

[Diagram: a single source file x.c is compiled to one executable x; the MPI run-time launches several copies of x as separate processes]

MPI "Hello World" in C

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
  int numprocs, rank, namelen;
  char processor_name[MPI_MAX_PROCESSOR_NAME];

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(processor_name, &namelen);

  printf("Process %d on %s out of %d\n", rank, processor_name, numprocs);

  MPI_Finalize();
  return 0;
}
```

MPI Basic Messages (Java)

```java
int me = MPI.COMM_WORLD.Rank();
int size = 1;
int[] array = new int[size];
if (me == 0) {
  array[0] = 42;
  MPI.COMM_WORLD.Send(array, 0, size, MPI.INT, 1, 8);
  System.out.println("sent " + array[0]);
} else {
  MPI.COMM_WORLD.Recv(array, 0, size, MPI.INT, 0, 8);
  System.out.println("got " + array[0]);
}
```

Sending a Message

To send:
• Specify the data as array, size, and type
• Specify the target process (by its rank)
• Specify a tag: a kind of mailbox id within the target process; the meaning of a tag is completely up to the programmer

Receiving a Message

To receive:
• Specify the data area as array, size, and type
• Specify the source process (by its rank) or use ANY_SOURCE
• Specify a tag or use ANY_TAG
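The send/receive steps above can be sketched in C as well. This is a minimal illustration, not part of the original notes; it assumes an MPI implementation (such as MPICH or Open MPI) is installed, typically compiled with `mpicc` and launched with `mpirun -np 2` (exact commands vary by installation). The file name `tag.c` and the tag value 8 are arbitrary choices for the example.

```c
/* Minimal point-to-point sketch: rank 0 sends one int with tag 8,
   rank 1 receives it, accepting any source and any tag.
   Build/run (typical): mpicc tag.c -o tag && mpirun -np 2 ./tag */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
  int rank, value;
  MPI_Status status;  /* filled in by MPI_Recv */

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0) {
    value = 42;
    /* data = &value, count = 1, type = MPI_INT,
       target rank = 1, tag = 8 (tag meaning is up to the programmer) */
    MPI_Send(&value, 1, MPI_INT, 1, 8, MPI_COMM_WORLD);
    printf("sent %d\n", value);
  } else if (rank == 1) {
    /* ANY_SOURCE / ANY_TAG wildcards; the status records
       who actually sent the message and with which tag */
    MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    printf("got %d from rank %d, tag %d\n",
           value, status.MPI_SOURCE, status.MPI_TAG);
  }

  MPI_Finalize();
  return 0;
}
```

Note that the status argument is only needed on the receive side; when the receiver uses wildcards, `status.MPI_SOURCE` and `status.MPI_TAG` are the only way to learn which message actually arrived.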