Parallel Programming with PVM and MPI: Message Passing and Process Control

An overview of the Parallel Virtual Machine (PVM) and the Message Passing Interface (MPI) for parallel programming. PVM is a free, portable parallel environment that provides message passing and OS services, while MPI is a standard for message passing between parallel processes. These notes cover the features, process control, and message passing functions of both PVM and MPI, along with their performance issues and differences, and include a sample PVM program and references for further learning.


Message Passing with PVM and MPI
CMSC 818 - Alan Sussman (from J. Hollingsworth)

PVM

■ Provides a simple, free, portable parallel environment
■ Runs on everything
– parallel hardware: SMPs, MPPs, vector machines
– networks of workstations: ATM, Ethernet
• UNIX machines and PCs running the Win32 API
– works on a heterogeneous collection of machines
• handles type conversion as needed
■ Provides two things
– a message passing library
• point-to-point messages
• synchronization: barriers, reductions
– OS support
• process creation (pvm_spawn)

PVM Environment (UNIX)

[Figure: a bus network connecting a heterogeneous set of hosts (Sun SPARC, IBM RS/6000, DECmpp 12000, Cray Y-MP); each host runs one PVMD, with application processes attached to the local PVMD]

■ One PVMD per machine
– all processes communicate through the pvmd (by default)
■ Any number of application processes per node

PVM Message Passing

■ All messages have tags
– an integer to identify the message
– defined by the user
■ Messages are constructed, then sent
– pvm_pk{int,char,float}(*var, count, stride) to pack
– pvm_unpk{int,char,float} to unpack
■ All processes are named by task ids (tids)
– local and remote processes are treated the same
■ Primary message passing functions
– pvm_send(tid, tag)
– pvm_recv(tid, tag)

PVM Process Control

■ Creating a process
– pvm_spawn(task, argv, flag, where, ntask, tids)
– flag and where control where tasks are started
– ntask controls how many copies are started
– the program must be installed on the target machine
■ Ending a task
– pvm_exit
– does not exit the process, just detaches it from the PVM machine
■ Info functions
– pvm_mytid() - get the process task id

PVM Group Operations

■ The group is the unit of communication
– a collection of one or more processes
– processes join a group with pvm_joingroup("<group name>")
– each process in the group has a unique id
• pvm_gettid("<group name>")
■ Barrier
– can involve a subset of the processes in the group
– pvm_barrier("<group name>", count)
■ Reduction operations
– pvm_reduce(void (*func)(), void *data, int count, int datatype, int msgtag, char *group, int rootinst)
• the result is returned to the rootinst node
• does not block
– pre-defined funcs: PvmMin, PvmMax, PvmSum, PvmProduct

PVM Performance Issues

■ Messages have to go through the PVMD
– the direct route option can be used to avoid this hop
■ Packing messages
– the semantics imply a copy
– an extra function call is needed to pack messages
■ Heterogeneous support
– information is sent in a machine-independent format
– a short-circuit option exists for communication known to be homogeneous
• data is then passed in native format
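The following is a minimal sketch, not from the slides, that ties together the group operations and the direct-route option just described: it assumes NPROC tasks have already been started (e.g., spawned from the PVM console), and the group name "workers" and message tag 42 are illustrative choices.

#include <stdio.h>
#include "pvm3.h"

#define NPROC 4   /* assumed number of tasks started elsewhere */

int main(int argc, char **argv) {
    int inum, value;

    /* Request direct task-to-task routes so messages bypass the pvmd */
    pvm_setopt(PvmRoute, PvmRouteDirect);

    /* Join the group; the return value is this task's instance number */
    inum = pvm_joingroup("workers");

    /* Wait until all NPROC members have joined */
    pvm_barrier("workers", NPROC);

    /* Global sum of each task's instance number; the result is
       deposited in value on the root instance (0) only */
    value = inum;
    pvm_reduce(PvmSum, &value, 1, PVM_INT, 42, "workers", 0);
    if (inum == 0)
        printf("sum of instance numbers = %d\n", value);

    /* Make sure everyone is done before leaving the group */
    pvm_barrier("workers", NPROC);
    pvm_lvgroup("workers");
    pvm_exit();
    return 0;
}

The full ping-pong program below shows the remaining pieces: spawning a task and point-to-point pack/send/recv/unpack.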
Sample PVM Program

/* Ping-pong between two PVM tasks: the first task spawns the second,
 * then they bounce a message back and forth. */
#include <stdio.h>
#include <stdlib.h>
#include "pvm3.h"

/* Assumed values - the original program defines these elsewhere */
#define MYNAME "pingpong"   /* name of this executable, as installed */
#define MESSAGESIZE 1024
#define ITERATIONS 100
#define MSGID 1             /* message tag */

int main(int argc, char **argv) {
    int myGroupNum;
    int friendTid;
    int mytid;
    int tids[2];
    int message[MESSAGESIZE];
    int i, okSpawn;

    /* Initialize process and spawn if necessary */
    myGroupNum = pvm_joingroup("ping-pong");
    mytid = pvm_mytid();
    if (myGroupNum == 0) {
        /* I am the first process */
        pvm_catchout(stdout);
        okSpawn = pvm_spawn(MYNAME, argv, 0, "", 1, &friendTid);
        if (okSpawn != 1) {
            printf("Can't spawn a copy of myself!\n");
            pvm_exit();
            exit(1);
        }
        tids[0] = mytid;
        tids[1] = friendTid;
    } else {
        /* I am the second process */
        friendTid = pvm_parent();
        tids[0] = friendTid;
        tids[1] = mytid;
    }
    pvm_barrier("ping-pong", 2);

    /* Main loop body */
    if (myGroupNum == 0) {
        /* Initialize the message */
        for (i = 0; i < MESSAGESIZE; i++) {
            message[i] = '1';
        }
        /* Now start passing the message back and forth */
        for (i = 0; i < ITERATIONS; i++) {
            pvm_initsend(PvmDataDefault);
            pvm_pkint(message, MESSAGESIZE, 1);
            pvm_send(friendTid, MSGID);
            pvm_recv(friendTid, MSGID);
            pvm_upkint(message, MESSAGESIZE, 1);
        }
    } else {
        /* Echo the message back, once per iteration */
        for (i = 0; i < ITERATIONS; i++) {
            pvm_recv(friendTid, MSGID);
            pvm_upkint(message, MESSAGESIZE, 1);
            pvm_initsend(PvmDataDefault);
            pvm_pkint(message, MESSAGESIZE, 1);
            pvm_send(friendTid, MSGID);
        }
    }
    pvm_exit();
    exit(0);
}

MPI

■ Goals:
– standardize previous message passing systems:
• PVM, P4, NX, MPL, …
– support copy-free message passing
– be portable to many platforms
■ Features:
– point-to-point messaging
– group/collective communications
– profiling interface: every function has a name-shifted version
■ Buffering (in standard mode)
– no guarantee that there are buffers
– it is possible that send will block until receive is called
■ Delivery order
– two sends from the same process to the same destination will arrive in order
– no guarantee of fairness between processes on receive

MPI Communicators

■ Provide a named set of processes for communication
– plus a context: a system-allocated unique tag
■ All processes within a communicator can be named
– numbered from 0 to n-1
■ Allow libraries to be constructed
– the application creates communicators
– the library uses them
– prevents problems with posting wildcard receives
• adds a communicator scope to each receive
■ All programs start with MPI_COMM_WORLD
– functions exist for creating communicators from other communicators (split, duplicate, etc.)
– functions exist for finding out about the processes within a communicator (size, my_rank, …)

Non-Blocking Point-to-Point Functions

■ Two parts
– post the operation
– wait for results
■ Also include a poll/test option
– checks whether the operation has finished
■ Semantics
– must not alter the buffer while the operation is pending (until wait returns or test returns true)

Collective Communication

■ A communicator specifies the process group that participates
■ Various operations, which may be optimized in an MPI implementation
– barrier synchronization
– broadcast
– gather/scatter (with one destination, or all in the group)
– reduction operations: predefined and user-defined
• also with one destination or all in the group
– scan: prefix reductions
■ Collective operations may or may not synchronize
– up to the implementation, so the application can't make assumptions
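As a concrete illustration of the non-blocking point-to-point functions above, here is a minimal sketch, not from the slides: ranks 0 and 1 exchange one integer by posting MPI_Irecv/MPI_Isend and then waiting, leaving each buffer untouched while its operation is pending. The tag value 7 is an arbitrary choice.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, out = 0, in = 0;
    MPI_Request sreq, rreq;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my number, 0..n-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank < 2) {
        int peer = 1 - rank;   /* ranks 0 and 1 pair up */
        out = rank;
        /* Post both operations ... */
        MPI_Irecv(&in, 1, MPI_INT, peer, 7, MPI_COMM_WORLD, &rreq);
        MPI_Isend(&out, 1, MPI_INT, peer, 7, MPI_COMM_WORLD, &sreq);
        /* ... then wait; neither buffer may be touched until its wait returns */
        MPI_Wait(&sreq, MPI_STATUS_IGNORE);
        MPI_Wait(&rreq, MPI_STATUS_IGNORE);
        printf("rank %d received %d\n", rank, in);
    }
    MPI_Finalize();
    return 0;
}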
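Similarly, a minimal sketch of the collective operations, again not from the slides: every process in the communicator makes the same call, and only the barrier is guaranteed to synchronize. Root rank 0 and the broadcast value 42 are arbitrary choices.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, root = 0, value, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Broadcast: the root's value reaches every process in the group */
    value = (rank == root) ? 42 : 0;
    MPI_Bcast(&value, 1, MPI_INT, root, MPI_COMM_WORLD);

    /* Reduction with one destination: the sum of ranks arrives at root */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD);
    if (rank == root)
        printf("broadcast %d, sum of ranks = %d\n", value, sum);

    /* Barrier synchronization across the whole communicator */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}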


