Message Passing with PVM and MPI: A Comparison of Parallel Programming Environments (Study Notes, Computer Science)

An overview of the Message Passing Interface (MPI) and the Parallel Virtual Machine (PVM), two popular parallel programming environments. The notes cover the features, benefits, and differences between these systems, including their architecture, message passing mechanisms, process control, and performance issues, and include sample code for both PVM and MPI.

Message Passing with PVM and MPI
CMSC 818 – Alan Sussman (from J. Hollingsworth)

PVM
Provides a simple, free, portable parallel environment
Runs on everything
– parallel hardware: SMPs, MPPs, vector machines
– networks of workstations: ATM, Ethernet
  • UNIX machines and PCs running the Win32 API
– works on a heterogeneous collection of machines
  • handles type conversion as needed
Provides two things
– message passing library
  • point-to-point messages
  • synchronization: barriers, reductions
– OS support
  • process creation (pvm_spawn)

PVM Process Control
Creating a process
– pvm_spawn(task, argv, flag, where, ntask, tids)
– flag and where provide control over where tasks are started
– ntask controls how many copies are started
– the program must be installed on the target machine
Ending a task
– pvm_exit
– does not exit the process, it just leaves the PVM virtual machine
Info functions
– pvm_mytid() – get the process task id

PVM Group Operations
Group is the unit of communication
– a collection of one or more processes
– processes join a group with pvm_joingroup("<group name>")
– each process in the group has a unique id
  • pvm_gettid("<group name>")
Barrier
– can involve a subset of the processes in the group
– pvm_barrier("<group name>", count)
Reduction operations
– pvm_reduce(void (*func)(), void *data, int count, int datatype, int msgtag, char *group, int rootinst)
  • result is returned to the rootinst node
  • does not block
– pre-defined funcs: PvmMin, PvmMax, PvmSum, PvmProduct

PVM Performance Issues
Messages have to go through the PVM daemon (pvmd)
– can use the direct route option to avoid this problem
Packing messages
– semantics imply a copy
– extra function call to pack messages
Heterogeneous support
– information is sent in a machine-independent format
– has a short-circuit option for known homogeneous communication
  • passes data in native format in that case
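The process control and group operations above combine naturally in an SPMD program. The following is a minimal sketch, not taken from the course notes: the program name "reduce_demo" and the group name "workers" are illustrative, and it assumes the executable is installed on the target hosts as pvm_spawn requires.

/* Hypothetical SPMD example: every copy of this program joins a group,
 * the first instance spawns the remaining workers, all synchronize at a
 * barrier, and a global sum is computed with pvm_reduce. */
#include <stdio.h>
#include <pvm3.h>

#define NPROCS 4

int main(int argc, char **argv)
{
    int mytid = pvm_mytid();               /* enroll in PVM, get task id */
    int inum  = pvm_joingroup("workers");  /* instance number within the group */
    int tids[NPROCS];

    if (inum == 0) {
        /* first instance acts as master and starts the other copies;
         * "reduce_demo" must already be installed on the target hosts */
        pvm_spawn("reduce_demo", NULL, PvmTaskDefault, "", NPROCS - 1, tids);
    }

    /* wait until all NPROCS members have joined the group */
    pvm_barrier("workers", NPROCS);

    int value = inum;                      /* each task contributes its instance number */
    /* global sum; the result lands only in the task with group instance 0 */
    pvm_reduce(PvmSum, &value, 1, PVM_INT, 99, "workers", 0);

    if (inum == 0)
        printf("sum of instance numbers = %d\n", value);

    pvm_barrier("workers", NPROCS);        /* keep the group intact until all are done */
    pvm_lvgroup("workers");
    pvm_exit();                            /* leave the virtual machine; the process keeps running */
    return 0;
}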
MPI Communicators
Provide a named set of processes for communication
– plus a context – a system-allocated unique tag
All processes within a communicator can be named
– numbered from 0…n-1
Allows libraries to be constructed
– the application creates communicators
– the library uses them
– prevents problems with posting wildcard receives
  • adds a communicator scope to each receive
All programs start with MPI_COMM_WORLD
– functions for creating communicators from other communicators (split, duplicate, etc.)
– functions for finding out about the processes within a communicator (size, my_rank, …)

Non-Blocking Point-to-Point Functions
Two parts
– post the operation
– wait for results
Also includes a poll/test option
– checks whether the operation has finished
Semantics
– must not alter the buffer while the operation is pending (until wait returns or test returns true)

Collective Communication
The communicator specifies the process group that participates
Various operations that may be optimized in an MPI implementation
– barrier synchronization
– broadcast
– gather/scatter (with one destination, or all in the group)
– reduction operations – predefined and user-defined
  • also with one destination or all in the group
– scan – prefix reductions
Collective operations may or may not synchronize
– up to the implementation, so the application cannot make assumptions
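As an illustration of the communicator functions mentioned above (not part of the original notes), the sketch below splits MPI_COMM_WORLD into two sub-communicators with MPI_Comm_split and queries size and rank within the new communicator; the even/odd split is arbitrary.

/* Sketch: split MPI_COMM_WORLD into two sub-communicators (even and odd
 * world ranks) and query size/rank inside the new communicator. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* color chooses the sub-communicator, key orders ranks inside it */
    int color = world_rank % 2;
    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    int sub_rank, sub_size;
    MPI_Comm_rank(subcomm, &sub_rank);
    MPI_Comm_size(subcomm, &sub_size);
    printf("world %d/%d -> subcomm %d: rank %d/%d\n",
           world_rank, world_size, color, sub_rank, sub_size);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}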
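A small sketch of the post/wait pattern for the non-blocking point-to-point calls described above (illustrative, not from the notes): ranks 0 and 1 exchange one integer with MPI_Irecv/MPI_Isend and must not touch the buffers until MPI_Waitall completes (or MPI_Test reports completion).

/* Sketch: non-blocking exchange between ranks 0 and 1. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank < 2) {                      /* only ranks 0 and 1 participate */
        int peer = 1 - rank;
        int sendbuf = rank, recvbuf = -1;
        MPI_Request reqs[2];

        /* post both operations, then other work could overlap with them */
        MPI_Irecv(&recvbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&sendbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &reqs[1]);

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* buffers are safe to reuse now */
        printf("rank %d received %d\n", rank, recvbuf);
    }

    MPI_Finalize();
    return 0;
}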

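To make the collective operations concrete, here is a short illustrative sketch (again not from the notes) that broadcasts a value from rank 0 and then sums contributions back to rank 0 with the predefined MPI_SUM reduction; every process in the communicator must make both calls.

/* Sketch: broadcast from rank 0 followed by a sum reduction to rank 0. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int seed = (rank == 0) ? 42 : 0;
    MPI_Bcast(&seed, 1, MPI_INT, 0, MPI_COMM_WORLD);   /* every rank now has 42 */

    int contribution = seed + rank;
    int total = 0;
    /* predefined reduction MPI_SUM; the result is defined only at the root */
    MPI_Reduce(&contribution, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %d\n", size, total);

    MPI_Finalize();
    return 0;
}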

