Message-Passing Programming
Jingke Li, Portland State University
CS 415/515

These lecture notes by Jingke Li of Portland State University give an overview of message-passing programming: hardware characteristics, data and computation decomposition, data alignment, communication strategies, and the main point-to-point and collective communication primitives.

Hardware Characteristics

• Nodes are independent computers, each with its own private memory.
• Processors communicate via message passing through an interconnection network.

Basic Programming Issues

• Decomposition — Partitioning data and computation, and distributing them to processors.
  • Which comes first, data decomposition or computation decomposition?
  • How should a decomposition strategy be selected?
• Communication — Passing messages between processors to facilitate data sharing and computation synchronization.
  • Figuring out the sender and receiver for each communication
  • Selecting the proper communication routines
  • Placing the communication routines in the program
  • Deciding when to invoke the communication routines

Data and Computation Decomposition

• Computation Decomposition First — Decompose the computational workload into disjoint tasks, map the tasks to the processors first, and partition and distribute the data later. Since these tasks are likely to access the same data set regardless of how the data are distributed, a large number of messages may have to be generated. This approach is not really suitable for large-scale message-passing systems.
• Data Decomposition First — Decompose the data into small portions and map them to the processors. For each data portion, the associated computation is carried out on the processor to which it is assigned.

Surface-Volume Example

[Figure: two partitions of equal volume but different surface. A 4 × 4 block has volume 4 × 4 = 16 and surface 4 × 8 = 32; an 8 × 2 block has volume 8 × 2 = 16 and surface 8 × 4 + 2 × 4 = 40.]

Communication cost grows with the surface of a partition while computation grows with its volume, so for equal volume the partition with the smaller surface is preferred.

Data Alignment

Often there are multiple data objects in a program. Data decomposition needs to take the dependencies between the objects into consideration, or a higher communication cost may result. For example:

    forall i=1,n
      forall j=1,n
        b(j) = b(j) + a(i,j)
      end forall
    end forall

[Figure: the two-dimensional domain of array a (indices i and j) and the one-dimensional domain of array b.]

There are two possible alignments for the two domains:

• Align array b with the first row of array a.
• Align array b with the first column of array a.

Data Alignment (cont.)

[Figure: the communication patterns induced by the two alignments.]

In one case the messages are confined to individual rows; in the other case the messages are very scattered. (A concrete message-passing rendering of this loop is sketched after the Communication slide below.)

A common approach to the data alignment problem is to map the data objects to a virtual array first, then decompose the virtual array and map it to the actual processors. Unfortunately, the problem of finding the optimal alignment has been proven NP-complete.

Communication

• Point-to-Point vs. Collective — A point-to-point communication involves only a single pair of send and receive, while a collective communication involves multiple pairs of sends and receives working collectively.
• Synchronous vs. Asynchronous — In synchronous communication, senders and receivers execute in a coordinated fashion (sender/receiver pairs); asynchronous communication may require that a receiver obtain data without the cooperation of the sender.
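
The slides describe decomposition and communication abstractly and do not name a particular message-passing library. As one concrete, purely illustrative rendering, the following C sketch assumes MPI: the rows of a are block-distributed across processes (data decomposition first), each process accumulates a partial copy of b from its own rows, and a single collective reduction combines the partial copies, so the b(j) = b(j) + a(i,j) loop generates no scattered point-to-point messages.

    /* Hypothetical sketch, assuming MPI (not named in the slides).
     * Row-block decomposition of the b(j) += a(i,j) loop; the array
     * size N and the data values are placeholders for illustration. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1024

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Data decomposition first: this rank owns rows [lo, hi) of a. */
        int rows = N / nprocs;                  /* assumes nprocs divides N */
        int lo = rank * rows, hi = lo + rows;

        double *a = malloc((size_t)rows * N * sizeof *a);   /* local block only */
        double *b_partial = calloc(N, sizeof *b_partial);
        double *b = calloc(N, sizeof *b);
        for (int k = 0; k < rows * N; k++) a[k] = 1.0;      /* placeholder data */

        /* Owner computes: accumulate the contribution of the local rows. */
        for (int i = lo; i < hi; i++)
            for (int j = 0; j < N; j++)
                b_partial[j] += a[(i - lo) * N + j];

        /* One collective reduce combines all partial copies of b at rank 0. */
        MPI_Reduce(b_partial, b, N, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("b[0] = %g (expect %d)\n", b[0], N);
        free(a); free(b_partial); free(b);
        MPI_Finalize();
        return 0;
    }

With this distribution each rank's loop body is entirely local, and all data sharing is concentrated in the final reduction; choosing a different distribution or alignment of b would change how much of the work turns into messages.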

Send/Receive Primitives

Send and receive primitives provide the basic message-passing service from one source node to one destination node. The two nodes do not have to be connected by a physical link; any node can send a message to any other node.

• Sender — The user program issues a send routine call; the routine copies the data from user space and sends it to the destination (it may use an intermediate buffer).
• Receiver — The user program issues a receive routine call; the routine receives the message sent by the sender (it may need to wait) and places it in user space.

Blocking vs. Non-Blocking

Depending on the timing of its return, a send or receive routine can be either blocking or non-blocking.

• Blocking means the send/receive routine will block until it is “safe” to return — when a blocking routine returns, it is safe to issue other sends and receives.
  • For a blocking send routine, “safe” means that the message data can be modified and the communication buffer can be reused.
  • For a blocking receive routine, “safe” means that the message has been received and is available for use. If the message has not arrived by the time the receive routine is issued, the routine waits until it does.
• If one is not careful, blocking sends and receives can lead to deadlock (a deadlock-free exchange is sketched after the Collective Communications slides below).

“One-Way” Communication

The data producers (senders) are passive; they only respond to requests from the consumers (receivers): “remote reads” and “remote writes.”

[Figure: consumer tasks C issuing read(1), read(3), and write(5) requests to data tasks D that hold elements 0 through 7 of a distributed data structure.]

The distributed data structure is encapsulated in a set of tasks responsible only for responding to read and write requests.

Collective Communications

Collective communications are synchronous concurrent messages implementing global communication patterns.

• One-to-Many — Spread data from one node to many other nodes.
  • Broadcast: send the same data to every other node.
  • Multicast: send the same data to a set of nodes.
  • Scatter: send different data to different nodes.

[Figure: broadcast and multicast patterns.]

Collective Communications (cont.)

• Many-to-One — Combine messages from many nodes at one node.
  • Reduce: combine multiple data items into a single item using a reduction function (+, −, ×, /, min, max, etc.).
  • Gather: collect multiple data items at a single node.

[Figure: reduce and gather patterns.]

Collective Communications (cont.)

• Many-to-Many — Concurrent, disjoint send/receive pairs.

[Figure: many-to-many patterns (shift, transpose, rotate, flip, skew).]
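
The deadlock warning above can be made concrete. The sketch below is again a hypothetical illustration assuming MPI (the slides discuss send/receive routines in general terms and do not name a library): two processes exchange a value, and each posts its receive as a non-blocking call before issuing the blocking send, so the exchange cannot deadlock. Run it with exactly two processes.

    /* Hypothetical deadlock-free pairwise exchange, assuming MPI.
     * If both ranks instead called a blocking receive before sending,
     * each would wait forever for the other. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int peer = 1 - rank;                   /* assumes exactly two ranks */

        int sendbuf = rank, recvbuf = -1;
        MPI_Request req;

        /* Post the receive first (non-blocking): it returns immediately,
         * leaving a matching receive outstanding for the peer's send. */
        MPI_Irecv(&recvbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, &req);

        /* Blocking send: when it returns, sendbuf may safely be reused. */
        MPI_Send(&sendbuf, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);

        /* Block only now; when MPI_Wait returns, recvbuf holds the data. */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        printf("rank %d received %d from rank %d\n", rank, recvbuf, peer);
        MPI_Finalize();
        return 0;
    }

Posting receives early and delaying the wait is a common way to keep blocking semantics safe while avoiding cyclic waits.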

Programming Message-Passing MIMD Systems

Theoretically, each node in a MIMD system can execute an independent program. In the real world, people typically follow the SPMD (Same Program Multiple Data) model to program a MIMD system: all processors execute the same program, and control statements in the program customize the code for the individual processors. Example (rendered concretely in the sketch at the end of these notes):

    if (mynode = source node) send data to dest node
    if (mynode = dest node)   receive data from source node
    broadcast(source node, data, size)

A variant of the SPMD model is the Master/Slave model, where a copy of the master program is executed on a single master node and a copy of the slave program is executed on multiple slave nodes.

Other Aspects of Communication

• Structured vs. Unstructured — In structured communication, a task and its neighbors form a regular structure, such as a tree or grid; unstructured communication networks may be arbitrary graphs.
• Static vs. Dynamic — In static communication, the identity of communication partners does not change over time; in dynamic communication, the identity may be computed at runtime.
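
As a concrete rendering of the SPMD pseudocode above, the following sketch again assumes MPI, which the slides do not name; every node runs this same program, the rank tests customize per-node behavior, and the library broadcast plays the role of broadcast(source node, data, size). The node numbers SOURCE and DEST are illustrative; run with at least two processes.

    /* Hypothetical SPMD sketch, assuming MPI: one program executed by
     * every node, with rank tests selecting per-node behavior. */
    #include <mpi.h>
    #include <stdio.h>

    #define SOURCE 0    /* illustrative choice of source node */
    #define DEST   1    /* illustrative choice of destination node */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int mynode, nnodes;
        MPI_Comm_rank(MPI_COMM_WORLD, &mynode);
        MPI_Comm_size(MPI_COMM_WORLD, &nnodes);

        int data = 0;

        /* "if (mynode = source node) send data to dest node" */
        if (mynode == SOURCE) {
            data = 42;
            MPI_Send(&data, 1, MPI_INT, DEST, 0, MPI_COMM_WORLD);
        }

        /* "if (mynode = dest node) receive data from source node" */
        if (mynode == DEST) {
            MPI_Recv(&data, 1, MPI_INT, SOURCE, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        /* "broadcast(source node, data, size)": every node makes the same
         * collective call; afterwards all nodes hold the source's value. */
        MPI_Bcast(&data, 1, MPI_INT, SOURCE, MPI_COMM_WORLD);

        printf("node %d of %d has data = %d\n", mynode, nnodes, data);
        MPI_Finalize();
        return 0;
    }

In the Master/Slave variant, the same rank test would instead select between a master routine on one node and a slave routine on all the others, while the executable remains a single program.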