High Performance Computing Course Notes 2007-2008: Message Passing Programming I
Message Passing Programming

- Message passing is the most widely used parallel programming model.
- Message passing works by creating a number of uniquely named tasks that interact by sending and receiving messages to and from one another (hence "message passing").
- Generally, processes communicate by sending data from the address space of one process to that of another.
  - Communication between processes: via files, pipes, sockets.
  - Communication between threads within a process: via the global data area.
- Programs based on message passing can be standard sequential language programs (C/C++, Fortran), augmented with calls to library functions for sending and receiving messages.

Message Passing Interface (MPI)

- MPI is a specification, not a particular implementation: it does not specify process startup, error codes, the amount of system buffering, etc.
- MPI is a library, not a language.
- The goals of MPI: functionality, portability and efficiency.
- Message passing model -> MPI specification -> MPI implementation.

OpenMP vs MPI

- In a nutshell: MPI is used on distributed-memory systems; OpenMP is used for code parallelisation on shared-memory systems.
- Both are explicit parallelism.
- High-level control (OpenMP), lower-level control (MPI).

A little history

- Message-passing libraries were developed for a number of early distributed-memory computers.
- By 1993 there were many vendor-specific implementations.
- By 1994 MPI-1 came into being.
- By 1996 MPI-2 was finalized.

The MPI programming model

- MPI standards: MPI-1 (1.1, 1.2) and MPI-2 (2.0). Forwards compatibility is preserved between versions.
- Standard bindings exist for C, C++ and Fortran. MPI bindings for Python, Java etc. also exist, but are all non-standard. We will stick to the C binding for the lectures and coursework.
- More info on MPI: www.mpi-forum.org
- Implementations: for your laptop, pick up MPICH, a free portable implementation of MPI (http://www-unix.mcs.anl.gov/mpi/mpich/index.htm). Coursework will use MPICH.

MPI

- MPI is a complex system comprising 129 functions with numerous parameters and variants.
- Six of them are indispensable, and with those alone you can already write a large number of useful programs.
- The other functions add flexibility (datatypes), robustness (non-blocking send/receive), efficiency (ready-mode communication), modularity (communicators, groups) or convenience (collective operations, topologies).
- In the lectures we will cover the most commonly encountered functions.

The MPI programming model

- A computation comprises one or more processes that communicate by calling library routines to send and receive messages to other processes.
- (Generally) a fixed set of processes is created at the outset, one process per processor. This differs from PVM.

Intuitive interfaces for sending and receiving messages

- Send(data, destination) and Receive(data, source): the minimal interface.
- This is not enough in some situations; we also need message matching. Adding a message_id at both the send and receive interfaces, they become Send(data, destination, msg_id) and Receive(data, source, msg_id).
- The message_id is expressed as an integer, termed the message tag. It allows the programmer to deal with the arrival of messages in an orderly fashion (queue them and then deal with them in turn).
How to express the data in the send/receive interfaces

- Early stages: (address, length) for the send interface and (address, max_length) for the receive interface.
- These are not always good enough:
  - the data to be sent may not be in contiguous memory locations;
  - the storage format of the data may not be the same, or known in advance, on a heterogeneous platform.
- Eventually, a triple (address, count, datatype) was adopted to express the data to be sent, and (address, max_count, datatype) for the data to be received, reflecting the fact that a message contains much more structure than just a string of bits. For example: (vector_A, 300, MPI_REAL).
- Programmers can also construct their own datatypes.
- The interfaces now become send(address, count, datatype, destination, msg_id) and receive(address, max_count, datatype, source, msg_id).

How to distinguish messages

- The message tag is necessary, but not sufficient, so the communicator is introduced.
Communicators

- Messages are put into contexts. Contexts are allocated at run time by the system in response to programmer requests, and the system can guarantee that each generated context is unique.
- Processes belong to groups.
- The notions of context and group are combined in a single object called a communicator. A communicator identifies a group of processes and a communication context.
- The MPI library defines an initial communicator, MPI_COMM_WORLD, which contains all the processes running in the system.
- Messages from different process groups can have the same tag.
- So the send interface becomes send(address, count, datatype, destination, tag, comm).
Status of the received messages

- A message status structure is added to the receive interface. The status holds information about the source, the tag and the actual message size.
- In C, the source can be retrieved as status.MPI_SOURCE, the tag as status.MPI_TAG, and the actual message size by calling MPI_Get_count(&status, datatype, &count).
- The receive interface becomes receive(address, max_count, datatype, source, tag, communicator, status).

How to express source and destination

- The processes in a communicator (group) are identified by ranks. If a communicator contains n processes, the ranks are the integers 0 to n-1.
- The source and destination processes in the send/receive interfaces are given as ranks.

Some other issues

- In the receive interface, the tag can be a wildcard, meaning any message will be received.
- The source can also be a wildcard, matching any source.
MPI basics

First six functions (C bindings):

MPI_Send(buf, count, datatype, dest, tag, comm): send a message.

- buf: address of the send buffer
- count: number of elements to send (>= 0)
- datatype: the type of the elements
- dest: rank of the destination process
- tag: message tag
- comm: communicator (handle)

Calculating the size of the data to be sent: count * sizeof(datatype) bytes of data, starting at address buf.