Problem Description
I have two OpenMPI programs which I start like this:
mpirun -n 4 ./prog1 : -n 2 ./prog2
Now how do I use MPI_Comm_size(MPI_COMM_WORLD, &size)
such that I get the size values as
prog1 size=4
prog2 size=2.
As of now, I get 6 in both programs.
Recommended Answer
This is doable, albeit a bit cumbersome. The principle is to split MPI_COMM_WORLD
into communicators based on the value of argv[0], which contains the executable's name.
It could look something like this:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <mpi.h>

int main( int argc, char *argv[] ) {
    MPI_Init( &argc, &argv );

    int wRank, wSize;
    MPI_Comm_rank( MPI_COMM_WORLD, &wRank );
    MPI_Comm_size( MPI_COMM_WORLD, &wSize );

    int myLen = strlen( argv[0] ) + 1;
    int maxLen;
    // Gathering the maximum length of the executables' names
    MPI_Allreduce( &myLen, &maxLen, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD );

    // Allocating memory for all of them
    char *names = malloc( wSize * maxLen );
    // and copying my name at its place in the array
    strcpy( names + ( wRank * maxLen ), argv[0] );

    // Now collecting all executables' names
    MPI_Allgather( MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                   names, maxLen, MPI_CHAR, MPI_COMM_WORLD );

    // With that, I can sort out who is executing the same binary as me
    int binIdx = 0;
    while ( strcmp( argv[0], names + binIdx * maxLen ) != 0 ) {
        binIdx++;
    }
    free( names );

    // Now, all processes with the same binIdx value are running the same binary,
    // so I can split MPI_COMM_WORLD accordingly
    MPI_Comm binComm;
    MPI_Comm_split( MPI_COMM_WORLD, binIdx, wRank, &binComm );

    int bRank, bSize;
    MPI_Comm_rank( binComm, &bRank );
    MPI_Comm_size( binComm, &bSize );

    printf( "Hello from process WORLD %d/%d running %d/%d %s binary\n",
            wRank, wSize, bRank, bSize, argv[0] );

    MPI_Comm_free( &binComm );
    MPI_Finalize();

    return 0;
}
On my machine, I compiled and ran it as follows:
~> mpicc mpmd.c
~> cp a.out b.out
~> mpirun -n 3 ./a.out : -n 2 ./b.out
Hello from process WORLD 0/5 running 0/3 ./a.out binary
Hello from process WORLD 1/5 running 1/3 ./a.out binary
Hello from process WORLD 4/5 running 1/2 ./b.out binary
Hello from process WORLD 2/5 running 2/3 ./a.out binary
Hello from process WORLD 3/5 running 0/2 ./b.out binary
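Applied to the original launch line mpirun -n 4 ./prog1 : -n 2 ./prog2, the same split gives a binComm of size 4 inside prog1 and size 2 inside prog2, so calling MPI_Comm_size() on binComm instead of MPI_COMM_WORLD returns exactly the values asked for.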
Ideally, this could be greatly simplified by using MPI_Comm_split_type(),
if a corresponding type for sorting processes out by binary existed. Unfortunately, there is no such MPI_COMM_TYPE_
predefined in the MPI 3.1 standard. The only predefined one is MPI_COMM_TYPE_SHARED,
which sorts out processes running on the same shared-memory compute node... Too bad! Maybe something to consider for the next version of the standard?
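For comparison, here is a minimal sketch (not part of the original answer) of what the existing MPI_COMM_TYPE_SHARED split looks like. It groups processes by shared-memory node rather than by binary, so it does not solve the question above, but it shows the kind of one-call interface a hypothetical per-binary split type would offer:
#include <stdio.h>
#include <mpi.h>

int main( int argc, char *argv[] ) {
    MPI_Init( &argc, &argv );

    int wRank;
    MPI_Comm_rank( MPI_COMM_WORLD, &wRank );

    // One call splits MPI_COMM_WORLD into per-node communicators:
    // all processes sharing a node end up in the same nodeComm
    MPI_Comm nodeComm;
    MPI_Comm_split_type( MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED,
                         wRank, MPI_INFO_NULL, &nodeComm );

    int nRank, nSize;
    MPI_Comm_rank( nodeComm, &nRank );
    MPI_Comm_size( nodeComm, &nSize );
    printf( "World rank %d is rank %d of %d on its node\n", wRank, nRank, nSize );

    MPI_Comm_free( &nodeComm );
    MPI_Finalize();
    return 0;
}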