I am new to programming in general and especially to MPI. I am trying to scatter several arrays from the root process to the other processes, perform some operations on those arrays, and then gather the data back. However, it ends up scattering all of the data to all of the processes, and the output adjacency matrices are incorrect, so I assume I am using scatterv and/or gatherv incorrectly. I am not sure whether I should scatter the matrices element by element, or whether there is a way to scatter an entire matrix at once. If you could take a look at my code, it would be much appreciated. Thanks!
int rank, size;
MPI_Status status;
MPI_Datatype strip;
bool passflag[Nmats];

MPI::Init();
rank = MPI::COMM_WORLD.Get_rank();
size = MPI::COMM_WORLD.Get_size();

int sendcounts[size], recvcounts, displs[size], rcounts[size];

if (rank == root) {
    fin.open(infname);
    fout.open(outfname);
    /* INPUT ADJ-MATS */
    for (i = 0; i < Nmats; i++) {
        fin >> dummy;
        for (j = 0; j < N; j++) {
            for (k = 0; k < N; k++) {
                fin >> a[i][j][k];
            }
        }
    }
}

/* Nmats = Number of matrices; N = nodes; Nmats isn't divisible by the number of processors */
Nmin = Nmats / size;
Nextra = Nmats % size;
k = 0;
for (i = 0; i < size; i++) {
    if (i < Nextra) sendcounts[i] = Nmin + 1;
    else sendcounts[i] = Nmin;
    displs[i] = k;
    k = k + sendcounts[i];
}
recvcounts = sendcounts[rank];

MPI_Type_vector(Nmin, N, N, MPI_FLOAT, &strip);
MPI_Type_commit(&strip);

MPI_Scatterv(a, sendcounts, displs, strip, a, N*N, strip, 0, MPI_COMM_WORLD);

/* Perform operations on adj-mats */

for (i = 0; i < size; i++) {
    if (i < Nextra) rcounts[i] = Nmin + 1;
    else rcounts[i] = Nextra;
    displs[i] = k;
    k = k + rcounts[i];
}

MPI_Gatherv(&passflag, 1, MPI::BOOL, &passflag, rcounts, displs, MPI::BOOL, 0, MPI_COMM_WORLD);

MPI::Finalize();

//OUTPUT ADJ_MATS
for (i = 0; i < Nmats; i++) if (passflag[i]) {
    for (j = 0; j < N; j++) {
        for (k = 0; k < N; k++) {
            fout << a[i][j][k] << " ";
        }
        fout << endl;
    }
    fout << endl;
}
fout << endl;
Hi, I was able to get the code working with static allocation, but it more or less "breaks" when I try to allocate the arrays dynamically. I am not sure whether I need to allocate the memory outside of MPI, or whether I should do it after initializing MPI. All suggestions are welcome!
//int a[Nmats][N][N];
/* Prior to adding this part of the code it ran fine, now it's no longer working */
int ***a = new int**[Nmats];
for (i = 0; i < Nmats; ++i) {
    a[i] = new int*[N];
    for (j = 0; j < N; ++j) {
        a[i][j] = new int[N];
        for (k = 0; k < N; k++) {
            a[i][j][k] = 0;
        }
    }
}

int rank, size;
MPI_Status status;
MPI_Datatype plane;
bool passflag[Nmats];

MPI::Init();
rank = MPI::COMM_WORLD.Get_rank();
size = MPI::COMM_WORLD.Get_size();

MPI_Type_contiguous(N*N, MPI_INT, &plane);
MPI_Type_commit(&plane);

int counts[size], recvcounts, displs[size+1];

if (rank == root) {
    fin.open(infname);
    fout.open(outfname);
    /* INPUT ADJ-MATS */
    for (i = 0; i < Nmats; i++) {
        fin >> dummy;
        for (j = 0; j < N; j++) {
            for (k = 0; k < N; k++) {
                fin >> a[i][j][k];
            }
        }
    }
}

Nmin = Nmats / size;
Nextra = Nmats % size;
k = 0;
for (i = 0; i < size; i++) {
    if (i < Nextra) counts[i] = Nmin + 1;
    else counts[i] = Nmin;
    displs[i] = k;
    k = k + counts[i];
}
recvcounts = counts[rank];
displs[size] = Nmats;

MPI_Scatterv(&a[displs[rank]][0][0], counts, displs, plane, &a[displs[rank]][0][0], recvcounts, plane, 0, MPI_COMM_WORLD);

/* Perform operations on matrices */

MPI_Gatherv(&passflag[displs[rank]], counts, MPI::BOOL, &passflag[displs[rank]], &counts[rank], displs, MPI::BOOL, 0, MPI_COMM_WORLD);

MPI_Type_free(&plane);
MPI::Finalize();
Best Answer
It appears that what you actually have in a are Nmats planes of N x N elements each. The way you index its elements while filling it in the nested loops suggests that the matrices are laid out contiguously in memory. Therefore you should treat a as an array of Nmats elements, each of which is an N*N compound. You only need to register a contiguous type that spans the memory of a single matrix:
MPI_Type_contiguous(N*N, MPI_FLOAT, &plane);
MPI_Type_commit(&plane);
The data distribution can then be performed without an extra array at the root by using the in-place mode of the scatter operation:
// Perform an in-place scatter
if (rank == 0)
    MPI_Scatterv(a, sendcounts, displs, plane,
                 MPI_IN_PLACE, 0, plane, 0, MPI_COMM_WORLD);
    //           ^^^^^^^^^^^^ ignored because of MPI_IN_PLACE
else
    MPI_Scatterv(a, sendcounts, displs, plane,
    //           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ignored by non-root ranks
                 a, sendcounts[rank], plane, 0, MPI_COMM_WORLD);
    //              ^^^^^^^^^^^^^^^^ !!!
Note that each rank must specify the correct number of planes it should receive by providing the corresponding element of sendcounts[] (in your code this was fixed to N*N). The in-place mode should also be used in the gather operation:
if (rank == 0)
    MPI_Gatherv(MPI_IN_PLACE, 0, MPI_BOOL,
    //          ^^^^^^^^^^^^ ignored because of MPI_IN_PLACE
                passflag, rcounts, displs, MPI_BOOL, 0, MPI_COMM_WORLD);
else
    MPI_Gatherv(passflag, rcounts[rank], MPI_BOOL,
    //                    ^^^^^^^^^^^^^ !!!
                passflag, rcounts, displs, MPI_BOOL, 0, MPI_COMM_WORLD);
    //          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ignored by non-root ranks
Note that rcounts and sendcounts have essentially the same values, so you do not have to compute them twice. Simply call the array counts and use it in both the MPI_Scatterv and the MPI_Gatherv calls. The same applies to the values of displs; do not compute them twice, since they are the same. You also do not seem to set k back to zero before the second computation (although that might not show in the code posted here).

Regarding "c++ - MPI_Scatterv and MPI_Gatherv for multiple 3D arrays", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/24633337/
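For reference, below is a minimal, self-contained sketch that puts these suggestions together: all matrices live in one contiguous block, a single counts/displs pair (expressed in whole matrices) is reused for both collectives, and both the scatter and the gather use the in-place mode at the root. The concrete sizes (Nmats = 8, N = 4), the placeholder fill and per-matrix "operation", and the use of int flags with MPI_INT instead of bool are assumptions added for illustration, not part of the original code.

#include <mpi.h>
#include <vector>

// Hypothetical sizes; the real code reads Nmats adjacency matrices of N x N from a file.
const int Nmats = 8;
const int N = 4;

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // One contiguous block for all matrices: element (m, i, j) lives at a[(m*N + i)*N + j].
    std::vector<int> a(Nmats * N * N, 0);
    std::vector<int> passflag(Nmats, 0);   // int flags instead of bool to keep the datatype simple

    if (rank == 0) {
        // Placeholder for the file input: matrix m is filled with the value m.
        for (int m = 0; m < Nmats; m++)
            for (int e = 0; e < N * N; e++)
                a[m * N * N + e] = m;
    }

    // A datatype that covers one whole N x N matrix, as suggested above.
    MPI_Datatype plane;
    MPI_Type_contiguous(N * N, MPI_INT, &plane);
    MPI_Type_commit(&plane);

    // One counts/displs pair, in units of whole matrices, shared by scatter and gather.
    std::vector<int> counts(size), displs(size);
    int Nmin = Nmats / size, Nextra = Nmats % size, k = 0;
    for (int i = 0; i < size; i++) {
        counts[i] = (i < Nextra) ? Nmin + 1 : Nmin;
        displs[i] = k;
        k += counts[i];
    }
    const int mycount = counts[rank];

    // In-place scatter: the root keeps its own share where it already is.
    if (rank == 0)
        MPI_Scatterv(a.data(), counts.data(), displs.data(), plane,
                     MPI_IN_PLACE, 0, plane, 0, MPI_COMM_WORLD);
    else
        MPI_Scatterv(NULL, NULL, NULL, plane,
                     a.data(), mycount, plane, 0, MPI_COMM_WORLD);

    // Placeholder for the per-matrix operation: mark every local matrix as "passed".
    for (int m = 0; m < mycount; m++)
        passflag[m] = 1;

    // In-place gather of the flags, reusing the same counts and displs.
    if (rank == 0)
        MPI_Gatherv(MPI_IN_PLACE, 0, MPI_INT,
                    passflag.data(), counts.data(), displs.data(), MPI_INT,
                    0, MPI_COMM_WORLD);
    else
        MPI_Gatherv(passflag.data(), mycount, MPI_INT,
                    NULL, NULL, NULL, MPI_INT,
                    0, MPI_COMM_WORLD);

    MPI_Type_free(&plane);
    MPI_Finalize();
    return 0;
}

One design note: MPI_Scatterv with the plane type assumes the send buffer is a single contiguous region, so the jagged new int**[Nmats] allocation from the second snippet (one separate heap block per row) cannot be scattered this way; a flat allocation such as the std::vector above, or a single new int[Nmats*N*N], preserves the layout the answer relies on.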