I am new to MPI and wrote the following program in C. I don't want to use pointers; instead I want to declare the arrays as shown below. The first array element is read correctly, but after that the remaining elements are not. Can you tell me whether this is the correct way to use scatter and gather?
Here is the result I get:
$ mpicc test.c -o test
$ mpirun -np 4 test
1. Processor 0 has data: 0 1 2 3
2. Processor 0 has data 0
3. Processor 0 doubling the data, now has 5
2. Processor 1 has data 32767
3. Processor 1 doubling the data, now has 5
2. Processor 2 has data -437713961
3. Processor 2 doubling the data, now has 5
2. Processor 3 has data 60
3. Processor 3 doubling the data, now has 5
4. Processor 0 has data: 5 1 2 3
The correct result should be:
$ mpicc test.c -o test
$ mpirun -np 4 test
1. Processor 0 has data: 0 1 2 3
2. Processor 0 has data 0
3. Processor 0 doubling the data, now has 5
2. Processor 1 has data 1
3. Processor 1 doubling the data, now has 5
2. Processor 2 has data 2
3. Processor 2 doubling the data, now has 5
2. Processor 3 has data 3
3. Processor 3 doubling the data, now has 5
4. Processor 0 has data: 5 5 5 5
Any help would be greatly appreciated. The code below is run with 4 processors:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int globaldata[4];  /* wants to declare array this way */
    int localdata[4];   /* without using pointers */
    int i;

    if (rank == 0) {
        for (i = 0; i < size; i++)
            globaldata[i] = i;

        printf("1. Processor %d has data: ", rank);
        for (i = 0; i < size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    MPI_Scatter(globaldata, 1, MPI_INT, &localdata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("2. Processor %d has data %d\n", rank, localdata[rank]);
    localdata[rank] = 5;
    printf("3. Processor %d now has %d\n", rank, localdata[rank]);

    MPI_Gather(&localdata, 1, MPI_INT, globaldata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("4. Processor %d has data: ", rank);
        for (i = 0; i < size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}
Best answer
Your setup and scatter are fine in principle. Your problem is with the printing, because you have misunderstood the scatter/gather details here.
When you scatter a 4-element array, each process receives only one element (as you specify with the 2nd and 5th arguments of your MPI_Scatter call). That element is stored at index 0 of the local array; it is effectively a scalar.
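For reference, this is the standard MPI_Scatter prototype (MPI-3 form), with those two count arguments annotated:

int MPI_Scatter(const void *sendbuf,   /* root's buffer holding the data to scatter */
                int sendcount,         /* 2nd argument: elements sent to EACH process */
                MPI_Datatype sendtype,
                void *recvbuf,         /* each process's local receive buffer */
                int recvcount,         /* 5th argument: elements each process receives */
                MPI_Datatype recvtype,
                int root,
                MPI_Comm comm);

With sendcount = recvcount = 1, every process (including the root) receives exactly one int, placed at index 0 of its receive buffer.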
In general, you might scatter very large arrays, and each process may still have to work on a large local array. In those cases it is essential to compute the global and local indices correctly.
Consider the following toy problem: you want to scatter the array [1 2 3 4 5 6] to two processes. Proc0 should hold the part [1 2 3] and Proc1 the part [4 5 6]. In this case the global array has size 6 and the local arrays have size 3. Proc0 receives global elements 0, 1, 2 and assigns them to its local elements 0, 1, 2; Proc1 receives global elements 3, 4, 5 and assigns them to its local elements 0, 1, 2 (a minimal sketch follows below).
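A minimal sketch of this toy problem (my own illustration, assuming exactly 2 processes; the array contents and chunk size match the example above):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int globaldata[6] = {1, 2, 3, 4, 5, 6};  /* significant only on the root */
    int localdata[3];                        /* each process holds 3 elements */

    /* sendcount = recvcount = 3: process p receives global elements
       3*p .. 3*p+2 into its local indices 0, 1, 2 */
    MPI_Scatter(globaldata, 3, MPI_INT, localdata, 3, MPI_INT, 0, MPI_COMM_WORLD);

    int i;
    for (i = 0; i < 3; i++)
        printf("Processor %d: local index %d = global index %d, value %d\n",
               rank, i, 3 * rank + i, localdata[i]);

    MPI_Finalize();
    return 0;
}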
You may understand this concept even better once you learn about MPI_Scatterv, which does not assume the same number of local elements on every process; a short sketch follows.
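Here is a hedged sketch of MPI_Scatterv (not part of the original answer; it assumes exactly 2 processes and an uneven split chosen purely for illustration):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int globaldata[6] = {1, 2, 3, 4, 5, 6};
    int sendcounts[2] = {2, 4};   /* process 0 gets 2 elements, process 1 gets 4 */
    int displs[2]     = {0, 2};   /* starting offset of each chunk in globaldata */
    int localdata[4];             /* large enough for the biggest chunk */
    int mycount = sendcounts[rank];

    MPI_Scatterv(globaldata, sendcounts, displs, MPI_INT,
                 localdata, mycount, MPI_INT, 0, MPI_COMM_WORLD);

    int i;
    for (i = 0; i < mycount; i++)
        printf("Processor %d received %d\n", rank, localdata[i]);

    MPI_Finalize();
    return 0;
}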
This version of your code seems to work:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int globaldata[4];  /* wants to declare array this way */
    int localdata;      /* without using pointers */
    int i;

    if (rank == 0) {
        for (i = 0; i < size; i++)
            globaldata[i] = i;

        printf("1. Processor %d has data: ", rank);
        for (i = 0; i < size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    MPI_Scatter(globaldata, 1, MPI_INT, &localdata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("2. Processor %d has data %d\n", rank, localdata);
    localdata = 5;
    printf("3. Processor %d now has %d\n", rank, localdata);

    MPI_Gather(&localdata, 1, MPI_INT, globaldata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("4. Processor %d has data: ", rank);
        for (i = 0; i < size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}
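If you would rather keep localdata declared as an array, as in your original code, the essential fix is to index element 0 everywhere instead of element rank, because the scattered value always lands at local index 0. A minimal sketch of just the lines that change (my own variant, not from the answer above):

    int localdata[4];   /* only element 0 is actually used on each process */
    /* ... */
    MPI_Scatter(globaldata, 1, MPI_INT, localdata, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("2. Processor %d has data %d\n", rank, localdata[0]);
    localdata[0] = 5;
    printf("3. Processor %d now has %d\n", rank, localdata[0]);
    MPI_Gather(localdata, 1, MPI_INT, globaldata, 1, MPI_INT, 0, MPI_COMM_WORLD);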
Enjoy learning MPI! :-)
About "c - How to use MPI scatter and gather with an array": a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/40080362/