Question
I need an MPI C code to write data to a binary file via MPI I/O. I need process 0 to write a short header, then I need the whole range of processes to write their own pieces of the array indicated by the header. Then I need process 0 to write another header, followed by all processes writing their pieces of the next array, etc. I came up with the following test code which actually does what I want. No one will be more surprised about that than me.
My question is: I am new to MPI I/O, so am I "getting it"? Am I doing this the "right way", or is there some more efficient or compact way to do it?
The code is below. (BTW, if you want to test this, run it with exactly 4 procs; it assumes a 2x2 process grid.)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "mpi.h"

#define ROWS 9
#define COLS 10

int main(int argc, char *argv[]) {
    int size_mpi, rank_mpi, row_mpi, col_mpi;
    int i, j, ttlcols;
    int sizes[]    = {2*ROWS, 2*COLS};   /* global array dimensions */
    int subsizes[] = {ROWS, COLS};       /* this rank's block */
    int starts[]   = {0, 0};
    int vals[ROWS][COLS];
    char hdr[] = "This is just a header.\n";
    MPI_Status stat_mpi;
    MPI_Datatype subarray;
    MPI_File fh;
    MPI_Offset offset, end_of_hdr;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size_mpi);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank_mpi);

    /* the global array is laid out as a fixed 2x2 grid of blocks */
    if (size_mpi != 4) {
        if (rank_mpi == 0)
            fprintf(stderr, "Run with exactly 4 processes.\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    ttlcols = 2*COLS;
    /* Where are we in the 2x2 grid of processes? */
    col_mpi = rank_mpi % 2;
    row_mpi = rank_mpi / 2;

    /* Populate the local array with globally unique values */
    for (j = 0; j < ROWS; j++) {
        for (i = 0; i < COLS; i++) {
            vals[j][i] = ttlcols*(ROWS*row_mpi + j) + COLS*col_mpi + i;
        }
    }

    /* MPI derived datatype for setting a file view */
    starts[0] = row_mpi*ROWS;
    starts[1] = col_mpi*COLS;
    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_INT, &subarray);
    MPI_Type_commit(&subarray);

    /* open the file */
    printf("opening file\n");
    MPI_File_open(MPI_COMM_WORLD, "arrdata.dat",
                  MPI_MODE_WRONLY | MPI_MODE_CREATE,
                  MPI_INFO_NULL, &fh);
    printf("opened file\n");

    /* set the initial file view */
    MPI_File_set_view(fh, 0, MPI_CHAR, MPI_CHAR, "native", MPI_INFO_NULL);

    /* proc 0 writes the first header */
    if (rank_mpi == 0) {
        MPI_File_write(fh, hdr, (int)strlen(hdr), MPI_CHAR, &stat_mpi);
        MPI_File_get_position(fh, &offset);
        MPI_File_get_byte_offset(fh, offset, &end_of_hdr);
    }
    /* everybody has to know where proc 0 stopped writing; end_of_hdr
       is an MPI_Offset, so it must be broadcast as MPI_OFFSET, not
       MPI_INT (an MPI_INT broadcast moves only part of the value) */
    MPI_Bcast(&end_of_hdr, 1, MPI_OFFSET, 0, MPI_COMM_WORLD);

    /* re-set the file view for writing the first array */
    MPI_File_set_view(fh, end_of_hdr, MPI_INT, subarray,
                      "native", MPI_INFO_NULL);
    /* and write the array */
    MPI_File_write(fh, vals, ROWS*COLS, MPI_INT, &stat_mpi);

    /* now go through the whole thing again to test; each rank converts
       its own position to an absolute byte offset, and rank 0's lands
       exactly at the end of the full global array, where the next
       header belongs */
    MPI_File_get_position(fh, &offset);
    MPI_File_get_byte_offset(fh, offset, &end_of_hdr);
    MPI_File_set_view(fh, end_of_hdr, MPI_CHAR, MPI_CHAR,
                      "native", MPI_INFO_NULL);
    if (rank_mpi == 0) {
        MPI_File_write(fh, hdr, (int)strlen(hdr), MPI_CHAR, &stat_mpi);
        MPI_File_get_position(fh, &offset);
        MPI_File_get_byte_offset(fh, offset, &end_of_hdr);
    }
    MPI_Bcast(&end_of_hdr, 1, MPI_OFFSET, 0, MPI_COMM_WORLD);
    MPI_File_set_view(fh, end_of_hdr, MPI_INT, subarray,
                      "native", MPI_INFO_NULL);
    MPI_File_write(fh, vals, ROWS*COLS, MPI_INT, &stat_mpi);

    MPI_File_close(&fh);
    MPI_Type_free(&subarray);
    MPI_Finalize();
    return 0;
}
Answer
Your approach is fine, and if you need something right now to put bits in a file, go ahead and call yourself done.
Here are some suggestions for more efficiency:
You can consult the status object for how many bytes were written, instead of getting the position and translating it into bytes.
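For instance, here is a minimal sketch of that idea for the first header, using MPI_Get_count against the variables from the question's code. Since that header starts at byte 0 of the file and the etype is the one-byte MPI_CHAR, the element count is also the end-of-header byte offset; for the second header you would add the count to the running offset instead of assigning it.

    int count;
    MPI_Offset end_of_hdr = 0;
    if (rank_mpi == 0) {
        MPI_File_write(fh, hdr, (int)strlen(hdr), MPI_CHAR, &stat_mpi);
        /* ask the status how many MPI_CHAR elements went out; with a
           one-byte type this is also the byte count */
        MPI_Get_count(&stat_mpi, MPI_CHAR, &count);
        end_of_hdr = count;   /* the header started at byte 0 */
    }
    MPI_Bcast(&end_of_hdr, 1, MPI_OFFSET, 0, MPI_COMM_WORLD);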
If you have the memory to hold all the data before you write, you could describe your I/O with an MPI datatype (admittedly, one that might end up being a pain to create). Then all processes would issue a single collective call.
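To make "a pain to create" concrete, here is a hypothetical sketch of one header-plus-array round done that way. It is not the question's method, just one possible construction: everything is expressed in MPI_BYTE so the filetype is built from the view's etype; the names bytesub, filetype, and memtype are invented for illustration, while hdr, vals, row_mpi, col_mpi, and fh come from the question's code.

    int hdrlen = (int)strlen(hdr);
    int isz = (int)sizeof(int);
    /* the same subarray decomposition as the question, but in bytes */
    int bsizes[2]    = { 2*ROWS, 2*COLS*isz };
    int bsubsizes[2] = { ROWS, COLS*isz };
    int bstarts[2]   = { row_mpi*ROWS, col_mpi*COLS*isz };
    MPI_Datatype bytesub, filetype, memtype;
    MPI_Status stat;

    MPI_Type_create_subarray(2, bsizes, bsubsizes, bstarts,
                             MPI_ORDER_C, MPI_BYTE, &bytesub);
    MPI_Type_commit(&bytesub);

    if (rank_mpi == 0) {
        /* file side: header bytes at offset 0, array region after it */
        int blens[2] = { hdrlen, 1 };
        MPI_Aint fdisps[2] = { 0, hdrlen };
        MPI_Datatype ftypes[2] = { MPI_BYTE, bytesub };
        MPI_Type_create_struct(2, blens, fdisps, ftypes, &filetype);
        MPI_Type_commit(&filetype);

        /* memory side: absolute addresses of hdr and vals, so one
           write with MPI_BOTTOM sends both buffers */
        int mlens[2] = { hdrlen, ROWS*COLS*isz };
        MPI_Aint adisps[2];
        MPI_Datatype mtypes[2] = { MPI_BYTE, MPI_BYTE };
        MPI_Get_address(hdr, &adisps[0]);
        MPI_Get_address(vals, &adisps[1]);
        MPI_Type_create_struct(2, mlens, adisps, mtypes, &memtype);
        MPI_Type_commit(&memtype);

        MPI_File_set_view(fh, 0, MPI_BYTE, filetype, "native", MPI_INFO_NULL);
        MPI_File_write_all(fh, MPI_BOTTOM, 1, memtype, &stat);
    } else {
        /* the other ranks see only the array region, shifted past the
           header, and participate in the same collective write */
        MPI_File_set_view(fh, hdrlen, MPI_BYTE, bytesub,
                          "native", MPI_INFO_NULL);
        MPI_File_write_all(fh, vals, ROWS*COLS*isz, MPI_BYTE, &stat);
    }
    /* (MPI_Type_free calls omitted for brevity) */

Extending this to both headers and both arrays is the same construction repeated with more struct entries, which is exactly where the pain comes in.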
You should use collective I/O instead of independent I/O. A "quality library" should be able to give you equal if not better performance (and if not, you could raise the issue with your MPI implementation).
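In this code the switch is mechanical: MPI_File_write becomes MPI_File_write_all, which takes the same arguments but must be called by every rank that opened the file. Ranks with nothing to contribute at a given step can pass a zero count, so even the header steps can stay collective. A sketch against the question's variables:

    /* array step: identical arguments, now collective */
    MPI_File_write_all(fh, vals, ROWS*COLS, MPI_INT, &stat_mpi);

    /* header step: every rank makes the call, but only rank 0
       contributes any bytes */
    int n = (rank_mpi == 0) ? (int)strlen(hdr) : 0;
    MPI_File_write_all(fh, hdr, n, MPI_CHAR, &stat_mpi);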
If the processes have different amounts of data to write, MPI_Exscan is a good way to collect who has what data. Then you can call MPI_File_write_at_all with the correct offset in the file.