I have one-dimensional matrix data. In each iteration, each processor updates its Q_send_matrix and sends it to the previous processor (rank-1), while it receives the newly updated matrix from the next processor (rank+1) as Q_recv_matrix. For instance, in one iteration Proc[0] updates its Q_send_matrix and sends it to Proc[3], while it receives Q_recv_matrix from Proc[1]. As you may have guessed, it is like a ring communication. Please see the code below, followed by my explanation of it.

        MPI_Request request;
        MPI_Status status;

        // All elements of the Q_send and Q_recv buffers
        // are initially set to 1.0. Each processor then
        // updates its Q_send buffer to prepare it for the
        // send below. (That part is long, so it is not
        // shown here...)

        /**
         * Transfer Q matrix blocks among processors:
         *   + Each processor sends the Q matrix to the
         *     previous processor while receiving the
         *     Q matrix from the next processor.
         *   + It is like a ring communication.
         */


        /* Receive Q matrix with MPI_Irecv */
        source = (my_rank+1)%comm_size;
        recv_count = no_col_per_proc[source]*input_k;

        MPI_Irecv(Q_recv_matrix, recv_count,
                MPI_FP_TYPE, source,
                0, MPI_COMM_WORLD,
                &request);


        /* Send Q matrix */
        dest = (my_rank-1+comm_size)%comm_size;
        send_count = no_col_per_proc[my_rank]*input_k;

        MPI_Send(Q_send_matrix, send_count,
                MPI_FP_TYPE, dest,
                0, MPI_COMM_WORLD);


        /* Wait status */
        // MPI_Wait(request, status);

        /* Barrier */
        MPI_Barrier(MPI_COMM_WORLD);

        /* Print Q send and receive matrices */
        for( j = 0; j < send_count; j ++ )
        {
            printf("P[%d] sends Q_send[%d] to P[%d] = %.2f\n",
                    my_rank, j, dest, Q_send_matrix[j]);
        }

        for( j = 0; j < recv_count; j ++ )
        {
            printf("P[%d] receives Q_recv[%d] from P[%d] = %.2f\n",
                    my_rank, j, source, Q_recv_matrix[j]);
        }

I wanted the communication to be synchronous. However, that was not possible with plain MPI_Send, because its blocking behaviour causes a deadlock. Hence, I used MPI_Irecv together with MPI_Send instead of MPI_Recv. However, it did not complete; all the processors kept waiting. So I used MPI_Barrier instead of MPI_Wait to synchronize them, which resolved the processors' waiting, and they finished their work. However, it does not work correctly: some of the output of the code above is wrong. Each processor sends the correct data, so there is no problem on the send side. On the other hand, the receive buffers do not change. That is, on some processors the initial values of the receive buffer remain, even though data was received from one of the other processors, as shown below.
P[0] sends Q_send[0] to P[3] = -2.12
P[0] sends Q_send[1] to P[3] = -2.12
P[0] sends Q_send[2] to P[3] = 4.12
P[0] sends Q_send[3] to P[3] = 4.12
P[0] receives Q_recv[0] from P[1] = 1.00
P[0] receives Q_recv[1] from P[1] = 1.00
P[0] receives Q_recv[2] from P[1] = 1.00
P[0] receives Q_recv[3] from P[1] = 1.00

P[1] sends Q_send[0] to P[0] = -2.12
P[1] sends Q_send[1] to P[0] = -2.12
P[1] sends Q_send[2] to P[0] = 0.38
P[1] sends Q_send[3] to P[0] = 0.38
P[1] receives Q_recv[0] from P[2] = 1.00
P[1] receives Q_recv[1] from P[2] = 1.00
P[1] receives Q_recv[2] from P[2] = 1.00
P[1] receives Q_recv[3] from P[2] = 1.00

P[2] sends Q_send[0] to P[1] = 1.00
P[2] sends Q_send[1] to P[1] = 1.00
P[2] sends Q_send[2] to P[1] = -24.03
P[2] sends Q_send[3] to P[1] = -24.03
P[2] receives Q_recv[0] from P[3] = 1.00
P[2] receives Q_recv[1] from P[3] = 1.00
P[2] receives Q_recv[2] from P[3] = 1.00
P[2] receives Q_recv[3] from P[3] = 1.00

P[3] sends Q_send[0] to P[2] = 7.95
P[3] sends Q_send[1] to P[2] = 7.95
P[3] sends Q_send[2] to P[2] = 0.38
P[3] sends Q_send[3] to P[2] = 0.38
P[3] receives Q_recv[0] from P[0] = -2.12
P[3] receives Q_recv[1] from P[0] = -2.12
P[3] receives Q_recv[2] from P[0] = 4.12
P[3] receives Q_recv[3] from P[0] = 4.12

Best answer

An MPI_Irecv must be completed by MPI_Wait or a successful MPI_Test before the received data may be accessed. You cannot replace that with a barrier.
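In the posted code, that means restoring the commented-out MPI_Wait (note that both of its arguments must be pointers, unlike the commented-out call) and dropping the barrier. A minimal sketch of the corrected sequence, reusing the variables exactly as they are declared in the question's program:

    /* Post the non-blocking receive, then do the blocking send */
    MPI_Irecv(Q_recv_matrix, recv_count, MPI_FP_TYPE, source,
            0, MPI_COMM_WORLD, &request);

    MPI_Send(Q_send_matrix, send_count, MPI_FP_TYPE, dest,
            0, MPI_COMM_WORLD);

    /* Complete the receive; only after MPI_Wait returns is
     * Q_recv_matrix guaranteed to hold the received data. */
    MPI_Wait(&request, &status);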
For a ring communication, consider using MPI_Sendrecv. It can be simpler than using asynchronous communication.
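A sketch of that alternative, again with the question's names (no_col_per_proc, input_k and MPI_FP_TYPE are assumed to be defined as in the original program). The single call pairs the send and the receive internally, so there is no deadlock and no request handle to manage:

    dest   = (my_rank - 1 + comm_size) % comm_size; /* previous rank */
    source = (my_rank + 1) % comm_size;             /* next rank */
    send_count = no_col_per_proc[my_rank] * input_k;
    recv_count = no_col_per_proc[source] * input_k;

    /* Send to the previous rank and receive from the next rank
     * in one combined, deadlock-free call. */
    MPI_Sendrecv(Q_send_matrix, send_count, MPI_FP_TYPE, dest,   0,
                 Q_recv_matrix, recv_count, MPI_FP_TYPE, source, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);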

Regarding "c - MPI_Irecv does not correctly receive the data sent by MPI_Send", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/42121393/
