This article looks at how to improve the memory layout used for parallel computing; hopefully it serves as a useful reference for anyone facing a similar problem.

Problem description


I'm trying to optimize an algorithm (Lattice Boltzmann) for parallel computing using C++ AMP. While looking for suggestions on optimizing the memory layout, I found that moving one parameter out of the structure into another vector (the blocked vector) gave an increase of about 10%.

Does anyone have any tips that could improve this further, or anything else I should take into consideration? Below are the most time-consuming function, executed once per timestep, and the structure used for the layout.

struct grid_cell {
//  int blocked;    // Define if blocked
    float n;        // North
    float ne;       // North-East
    float e;        // East
    float se;       // South-East
    float s;        // South
    float sw;       // South-West
    float w;        // West
    float nw;       // North-West
    float c;        // Center
};

int collision(const struct st_parameters param, vector<struct grid_cell> &node, vector<struct grid_cell> &tmp_node, vector<int> &obstacle) {
    int x,y;
    int i = 0;
    float c_sq = 1.0f/3.0f;     // Square of speed of sound
    float w0 = 4.0f/9.0f;       // Weighting factors
    float w1 = 1.0f/9.0f;
    float w2 = 1.0f/36.0f;

    int chunk = param.ny/20;
    float total_density = 0;

    float u_x,u_y;              // Average velocities in x and y directions
    float u[9];                 // Directional velocities
    float d_equ[9];             // Equilibrium densities
    float u_sq;                 // Squared velocity
    float local_density;        // Sum of densities in a particular node

    for(y=0;y<param.ny;y++) {
        for(x=0;x<param.nx;x++) {
            i = y*param.nx + x; // Node index
            // Don't consider blocked cells
            if (obstacle[i] == 0) {
                // Calculate local density
                local_density = 0.0;
                local_density += tmp_node[i].n;
                local_density += tmp_node[i].e;
                local_density += tmp_node[i].s;
                local_density += tmp_node[i].w;
                local_density += tmp_node[i].ne;
                local_density += tmp_node[i].se;
                local_density += tmp_node[i].sw;
                local_density += tmp_node[i].nw;
                local_density += tmp_node[i].c;

                // Calculate x velocity component
                u_x = (tmp_node[i].e + tmp_node[i].ne + tmp_node[i].se -
                      (tmp_node[i].w + tmp_node[i].nw + tmp_node[i].sw))
                      / local_density;
                // Calculate y velocity component
                u_y = (tmp_node[i].n + tmp_node[i].ne + tmp_node[i].nw -
                      (tmp_node[i].s + tmp_node[i].sw + tmp_node[i].se))
                      / local_density;
                // Velocity squared
                u_sq = u_x*u_x +u_y*u_y;

                // Directional velocity components;
                u[1] =  u_x;        // East
                u[2] =        u_y;  // North
                u[3] = -u_x;        // West
                u[4] =      - u_y;  // South
                u[5] =  u_x + u_y;  // North-East
                u[6] = -u_x + u_y;  // North-West
                u[7] = -u_x - u_y;  // South-West
                u[8] =  u_x - u_y;  // South-East

                // Equilibrium densities
                // Zero velocity density: weight w0
                d_equ[0] = w0 * local_density * (1.0f - u_sq / (2.0f * c_sq));
                // Axis speeds: weight w1
                d_equ[1] = w1 * local_density * (1.0f + u[1] / c_sq
                                 + (u[1] * u[1]) / (2.0f * c_sq * c_sq)
                                 - u_sq / (2.0f * c_sq));
                d_equ[2] = w1 * local_density * (1.0f + u[2] / c_sq
                                 + (u[2] * u[2]) / (2.0f * c_sq * c_sq)
                                 - u_sq / (2.0f * c_sq));
                d_equ[3] = w1 * local_density * (1.0f + u[3] / c_sq
                                 + (u[3] * u[3]) / (2.0f * c_sq * c_sq)
                                 - u_sq / (2.0f * c_sq));
                d_equ[4] = w1 * local_density * (1.0f + u[4] / c_sq
                                 + (u[4] * u[4]) / (2.0f * c_sq * c_sq)
                                 - u_sq / (2.0f * c_sq));
                // Diagonal speeds: weight w2
                d_equ[5] = w2 * local_density * (1.0f + u[5] / c_sq
                                 + (u[5] * u[5]) / (2.0f * c_sq * c_sq)
                                 - u_sq / (2.0f * c_sq));
                d_equ[6] = w2 * local_density * (1.0f + u[6] / c_sq
                                 + (u[6] * u[6]) / (2.0f * c_sq * c_sq)
                                 - u_sq / (2.0f * c_sq));
                d_equ[7] = w2 * local_density * (1.0f + u[7] / c_sq
                                 + (u[7] * u[7]) / (2.0f * c_sq * c_sq)
                                 - u_sq / (2.0f * c_sq));
                d_equ[8] = w2 * local_density * (1.0f + u[8] / c_sq
                                 + (u[8] * u[8]) / (2.0f * c_sq * c_sq)
                                 - u_sq / (2.0f * c_sq));

                // Relaxation step
                node[i].c = (tmp_node[i].c + param.omega * (d_equ[0] - tmp_node[i].c));
                node[i].e = (tmp_node[i].e + param.omega * (d_equ[1] - tmp_node[i].e));
                node[i].n = (tmp_node[i].n + param.omega * (d_equ[2] - tmp_node[i].n));
                node[i].w = (tmp_node[i].w + param.omega * (d_equ[3] - tmp_node[i].w));
                node[i].s = (tmp_node[i].s + param.omega * (d_equ[4] - tmp_node[i].s));
                node[i].ne = (tmp_node[i].ne + param.omega * (d_equ[5] - tmp_node[i].ne));
                node[i].nw = (tmp_node[i].nw + param.omega * (d_equ[6] - tmp_node[i].nw));
                node[i].sw = (tmp_node[i].sw + param.omega * (d_equ[7] - tmp_node[i].sw));
                node[i].se = (tmp_node[i].se + param.omega * (d_equ[8] - tmp_node[i].se));
            }
        }
    }
    return 1;
}
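For reference (assuming I am reading the constants correctly), the d_equ[] block above computes what looks like the standard D2Q9 BGK equilibrium distribution, with c_sq playing the role of the squared speed of sound; written out for direction k with weight w_k it is

\[
f_k^{\mathrm{eq}} = w_k\,\rho\left(1 + \frac{u_k}{c_s^2} + \frac{u_k^2}{2c_s^4} - \frac{u_x^2 + u_y^2}{2c_s^2}\right),
\qquad
f_k \leftarrow f_k + \omega\,\bigl(f_k^{\mathrm{eq}} - f_k\bigr),
\]

where \(c_s^2\) is c_sq (= 1/3), \(\rho\) is local_density, \(u_k\) is the directional velocity stored in u[k], and the second assignment is the relaxation step at the end of the loop.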

Recommended answer

Current GPUs are notoriously dependent on memory layout. Without more details about your application, here are some things I would suggest you explore:

  1. Unit-stride access is very important, so GPUs prefer "structures of arrays" to "arrays of structures". Just as you moved the field "blocked" into the vector "obstacle", it should be advantageous to convert all of the fields of "grid_cell" into separate vectors (a rough sketch of such a layout follows this list). This should also show a benefit on the CPU for loops that don't access all of the fields.

  2. If "obstacle" is very sparse (which I guess is unlikely), then moving it into its own vector is particularly valuable. GPUs, like CPUs, load more than one word at a time from the memory system, either as cache lines or in some other form, and you waste bandwidth whenever you don't need part of that data. For many systems memory bandwidth is the bottleneck resource, so any way to reduce it helps.

  3. This is more speculative, but now that you are writing the entire output vector, it is possible that the memory subsystem can avoid reading values in "node" that will simply be overwritten.

  4. On some systems the on-chip memory is split into banks, and having an odd number of fields within your structure may help remove bank conflicts.

  5. Some systems will also "vectorize" loads and stores, so again, removing "blocked" from the structure might enable more vectorization. The shift to a structure of arrays mitigates this worry as well.
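As a rough illustration of point 1 (which also makes point 5 easier for the compiler), a structure-of-arrays layout for the grid could look something like the sketch below. The name grid_soa and the helper function are illustrative assumptions, not code from the question or the answer.

#include <cstddef>
#include <vector>

// Hypothetical structure-of-arrays layout: one contiguous vector per
// direction instead of one vector of nine-float structs. Neighbouring
// cells then store each field at consecutive addresses, giving the
// unit-stride access pattern described in point 1.
struct grid_soa {
    std::vector<float> n, ne, e, se, s, sw, w, nw, c;

    explicit grid_soa(std::size_t cells)
        : n(cells), ne(cells), e(cells), se(cells), s(cells),
          sw(cells), w(cells), nw(cells), c(cells) {}
};

// Example: the local-density sum from collision(), rewritten against
// this layout; only the vectors that are actually read are touched.
inline float local_density(const grid_soa &g, std::size_t i) {
    return g.n[i] + g.e[i] + g.s[i] + g.w[i] +
           g.ne[i] + g.se[i] + g.sw[i] + g.nw[i] + g.c[i];
}

With a layout like this, a loop that only reads a few of the directional fields never drags the others through the cache, which is the bandwidth argument made in points 1 and 2.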

Thanks for your interest in C++ AMP.

David Callahan

http://blogs.msdn.com/b/nativeconcurrency/ (C++ AMP Team Blog)

That concludes this article on improving the memory layout for parallel computing; hopefully the recommended answer is helpful.
