I was playing around with OpenMP and stumbled upon something I don't understand. I am using the parallel code below (which works correctly). With two or more threads, the execution time is roughly halved. However, with OpenMP and a single thread the run takes 35 seconds, while it drops to 25 seconds when I comment out the pragmas! Is there any way to reduce this huge overhead?
I am using gcc 4.8.1 and compiling with "-O2 -Wall -fopenmp".
I have read similar topics (OpenMP with 1 thread slower than sequential version, OpenMP overhead) - opinions range from no overhead at all to substantial overhead. I am curious about a better way to use OpenMP in my particular case (a for loop, and an omp for inside a parallel region).
for (size_t k = 0; k < maxk; ++k) { // k is ~5000
    // init reduction variables
    const bool is_time_for_reduction = /* init from k */;
    double mmin = INFINITY, mmax = -INFINITY;
    double sum = 0.0;
    #pragma omp parallel shared(m1, m2)
    {
        // w, h are both between 1000 and 2000
        #pragma omp for
        for (size_t i = 0; i < h; ++i) { // w, h - consts
            for (size_t j = 0; j < w; ++j) {
                // computations with matrices m1 and m2, using only m1, m2 and constants w, h
            }
        }
        if (is_time_for_reduction) {
            #pragma omp for reduction(max:mmax) reduction(min:mmin) reduction(+:sum)
            for (size_t i = 0; i < h; ++i) {
                for (size_t j = 0; j < w; ++j) {
                    // reductions
                }
            }
        }
    }
    if (is_time_for_reduction) {
        // use "reduced" variables
    }
}
Best Answer
I see no reason to change your original sequential code. I would try something like this:
for (size_t k = 0; k < maxk; ++k) {
    // init reduction variables
    const bool is_time_for_reduction = /* init from k */;
    double mmin = INFINITY, mmax = -INFINITY;
    double sum = 0.0;
    #pragma omp parallel for
    for (size_t i = 0; i < h; ++i) { // w, h - consts
        for (size_t j = 0; j < w; ++j) {
            // computations with matrices m1 and m2, using only m1, m2 and constants w, h
        }
    }
    if (is_time_for_reduction) {
        #pragma omp parallel for reduction(max:mmax) reduction(min:mmin) reduction(+:sum)
        for (size_t i = 0; i < h; ++i) {
            for (size_t j = 0; j < w; ++j) {
                // reductions
            }
        }
        // use "reduced" variables
    }
}
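Note that in real OpenMP syntax each reduction operator gets its own clause, and min/max reductions require OpenMP 3.1 (gcc >= 4.7, so your 4.8.1 is fine). A minimal compilable sketch of this structure, where the matrix computation is a hypothetical stand-in:

```c
/* Sketch of the restructured loop: each "parallel for" stands alone,
 * and the reduction clause is split per operator (OpenMP 3.1 syntax).
 * The actual matrix computation is a hypothetical stand-in. */
#include <math.h>
#include <stddef.h>

#define H 4
#define W 5

/* One k-iteration's worth of work: update m2 from m1, then reduce. */
void step(const double m1[H][W], double m2[H][W],
          double *mmin, double *mmax, double *sum) {
    #pragma omp parallel for
    for (size_t i = 0; i < H; ++i)
        for (size_t j = 0; j < W; ++j)
            m2[i][j] = 2.0 * m1[i][j];      /* stand-in computation */

    double lmin = INFINITY, lmax = -INFINITY, lsum = 0.0;
    #pragma omp parallel for reduction(min:lmin) reduction(max:lmax) reduction(+:lsum)
    for (size_t i = 0; i < H; ++i)
        for (size_t j = 0; j < W; ++j) {
            if (m2[i][j] < lmin) lmin = m2[i][j];
            if (m2[i][j] > lmax) lmax = m2[i][j];
            lsum += m2[i][j];
        }
    *mmin = lmin;
    *mmax = lmax;
    *sum = lsum;
}
```

Without -fopenmp the pragmas are simply ignored and the same source runs sequentially, which gives you the no-OpenMP baseline from a single code path.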
Regarding c - Overhead of single-threaded OpenMP vs. sequential code, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/26892343/