Question
I'm trying to optimise a puzzle with parallel processing for better performance.
Ideally, in C99 w/ OpenMP, I should be able to do that with the help of a #pragma omp parallel for prior to the for loop in question, and then it should be up to the system to distribute the load between the CPUs.
The official documentation for Go at https://golang.org/doc/effective_go.html#parallel, however, seems to suggest that for parallel processing I must (0) manually get the number of cores from the runtime environment, (1) loop over said cores, (2) effectively code up a distinct for loop for each core, and (3) loop over the cores once again to make sure everything got processed.
Am I missing something? For the simplest case, is OpenMP with the ancient C superior to the brand-new Go that's touted as C's best replacement? For a more complicated example, how exactly do you split up a range between the CPUs?
Answer
Effective Go is outdated on this point: Go now sets GOMAXPROCS to the number of processors automatically (you can still set it manually to force the number you want).
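For reference, you can query (or override) that setting at runtime; this is just a small illustrative snippet, not part of the original answer:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // GOMAXPROCS(0) queries the current value without changing it;
    // since Go 1.5 it defaults to runtime.NumCPU().
    fmt.Println("NumCPU:    ", runtime.NumCPU())
    fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))

    // Passing a positive number forces a specific level of parallelism.
    runtime.GOMAXPROCS(2)
}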
Here's a very simple example of parallel processing of a slice:
package main

import (
    "math/rand"
    "sync"
)

const SZ = 1 << 10 // slice size; pick whatever your problem needs
func main() {
    data := make([]float64, SZ)
    var wg sync.WaitGroup
    for i := range data {
        wg.Add(1)
        // one goroutine per element; the runtime schedules them across the CPUs
        go func(v *float64) {
            // note that using rand is a bad example because global rand uses a mutex
            *v = rand.Float64()
            wg.Done()
        }(&data[i])
    }
    wg.Wait()
}
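Spawning one goroutine per element is fine in Go because goroutines are cheap, but if you specifically want the OpenMP-style behaviour of splitting the range into one chunk per CPU, a rough sketch looks like this (the chunking arithmetic and variable names are my own illustration, not part of the original answer):

package main

import (
    "math/rand"
    "runtime"
    "sync"
)

func main() {
    const SZ = 1 << 10
    data := make([]float64, SZ)

    workers := runtime.GOMAXPROCS(0)             // defaults to the number of CPUs
    chunk := (len(data) + workers - 1) / workers // ceiling division

    var wg sync.WaitGroup
    for w := 0; w < workers; w++ {
        lo := w * chunk
        if lo >= len(data) {
            break // more workers than elements
        }
        hi := lo + chunk
        if hi > len(data) {
            hi = len(data)
        }
        wg.Add(1)
        go func(part []float64) {
            defer wg.Done()
            // each goroutine owns a disjoint sub-slice, so no locking is needed
            // (though the global rand still serializes on its mutex, as noted above)
            for i := range part {
                part[i] = rand.Float64()
            }
        }(data[lo:hi])
    }
    wg.Wait()
}

Each goroutine gets its own disjoint sub-slice of data, which is roughly what #pragma omp parallel for does with the loop iterations.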