I tried to benchmark parMap vs map with a very simple example:

import Control.Parallel.Strategies
import Criterion.Main

sq x = x^2

a = whnf sum $ map sq [1..1000000]
b = whnf sum $ parMap rseq sq [1..1000000]

main = defaultMain [
    bench "1" a,
    bench "2" b
  ]

My results seem to show zero speedup from parMap, and I'd like to know why that is:
benchmarking 1
Warning: Couldn't open /dev/urandom
Warning: using system clock for seed instead (quality will be lower)
time                 177.7 ms   (165.5 ms .. 186.1 ms)
                     0.997 R²   (0.992 R² .. 1.000 R²)
mean                 185.1 ms   (179.9 ms .. 194.1 ms)
std dev              8.265 ms   (602.3 us .. 10.57 ms)
variance introduced by outliers: 14% (moderately inflated)

benchmarking 2
time                 182.7 ms   (165.4 ms .. 199.5 ms)
                     0.993 R²   (0.976 R² .. 1.000 R²)
mean                 189.4 ms   (181.1 ms .. 195.3 ms)
std dev              8.242 ms   (5.896 ms .. 10.16 ms)
variance introduced by outliers: 14% (moderately inflated)

Best answer

The problem is that parMap sparks a parallel computation for each individual list element. It does not chunk the list at all, as you seem to assume from your comments; that would require the parListChunk strategy.

So parMap has a high per-spark overhead, and because each spark only squares a single number, the useful work is swamped by that overhead.
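As a minimal sketch (not part of the original answer) of what the chunked version might look like: the chunk size of 1000 and the name chunkedSquares are arbitrary choices for illustration, and the program must be compiled with -threaded (and run with +RTS -N) for the sparks to actually execute in parallel.

import Control.Parallel.Strategies
import Criterion.Main

sq :: Int -> Int
sq x = x ^ (2 :: Int)

-- One spark per chunk of 1000 elements instead of one spark per element,
-- so the spark overhead is amortised over many squarings (chunk size is illustrative).
chunkedSquares :: [Int] -> [Int]
chunkedSquares xs = map sq xs `using` parListChunk 1000 rseq

main :: IO ()
main = defaultMain
  [ bench "chunked" $ whnf (sum . chunkedSquares) [1 .. 1000000]
  ]

Even with chunking, the work done per element here is so small that the parallel version may still not beat the plain map; the point of the sketch is only to show where parListChunk fits in.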

A similar question about Haskell parMap performance can be found on Stack Overflow: https://stackoverflow.com/questions/36777606/
