I made a small modification to the standard code from https://github.com/inducer/pyopencl/blob/master/examples/benchmark-all.py.

The hard-coded problem size is replaced with a variable, zz:

import pyopencl as cl
import numpy
import numpy.linalg as la
import datetime
from time import time
zz=100
a = numpy.random.rand(zz).astype(numpy.float32)
b = numpy.random.rand(zz).astype(numpy.float32)
c_result = numpy.empty_like(a)

# Speed in normal CPU usage
time1 = time()
for i in range(zz):
        for j in range(zz):
                c_result[i] = a[i] + b[i]
                c_result[i] = c_result[i] * (a[i] + b[i])
                c_result[i] = c_result[i] * (a[i] / 2)
time2 = time()
print("Execution time of test without OpenCL: ", time2 - time1, "s")


for platform in cl.get_platforms():
    for device in platform.get_devices():
        print("===============================================================")
        print("Platform name:", platform.name)
        print("Platform profile:", platform.profile)
        print("Platform vendor:", platform.vendor)
        print("Platform version:", platform.version)
        print("---------------------------------------------------------------")
        print("Device name:", device.name)
        print("Device type:", cl.device_type.to_string(device.type))
        print("Device memory: ", device.global_mem_size//1024//1024, 'MB')
        print("Device max clock speed:", device.max_clock_frequency, 'MHz')
        print("Device compute units:", device.max_compute_units)

        # Simple speed test
        ctx = cl.Context([device])
        queue = cl.CommandQueue(ctx,
                properties=cl.command_queue_properties.PROFILING_ENABLE)

        mf = cl.mem_flags
        a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
        b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
        dest_buf = cl.Buffer(ctx, mf.WRITE_ONLY, b.nbytes)

        prg = cl.Program(ctx, """
            __kernel void sum(__global const float *a,
                              __global const float *b, __global float *c)
            {
                int loop;
                int gid = get_global_id(0);
                for (loop = 0; loop < %s; loop++)
                {
                    c[gid] = a[gid] + b[gid];
                    c[gid] = c[gid] * (a[gid] + b[gid]);
                    c[gid] = c[gid] * (a[gid] / 2);
                }
            }
            """ % (zz)).build()

        exec_evt = prg.sum(queue, a.shape, None, a_buf, b_buf, dest_buf)
        exec_evt.wait()
        elapsed = 1e-9*(exec_evt.profile.end - exec_evt.profile.start)

        print("Execution time of test: %g s" % elapsed)

        c = numpy.empty_like(a)
        cl.enqueue_read_buffer(queue, dest_buf, c).wait()  # newer PyOpenCL: cl.enqueue_copy(queue, c, dest_buf).wait()
        error = 0
        for i in range(zz):
                if c[i] != c_result[i]:
                        error = 1
        if error:
                print("Results doesn't match!!")
        else:
                print("Results OK")

With zz=100 I get:
('Execution time of test without OpenCL: ', 0.10500001907348633, 's')
===============================================================
('Platform name:', 'AMD Accelerated Parallel Processing')
('Platform profile:', 'FULL_PROFILE')
('Platform vendor:', 'Advanced Micro Devices, Inc.')
('Platform version:', 'OpenCL 1.1 AMD-APP-SDK-v2.5 (684.213)')
---------------------------------------------------------------
('Device name:', 'Cypress')
('Device type:', 'GPU')
('Device memory: ', 800, 'MB')
('Device max clock speed:', 850, 'MHz')
('Device compute units:', 20)
Execution time of test: 0.00168922 s
Results OK
===============================================================
('Platform name:', 'AMD Accelerated Parallel Processing')
('Platform profile:', 'FULL_PROFILE')
('Platform vendor:', 'Advanced Micro Devices, Inc.')
('Platform version:', 'OpenCL 1.1 AMD-APP-SDK-v2.5 (684.213)')
---------------------------------------------------------------
('Device name:', 'Intel(R) Core(TM) i5 CPU         750  @ 2.67GHz')
('Device type:', 'CPU')
('Device memory: ', 8183L, 'MB')
('Device max clock speed:', 3000, 'MHz')
('Device compute units:', 4)
Execution time of test: 4.369e-05 s
Results OK

So we have three timings:
normal  ('Execution time of test without OpenCL: ', 0.10500001907348633, 's')
pyopencl radeon 5870: Execution time of test: 0.00168922 s
pyopencl i5 CPU 750: Execution time of test: 4.369e-05 s

First batch of questions: what is "pyopencl i5 CPU 750"? Why is it roughly 2,400 times faster than "normal" (the execution time of the test without OpenCL)? And why is it about 38 times faster than "pyopencl radeon 5870"?

With zz=1000 we get:
normal  ('Execution time of test without OpenCL: ', 9.05299997329712, 's')
pyopencl radeon 5870: Execution time of test: 0.0104431 s
pyopencl i5 CPU 750: Execution time of test: 0.00238112 s

i5 × ~5 ≈ radeon 5870

i5 × ~3800 ≈ normal

With zz=10000:
normal: takes too long, so I commented that code out...
radeon 5870: Execution time of test: 0.085571 s
i5: Execution time of test: 0.261854 s

Here we can see the graphics card winning.

It is also interesting to compare how the timings grow between the three sizes (zz = 100, 1000, 10000):
normal_stage1 × ~90 ≈ normal_stage2, normal_stage2 × ~95 ≈ normal_stage3 (estimated, since the zz=10000 run was not measured)

i5_stage1 × ~52 ≈ i5_stage2, i5_stage2 × ~109 ≈ i5_stage3

radeon5870_stage1 × ~6 ≈ radeon_stage2, radeon_stage2 × ~8 ≈ radeon_stage3

Can anyone explain why the OpenCL results do not grow linearly?

Best answer:

Well, the growth is unlikely to be linear in the first place, because the algorithmic complexity is O(zz^2).
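
As a rough sanity check, here is a minimal sketch (using only the timings quoted above; the labels and layout are my own) that compares the measured growth factors between zz=100 and zz=1000 against the ~100x that a perfectly serial O(zz^2) loop would predict:

# Sanity check of the O(zz^2) argument, using the timings quoted in the question.
# Going from zz=100 to zz=1000 multiplies the total work by (1000/100)**2 = 100,
# so a perfectly serial implementation should get ~100x slower.
measured = {
    "normal (Python)": {100: 0.10500001907348633, 1000: 9.05299997329712},
    "i5 (OpenCL)":     {100: 4.369e-05,           1000: 0.00238112},
    "radeon 5870":     {100: 0.00168922,          1000: 0.0104431},
}

expected_factor = (1000 // 100) ** 2  # = 100 for a serial O(zz^2) loop

for name, t in measured.items():
    factor = t[1000] / t[100]
    print("%-16s grew %6.1fx (serial quadratic would be ~%dx)"
          % (name, factor, expected_factor))

The plain Python loop grows almost quadratically (about 86x), while the OpenCL runs grow much more slowly (about 55x on the i5 and about 6x on the Radeon), which is what you would expect when the zz work items run in parallel and the devices are not yet saturated.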

To draw conclusions about "linearity" you should have more than 3 data points (and error bars are very useful for this kind of analysis). Also, 100 work items are nowhere near enough to exploit a GPU's compute power; as your experiment shows, the GPU only starts to beat the CPU at around 10k threads or more, which is quite normal.
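
A minimal sketch of how more data points with error bars could be collected, reusing the kernel from the question; the list of sizes, the number of repetitions and the use of cl.create_some_context() for device selection are my own assumptions, not part of the original benchmark:

import numpy
import pyopencl as cl

sizes = [100, 1000, 10000, 100000]   # assumed sweep; extend as needed
n_repeats = 5                        # repeat each size to estimate the spread

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx,
        properties=cl.command_queue_properties.PROFILING_ENABLE)
mf = cl.mem_flags

for zz in sizes:
    a = numpy.random.rand(zz).astype(numpy.float32)
    b = numpy.random.rand(zz).astype(numpy.float32)
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    dest_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # zz is baked into the kernel source, so the program is rebuilt per size
    prg = cl.Program(ctx, """
        __kernel void sum(__global const float *a,
                          __global const float *b, __global float *c)
        {
            int gid = get_global_id(0);
            for (int loop = 0; loop < %d; loop++) {
                c[gid] = a[gid] + b[gid];
                c[gid] = c[gid] * (a[gid] + b[gid]);
                c[gid] = c[gid] * (a[gid] / 2);
            }
        }
        """ % zz).build()

    times = []
    for _ in range(n_repeats):
        evt = prg.sum(queue, a.shape, None, a_buf, b_buf, dest_buf)
        evt.wait()
        times.append(1e-9 * (evt.profile.end - evt.profile.start))

    times = numpy.array(times)
    print("zz=%6d: %.6f s +/- %.6f s" % (zz, times.mean(), times.std()))

Plotting the mean and standard deviation per size makes it much easier to see where each device stops scaling linearly with zz.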

A speedup of this size on the CPU alone is also not completely impossible, since Python is an interpreted language and therefore not very fast by itself, and OpenCL on the CPU makes heavy use of SIMD instructions, which gives a quite good speedup even compared to C + OpenMP.
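
To get a feeling for how much of that gap is interpreter overhead alone (independent of SIMD or OpenCL), here is a small sketch, not from the original post, that runs the same per-element arithmetic once as the pure-Python double loop and once as whole-array NumPy operations:

import numpy
from time import time

zz = 1000
a = numpy.random.rand(zz).astype(numpy.float32)
b = numpy.random.rand(zz).astype(numpy.float32)

# Pure-Python double loop, as in the "normal" benchmark above.
c_loop = numpy.empty_like(a)
t0 = time()
for i in range(zz):
    for j in range(zz):
        c_loop[i] = a[i] + b[i]
        c_loop[i] = c_loop[i] * (a[i] + b[i])
        c_loop[i] = c_loop[i] * (a[i] / 2)
t1 = time()

# The same computation as array operations. One pass is enough, because every
# inner iteration overwrites the previous value of c[i] anyway.
t2 = time()
c_vec = (a + b) * (a + b) * (a / 2)
t3 = time()

print("pure Python loop: %.4f s" % (t1 - t0))
print("vectorized numpy: %.6f s" % (t3 - t2))
print("results match:", numpy.allclose(c_loop, c_vec))

Even without OpenCL, removing the per-element interpreter work already accounts for a large part of the difference; the SIMD units and multiple cores used by the OpenCL CPU driver then add on top of that.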

Regarding this python / PyOpenCL benchmark question, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/7376616/
