I'm doing some image processing with CUDA 7.0 and an NVIDIA GTX 980. In a given iteration, multiple tiles are processed independently via 15-20 kernel calls and several cuFFT FFT/IFFT API calls.
Because of this, I put each tile in its own CUDA stream so that each tile executes its string of operations asynchronously with respect to the host. The tiles are all the same size within an iteration, so they share a single cuFFT plan. The host thread moves through the commands quickly to try to keep the GPU loaded with work. While these operations are being processed in parallel I'm running into an intermittent race condition, and I have a question specifically about cuFFT: if I use cufftSetStream() to place the cuFFT plan in stream 0 for tile 0, and tile 0's FFT has not actually been executed on the GPU yet by the time the host sets the shared plan's stream to stream 1 for tile 1 and issues tile 1's work on the GPU, what is the behavior of cufftExec() for this plan?
More briefly: does a call to cufftExec() execute in whatever stream the plan was set to at the time of the cufftExec() call, regardless of whether cufftSetStream() is used to change the stream for subsequent tiles before the preceding FFT calls have actually started or completed?
Sorry for not posting code, but I'm not able to post my actual source.
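To make the pattern concrete, here is a purely hypothetical sketch of the kind of loop described above (the tile count, kernel, and data layout are invented for illustration, and, as the accepted answer's edit notes further down, it omits the per-stream work-area handling that safe concurrent sharing of a plan requires):
// Purely hypothetical sketch of the pattern described in the question (not the actual source).
// Each tile gets its own stream; all tiles share one cuFFT plan whose stream is switched
// with cufftSetStream() just before that tile's work is issued.
#include <cufft.h>
#include <cuda_runtime.h>
#define NUM_TILES 4
#define TILE_NX 1024
#define TILE_NY 1024

// stand-in for one of the 15-20 per-tile processing kernels
__global__ void tileKernel(cufftComplex *d, int n){
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < n) d[i].x *= 2.0f;
}

int main(){
  const int n = TILE_NX*TILE_NY;
  cudaStream_t stream[NUM_TILES];
  cufftComplex *tile[NUM_TILES];
  cufftHandle plan;                                  // one shared plan: tiles are all the same size
  cufftPlan2d(&plan, TILE_NY, TILE_NX, CUFFT_C2C);
  for (int t = 0; t < NUM_TILES; t++){
    cudaStreamCreate(&stream[t]);
    cudaMalloc(&tile[t], n*sizeof(cufftComplex));
    cudaMemsetAsync(tile[t], 0, n*sizeof(cufftComplex), stream[t]);
  }
  // host loop: issue each tile's string of operations into its own stream and move on
  for (int t = 0; t < NUM_TILES; t++){
    cufftSetStream(plan, stream[t]);                 // re-point the shared plan at this tile's stream
    tileKernel<<<(n+255)/256, 256, 0, stream[t]>>>(tile[t], n);
    cufftExecC2C(plan, tile[t], tile[t], CUFFT_FORWARD);  // FFT issued into tile t's stream
    tileKernel<<<(n+255)/256, 256, 0, stream[t]>>>(tile[t], n);
    cufftExecC2C(plan, tile[t], tile[t], CUFFT_INVERSE);  // IFFT issued into tile t's stream
  }
  cudaDeviceSynchronize();
  return 0;
}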
Best Answer
Edit: As pointed out in the comments, if the same plan (the same created handle) is used for simultaneous FFT execution on the same device via streams, then the user is responsible for managing separate work areas for each usage of such a plan. The question seemed to be focused on the stream behavior itself, and the rest of my answer focuses on that as well, but this is an important point.
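A minimal sketch of one way to handle that, using cuFFT's manual work-area API (cufftSetAutoAllocation() and cufftSetWorkArea()); the sizes, the stream count, and the pattern of re-pointing the work area before each exec are illustrative assumptions, not a definitive recipe:
// Sketch: one shared plan handle, but a dedicated work area per stream, so that
// FFTs issued into different streams don't share scratch space.
#include <cufft.h>
#include <cuda_runtime.h>
#define NSTREAMS 2
int main(){
  const int nx = 1048576;
  const int batch = 8;
  size_t ws = 0;
  cufftHandle plan;
  cufftCreate(&plan);
  cufftSetAutoAllocation(plan, 0);                  // we will provide the work area ourselves
  cufftMakePlan1d(plan, nx, CUFFT_C2C, batch, &ws); // ws = required work-area size in bytes
  cudaStream_t stream[NSTREAMS];
  void *workArea[NSTREAMS];
  cufftComplex *data[NSTREAMS];
  for (int i = 0; i < NSTREAMS; i++){
    cudaStreamCreate(&stream[i]);
    cudaMalloc(&workArea[i], ws);                   // one scratch buffer per stream
    cudaMalloc(&data[i], sizeof(cufftComplex)*nx*batch);
    cudaMemset(data[i], 0, sizeof(cufftComplex)*nx*batch);
  }
  for (int i = 0; i < NSTREAMS; i++){
    cufftSetStream(plan, stream[i]);                // stream for this use of the plan
    cufftSetWorkArea(plan, workArea[i]);            // scratch space for this use of the plan
    cufftExecC2C(plan, data[i], data[i], CUFFT_FORWARD);
  }
  cudaDeviceSynchronize();
  return 0;
}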
If I use cufftSetStream() to place the cuFFT plan in stream 0 for tile 0, and tile 0's FFT has not actually been executed on the GPU yet by the time the host sets the shared plan's stream to stream 1 for tile 1 and issues tile 1's work on the GPU, what is the behavior of cufftExec() for this plan?
Let me pretend you said stream 1 and stream 2, so that we can avoid any possible confusion around the NULL stream.
CUFFT should respect the stream that was defined for the plan at the time the plan was passed to CUFFT via cufftExecXXX(). Subsequent changes to the plan via cufftSetStream() should not affect the stream used by previously issued cufftExecXXX() calls.
We can verify this with a fairly simple test, using the profiler. Consider the following test code:
$ cat t1089.cu
// NOTE: this code omits independent work-area handling for each plan
// which is necessary for a plan that will be shared between streams
// and executed concurrently
#include <cufft.h>
#include <assert.h>
#include <nvToolsExt.h>
#define DSIZE 1048576
#define BATCH 100
int main(){
const int nx = DSIZE;
const int nb = BATCH;
size_t ws = 0;
cufftHandle plan;
cufftResult res = cufftCreate(&plan);
assert(res == CUFFT_SUCCESS);
res = cufftMakePlan1d(plan, nx, CUFFT_C2C, nb, &ws);
assert(res == CUFFT_SUCCESS);
cufftComplex *d;
cudaMalloc(&d, nx*nb*sizeof(cufftComplex));
cudaMemset(d, 0, nx*nb*sizeof(cufftComplex));
cudaStream_t s1, s2;
cudaStreamCreate(&s1);
cudaStreamCreate(&s2);
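// associate the shared plan with stream s1 before issuing the first FFT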
res = cufftSetStream(plan, s1);
assert(res == CUFFT_SUCCESS);
res = cufftExecC2C(plan, d, d, CUFFT_FORWARD);
assert(res == CUFFT_SUCCESS);
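// re-associate the plan with stream s2; as argued above, this should not affect the FFT already issued into s1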
res = cufftSetStream(plan, s2);
assert(res == CUFFT_SUCCESS);
nvtxMarkA("plan stream change");
res = cufftExecC2C(plan, d, d, CUFFT_FORWARD);
assert(res == CUFFT_SUCCESS);
cudaDeviceSynchronize();
return 0;
}
$ nvcc -o t1089 t1089.cu -lcufft -lnvToolsExt
$ cuda-memcheck ./t1089
========= CUDA-MEMCHECK
========= ERROR SUMMARY: 0 errors
$
We're simply doing two forward FFTs in a row, switching streams in between the two. We use an nvtx marker to clearly identify the point at which the request to change the plan's stream association occurs. Now let's look at the nvprof --print-api-trace output (with the lengthy start-up preamble removed):
983.84ms 617.00us cudaMalloc
984.46ms 21.628us cudaMemset
984.48ms 37.546us cudaStreamCreate
984.52ms 121.34us cudaStreamCreate
984.65ms 995ns cudaPeekAtLastError
984.67ms 996ns cudaConfigureCall
984.67ms 517ns cudaSetupArgument
984.67ms 21.908us cudaLaunch (void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [416])
984.69ms 349ns cudaGetLastError
984.69ms 203ns cudaPeekAtLastError
984.70ms 296ns cudaConfigureCall
984.70ms 216ns cudaSetupArgument
984.70ms 8.8920us cudaLaunch (void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [421])
984.71ms 272ns cudaGetLastError
984.71ms 177ns cudaPeekAtLastError
984.72ms 314ns cudaConfigureCall
984.72ms 229ns cudaSetupArgument
984.72ms 9.9230us cudaLaunch (void spRadix0256B::kernel3Mem<unsigned int, float, fftDirection_t=-1, unsigned int=16, unsigned int=2, L1, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix3_t, unsigned int, float>) [426])
984.73ms 295ns cudaGetLastError
984.77ms - [Marker] plan stream change
984.77ms 434ns cudaPeekAtLastError
984.78ms 357ns cudaConfigureCall
984.78ms 228ns cudaSetupArgument
984.78ms 10.642us cudaLaunch (void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [431])
984.79ms 287ns cudaGetLastError
984.79ms 193ns cudaPeekAtLastError
984.80ms 293ns cudaConfigureCall
984.80ms 208ns cudaSetupArgument
984.80ms 7.7620us cudaLaunch (void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [436])
984.81ms 297ns cudaGetLastError
984.81ms 178ns cudaPeekAtLastError
984.81ms 269ns cudaConfigureCall
984.81ms 214ns cudaSetupArgument
984.81ms 7.4130us cudaLaunch (void spRadix0256B::kernel3Mem<unsigned int, float, fftDirection_t=-1, unsigned int=16, unsigned int=2, L1, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix3_t, unsigned int, float>) [441])
984.82ms 312ns cudaGetLastError
984.82ms 152.63ms cudaDeviceSynchronize
$
We see that each FFT operation requires 3 kernel calls. In between, we can see the nvtx marker indicating when the change of the plan's stream was requested, and it's no surprise that this occurs after the first 3 kernel launches but before the last 3. Finally, we note that essentially all of the execution time is absorbed in the final cudaDeviceSynchronize() call. All of the preceding calls are asynchronous, so they execute more or less "immediately" within the first millisecond of execution. The final synchronize absorbs the processing time of all 6 kernels, amounting to roughly 150 milliseconds in total. So if cufftSetStream were to have an effect on the first iteration of the cufftExecC2C() call, we would expect to see some or all of the first 3 kernels launched into the same stream as that used for the last 3 kernels. But when we look at the nvprof --print-gpu-trace output:
$ nvprof --print-gpu-trace ./t1089
==3757== NVPROF is profiling process 3757, command: ./t1089
==3757== Profiling application: ./t1089
==3757== Profiling result:
Start Duration Grid Size Block Size Regs* SSMem* DSMem* Size Throughput Device Context Stream Name
974.74ms 7.3440ms - - - - - 800.00MB 106.38GB/s Quadro 5000 (0) 1 7 [CUDA memset]
982.09ms 23.424ms (25600 2 1) (32 8 1) 32 8.0000KB 0B - - Quadro 5000 (0) 1 13 void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [416]
1.00551s 21.172ms (25600 2 1) (32 8 1) 32 8.0000KB 0B - - Quadro 5000 (0) 1 13 void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [421]
1.02669s 27.551ms (25600 1 1) (16 16 1) 61 17.000KB 0B - - Quadro 5000 (0) 1 13 void spRadix0256B::kernel3Mem<unsigned int, float, fftDirection_t=-1, unsigned int=16, unsigned int=2, L1, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix3_t, unsigned int, float>) [426]
1.05422s 23.592ms (25600 2 1) (32 8 1) 32 8.0000KB 0B - - Quadro 5000 (0) 1 14 void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [431]
1.07781s 21.157ms (25600 2 1) (32 8 1) 32 8.0000KB 0B - - Quadro 5000 (0) 1 14 void spRadix0064B::kernel1Mem<unsigned int, float, fftDirection_t=-1, unsigned int=32, unsigned int=4, CONSTANT, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix1_t, unsigned int, float>) [436]
1.09897s 27.913ms (25600 1 1) (16 16 1) 61 17.000KB 0B - - Quadro 5000 (0) 1 14 void spRadix0256B::kernel3Mem<unsigned int, float, fftDirection_t=-1, unsigned int=16, unsigned int=2, L1, ALL, WRITEBACK>(kernel_parameters_t<fft_mem_radix3_t, unsigned int, float>) [441]
Regs: Number of registers used per CUDA thread. This number includes registers used internally by the CUDA driver and/or tools and can be more than what the compiler shows.
SSMem: Static shared memory allocated per CUDA block.
DSMem: Dynamic shared memory allocated per CUDA block.
$
We see that indeed the first 3 kernels are issued into the first stream and the last 3 kernels are issued into the second stream, just as requested (and the total execution time of all kernels is approximately 150 ms, just as the API trace output suggested). Since the underlying kernel launches are asynchronous and are issued before the cufftExecC2C() call returns, if you think about this carefully you'll come to the conclusion that it has to be this way: the stream a kernel will be launched into is specified at kernel launch time. (And of course, I believe this is considered the "preferred" behavior.)
Related question on Stack Overflow (c - CUDA cuFFT API behavior in concurrent streams): https://stackoverflow.com/questions/35488348/