This question already has an answer here:
CUDA Matrix Multiplication write to wrong memory location
(1 answer)
Closed 4 months ago.
I have read several websites and even used NVIDIA's code as a guide, but I am still getting the wrong answer. main asks the user for the size, then displays A and B, and then displays the resulting matrix C. Say I run a 2x2 matrix for both A and B; this is my sample output:
Matrix A
0.000000 8.000000
2.000000 2.000000
Matrix B
3.000000 1.000000
5.000000 7.000000
Matrix C (Results)
0.000000 9.000000
7.000000 4.000000
But this is incorrect. It should be:
40.000 56.000
16.000 16.000
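(Checking by hand with the usual row-times-column sums: C[0][0] = 0*3 + 8*5 = 40, C[0][1] = 0*1 + 8*7 = 56, C[1][0] = 2*3 + 2*5 = 16, C[1][1] = 2*1 + 2*7 = 16.)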
I changed it from decimals to integers so it would be easier to check, and that is how I found it is incorrect. I do not understand why it is wrong, especially since I took it straight from their code sample.
#ifndef _MATRIXMUL_KERNEL_H_
#define _MATRIXMUL_KERNEL_H_

#include <stdio.h>

// Thread block size
#define BLOCK_SIZE 16
#define TILE_SIZE  16

// CUDA Kernel
__global__ void matrixMul(float* C, float* A, float* B, int wA, int wB)
{
    // Block index
    int bx = blockIdx.x;
    int by = blockIdx.y;

    // Thread index
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    // Index of the first sub-matrix of A processed by the block
    int aBegin = wA * BLOCK_SIZE * by;

    // Index of the last sub-matrix of A processed by the block
    int aEnd = aBegin + wA - 1;

    // Step size used to iterate through the sub-matrices of A
    int aStep = BLOCK_SIZE;

    // Index of the first sub-matrix of B processed by the block
    int bBegin = BLOCK_SIZE * bx;

    // Step size used to iterate through the sub-matrices of B
    int bStep = BLOCK_SIZE * wB;

    float Csub = 0;

    // Loop over all the sub-matrices of A and B
    // required to compute the block sub-matrix
    for (int a = aBegin, b = bBegin; a <= aEnd; a += aStep, b += bStep)
    {
        // Declaration of the shared memory array As
        // used to store the sub-matrix of A
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];

        // Declaration of the shared memory array Bs
        // used to store the sub-matrix of B
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

        // Load the matrices from global memory to shared memory;
        // each thread loads one element of each matrix
        As[ty][tx] = A[a + wA * ty + tx];
        Bs[ty][tx] = B[b + wB * ty + tx];

        // Synchronize to make sure the matrices are loaded
        __syncthreads();

        // Multiply the two matrices together;
        // each thread computes one element of the block sub-matrix
        for (int k = 0; k < BLOCK_SIZE; ++k)
            Csub += As[ty][k] * Bs[k][tx];

        // Synchronize to make sure that the preceding computation is done
        // before loading two new sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write the block sub-matrix to device memory;
    // each thread writes one element
    int c = wB * BLOCK_SIZE * by + BLOCK_SIZE * bx;
    C[c + wB * ty + tx] = Csub;
}

#endif // #ifndef _MATRIXMUL_KERNEL_H_
Host code:
// Perform the calculation
// Set up execution parameters
dim3 threads(BLOCK_SIZE, BLOCK_SIZE);
dim3 grid(c.colSize / threads.x, c.rowSize / threads.y);

// Execute the kernel
matrixMul<<< grid, threads >>>(deviceMatrixC, deviceMatrixA, deviceMatrixB, a.colSize, b.colSize);
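For concreteness, here is what that launch configuration evaluates to for the 2x2 case; note that integer division in C truncates toward zero:

// BLOCK_SIZE = 16, c.rowSize = c.colSize = 2
// threads = dim3(16, 16)            -> 256 threads per block
// grid    = dim3(2 / 16, 2 / 16)    -> dim3(0, 0)
// A zero grid dimension is an invalid launch configuration, so the
// kernel never runs and C keeps whatever was already in that memory.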
Thanks for your help,
Dan
Best Answer
The code you are using implicitly requires that the matrix dimensions be integer multiples of the block size (16x16 in this case). The inner-product computation consumes one full tile width at a time without checking for out-of-bounds memory accesses, so a 2x2 matrix will not work.
If you try running the kernel with a 16x16 input (for example, zero-padding the 2x2 case out to 16x16), you should be able to confirm that the result is correct.
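If you want the kernel to accept sizes that are not multiples of BLOCK_SIZE without padding, the usual alternative is to round the grid up and add bounds checks. The sketch below is only an illustration of that idea, not the code from the question: it adds an hA (height of A) parameter that the original kernel does not take, and it assumes the matrix struct exposes a rowSize field for A alongside the colSize fields used above.

__global__ void matrixMulBounded(float* C, float* A, float* B,
                                 int hA, int wA, int wB)
{
    // Global row/column this thread is responsible for in C
    int row = blockIdx.y * BLOCK_SIZE + threadIdx.y;
    int col = blockIdx.x * BLOCK_SIZE + threadIdx.x;

    __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];
    __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

    float Csub = 0.0f;

    // Walk the tiles along the shared dimension wA, rounding up
    for (int t = 0; t < (wA + BLOCK_SIZE - 1) / BLOCK_SIZE; ++t)
    {
        int aCol = t * BLOCK_SIZE + threadIdx.x;
        int bRow = t * BLOCK_SIZE + threadIdx.y;

        // Out-of-range elements are loaded as zero so they do not
        // contribute to the dot product
        As[threadIdx.y][threadIdx.x] = (row < hA && aCol < wA) ? A[row * wA + aCol] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] = (bRow < wA && col < wB) ? B[bRow * wB + col] : 0.0f;
        __syncthreads();

        for (int k = 0; k < BLOCK_SIZE; ++k)
            Csub += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }

    // Only threads that map to a real element of C write a result
    if (row < hA && col < wB)
        C[row * wB + col] = Csub;
}

The launch would then round the grid up instead of truncating:

dim3 threads(BLOCK_SIZE, BLOCK_SIZE);
dim3 grid((c.colSize + threads.x - 1) / threads.x,
          (c.rowSize + threads.y - 1) / threads.y);
matrixMulBounded<<< grid, threads >>>(deviceMatrixC, deviceMatrixA, deviceMatrixB,
                                      a.rowSize, a.colSize, b.colSize);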
A similar question about C matrix multiplication in CUDA can be found on Stack Overflow: https://stackoverflow.com/questions/8813750/