I created a DataLoader that looks like this:

import torch
from torch.utils.data import Dataset

class ToTensor(object):
    def __call__(self, sample):
        return torch.from_numpy(sample).to(device)

class MyDataset(Dataset):
    def __init__(self, data, transform=None):
        self.data = data
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        sample = self.data[idx, :]

        if self.transform:
            sample = self.transform(sample)

        return sample


I am using this data loader like this:

dataset = MyDataset(
        data=data,
        transform=transforms.Compose([
            ToTensor()
        ]))
dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)
dataiter = iter(dataloader)
x = next(dataiter)


It fails with the message:

THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp line=55 error=3 : initialization error
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp line=55 error=3 : initialization error
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp line=55 error=3 : initialization error
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp line=55 error=3 : initialization error
...
    torch._C._cuda_init()
RuntimeError: cuda runtime error (3) : initialization error at /opt/conda/conda-bld/pytorch_1549628766161/work/aten/src/THC/THCGeneral.cpp:55


The failure occurs at the ToTensor() call in the return statement; in fact, any attempt to move a tensor to the GPU there fails. I tried:

a = np.array([[[1, 2, 3, 4], [5, 6, 7, 8], [25, 26, 27, 28]],
             [[11, 12, np.nan, 14], [15, 16, 17, 18], [35, 36, 37, 38]]])
print(torch.from_numpy(a).to(device))


Inside the body of ToTensor()'s __call__, this fails with the same message, while everywhere else it succeeds.

Why is this error produced, and how can I fix it?

Best answer

According to this link, the problem may be related to multiprocessing: with num_workers=4 the DataLoader forks worker processes, and CUDA cannot be initialized in a forked subprocess. You can find the following workaround.
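A minimal sketch of that kind of workaround (my own illustration, not the code from the linked answer): have the transform return a plain CPU tensor, so the worker processes never touch CUDA, and move each collated batch to the GPU in the main process instead.

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class ToTensor(object):
    def __call__(self, sample):
        # Return a CPU tensor; worker processes must never initialize CUDA.
        return torch.from_numpy(sample)

class MyDataset(Dataset):
    def __init__(self, data, transform=None):
        self.data = data
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        sample = self.data[idx, :]
        if self.transform:
            sample = self.transform(sample)
        return sample

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
data = np.arange(32.0).reshape(8, 4)  # toy stand-in for the real data
# num_workers=4 is also safe with this pattern, since workers only build CPU tensors;
# 0 is used here just to keep the sketch self-contained.
loader = DataLoader(MyDataset(data, transform=ToTensor()), batch_size=4, num_workers=0)

for batch in loader:
    batch = batch.to(device)  # move to the GPU in the main process, after collation
```

The key change is that `.to(device)` happens once per batch in the parent process, not once per sample inside `__getitem__`.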

A similar question, "python - Correctly converting a NumPy array to a PyTorch tensor running on the GPU", can be found on Stack Overflow: https://stackoverflow.com/questions/54773293/
