Problem description
I failed with the third way: t3 is still on the CPU, and I have no idea why.
import numpy as np
import torch

a = np.random.randn(1, 1, 2, 3)
# Way 1: create on the CPU, then move with .to(device)
t1 = torch.tensor(a)
t1 = t1.to(torch.device('cuda'))
# Way 2: create on the CPU, then move with .cuda()
t2 = torch.tensor(a)
t2 = t2.cuda()
# Way 3: create directly on the GPU
t3 = torch.tensor(a, device=torch.device('cuda'))
Recommended answer
All three methods worked for me.
In 1 and 2, you create a tensor on the CPU and then move it to the GPU when you call .to(device) or .cuda(). They are equivalent here.
However, with the .to(device) method you can explicitly tell torch to move to a specific GPU by setting device=torch.device("cuda:<id>"). With .cuda() you have to call .cuda(<id>) to move to a particular GPU.
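For example, a minimal sketch of both calls targeting a specific device (assuming a machine with at least two GPUs so that cuda:1 exists):

import torch

x = torch.randn(2, 3)               # starts on the CPU
x_a = x.to(torch.device("cuda:1"))  # move to GPU 1 via .to(device)
x_b = x.cuda(1)                     # move to GPU 1 via .cuda(id)
print(x_a.device, x_b.device)       # both report cuda:1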
So why do both methods exist?
.to(device) was introduced in 0.4 because it is easier to declare a device variable at the top of your code as
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
and then use .to(device) everywhere. This makes it quite easy to switch from CPU to GPU and vice versa.
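A minimal sketch of that pattern (the nn.Linear model and the tensor here are illustrative, not part of the original question):

import torch
import torch.nn as nn

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model = nn.Linear(3, 1).to(device)  # parameters end up on whichever device was picked
x = torch.randn(4, 3).to(device)    # the same single call works for tensors
print(model(x).device)              # cuda:0 on a GPU machine, cpu otherwise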
Before this, we had to use .cuda(), and your code needed an if check on cuda.is_available() everywhere, which made switching between GPU and CPU cumbersome.
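The older style looked roughly like this (an illustrative sketch, not code from the answer), with the availability check repeated at every place a tensor or module was created:

import torch

x = torch.randn(2, 3)
if torch.cuda.is_available():  # repeated wherever something needed to live on the GPU
    x = x.cuda()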
The third method does not create a tensor on the CPU first; it copies the data directly to the GPU, which is more efficient.
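A quick way to check where each tensor ends up (a small sketch assuming a CUDA-capable machine):

import numpy as np
import torch

a = np.random.randn(1, 1, 2, 3)
t3 = torch.tensor(a, device=torch.device("cuda"))  # data is copied straight to the GPU
print(t3.is_cuda, t3.device)                       # True cuda:0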