This article covers the preferred way to make a copy of a tensor in PyTorch. It should be a useful reference for anyone facing the same question; read on below to learn more.

Problem Description


There seem to be several ways to create a copy of a tensor in PyTorch, including

y = tensor.new_tensor(x) #a

y = x.clone().detach() #b

y = torch.empty_like(x).copy_(x) #c

y = torch.tensor(x) #d

b is explicitly preferred over a and d according to a UserWarning I get if I execute either a or d. Why is it preferred? Performance? I'd argue it's less readable.
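(For context, the warning can be reproduced with a short snippet like the one below; a minimal sketch, assuming a reasonably recent PyTorch release where copy-constructing via torch.tensor() emits a UserWarning.)

import warnings
import torch

x = torch.randn(3)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    y = torch.tensor(x)  # method d: copy-constructing from an existing tensor

# PyTorch warns that .clone().detach() is the recommended way
print(caught[0].category.__name__)  # UserWarning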

Any reasons for/against using c?

Solution

TL;DR

Use .clone().detach() (or preferably .detach().clone()),

as it's slightly faster and explicit about what it does. If you detach the tensor first and then clone it, the computation path is not copied; the other way around, it is copied and then abandoned. Thus, .detach().clone() is very slightly more efficient.
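To make the semantics concrete, here is a minimal sketch showing that the detached clone shares neither memory nor autograd history with the source (variable names are illustrative):

import torch

x = torch.ones(3, requires_grad=True)
y = x.detach().clone()   # fresh storage, no autograd history

y += 1                   # in-place change to the copy...
print(x)                 # ...leaves x untouched: tensor([1., 1., 1.], requires_grad=True)
print(y.requires_grad)   # False: y is detached from the graph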


Using perfplot, I plotted the timing of various methods to copy a PyTorch tensor.

y = tensor.new_tensor(x) # method a

y = x.clone().detach() # method b

y = torch.empty_like(x).copy_(x) # method c

y = torch.tensor(x) # method d

y = x.detach().clone() # method e

The x-axis shows the size of the tensor created and the y-axis shows the time. The graph is on a linear scale. As you can clearly see, tensor() and new_tensor() take more time than the other three methods.

Note: Over multiple runs, I noticed that any of b, c, and e can have the lowest time. The same is true for a and d. However, methods b, c, and e consistently run faster than a and d.

import torch
import perfplot

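# Plot copy time vs. tensor length for each of the five methods (a-e above)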
perfplot.show(
    setup=lambda n: torch.randn(n),
    kernels=[
        lambda a: a.new_tensor(a),
        lambda a: a.clone().detach(),
        lambda a: torch.empty_like(a).copy_(a),
        lambda a: torch.tensor(a),
        lambda a: a.detach().clone(),
    ],
    labels=["new_tensor()", "clone().detach()", "empty_like().copy()", "tensor()", "detach().clone()"],
    n_range=[2 ** k for k in range(15)],
    xlabel="len(a)",
    logx=False,
    logy=False,
    title='Timing comparison for copying a pytorch tensor',
)
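Regarding the question of using method c: beyond timing, there is a semantic difference worth knowing. Because copy_() is differentiable, torch.empty_like(x).copy_(x) keeps the copy attached to the autograd graph when the source requires gradients, whereas .detach().clone() does not. A minimal sketch (assuming standard autograd semantics; names are illustrative):

import torch

x = torch.randn(4, requires_grad=True)

b = x.detach().clone()            # detached copy: no autograd history
c = torch.empty_like(x).copy_(x)  # copy_ records the op, so c stays in the graph

print(b.requires_grad)  # False
print(c.requires_grad)  # True: gradients can still flow back to x through c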

That concludes this article on the preferred way to copy a PyTorch tensor. We hope the answer above helps, and thank you for your support!
