Question
I'm training a model in PyTorch and I want to use a truncated SVD decomposition of the input. To calculate the SVD, I transfer the input, which is a PyTorch CUDA tensor, to the CPU and use TruncatedSVD from scikit-learn to perform the truncation; after that, I transfer the result back to the GPU. The following is the code for my model:
import torch
import torch.nn as nn
from sklearn.decomposition import TruncatedSVD

class ImgEmb(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(ImgEmb, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.drop = nn.Dropout(0.2)
        # integer division: n_components and Linear sizes must be ints
        self.mlp = nn.Linear(input_size // 2, hidden_size)
        self.relu = nn.Tanh()
        self.svd = TruncatedSVD(n_components=input_size // 2)

    def forward(self, input):
        # fit_transform runs on the CPU and returns a NumPy array
        svd = self.svd.fit_transform(input.cpu())
        svd_tensor = torch.from_numpy(svd)
        svd_tensor = svd_tensor.cuda()
        mlp = self.mlp(svd_tensor)
        res = self.relu(mlp)
        return res
I wonder if there is a way to implement truncated SVD without transferring back and forth between the GPU and CPU? (Because it's very time-consuming and not efficient at all.)
Answer
You could directly use PyTorch's SVD and truncate it manually, or you can use the truncated SVD from TensorLy, with the PyTorch backend:
import tensorly as tl
tl.set_backend('pytorch')
U, S, V = tl.truncated_svd(matrix, n_eigenvecs=10)
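The first option (PyTorch's own SVD, truncated manually) can be sketched as follows. This is a minimal illustration, assuming torch.linalg.svd is available (PyTorch 1.8+); the helper names truncated_svd and svd_transform are my own, not part of any library. The computation stays on whatever device the tensor lives on, so a CUDA tensor never leaves the GPU:

```python
import torch

def truncated_svd(matrix: torch.Tensor, k: int):
    # Thin SVD (full_matrices=False) is cheaper for tall matrices;
    # keep only the top-k singular triplets.
    U, S, Vh = torch.linalg.svd(matrix, full_matrices=False)
    return U[:, :k], S[:k], Vh[:k, :]

def svd_transform(matrix: torch.Tensor, k: int) -> torch.Tensor:
    # Rough analogue of sklearn's fit_transform: the projection of the
    # rows onto the top-k right singular vectors equals U_k * S_k.
    U, S, Vh = truncated_svd(matrix, k)
    return U * S  # broadcasting scales each column of U by its singular value
```

In the model above, svd_transform(input, input.shape[1] // 2) could then replace the CPU round-trip, at the cost of computing the full SVD on the GPU.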
However, the GPU SVD does not scale very well to large matrices. You can also use TensorLy's partial SVD, which will still copy your input to the CPU, but it will be much faster if you keep only a few eigenvalues, since it uses a sparse eigendecomposition. In scikit-learn's truncated SVD, you can also pass algorithm='arpack' to use SciPy's sparse SVD, which again might be faster if you only need a few components.
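The scikit-learn route mentioned above looks like this; the matrix shape and component count here are arbitrary illustrations, not values from the question:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Stand-in for input.cpu().numpy(); ARPACK is fast when
# n_components << min(X.shape).
X_cpu = np.random.rand(100, 64)
svd = TruncatedSVD(n_components=10, algorithm='arpack')
reduced = svd.fit_transform(X_cpu)  # shape (100, 10)
```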