Problem Description
Articles about deconvolution often say to use the transpose of the weights when upsampling, but in the few TensorFlow examples I can find, this is not the case. Does the transpose happen internally? Which of the following is correct?
tf.nn.conv2d_transpose(matrix, tf.transpose(W1, [1, 0, 2, 3]), ...)
tf.nn.conv2d_transpose(matrix, W1, ...)
Recommended Answer
You don't need to transpose the weights; it's just a naming convention. You can see why they named it the way they did here. The short summary is that it doesn't perform deconvolution; instead it performs a fractionally strided convolution.
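To see where the "transpose" in the name comes from: per the TensorFlow documentation, conv2d_transpose computes the transpose (gradient) of conv2d with respect to its input, not an actual deconvolution. A minimal sketch that checks this numerically (TF2 style; all shapes are chosen purely for illustration):

import tensorflow as tf

x = tf.random.normal([1, 8, 8, 3])    # input batch
W = tf.random.normal([3, 3, 3, 16])   # conv2d filter: [h, w, in_channels, out_channels]
g = tf.random.normal([1, 4, 4, 16])   # upstream values, same shape as conv2d's output

# Gradient of a strided conv2d with respect to its input ...
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.nn.conv2d(x, W, strides=[1, 2, 2, 1], padding='SAME')
    loss = tf.reduce_sum(y * g)
grad_x = tape.gradient(loss, x)

# ... matches conv2d_transpose applied to g with the *untransposed* filter.
up = tf.nn.conv2d_transpose(g, W, output_shape=[1, 8, 8, 3],
                            strides=[1, 2, 2, 1], padding='SAME')

print(tf.reduce_max(tf.abs(grad_x - up)).numpy())  # ~0.0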
Also, to answer your question directly: the second one is correct.
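Concretely, in a convolutional autoencoder you can pass the same filter variable to both ops without any tf.transpose, because conv2d expects the filter as [h, w, in_channels, out_channels] while conv2d_transpose reads that same layout as [h, w, output_channels, in_channels]. A minimal sketch of tied encoder/decoder weights (layer sizes and names are illustrative, not from the original question):

import tensorflow as tf

batch, height, width = 8, 28, 28   # illustrative sizes
in_ch, out_ch = 1, 16

x = tf.random.normal([batch, height, width, in_ch])

# One shared filter variable, in conv2d layout: [h, w, in_channels, out_channels].
W1 = tf.Variable(tf.random.normal([3, 3, in_ch, out_ch], stddev=0.1))

# Encoder: ordinary strided convolution, 1 -> 16 channels.
encoded = tf.nn.conv2d(x, W1, strides=[1, 2, 2, 1], padding='SAME')

# Decoder: pass W1 unchanged. conv2d_transpose reads the filter as
# [h, w, output_channels, in_channels], so the same tensor maps 16 -> 1.
decoded = tf.nn.conv2d_transpose(
    encoded, W1,
    output_shape=[batch, height, width, in_ch],
    strides=[1, 2, 2, 1],
    padding='SAME')

print(encoded.shape, decoded.shape)  # (8, 14, 14, 16) (8, 28, 28, 1)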