Problem Description
Going through this book, I have become familiar with the following:
For each training instance, the backpropagation algorithm first makes a prediction (forward pass) and measures the error, then goes through each layer in reverse to measure the error contribution from each connection (reverse pass), and finally slightly tweaks the connection weights to reduce the error.
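To make those stages concrete, here is a minimal NumPy sketch of a single backpropagation step for a tiny two-layer network. The shapes, the tanh activation, and the squared-error loss are all illustrative assumptions, not anything specified in the quote above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))          # one training instance
y = np.array([[1.0]])                # its target
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))
lr = 0.1

# Forward pass: make a prediction.
h = np.tanh(x @ W1)
y_pred = h @ W2

# Measure the error (squared error here).
err = y_pred - y

# Reverse pass: measure each connection's error contribution.
grad_W2 = h.T @ (2 * err)
grad_h = (2 * err) @ W2.T
grad_W1 = x.T @ (grad_h * (1 - h**2))   # tanh'(z) = 1 - tanh(z)^2

# Slightly tweak the weights to reduce the error.
W1 -= lr * grad_W1
W2 -= lr * grad_W2
```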
However, I am not sure how this differs from TensorFlow's reverse-mode autodiff implementation.
As far as I know, reverse-mode autodiff first goes through the graph in the forward direction and then, in a second pass, computes all partial derivatives of the outputs with respect to the inputs. This is very similar to the backpropagation algorithm described above.
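As a sketch of that two-pass behavior, here is a small TensorFlow 2 example (using `tf.GradientTape`, which is one way TensorFlow exposes reverse-mode autodiff; the function and values are made up for illustration):

```python
import tensorflow as tf

x = tf.Variable([2.0, 3.0])

with tf.GradientTape() as tape:
    # Forward pass: the tape records each operation as it runs.
    y = x[0] * x[1] + tf.sin(x[0])

# One backward pass yields all partial derivatives of the output
# with respect to the inputs: dy/dx0 = x1 + cos(x0), dy/dx1 = x0.
print(tape.gradient(y, x).numpy())
```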
How does backpropagation differ from reverse-mode autodiff?
Recommended Answer
Thanks to David Parks for his answer, with its valid contribution and useful links; however, I have since found an answer to this question from the author of the book himself, which may be more concise:
Backpropagation refers to the whole process of training an artificial neural network using multiple backpropagation steps, each of which computes gradients and uses them to perform a Gradient Descent step. In contrast, reverse-mode autodiff is simply a technique for computing gradients efficiently, and it happens to be used by backpropagation.
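To illustrate that division of labor, here is a hedged TensorFlow 2 sketch (the model, data, and hyperparameters are placeholders): the whole loop is the backpropagation training process, while the `tape.gradient` call is the reverse-mode autodiff technique it happens to use.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
loss_fn = tf.keras.losses.MeanSquaredError()
x = tf.random.normal((8, 3))
y = tf.random.normal((8, 1))

# Backpropagation = the whole training process below.
for step in range(5):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x))                          # forward pass + error
    grads = tape.gradient(loss, model.trainable_variables)   # reverse-mode autodiff
    opt.apply_gradients(zip(grads, model.trainable_variables))  # gradient descent step
```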