Question
After going through the Caffe tutorial here: http://caffe.berkeleyvision.org/gathered/examples/mnist.html
I am really confused about the different (and efficient) model used in this tutorial, which is defined here: https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt
As I understand it, a convolutional layer in Caffe simply computes the sum Wx+b for each input, without applying any activation function. If we want to add an activation function, we should add another layer immediately after that convolutional layer, such as a Sigmoid, Tanh, or ReLU layer. Every paper/tutorial I have read on the internet applies an activation function to the neuron units.
This leaves me with a big question mark, since we only see convolutional layers and pooling layers interleaved in the model. I hope someone can give me an explanation.
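For reference, this is how a non-linearity is normally attached in Caffe's prototxt format: a separate layer placed right after the convolution, usually writing its output in-place. The layer/blob names below are illustrative, not taken from the linked model:

```protobuf
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"   # consumes the convolution output blob
  top: "conv1"      # in-place: overwrites the same blob, saving memory
}
```

Because `bottom` and `top` name the same blob, the ReLU is applied in-place and the downstream layers see the activated values.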
As a side note, another doubt of mine is the max_iter in this solver: https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_solver.prototxt
We have 60,000 images for training and 10,000 images for testing. So why is max_iter here only 10,000 (and the model can still reach >99% accuracy)? What does Caffe do in each iteration? Actually, I'm not sure whether the accuracy is computed as total correct predictions / test set size.
I'm very amazed by this example, as I haven't found any other example or framework that can achieve such a high accuracy in so short a time (only 5 minutes to reach >99% accuracy). Hence, I suspect there is something I have misunderstood.
Thanks.
Answer
Caffe uses batch processing. `max_iter` is 10,000 because the `batch_size` is 64. Number of epochs = (`batch_size` × `max_iter`) / number of training samples, so the number of epochs is roughly 10 (more precisely, 64 × 10,000 / 60,000 ≈ 10.7). The accuracy is calculated on the test data. And yes, the accuracy of the model is indeed >99%, as the dataset is not very complicated.
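The epoch arithmetic above can be checked in a few lines (the batch size and iteration count come from the linked lenet_train_test.prototxt and lenet_solver.prototxt):

```python
batch_size = 64      # TRAIN-phase batch size in lenet_train_test.prototxt
max_iter = 10_000    # max_iter in lenet_solver.prototxt
num_train = 60_000   # MNIST training images

# Each iteration processes one batch of 64 images, so:
images_seen = batch_size * max_iter          # 640,000 images processed
epochs = images_seen / num_train
print(round(epochs, 2))                      # ~10.67 passes over the training set
```

So even though max_iter (10,000) is smaller than the training-set size, the network still sees every training image about ten times.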