I'm trying to create my own custom Python layer to compute the network's accuracy (for phase: TEST).

My question: should it still have all four of these methods:


setup - initializes the layer using parameters obtained from the layer variables
forward - computes the layer's output from its input
backward - given the prediction and the gradients of the next layer, computes the gradients of the previous layer
reshape - reshapes the blobs as needed


If so, why? I only want to use it in the TEST phase to compute accuracy, not during learning (forward and backward seem intended for training).

Thanks, everyone!

Best answer

Although I'm not sure whether Caffe throws an error when not all four methods are defined, you definitely need setup and forward:


setup: exactly what you said. For example, in my accuracy layers I usually save some metrics for the whole test set (true/false positives/negatives, F-score) plus the per-sample softmax probabilities, in case I want to merge/ensemble different networks/methods later. This is where I open the file I will write that information to.
forward: here you compute the accuracy yourself, comparing the prediction for each sample in the batch against its label. Typically this layer has two bottoms: the labels (the ground truth, usually provided by a data/input layer) and a layer that outputs the prediction/score/probability of each class for each sample in the batch (I usually use a Softmax layer);
reshape and backward: there is little to do here. This layer does not back-propagate, and reshaping amounts to sizing the top blob; nothing input-size-sensitive needs rebuilding.


Here is an example of an accuracy layer:

# Remark: This class is designed for a binary problem with classes '0' and '1'
# Saving this file as accuracyLayer.py

import caffe
TRAIN = 0
TEST = 1

class Accuracy_Layer(caffe.Layer):
    #Setup method
    def setup(self, bottom, top):
        #We want two bottom blobs, the labels and the predictions
        if len(bottom) != 2:
            raise Exception("Wrong number of bottom blobs (prediction and label)")

        #Initialize some attributes
        self.correctPredictions = 0.0
        self.totalImgs = 0

    #Forward method
    def forward(self, bottom, top):
        #The order of these depends on the prototxt definition
        predictions = bottom[0].data
        labels = bottom[1].data

        self.totalImgs += len(labels)

        for i in range(len(labels)):  # len(labels) equals the batch size
            pred = predictions[i]  # pred holds the normalized probabilities
                                   # of sample i w.r.t. the two classes
            lab = labels[i]

            if pred[0] > pred[1]:  # predicted as class 0
                if lab == 0.0:
                    self.correctPredictions += 1.0
            else:                  # predicted as class 1
                if lab == 1.0:
                    self.correctPredictions += 1.0

        acc = self.correctPredictions / self.totalImgs

        # Output the accuracy to the top blob
        top[0].data[...] = acc

    def reshape(self, bottom, top):
        # Nothing input-size sensitive to rebuild here; the top blob
        # just needs to hold a single scalar (the accuracy so far)
        top[0].reshape(1)

    def backward(self, top, propagate_down, bottom):
        # This layer does not back-propagate
        pass
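The per-sample comparison in forward above is hard-wired to two classes. As a hedged sketch (not part of the layer above; `batch_accuracy` is an illustrative name), the same count generalizes to any number of classes with numpy's argmax:

```python
import numpy as np

def batch_accuracy(predictions, labels):
    """Fraction of samples whose highest-scoring class matches the label.

    predictions: (batch, num_classes) array of scores/probabilities.
    labels: (batch,) array of class labels (floats, as Caffe delivers them).
    """
    predicted_classes = np.argmax(predictions, axis=1)
    return float(np.mean(predicted_classes == labels.astype(int)))

# Example: 3 of 4 samples predicted correctly
preds = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.3, 0.7]])
labs = np.array([0.0, 1.0, 1.0, 1.0])
print(batch_accuracy(preds, labs))  # 0.75
```

Dropping this into forward in place of the explicit pred[0]/pred[1] comparison would make the layer class-count agnostic, at the cost of ignoring ties differently than the original if-else.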


And here is how to define it in the prototxt. This is how you tell Caffe that the layer should only exist in the TEST phase:

layer {
  name: "metrics"
  type: "Python"
  top: "Acc"    # one top blob, matching the single top the layer above reshapes

  bottom: "prediction"   #let's suppose we have these two bottom blobs
  bottom: "label"

  python_param {
    module: "accuracyLayer"
    layer: "Accuracy_Layer"
  }
  include {
    phase: TEST    #This will ensure it will only be executed in TEST phase
  }
}
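For the prototxt above to load, Caffe also has to be built with Python layer support, and accuracyLayer.py must be importable at run time. A typical setup looks like the following (the directory path is an assumption; substitute wherever you saved the file):

```shell
# In Makefile.config, uncomment this line before building Caffe:
# WITH_PYTHON_LAYER := 1

# Make accuracyLayer.py importable when Caffe runs
# (the path below is a placeholder for wherever the file lives):
export PYTHONPATH=/path/to/layer/dir:$PYTHONPATH
```

Without WITH_PYTHON_LAYER, Caffe will not recognize the "Python" layer type; without the PYTHONPATH entry, it will fail to import the module named in python_param.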


By the way, I've written a gist that may be the more elaborate example of an accuracy Python layer you are looking for.

10-07 21:52