Question
I am trying to create a custom macro for recall = (recall of class1 + recall of class2)/2. I came up with the following code, but I am not sure how to calculate the true positives of class 0.
from keras import backend as K

def unweightedRecall():
    def recall(y_true, y_pred):
        # recall of class 1
        true_positives1 = K.sum(K.round(K.clip(y_pred * y_true, 0, 1)))
        possible_positives1 = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall1 = true_positives1 / (possible_positives1 + K.epsilon())

        # --- get true positives of class 0 in true_positives0 here ---
        # Also, is there a cleaner way to get possible_positives0?
        possible_positives0 = K.int_shape(y_true)[0] - possible_positives1
        recall0 = true_positives0 / (possible_positives0 + K.epsilon())
        return (recall0 + recall1) / 2
    return recall
It seems I will have to use keras.backend.equal(x, y), but how do I create a tensor with shape K.int_shape(y_true)[0] and all values set to, say, x?
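One option for a constant tensor of a given shape is K.ones_like(y_true) * x, but for binary 0/1 labels the class-0 counts can be obtained more simply by inverting both the labels and the predictions. A numpy sketch of that logic (illustrative data, not from the question):

```python
import numpy as np

y_true = np.array([1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0])  # assume predictions already rounded to 0/1

# class-1 true positives: prediction and label are both 1
tp1 = np.sum(y_pred * y_true)
# class-0 true positives: invert both arrays, then count matches
tp0 = np.sum((1 - y_pred) * (1 - y_true))
# class-0 support: number of samples whose label is 0
possible_positives0 = np.sum(1 - y_true)
recall0 = tp0 / possible_positives0
```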
Edit 1
Based on Marcin's comments, I wanted to create a custom metric based on a callback in Keras. While browsing issues in Keras, I came across the following code for an f1 metric:
class Metrics(keras.callbacks.Callback):
    def on_epoch_end(self, batch, logs={}):
        predict = np.asarray(self.model.predict(self.validation_data[0]))
        targ = self.validation_data[1]
        self.f1s = f1(targ, predict)
        return

metrics = Metrics()
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=[X_test, y_test], verbose=1, callbacks=[metrics])
But how does the callback return the accuracy? I wanted to implement unweighted recall = (recall class1 + recall class2)/2. I can think of the following code but would appreciate any help completing it:
from sklearn.metrics import recall_score

class Metrics(keras.callbacks.Callback):
    def on_epoch_end(self, batch, logs={}):
        predict = np.asarray(self.model.predict(self.validation_data[0]))
        targ = self.validation_data[1]
        # --- what to store the result in? ---
        self.XXXX = recall_score(targ, predict, average='macro')
        # we really don't need to return anything?
        return

metrics = Metrics()
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=[X_test, y_test], verbose=1, callbacks=[metrics])
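One way to finish this is to append each epoch's score to a list attribute (a name like self.val_recalls is illustrative, not a Keras convention); the callback does not need to return anything. The metric itself is just sklearn's macro-averaged recall, and a quick numpy check confirms that average='macro' equals (recall class0 + recall class1)/2 for binary labels:

```python
import numpy as np
from sklearn.metrics import recall_score

targ    = np.array([1, 0, 0, 1])
predict = np.array([1, 0, 1, 1])  # already thresholded to 0/1

recall1 = np.sum(predict * targ) / np.sum(targ)                  # class-1 recall
recall0 = np.sum((1 - predict) * (1 - targ)) / np.sum(1 - targ)  # class-0 recall
manual = (recall0 + recall1) / 2

macro = recall_score(targ, predict, average='macro')
```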
Edit 2: model:
def createModelHelper(numNeurons=40, optimizer='adam'):
    inputLayer = Input(shape=(data.shape[1],))
    denseLayer1 = Dense(numNeurons)(inputLayer)
    outputLayer = Dense(1, activation='sigmoid')(denseLayer1)
    model = Model(input=inputLayer, output=outputLayer)
    model.compile(loss=unweightedRecall, optimizer=optimizer)
    return model
Recommended Answer
Keras version (with the mean problem).
Are your two classes actually just a one-dimensional output (0 or 1)?

If so:
def recall(y_true, y_pred):
    # recall of class 1
    # do not use "round" here if you're going to use this as a loss function
    true_positives = K.sum(K.round(y_pred) * y_true)
    possible_positives = K.sum(y_true)
    return true_positives / (possible_positives + K.epsilon())

def unweightedRecall(y_true, y_pred):
    return (recall(y_true, y_pred) + recall(1 - y_true, 1 - y_pred)) / 2.
Now, if your two classes are actually a 2-element output:
def unweightedRecall(y_true, y_pred):
    return (recall(y_true[:, 0], y_pred[:, 0]) + recall(y_true[:, 1], y_pred[:, 1])) / 2.
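The column-slicing idea above can be sanity-checked in numpy with a small one-hot example (the data here is made up):

```python
import numpy as np

eps = np.finfo(np.float32).eps

def np_recall(t, p):
    # recall of the positive class for 0/1 arrays
    return np.sum(p * t) / (np.sum(t) + eps)

# one-hot targets and predictions, shape (samples, 2)
y_true = np.array([[0, 1], [1, 0], [1, 0], [0, 1]])
y_pred = np.array([[0, 1], [1, 0], [0, 1], [1, 0]])

# per-column recall, then the unweighted mean over the two classes
uw = (np_recall(y_true[:, 0], y_pred[:, 0]) + np_recall(y_true[:, 1], y_pred[:, 1])) / 2.0
```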
Callback version:

For the callback, you can use a LambdaCallback, and manually print or store the results:
stored_metrics = []

def unweightedRecall(epoch, logs):
    # a LambdaCallback receives plain (epoch, logs) arguments, so there is no
    # `self`; use the validation arrays passed to fit() directly
    predict = model.predict(X_test)
    targ = y_test
    result = (recall(targ, predict) + recall(1 - targ, 1 - predict)) / 2.
    print("recall for epoch " + str(epoch) + ": " + str(result))
    stored_metrics.append(result)

myCallBack = LambdaCallback(on_epoch_end=unweightedRecall)
where recall is a function using np instead of K, with epsilon = np.finfo(float).eps or epsilon = np.finfo(np.float32).eps.
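A minimal numpy version of those two functions might look like this (a sketch assuming the model outputs raw sigmoid probabilities that need rounding, not the answer's exact code):

```python
import numpy as np

epsilon = np.finfo(np.float32).eps

def recall(y_true, y_pred):
    # round raw probabilities to 0/1, then compute recall of the positive class
    y_pred = np.round(y_pred)
    true_positives = np.sum(y_pred * y_true)
    possible_positives = np.sum(y_true)
    return true_positives / (possible_positives + epsilon)

def unweightedRecall(y_true, y_pred):
    # mean of class-1 recall and class-0 recall (labels and predictions inverted)
    return (recall(y_true, y_pred) + recall(1 - y_true, 1 - y_pred)) / 2.0
```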