Problem description
I have a linear regression model that seems to be working fine, but I want to display the accuracy of the model.
First, I initialize the variables and placeholders...
# Imports assumed from the rest of the script (not shown in the original excerpt).
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

X_train, X_test, Y_train, Y_test = train_test_split(
    X_data,
    Y_data,
    test_size=0.2
)

n_rows = X_train.shape[0]

X = tf.placeholder(tf.float32, [None, 89])
Y = tf.placeholder(tf.float32, [None, 1])

W_shape = tf.TensorShape([89, 1])
b_shape = tf.TensorShape([1])

W = tf.Variable(tf.random_normal(W_shape))
b = tf.Variable(tf.random_normal(b_shape))

pred = tf.add(tf.matmul(X, W), b)
cost = tf.reduce_sum(tf.pow(pred - Y, 2) / (2 * n_rows - 1))

optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate).minimize(cost)
X_train has shape (6702, 89) and Y_train has shape (6702, 1). Next I run the session and display the cost per epoch as well as the total MSE...
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(FLAGS.training_epochs):
        avg_cost = 0
        # Train on one row at a time.
        for (x, y) in zip(X_train, Y_train):
            x = np.reshape(x, (1, 89))
            y = np.reshape(y, (1, 1))
            sess.run(optimizer, feed_dict={X: x, Y: y})

        # Display logs per epoch step.
        if (epoch + 1) % FLAGS.display_step == 0:
            c = sess.run(
                cost,
                feed_dict={X: X_train, Y: Y_train}
            )
            y_pred = sess.run(pred, feed_dict={X: X_test})
            test_error = r2_score(Y_test, y_pred)
            print(test_error)
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c))

    print("Optimization Finished!")

    # Total MSE on the test set.
    pred_y = sess.run(pred, feed_dict={X: X_test})
    mse = tf.reduce_mean(tf.square(pred_y - Y_test))
    print("MSE: %4f" % sess.run(mse))
This all seems to work correctly. However, now I want to see the accuracy of my model, so I want to implement tf.metrics.accuracy. The documentation says it has 2 arguments, labels and predictions. I added the following next...
accuracy, accuracy_op = tf.metrics.accuracy(labels=Y_test, predictions=pred)
init_local = tf.local_variables_initializer()
sess.run(init_local)
print(sess.run(accuracy))
Apparently I need to initialize local variables; however, I think I am doing something wrong, because the accuracy result that gets printed out is 0.0.
I searched everywhere for a working example, but I cannot get it to work for my model. What is the proper way to implement it?
Recommended answer
I think you are training a regression model, whereas tf.metrics.accuracy is meant for classification models.
When your model predicts 1.2 but your target value is 1.15, it does not make sense to use accuracy to measure whether this is a correct prediction. accuracy is for classification problems (e.g., MNIST): when your model predicts a digit to be '9' and the target image is also '9', that is a correct prediction and you get full credit; when your model predicts '9' but the target image is '6', that is a wrong prediction and you get no credit.
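For reference, here is a minimal sketch (TensorFlow 1.x) of how tf.metrics.accuracy is meant to be used, on discrete class labels: the metric keeps running counts in local variables, so you initialize those and run its update op before reading the value. The placeholder names and toy labels below are made up purely for illustration.

# Minimal sketch: tf.metrics.accuracy on integer class labels (TF 1.x).
# labels_ph and preds_ph are hypothetical placeholders used only for this example.
labels_ph = tf.placeholder(tf.int64, [None])
preds_ph = tf.placeholder(tf.int64, [None])
acc, acc_update = tf.metrics.accuracy(labels=labels_ph, predictions=preds_ph)

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # the metric's counters are local variables
    # Run the update op to accumulate correct/total counts over a batch...
    sess.run(acc_update, feed_dict={labels_ph: [9, 6, 3], preds_ph: [9, 9, 3]})
    # ...then read the accumulated accuracy (2 correct out of 3 here).
    print(sess.run(acc))  # ~0.667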
For your regression problem, we measure the difference between prediction and target value either by absolute error - |target - prediction| - or by mean squared error - the one you used in your MSE calculation. Thus tf.metrics.mean_squared_error or tf.metrics.mean_absolute_error is the one you should use to measure the prediction error for regression models.
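As a rough sketch of what that could look like with the graph and variable names from the question (treat it as an assumption about how your script is organized, not a drop-in fix), tf.metrics.mean_squared_error follows the same value/update-op pattern as accuracy; tf.metrics.mean_absolute_error works the same way.

# Sketch only: assumes X, Y, pred and X_test, Y_test from the question's code.
mse_metric, mse_update = tf.metrics.mean_squared_error(labels=Y, predictions=pred)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())  # the metric's accumulators are local variables

    # ... training loop as before ...

    # Accumulate the metric over the test set, then read it out.
    sess.run(mse_update, feed_dict={X: X_test, Y: Y_test})
    print("test MSE:", sess.run(mse_metric))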