Problem description
I'm running into a problem with TensorFlow. Executing the following code:
import tensorflow as tf
import input_data
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# tensorflow graph input
X = tf.placeholder('float', [None, 784]) # mnist data image of shape 28 * 28 = 784
Y = tf.placeholder('float', [None, 10]) # 0-9 digits recognition = > 10 classes
# set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# Our hypothesis
activation = tf.add(tf.matmul(X, W),b) # Softmax
# Cost function: cross entropy
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=activation, logits=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Gradient descent (Adam)
I get the following error:
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ['Tensor("Variable/read:0", shape=(784, 10), dtype=float32)', 'Tensor("Variable_1/read:0", shape=(10,), dtype=float32)'] and loss Tensor("Mean:0", shape=(), dtype=float32).
Answer
The problem is caused by this line: tf.nn.softmax_cross_entropy_with_logits(labels=activation, logits=Y)
According to the documentation, you should have:
labels: Each row labels[i] must be a valid probability distribution.
logits: Unscaled log probabilities.
So logits is supposed to be your hypothesis, and is therefore equal to activation, while the valid probability distribution is Y. So just change the call to tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=activation).
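To see why the argument order matters, here is a minimal NumPy sketch (not the TensorFlow op itself) of what softmax cross entropy with logits computes: it applies log-softmax to the logits and weights the result by the labels. If you swap the arguments, the one-hot labels get run through log-softmax instead, which is a different and meaningless quantity — and in the graph above it also disconnects the loss from W and b, which is why no gradients can be found.

```python
import numpy as np

def softmax_cross_entropy_with_logits(labels, logits):
    # Numerically stable log-softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Cross entropy per example: -sum(labels * log p)
    return -(labels * log_softmax).sum(axis=-1)

# One example: true class is 2 (one-hot), and the logits favor class 2
labels = np.array([[0.0, 0.0, 1.0]])
logits = np.array([[0.5, 1.0, 3.0]])

correct = softmax_cross_entropy_with_logits(labels, logits)
swapped = softmax_cross_entropy_with_logits(logits, labels)  # arguments reversed

print(correct)  # small loss: the logits agree with the labels
print(swapped)  # much larger, meaningless value
```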