Problem description
I have been trying to perform regression using tflearn and my own dataset.
Using tflearn I have been trying to implement a convolutional network based on an example that uses the MNIST dataset. Instead of the MNIST dataset I have tried replacing the training and test data with my own. My data is read in from a csv file and has a different shape from the MNIST data. I have 225 features, which represent a 15*15 grid, and a target value. In the example I replaced lines 24-30 with the following (and added import numpy as np):
#read in train and test csv's where there are 225 features (15*15) and a target
csvTrain = np.genfromtxt('train.csv', delimiter=",")
X = np.array(csvTrain[:, :225]) #first 225 columns are the 15*15 grid features
Y = csvTrain[:,225]
csvTest = np.genfromtxt('test.csv', delimiter=",")
testX = np.array(csvTest[:, :225])
testY = csvTest[:,225]
#reshape features for each instance in to 15*15, targets are just a single number
X = X.reshape([-1,15,15,1])
testX = testX.reshape([-1,15,15,1])
## Building convolutional network
network = input_data(shape=[None, 15, 15, 1], name='input')
I receive the following error:
ValueError: Cannot feed value of shape (64,) for Tensor u'target/Y:0', which has shape '(?, 10)'
I have tried various combinations and have seen a similar question on Stack Overflow, but have not had success. The example on that page does not work for me and throws a similar error, and I do not understand the answer provided there or the answers to similar questions.
How can I use my own data?
Recommended answer
Short answer
In line 41 of the MNIST example you also have to change the output size from 10 to 1, i.e. replace network = fully_connected(network, 10, activation='softmax') with network = fully_connected(network, 1, activation='linear'). Note that you can remove the final softmax.
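As a minimal sketch of that change (assuming the rest of the MNIST example's network definition is kept as-is):

# Original final layer in the MNIST example (10-class classification):
# network = fully_connected(network, 10, activation='softmax')
# Replaced with a single linear output unit for regression:
network = fully_connected(network, 1, activation='linear')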
Looking at your code, it seems you have a single target value Y, which means using the L2 loss with mean_square (all available losses are listed in the TFLearn objectives documentation):
network = regression(network, optimizer='adam', learning_rate=0.01,
                     loss='mean_square', name='target')
Also, reshape Y and testY to have shape (batch_size, 1).
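For the data loaded in the question, that reshape could look like this (a sketch using the asker's variable names Y and testY):

# Targets read from the CSV are 1-D arrays of shape (num_samples,);
# reshape them to (num_samples, 1) to match the single-unit output layer.
Y = Y.reshape([-1, 1])
testY = testY.reshape([-1, 1])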
Here is how to analyse the error:
- The error is Cannot feed value ... for Tensor 'target/Y', which means it comes from the feed_dict argument Y.
- Again, according to the error, you try to feed a Y value of shape (64,) whereas the network expects a shape (?, 10) (see the shape-check sketch after this list).
- It expects a shape (batch_size, 10), because originally it is a network for MNIST (10 classes).
- In the code, we see that the last layer fully_connected(network, 10, activation='softmax') is returning an output of size 10.
- We change that to an output of size 1 without softmax: fully_connected(network, 1, activation='linear').
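A quick way to catch this kind of mismatch before training is to print the shapes that will be fed (a small sketch; the variable names follow the question's code):

# The network defined above expects inputs of shape (?, 15, 15, 1) and,
# after the changes, targets of shape (?, 1).
print(X.shape)   # e.g. (num_samples, 15, 15, 1)
print(Y.shape)   # should be (num_samples, 1) after the reshape, not (num_samples,)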
In the end, it was not a bug, but a wrong model architecture.
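Putting the pieces together, a minimal end-to-end sketch might look like the following. The file names train.csv and test.csv come from the question; the convolutional layers and the fit arguments (layer sizes, n_epoch, etc.) are assumptions in the spirit of the tflearn MNIST example, not values from the original post:

from __future__ import division, print_function, absolute_import

import numpy as np
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression

# Read train and test CSVs: 225 feature columns (a 15*15 grid) plus one target column.
csvTrain = np.genfromtxt('train.csv', delimiter=",")
X = csvTrain[:, :225].reshape([-1, 15, 15, 1])
Y = csvTrain[:, 225].reshape([-1, 1])

csvTest = np.genfromtxt('test.csv', delimiter=",")
testX = csvTest[:, :225].reshape([-1, 15, 15, 1])
testY = csvTest[:, 225].reshape([-1, 1])

# Convolutional network: a simplified variant of the tflearn MNIST example,
# adapted to a 15x15x1 input and a single linear output for regression.
network = input_data(shape=[None, 15, 15, 1], name='input')
network = conv_2d(network, 32, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 64, 3, activation='relu')
network = max_pool_2d(network, 2)
network = fully_connected(network, 128, activation='tanh')
network = dropout(network, 0.8)
network = fully_connected(network, 1, activation='linear')
network = regression(network, optimizer='adam', learning_rate=0.01,
                     loss='mean_square', name='target')

# Train the model; n_epoch and the other fit arguments mirror the MNIST example
# and are placeholders rather than tuned values.
model = tflearn.DNN(network, tensorboard_verbose=0)
model.fit({'input': X}, {'target': Y}, n_epoch=20,
          validation_set=({'input': testX}, {'target': testY}),
          snapshot_step=100, show_metric=True, run_id='convnet_regression')

The essential differences from the MNIST example are the 15x15x1 input shape, the single-unit linear output, the mean_square loss, and the (batch_size, 1) target shape.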