Below is my MLP model,

import keras

layers = [10, 20, 30, 40, 50]
model = keras.models.Sequential()
# Input layer; input_dim defines the shape of the input
model.add(keras.layers.Dense(layers[0], input_dim=input_dim, activation='relu'))
# Stacking the remaining hidden layers with ReLU activation
for layer in layers[1:]:
    model.add(keras.layers.Dense(layer, activation='relu'))
# Output layer
model.add(keras.layers.Dense(1, activation='sigmoid'))
# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Training
model.fit(train_set, test_set, validation_split=0.10, epochs=50, batch_size=10, shuffle=True, verbose=2)
# Evaluate the network
loss, accuracy = model.evaluate(train_set, test_set)
print("\nLoss: %.2f, Accuracy: %.2f%%" % (loss, accuracy * 100))
# Predictions
predt = model.predict(final_test)
print(predt)


The problem is that the accuracy is always 0. The training log looks like this:

Epoch 48/50 - 0s - loss: 1.0578 - acc: 0.0000e+00 - val_loss: 0.4885 - val_acc: 0.0000e+00
Epoch 49/50 - 0s - loss: 1.0578 - acc: 0.0000e+00 - val_loss: 0.4885 - val_acc: 0.0000e+00
Epoch 50/50 - 0s - loss: 1.0578 - acc: 0.0000e+00 - val_loss: 0.4885 - val_acc: 0.0000e+00
2422/2422 [==============================] - 0s 17us/step



Loss: 1.00, Accuracy: 0.00%
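A quick NumPy sketch (with hypothetical predictions, not the asker's data) of why labels in {-1, 1} pin the reported accuracy near zero: sigmoid outputs are thresholded to {0, 1}, so an example labeled -1 can never be counted correct.

```python
import numpy as np

# Hypothetical predictions from a sigmoid output (always in (0, 1))
preds = np.array([0.2, 0.9, 0.3, 0.8])
hard = (preds > 0.5).astype(int)   # thresholded to {0, 1}

# With labels in {-1, 1}, every -1 example is inevitably counted wrong
labels_bad = np.array([-1, 1, -1, 1])
acc_bad = np.mean(hard == labels_bad)

# After remapping -1 -> 0, the same predictions score perfectly
labels_ok = np.array([0, 1, 0, 1])
acc_ok = np.mean(hard == labels_ok)
print(acc_bad, acc_ok)  # 0.5 1.0
```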


Following the suggestion, I changed the labels from -1/1 to 0/1; however, here is the resulting log:

Epoch 48/50 - 0s - loss: 8.5879 - acc: 0.4672 - val_loss: 8.2912 - val_acc: 0.4856
Epoch 49/50 - 0s - loss: 8.5879 - acc: 0.4672 - val_loss: 8.2912 - val_acc: 0.4856
Epoch 50/50 - 0s - loss: 8.5879 - acc: 0.4672 - val_loss: 8.2912 - val_acc: 0.4856
2422/2422 [==============================] - 0s 19us/step
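The label remap the asker describes (-1/1 to 0/1) can be sketched like this; `y` here is a hypothetical NumPy label array, not the asker's actual data:

```python
import numpy as np

# Hypothetical label array in {-1, 1}
y = np.array([-1, 1, 1, -1, 1])

# Remap -1 -> 0 and 1 -> 1 so the labels match the
# sigmoid / binary_crossentropy setup
y01 = (y + 1) // 2
print(y01)  # [0 1 1 0 1]
```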

Best Answer

Your code is hard to read. This is not the recommended standard for writing Keras models. Try the following and let us know what you get. Assume that X is a matrix where the rows are instances and the columns are features, and that Y holds the labels.

As noted, when using the TensorFlow backend you need to add a channel as the last dimension. Also, the labels should be split across 2 output nodes for a better chance of success: a single-neuron mapping is usually less successful than a probabilistic output over 2 nodes.

from sklearn.model_selection import train_test_split
import keras

n = 1000         # Number of instances
m = 4            # Number of features
num_classes = 2  # Number of output classes

... # Your code for loading the data

X = X.reshape(n, m,)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.33)

# One-hot encode the integer labels for the 2-node output
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
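For intuition, `to_categorical` turns each integer label into a one-hot row. The same effect can be sketched in plain NumPy (a standalone illustration with made-up labels, not part of the answer's pipeline):

```python
import numpy as np

# Integer labels for a 2-class problem
y = np.array([0, 1, 1, 0])
num_classes = 2

# One-hot encode: each label selects a row of the identity matrix,
# mirroring what keras.utils.to_categorical produces
y_onehot = np.eye(num_classes)[y]
print(y_onehot)
```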


Build the model. The last layer should use sigmoid or softmax for a classification task. Try the Adadelta optimizer; it has been shown to produce better results by traversing the gradient more efficiently and reducing oscillations. As usual for classification tasks, we will use cross-entropy as the loss function; binary cross-entropy works as well.

Try a standard model configuration. Progressively increasing the number of nodes per layer does not make much sense. The model should look like a prism: a small number of input features, many hidden nodes, and a small number of output nodes. Aim for the smallest number of hidden layers, and make the layers wider rather than adding more of them.

import keras
from keras.models import Sequential
from keras.layers import Dense

input_shape = (m,)

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=input_shape))
model.add(Dense(64, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])
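As a side note on the loss used in the `compile()` call above, categorical cross-entropy for one-hot targets can be computed by hand (a minimal NumPy sketch with hypothetical softmax outputs):

```python
import numpy as np

# One-hot targets and hypothetical softmax outputs for two examples
y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.8, 0.2], [0.3, 0.7]])

# Categorical cross-entropy: -sum(y_true * log(y_pred)) per example,
# averaged over the batch
losses = -np.sum(y_true * np.log(y_pred), axis=1)
print(losses.mean())
```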


You can get a summary of the model with

model.summary()


Train the model

epochs = 100
batch_size = 128
# Fit the model weights.
history = model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))


See what happened during training

import matplotlib.pyplot as plt

plt.figure(figsize=(8, 10))
plt.subplot(2, 1, 1)

# summarize history for accuracy (newer Keras versions name these
# keys 'accuracy' and 'val_accuracy' instead of 'acc'/'val_acc')
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='lower right')

plt.subplot(2,1,2)
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper right')
plt.show()

Regarding "python - keras MLP accuracy is zero", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/50481178/
