I am new to Keras and deep learning. I built a deep autoencoder with the Keras library on the Ionosphere dataset, which comes as a mixed dataframe (floats, string "objects", integers, ...), so I tried to replace all the object columns with float or integer types, since the autoencoder refuses object samples. The training set contains 10000 samples with 48 columns and the validation set contains 5000 samples. I did not apply any normalization to the input data, because I assumed it was not necessary for an autoencoder model.
I used the binary cross-entropy loss function, and I am not sure whether that could be the reason for the constant loss and constant accuracy values during training. I tried different numbers of epochs but got the same result. I also tried changing the batch size, but nothing changed.
Can anyone help me find the problem?
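For reference, a minimal sketch of the kind of column conversion described above (the file name is hypothetical; this assumes pandas, with one-hot encoding for the string columns):
import pandas as pd

# Hypothetical file name; load the mixed-type dataframe
df = pd.read_csv("ionosphere_mixed.csv")

# One-hot encode the string ("object") columns and keep numeric columns as-is
object_cols = df.select_dtypes(include="object").columns
df_numeric = pd.get_dummies(df, columns=list(object_cols))

# Cast everything to float so Keras accepts it
df_numeric = df_numeric.astype("float32")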
from keras.layers import Input, Dense
from keras.models import Model
from keras.callbacks import ModelCheckpoint, TensorBoard

# Layer sizes
input_size = 48
hidden_size1 = 30
hidden_size2 = 20
code_size = 10
batch_size = 80

# Save the best model seen so far and log training to TensorBoard
checkpointer = ModelCheckpoint(filepath="model.h5",
                               verbose=0,
                               save_best_only=True)
tensorboard = TensorBoard(log_dir='./logs',
                          histogram_freq=0,
                          write_graph=True,
                          write_images=True)

# Encoder
input_enc = Input(shape=(input_size,))
hidden_1 = Dense(hidden_size1, activation='relu')(input_enc)
hidden_11 = Dense(hidden_size2, activation='relu')(hidden_1)
code = Dense(code_size, activation='relu')(hidden_11)

# Decoder
hidden_22 = Dense(hidden_size2, activation='relu')(code)
hidden_2 = Dense(hidden_size1, activation='relu')(hidden_22)
output_enc = Dense(input_size, activation='sigmoid')(hidden_2)

autoencoder_yes = Model(input_enc, output_enc)
autoencoder_yes.compile(optimizer='adam',
                        loss='binary_crossentropy',
                        metrics=['accuracy'])

# Train the autoencoder to reconstruct its own input
history = autoencoder_yes.fit(df_noyau_yes, df_noyau_yes,
                              epochs=200,
                              batch_size=batch_size,
                              shuffle=True,
                              validation_data=(df_test_yes, df_test_yes),
                              verbose=1,
                              callbacks=[checkpointer, tensorboard]).history
Epoch 176/200
80/7412 [..............................] - ETA: 2s - loss: -15302256.0000 - acc: 0.4357
320/7412 [>.............................] - ETA: 2s - loss: -16773740.2500 - acc: 0.4448
480/7412 [>.............................] - ETA: 2s - loss: -16924116.1667 - acc: 0.4444
720/7412 [=>............................] - ETA: 2s - loss: -17179484.1111 - acc: 0.4460
960/7412 [==>...........................] - ETA: 2s - loss: -17382038.5833 - acc: 0.4463
1120/7412 [===>..........................] - ETA: 1s - loss: -17477103.7857 - acc: 0.4466
1360/7412 [====>.........................] - ETA: 1s - loss: -17510808.8824 - acc: 0.4466
1520/7412 [=====>........................] - ETA: 1s - loss: -17337536.3158 - acc: 0.4462
1680/7412 [=====>........................] - ETA: 1s - loss: -17221621.6190 - acc: 0.4466
1840/7412 [======>.......................] - ETA: 1s - loss: -17234479.0870 - acc: 0.4467
2000/7412 [=======>......................] - ETA: 1s - loss: -17336909.4000 - acc: 0.4469
2160/7412 [=======>......................] - ETA: 1s - loss: -17338357.2222 - acc: 0.4467
2320/7412 [========>.....................] - ETA: 1s - loss: -17434196.3103 - acc: 0.4465
2560/7412 [=========>....................] - ETA: 1s - loss: -17306412.6875 - acc: 0.4463
2720/7412 [==========>...................] - ETA: 1s - loss: -17229429.4118 - acc: 0.4463
2880/7412 [==========>...................] - ETA: 1s - loss: -17292270.6667 - acc: 0.4461
3040/7412 [===========>..................] - ETA: 1s - loss: -17348734.3684 - acc: 0.4463
3200/7412 [===========>..................] - ETA: 1s - loss: -17343675.9750 - acc: 0.4461
3360/7412 [============>.................] - ETA: 1s - loss: -17276183.1429 - acc: 0.4461
3520/7412 [=============>................] - ETA: 1s - loss: -17222447.5455 - acc: 0.4463
3680/7412 [=============>................] - ETA: 1s - loss: -17179892.1304 - acc: 0.4463
3840/7412 [==============>...............] - ETA: 1s - loss: -17118994.1667 - acc: 0.4462
4080/7412 [===============>..............] - ETA: 1s - loss: -17064828.6275 - acc: 0.4461
4320/7412 [================>.............] - ETA: 0s - loss: -16997390.4074 - acc: 0.4460
4480/7412 [=================>............] - ETA: 0s - loss: -17022740.0357 - acc: 0.4461
4640/7412 [=================>............] - ETA: 0s - loss: -17008629.1552 - acc: 0.4460
4880/7412 [==================>...........] - ETA: 0s - loss: -16969480.9836 - acc: 0.4459
5040/7412 [===================>..........] - ETA: 0s - loss: -17028253.4921 - acc: 0.4457
5200/7412 [====================>.........] - ETA: 0s - loss: -17035566.0308 - acc: 0.4456
5360/7412 [====================>.........] - ETA: 0s - loss: -17057620.4776 - acc: 0.4456
5600/7412 [=====================>........] - ETA: 0s - loss: -17115849.8857 - acc: 0.4457
5760/7412 [======================>.......] - ETA: 0s - loss: -17117196.7500 - acc: 0.4458
5920/7412 [======================>.......] - ETA: 0s - loss: -17071744.5676 - acc: 0.4458
6080/7412 [=======================>......] - ETA: 0s - loss: -17073121.6184 - acc: 0.4459
6320/7412 [========================>.....] - ETA: 0s - loss: -17075835.3797 - acc: 0.4461
6560/7412 [=========================>....] - ETA: 0s - loss: -17081048.5610 - acc: 0.4460
6800/7412 [==========================>...] - ETA: 0s - loss: -17109489.2471 - acc: 0.4460
7040/7412 [===========================>..] - ETA: 0s - loss: -17022715.4545 - acc: 0.4460
7200/7412 [============================>.] - ETA: 0s - loss: -17038501.4222 - acc: 0.4460
7360/7412 [============================>.] - ETA: 0s - loss: -17041619.7174 - acc: 0.4461
7412/7412 [==============================] - 3s 357us/step - loss: -17015624.9390 - acc: 0.4462 - val_loss: -26101260.3556 - val_acc: 0.4473
Epoch 200/200
80/7412 [..............................] - ETA: 2s - loss: -16431795.0000 - acc: 0.4367
240/7412 [..............................] - ETA: 2s - loss: -16439401.0000 - acc: 0.4455
480/7412 [>.............................] - ETA: 2s - loss: -16591146.5000 - acc: 0.4454
640/7412 [=>............................] - ETA: 2s - loss: -16914542.8750 - acc: 0.4457
880/7412 [==>...........................] - ETA: 2s - loss: -16552313.2727 - acc: 0.4460
1120/7412 [===>..........................] - ETA: 1s - loss: -16839956.4286 - acc: 0.4459
1280/7412 [====>.........................] - ETA: 1s - loss: -16965753.3750 - acc: 0.4461
1440/7412 [====>.........................] - ETA: 1s - loss: -17060988.4444 - acc: 0.4461
1680/7412 [=====>........................] - ETA: 1s - loss: -17149844.2381 - acc: 0.4462
1840/7412 [======>.......................] - ETA: 1s - loss: -17049971.6957 - acc: 0.4462
2080/7412 [=======>......................] - ETA: 1s - loss: -17174574.2692 - acc: 0.4462
2240/7412 [========>.....................] - ETA: 1s - loss: -17131009.5357 - acc: 0.4463
2480/7412 [=========>....................] - ETA: 1s - loss: -17182876.8065 - acc: 0.4461
2720/7412 [==========>...................] - ETA: 1s - loss: -17115984.6176 - acc: 0.4460
2880/7412 [==========>...................] - ETA: 1s - loss: -17115818.8611 - acc: 0.4459
3120/7412 [===========>..................] - ETA: 1s - loss: -17123591.0256 - acc: 0.4459
3280/7412 [============>.................] - ETA: 1s - loss: -17114971.6585 - acc: 0.4459
3440/7412 [============>.................] - ETA: 1s - loss: -17072177.0698 - acc: 0.4462
3600/7412 [=============>................] - ETA: 1s - loss: -17025446.1333 - acc: 0.4460
3840/7412 [==============>...............] - ETA: 1s - loss: -16969630.0625 - acc: 0.4462
4080/7412 [===============>..............] - ETA: 1s - loss: -16961362.9608 - acc: 0.4461
4320/7412 [================>.............] - ETA: 0s - loss: -16969639.5000 - acc: 0.4461
4480/7412 [=================>............] - ETA: 0s - loss: -16946814.6964 - acc: 0.4462
4640/7412 [=================>............] - ETA: 0s - loss: -16941803.2586 - acc: 0.4461
4880/7412 [==================>...........] - ETA: 0s - loss: -16915578.2623 - acc: 0.4462
5040/7412 [===================>..........] - ETA: 0s - loss: -16916479.5714 - acc: 0.4464
5200/7412 [====================>.........] - ETA: 0s - loss: -16896774.5846 - acc: 0.4463
5360/7412 [====================>.........] - ETA: 0s - loss: -16956822.5075 - acc: 0.4462
5600/7412 [=====================>........] - ETA: 0s - loss: -17015829.3286 - acc: 0.4461
5760/7412 [======================>.......] - ETA: 0s - loss: -17024089.8750 - acc: 0.4460
5920/7412 [======================>.......] - ETA: 0s - loss: -17034422.1216 - acc: 0.4462
6160/7412 [=======================>......] - ETA: 0s - loss: -17042738.7273 - acc: 0.4462
6320/7412 [========================>.....] - ETA: 0s - loss: -17041053.0886 - acc: 0.4462
6480/7412 [=========================>....] - ETA: 0s - loss: -17046979.9012 - acc: 0.4461
6640/7412 [=========================>....] - ETA: 0s - loss: -17041165.7590 - acc: 0.4461
6800/7412 [==========================>...] - ETA: 0s - loss: -17070702.2824 - acc: 0.4460
7040/7412 [===========================>..] - ETA: 0s - loss: -17031330.6364 - acc: 0.4460
7280/7412 [============================>.] - ETA: 0s - loss: -17027056.8132 - acc: 0.4461
7412/7412 [==============================] - 3s 363us/step - loss: -17015624.9908 - acc: 0.4462 - val_loss: -26101260.3556 - val_acc: 0.4473
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 48) 0
_________________________________________________________________
dense_1 (Dense) (None, 30) 1470
_________________________________________________________________
dense_2 (Dense) (None, 20) 620
_________________________________________________________________
dense_3 (Dense) (None, 10) 210
_________________________________________________________________
dense_4 (Dense) (None, 20) 220
_________________________________________________________________
dense_5 (Dense) (None, 30) 630
_________________________________________________________________
dense_6 (Dense) (None, 48) 1488
=================================================================
Total params: 4,638
Trainable params: 4,638
Non-trainable params: 0
_________________________________________________________________
None
Best Answer
You may have already solved your problem, but I want to clarify what is likely wrong with your autoencoder, so that anyone else with the same issue can understand what is going on.
The main problem is that you did not normalize your input data, yet you used a sigmoid activation in the last layer. That means your inputs range between -infinity and +infinity, while your outputs can only vary between 0 and 1.
An autoencoder is a neural network that tries to learn the identity function. That means if you have the input [0, 1, 2, 3], you want the network to output [0, 1, 2, 3].
What is happening in your case is that you used sigmoid as the activation function in the last layer, so the sigmoid is applied to every value that layer receives. As mentioned above, the sigmoid squashes values between 0 and 1.
So if you feed in [0, 1, 2, 3], even if your hidden layers learned the identity function (which I think is impossible in this case), the output would be sigmoid([0, 1, 2, 3]), which is roughly [0.5, 0.73, 0.88, 0.95].
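As a quick sanity check, a minimal sketch computing those sigmoid values with NumPy (the numbers above are just these values rounded):
import numpy as np

def sigmoid(x):
    # Standard logistic function: squashes any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([0.0, 1.0, 2.0, 3.0])))
# -> approximately [0.5, 0.731, 0.881, 0.953]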
If you think about it, this autoencoder has no chance of learning to copy its input whenever the input falls outside the 0 to 1 range, because when the loss function tries to compare the result with the original data, it will always mismatch.
Your best option in this case is to normalize the inputs so that they vary between 0 and 1, just like your outputs.
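A minimal sketch of that normalization using scikit-learn's MinMaxScaler (df_noyau_yes and df_test_yes are the dataframes from the question; fitting the scaler on the training set only is an assumption about the intended workflow):
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()  # scales each column to the [0, 1] range
X_train = scaler.fit_transform(df_noyau_yes)   # fit on the training data
X_test = scaler.transform(df_test_yes)         # reuse the same scaling for validation

# Then train the autoencoder on the scaled data instead of the raw dataframes:
# autoencoder_yes.fit(X_train, X_train, validation_data=(X_test, X_test), ...)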
A similar question about python - deep autoencoder keeps constant accuracy in Keras can be found on Stack Overflow: https://stackoverflow.com/questions/49369176/