No matter which optimizer, accuracy, or loss metric I use, my accuracy converges quickly (within 10 to 20 epochs), while my loss continues to decrease (> 100 epochs). I have tried every optimizer available in Keras and the same trend occurs with all of them (although some converge more slowly, and with slightly better accuracy, than others, with Nadam, Adadelta, and Adamax performing the best).

My input is a 64x1 data vector, and my output is a 3x1 vector representing 3D coordinates in real space. I have roughly 2000 training samples and 500 test samples. I have normalized both the inputs and the outputs using the MinMaxScaler from the scikit-learn preprocessing toolbox, and I have also shuffled the data using the scikit-learn shuffle function. I use train_test_split to split the data (with a specified random state); a minimal sketch of this preprocessing appears after the model code below. Here is my CNN:



from keras.layers import Input, Conv1D, MaxPooling1D, Dense, Dropout
from keras.models import Model
from keras.optimizers import Nadam, Adam

def cnn(pretrained_weights = None,input_size = (64,1)):
    inputs = Input(input_size)

    # three conv blocks with increasing filter counts, each followed by pooling
    conv1 = Conv1D(64,2,strides=1,activation='relu')(inputs)
    conv2 = Conv1D(64,2,strides=1,activation='relu')(conv1)
    pool1 = MaxPooling1D(pool_size=2)(conv2)
    #pool1 = Dropout(0.25)(pool1)

    conv3 = Conv1D(128,2,strides=1,activation='relu')(pool1)
    conv4 = Conv1D(128,2,strides=1,activation='relu')(conv3)
    pool2 = MaxPooling1D(pool_size=2)(conv4)
    #pool2 = Dropout(0.25)(pool2)

    conv5 = Conv1D(256,2,strides=1,activation='relu')(pool2)
    conv6 = Conv1D(256,2,strides=1,activation='relu')(conv5)
    pool3 = MaxPooling1D(pool_size=2)(conv6)
    #pool3 = Dropout(0.25)(pool3)
    pool4 = MaxPooling1D(pool_size=2)(pool3)

    # dense head
    dense1 = Dense(256,activation='relu')(pool4)
    #drop1 = Dropout(0.5)(dense1)
    drop1 = dense1
    dense2 = Dense(64,activation='relu')(drop1)
    #drop2 = Dropout(0.5)(dense2)
    drop2 = dense2
    dense3 = Dense(32,activation='relu')(drop2)
    dense4 = Dense(1,activation='sigmoid')(dense3)

    model = Model(inputs = inputs, outputs = dense4)

    #opt = Adam(lr=1e-6,clipvalue=0.01)
    model.compile(optimizer = Nadam(lr=1e-4), loss = 'mse', metrics = ['accuracy','mse','mae'])

    return model
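
For reference, a minimal sketch of the preprocessing described above (my own reconstruction: the names x_data/y_data, the test_size, and the random_state values are assumptions, not from the original post):

from sklearn.preprocessing import MinMaxScaler
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split

# x_data: (N, 64, 1) input vectors, y_data: (N, 3) real-space coordinates.
# MinMaxScaler expects 2D arrays, so the inputs are flattened and reshaped back.
x_scaled = MinMaxScaler().fit_transform(x_data.reshape(len(x_data), 64))
x_scaled = x_scaled.reshape(-1, 64, 1)               # back to Conv1D input shape
y_scaled = MinMaxScaler().fit_transform(y_data)      # already 2D: (N, 3)

# shuffle, then split off the test set with a fixed random state
x_scaled, y_scaled = shuffle(x_scaled, y_scaled, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(
    x_scaled, y_scaled, test_size=0.2, random_state=42)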


I have tried additional pooling (as you can see in my code) to regularize the data and reduce overfitting, in case that is the problem, but to no avail. Here is a training example using the parameters above:

model = cnn()
model.fit(x=x_train, y=y_train, batch_size=7, epochs=10, verbose=1, validation_split=0.2, shuffle=True)

Train on 1946 samples, validate on 487 samples
Epoch 1/10
1946/1946 [==============================] - 5s 3ms/step - loss: 0.0932 - acc: 0.0766 - mean_squared_error: 0.0932 - mean_absolute_error: 0.2616 - val_loss: 0.0930 - val_acc: 0.0815 - val_mean_squared_error: 0.0930 - val_mean_absolute_error: 0.2605
Epoch 2/10
1946/1946 [==============================] - 2s 1ms/step - loss: 0.0903 - acc: 0.0783 - mean_squared_error: 0.0903 - mean_absolute_error: 0.2553 - val_loss: 0.0899 - val_acc: 0.0842 - val_mean_squared_error: 0.0899 - val_mean_absolute_error: 0.2544
Epoch 3/10
1946/1946 [==============================] - 2s 1ms/step - loss: 0.0886 - acc: 0.0807 - mean_squared_error: 0.0886 - mean_absolute_error: 0.2524 - val_loss: 0.0880 - val_acc: 0.0862 - val_mean_squared_error: 0.0880 - val_mean_absolute_error: 0.2529
Epoch 4/10
1946/1946 [==============================] - 2s 1ms/step - loss: 0.0865 - acc: 0.0886 - mean_squared_error: 0.0865 - mean_absolute_error: 0.2488 - val_loss: 0.0875 - val_acc: 0.1081 - val_mean_squared_error: 0.0875 - val_mean_absolute_error: 0.2534
Epoch 5/10
1946/1946 [==============================] - 2s 1ms/step - loss: 0.0849 - acc: 0.0925 - mean_squared_error: 0.0849 - mean_absolute_error: 0.2461 - val_loss: 0.0851 - val_acc: 0.0972 - val_mean_squared_error: 0.0851 - val_mean_absolute_error: 0.2427
Epoch 6/10
1946/1946 [==============================] - 2s 1ms/step - loss: 0.0832 - acc: 0.1002 - mean_squared_error: 0.0832 - mean_absolute_error: 0.2435 - val_loss: 0.0817 - val_acc: 0.1075 - val_mean_squared_error: 0.0817 - val_mean_absolute_error: 0.2400
Epoch 7/10
1946/1946 [==============================] - 2s 1ms/step - loss: 0.0819 - acc: 0.1041 - mean_squared_error: 0.0819 - mean_absolute_error: 0.2408 - val_loss: 0.0796 - val_acc: 0.1129 - val_mean_squared_error: 0.0796 - val_mean_absolute_error: 0.2374
Epoch 8/10
1946/1946 [==============================] - 2s 1ms/step - loss: 0.0810 - acc: 0.1060 - mean_squared_error: 0.0810 - mean_absolute_error: 0.2391 - val_loss: 0.0787 - val_acc: 0.1129 - val_mean_squared_error: 0.0787 - val_mean_absolute_error: 0.2348
Epoch 9/10
1946/1946 [==============================] - 2s 1ms/step - loss: 0.0794 - acc: 0.1089 - mean_squared_error: 0.0794 - mean_absolute_error: 0.2358 - val_loss: 0.0789 - val_acc: 0.1102 - val_mean_squared_error: 0.0789 - val_mean_absolute_error: 0.2337
Epoch 10/10
1946/1946 [==============================] - 2s 1ms/step - loss: 0.0785 - acc: 0.1086 - mean_squared_error: 0.0785 - mean_absolute_error: 0.2343 - val_loss: 0.0767 - val_acc: 0.1143 - val_mean_squared_error: 0.0767 - val_mean_absolute_error: 0.2328


I am having a hard time diagnosing the problem. Do I need additional regularization? Here is an example of an input vector and the corresponding ground truth:

input = array([[0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [5.05487319e-04],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [2.11865474e-03],
   [6.57073860e-04],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [8.02714614e-04],
   [1.09597877e-03],
   [5.37978732e-03],
   [9.74035809e-03],
   [0.00000000e+00],
   [0.00000000e+00],
   [2.04473307e-03],
   [5.60562907e-04],
   [1.76158615e-03],
   [3.48869003e-03],
   [6.45111735e-02],
   [7.75741303e-01],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [1.33064182e-02],
   [5.04751340e-02],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [5.90069050e-04],
   [3.27240480e-03],
   [1.92582590e-03],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [0.00000000e+00],
   [4.50609885e-04],
   [1.12957157e-03],
   [1.24890352e-03]])

output = array([[0.        ],
   [0.41666667],
   [0.58823529]])


Could it be something to do with the normalization or the nature of my data? Do I simply not have enough data? Any insight is appreciated; I have tried the suggestions from many other posts, but nothing has worked so far. Thanks!

Best Answer

There are several issues with your question...

To start with, neither your training nor your validation accuracy "converge quickly" as you claim (both move from ~0.07 to ~0.1 over ten epochs); but even if that were the case, I fail to see how it would be a problem in itself (usually people complain about the opposite, i.e. accuracy not converging, or not converging fast enough).

But all this discussion is irrelevant, simply because you are in a regression setting, where accuracy is meaningless. The truth is that, in such a case, Keras will not "protect" you with a warning or otherwise. You may find the discussion in What function defines accuracy in Keras when the loss is mean squared error (MSE)? useful (disclaimer: the answer is mine).
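
Concretely, with a 1-unit output and an mse loss, older Keras maps metrics=['accuracy'] to binary accuracy, i.e. the fraction of targets that exactly equal the rounded predictions. A rough NumPy illustration of that quantity, using the ground-truth values from the question and hypothetical predictions of my own (this sketch is not part of the original answer):

import numpy as np

# What 'accuracy' effectively measures here: exact matches between
# the continuous targets and the *rounded* continuous predictions.
y_true = np.array([0.0, 0.41666667, 0.58823529])   # ground truth from the question
y_pred = np.array([0.1, 0.45, 0.60])               # hypothetical model outputs
print(np.mean(y_true == np.round(y_pred)))         # 0.333..., meaningless for regression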

So, you should change your model.compile statement as follows:

model.compile(optimizer = Nadam(lr=1e-4), loss = 'mse')


That is, there is no need for metrics here (measuring both mse and mae sounds like overkill; I would suggest using just one of them).
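
If you do want a second readout during training, a single metric is enough, e.g. (my own variant, not from the original answer):

model.compile(optimizer = Nadam(lr=1e-4), loss = 'mse', metrics = ['mae'])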


  Is the "mode" I am in (regression, in this case) determined solely by the type of activation I use in the output layer?


No. The "mode" (regression or classification) is determined by the loss function: losses like mse and mae imply a regression setting.
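
For illustration (my own sketch, not from the original answer), the very same architecture is treated as regression or classification purely through the loss you compile it with:

# Regression: continuous targets, a linear output, and an mse/mae loss
model.compile(optimizer='nadam', loss='mse')

# Classification: one-hot targets, a softmax output, and a cross-entropy loss
model.compile(optimizer='nadam', loss='categorical_crossentropy', metrics=['accuracy'])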

Which brings us to the last issue: unless you know that your outputs take values only in [0, 1], you should not use sigmoid as the activation function of your last layer; a linear activation is normally used for regression settings, i.e.:

dense4 = Dense(1,activation='linear')(dense3)


And since a linear activation is the default one in Keras (docs), it is not even required explicitly, i.e.:

dense4 = Dense(1)(dense3)


will also do the job.

Source question on Stack Overflow (machine-learning - Keras CNN: loss keeps decreasing but accuracy converges quickly): https://stackoverflow.com/questions/54096140/
