I am currently trying to build an image classification model with Inception V3 for 2 classes. I have 1428 images, with a class balance of roughly 70/30. When I run my model, I get a fairly high loss along with a constant validation accuracy. What could be causing this constant value?

import numpy as np
import keras
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split

# Scale pixel values to [0, 1] and cast labels to integers
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels, dtype="uint8")

(trainX, testX, trainY, testY) = train_test_split(
                            data,labels,
                            test_size=0.2,
                            random_state=42)

img_width, img_height = 320, 320  # input size (InceptionV3 default is 299x299; other sizes work with include_top=False)

train_samples =  1145
validation_samples = 287
epochs = 20

batch_size = 32

base_model = keras.applications.InceptionV3(
        weights ='imagenet',
        include_top=False,
        input_shape = (img_width,img_height,3))

model_top = keras.models.Sequential()
model_top.add(keras.layers.GlobalAveragePooling2D(input_shape=base_model.output_shape[1:]))
model_top.add(keras.layers.Dense(350,activation='relu'))
model_top.add(keras.layers.Dropout(0.2))
model_top.add(keras.layers.Dense(1,activation = 'sigmoid'))
model = keras.models.Model(inputs = base_model.input, outputs = model_top(base_model.output))


# Freeze the first 30 layers (part of the InceptionV3 base) so only the rest is fine-tuned
for layer in model.layers[:30]:
    layer.trainable = False

model.compile(optimizer = keras.optimizers.Adam(
                    lr=0.00001,
                    beta_1=0.9,
                    beta_2=0.999,
                    epsilon=1e-08),
                    loss='binary_crossentropy',
                    metrics=['accuracy'])

#Image Processing and Augmentation
train_datagen = keras.preprocessing.image.ImageDataGenerator(
          zoom_range = 0.05,
          #width_shift_range = 0.05,
          height_shift_range = 0.05,
          horizontal_flip = True,
          vertical_flip = True,
          fill_mode ='nearest')

val_datagen = keras.preprocessing.image.ImageDataGenerator()


train_generator = train_datagen.flow(
        trainX,
        trainY,
        batch_size=batch_size,
        shuffle=True)

validation_generator = val_datagen.flow(
                testX,
                testY,
                batch_size=batch_size)

# ModelCheckpoint must be instantiated, not passed as a class; the filepath here is illustrative
checkpoint = ModelCheckpoint('inceptionv3_best.h5', monitor='val_loss', save_best_only=True)

history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=validation_samples // batch_size,
    callbacks=[checkpoint])

Here is the log from running the model:
Epoch 1/20
35/35 [==============================] - 52s 1s/step - loss: 0.6347 - acc: 0.6830 - val_loss: 0.6237 - val_acc: 0.6875
Epoch 2/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6364 - acc: 0.6756 - val_loss: 0.6265 - val_acc: 0.6875
Epoch 3/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6420 - acc: 0.6743 - val_loss: 0.6254 - val_acc: 0.6875
Epoch 4/20
35/35 [==============================] - 14s 414ms/step - loss: 0.6365 - acc: 0.6851 - val_loss: 0.6289 - val_acc: 0.6875
Epoch 5/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6359 - acc: 0.6727 - val_loss: 0.6244 - val_acc: 0.6875
Epoch 6/20
35/35 [==============================] - 15s 415ms/step - loss: 0.6342 - acc: 0.6862 - val_loss: 0.6243 - val_acc: 0.6875

Best Answer

I think your learning rate is too low and you are training for too few epochs. Try lr = 0.001 and epochs = 100.
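A minimal sketch of that suggestion, reusing the model, generators, and checkpoint defined in the question's code above; only the learning rate and epoch count change:

# Recompile with the higher learning rate suggested above
model.compile(optimizer=keras.optimizers.Adam(lr=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Train for more epochs, reusing the generators and checkpoint from the question
history = model.fit_generator(
    train_generator,
    steps_per_epoch=train_samples // batch_size,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=validation_samples // batch_size,
    callbacks=[checkpoint])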

Regarding "python - Constant validation accuracy with high loss in machine learning", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/52598959/
