I am trying to build a model for binary image classification. This is my first classifier and I am following an online tutorial, but the model always predicts class 0.

My dataset contains 3620 and 3651 images for the two classes respectively. I don't think the problem is class imbalance, since the model only ever predicts the class with the smaller number of samples.

My code:

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K

img_height, img_width = 150, 150
train_data_dir = 'dataset/train'
#validation_data_dir = 'dataset/validation'

nb_train_samples = 3000
#nb_validation_samples = 500

epochs = 10
batch_size = 16

if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

model = Sequential()
model.add(Conv2D(32,(3,3), input_shape = input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(32,(3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64,(3,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss = 'binary_crossentropy', optimizer = 'rmsprop', metrics = ['accuracy'])

train_datagen = ImageDataGenerator(
    rescale = 1. /255,
    shear_range = 0.2,
    zoom_range = 0.2,
    horizontal_flip = True)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size = (img_width, img_height),
    batch_size = batch_size,
    class_mode = 'binary')

model.fit_generator(train_generator,
    steps_per_epoch = nb_train_samples//batch_size,
    epochs = epochs)

model.save('classifier.h5')


I also checked the model summary but couldn't spot anything noteworthy:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 148, 148, 32)      896
_________________________________________________________________
activation_1 (Activation)    (None, 148, 148, 32)      0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 74, 74, 32)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 72, 72, 32)        9248
_________________________________________________________________
activation_2 (Activation)    (None, 72, 72, 32)        0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 36, 36, 32)        0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 34, 34, 64)        18496
_________________________________________________________________
activation_3 (Activation)    (None, 34, 34, 64)        0
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 17, 17, 64)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 18496)             0
_________________________________________________________________
dense_1 (Dense)              (None, 64)                1183808
_________________________________________________________________
activation_4 (Activation)    (None, 64)                0
_________________________________________________________________
dropout_1 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 65
_________________________________________________________________
activation_5 (Activation)    (None, 1)                 0
=================================================================
Total params: 1,212,513
Trainable params: 1,212,513
Non-trainable params: 0
_________________________________________________________________
None


I haven't used a validation dataset yet; I only train on the training data and test the model manually like this:

import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator

batch_size = 16
path = 'dataset/test'
imgen = ImageDataGenerator(rescale=1/255.)
testGene = imgen.flow_from_directory(directory=path,
                                        target_size=(150, 150,),
                                        shuffle=False,
                                        class_mode='binary',
                                        batch_size=batch_size,
                                        save_to_dir=None
                                        )

model = tf.keras.models.load_model("classifier.h5")
pred = model.predict_generator(testGene, steps=testGene.n/batch_size)

print(pred)
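
One quick sanity check (a minimal sketch, assuming the predictions above cover every test image, e.g. by passing steps=int(np.ceil(testGene.n / batch_size))) is to threshold the sigmoid outputs and compare them against the labels the generator inferred from the folder names:

import numpy as np

# pred has shape (num_images, 1): sigmoid probabilities for class 1
pred_labels = (pred.ravel() > 0.5).astype(int)

print(testGene.class_indices)                     # which folder was mapped to 0 and which to 1
print(np.bincount(pred_labels, minlength=2))      # how many images land in each predicted class
print(np.mean(pred_labels == testGene.classes))   # accuracy on the test folder (shuffle=False keeps the order aligned)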


Here are the loss and accuracy values for each epoch:

Epoch 1/10
187/187 [==============================] - 62s 330ms/step - loss: 0.5881 - accuracy: 0.7182
Epoch 2/10
187/187 [==============================] - 99s 529ms/step - loss: 0.4102 - accuracy: 0.8249
Epoch 3/10
187/187 [==============================] - 137s 733ms/step - loss: 0.3266 - accuracy: 0.8646
Epoch 4/10
187/187 [==============================] - 159s 851ms/step - loss: 0.3139 - accuracy: 0.8620
Epoch 5/10
187/187 [==============================] - 112s 597ms/step - loss: 0.2871 - accuracy: 0.8873
Epoch 6/10
187/187 [==============================] - 60s 323ms/step - loss: 0.2799 - accuracy: 0.8847
Epoch 7/10
187/187 [==============================] - 66s 352ms/step - loss: 0.2696 - accuracy: 0.8870
Epoch 8/10
187/187 [==============================] - 57s 303ms/step - loss: 0.2440 - accuracy: 0.8947
Epoch 9/10
187/187 [==============================] - 56s 299ms/step - loss: 0.2478 - accuracy: 0.8994
Epoch 10/10
187/187 [==============================] - 53s 285ms/step - loss: 0.2448 - accuracy: 0.9047

Best Answer

You only use 3000 samples per epoch (see the line nb_train_samples = 3000), while your classes have 3620 and 3651 images respectively. Given that the model reaches about 90% accuracy yet only ever predicts zeros, I assume that during training only class-0 images are being passed to the network. Consider increasing nb_train_samples.
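
As a rough illustration of that suggestion (a sketch only, relying on the standard generator attribute train_generator.samples rather than anything shown in the question), you can drive steps_per_epoch from the generator so that every training image is seen each epoch:

# train_generator.samples is the total number of images the generator found
# on disk (3620 + 3651 = 7271 here), so no images are silently left out.
steps_per_epoch = train_generator.samples // batch_size   # ~454 steps instead of 187

model.fit_generator(train_generator,
    steps_per_epoch = steps_per_epoch,
    epochs = epochs)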
