This article explains how to deal with Keras fit_generator and fit producing different results. It should be a useful reference for anyone running into the same problem.

Problem Description

I am training a Convolutional Neural Network on a dataset of face images. The dataset has 10,000 images of dimensions 700 x 700, and my model has 12 layers. I am using a generator function to read images into the Keras fit_generator function, as shown below.

train_file_names ==> Python list containing filenames of training instances
train_class_labels ==> Numpy array of one-hot encoded class labels ([0, 1, 0], [0, 0, 1], etc.)
train_data ==> Numpy array of training instances
train_steps_per_epoch ==> 16 (batch size is 400 and I have 6400 instances for training, so a single pass through the whole dataset takes 16 iterations)
batch_size ==> 400
calls_made ==> When the generator reaches the end of the training instances, it resets its indexes to load data from the first index in the next epoch.

I am passing this generator as an argument to the Keras 'fit_generator' function to generate a new batch of data for each epoch.

val_data, val_class_labels ==> Validation data numpy arrays
epochs ==> No. of epochs

Using Keras fit_generator:

model.fit_generator(generator=train_generator, steps_per_epoch=train_steps_per_epoch, epochs=epochs, use_multiprocessing=False, validation_data=[val_data, val_class_labels], verbose=True, callbacks=[history, model_checkpoint], shuffle=True, initial_epoch=0)

Code

def train_data_generator(self):
    index_start = index_end = 0
    temp = 0
    calls_made = 0

    while temp < train_steps_per_epoch:
        index_end = index_start + batch_size
        index = 0  # position within the current batch
        for temp1 in range(index_start, index_end):
            # Read image as grayscale and transpose
            img = cv2.imread(str(TRAIN_DIR / train_file_names[temp1]), cv2.IMREAD_GRAYSCALE).T
            train_data[index] = cv2.resize(img, (self.ROWS, self.COLS), interpolation=cv2.INTER_CUBIC)
            index += 1
        yield train_data, self.train_class_labels[index_start:index_end]
        calls_made += 1
        if calls_made == train_steps_per_epoch:
            # End of the dataset: reset to the first index for the next epoch
            index_start = 0
            temp = 0
            calls_made = 0
        else:
            index_start = index_end
            temp += 1
        gc.collect()
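Note that the generator above always walks the files in the same fixed order, so every epoch yields identical batches. A standalone sketch of that fixed-order batching makes this visible (plain NumPy, with a toy array standing in for the image data, which is an assumption for illustration):

```python
import numpy as np

def fixed_order_batches(x, y, batch_size):
    """Yield (batch_x, batch_y) in the same order on every pass, like the generator above."""
    steps = len(x) // batch_size
    while True:
        for step in range(steps):
            start, end = step * batch_size, (step + 1) * batch_size
            yield x[start:end], y[start:end]

# Toy stand-in for the image data: 8 samples, batch size 4 -> 2 steps per epoch
x = np.arange(8)
y = np.arange(8) * 10
gen = fixed_order_batches(x, y, batch_size=4)

epoch1 = [next(gen)[0].tolist() for _ in range(2)]
epoch2 = [next(gen)[0].tolist() for _ in range(2)]
print(epoch1 == epoch2)  # prints True: both epochs contain exactly the same batches
```

The answer below shows why this fixed ordering matters.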

Output of fit_generator

Epoch 86/300
16/16 [==============================] - 16s 1s/step - loss: 1.5739 - acc: 0.2991 - val_loss: 12.0076 - val_acc: 0.2110
Epoch 87/300
16/16 [==============================] - 16s 1s/step - loss: 1.6010 - acc: 0.2549 - val_loss: 11.6689 - val_acc: 0.2016
Epoch 88/300
16/16 [==============================] - 16s 1s/step - loss: 1.5750 - acc: 0.2391 - val_loss: 10.2663 - val_acc: 0.2004
Epoch 89/300
16/16 [==============================] - 16s 1s/step - loss: 1.5526 - acc: 0.2641 - val_loss: 11.8809 - val_acc: 0.2249
Epoch 90/300
16/16 [==============================] - 16s 1s/step - loss: 1.5867 - acc: 0.2602 - val_loss: 12.0392 - val_acc: 0.2010
Epoch 91/300
16/16 [==============================] - 16s 1s/step - loss: 1.5524 - acc: 0.2609 - val_loss: 12.0254 - val_acc: 0.2027

My problem is that while using 'fit_generator' with the above generator function, my model loss does not improve at all and validation accuracy is very poor. But when I use the Keras 'fit' function as below, the loss decreases and validation accuracy is far better.

Using the Keras fit function without a generator:

model.fit(self.train_data, self.train_class_labels, batch_size=self.batch_size, epochs=self.epochs, validation_data=[self.val_data, self.val_class_labels], verbose=True, callbacks=[history, model_checkpoint])
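One relevant difference: when fit is given in-memory arrays, it shuffles the training data before each epoch by default (its shuffle argument defaults to True). A rough NumPy sketch of that per-epoch shuffling, as a simplification rather than Keras's actual implementation:

```python
import numpy as np

def shuffled_epoch(x, y, rng):
    """Return x and y permuted with the same random order, keeping sample/label pairs aligned."""
    perm = rng.permutation(len(x))
    return x[perm], y[perm]

x = np.arange(6)
y = x * 10  # label 10*i belongs to sample i
rng = np.random.default_rng(42)

xs, ys = shuffled_epoch(x, y, rng)
print(np.array_equal(ys, xs * 10))  # prints True: pairs stay aligned after shuffling
```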

Output when training with the fit function

Epoch 25/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0207 - acc: 0.9939 - val_loss: 4.1009 - val_acc: 0.4916
Epoch 26/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0197 - acc: 0.9948 - val_loss: 2.4758 - val_acc: 0.5568
Epoch 27/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0689 - acc: 0.9800 - val_loss: 1.2843 - val_acc: 0.7361
Epoch 28/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0207 - acc: 0.9947 - val_loss: 5.6979 - val_acc: 0.4560
Epoch 29/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0353 - acc: 0.9908 - val_loss: 1.0801 - val_acc: 0.7817
Epoch 30/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0362 - acc: 0.9896 - val_loss: 3.7851 - val_acc: 0.5173
Epoch 31/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0481 - acc: 0.9896 - val_loss: 1.1152 - val_acc: 0.7795
Epoch 32/300
6400/6400 [==============================] - 20s 3ms/step - loss: 0.0106 - acc: 0.9969 - val_loss: 1.4803 - val_acc: 0.7372

Recommended Answer

It is most probably due to the lack of data shuffling in your data generator. I ran into the same problem: I changed shuffle=True but without success, then I integrated a shuffle inside my custom generator. Here is the custom generator suggested by the Keras documentation:

import math
from keras.utils import Sequence

class Generator(Sequence):
    # Class is a dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]
        return batch_x, batch_y

Here it is with shuffling inside:

import math
import numpy as np
from keras.utils import Sequence

class Generator(Sequence):
    # Class is a dataset wrapper for better training performance
    def __init__(self, x_set, y_set, batch_size=256):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])

    def __len__(self):
        return math.ceil(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_x = self.x[inds]
        batch_y = self.y[inds]
        return batch_x, batch_y

    def on_epoch_end(self):
        np.random.shuffle(self.indices)
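A quick way to sanity-check the shuffled version: each epoch still covers every sample exactly once, but the sample order changes between epochs. Here is a standalone sketch of the same index-shuffling idea (plain NumPy with a seeded generator, and without the Keras Sequence base class so it runs on its own):

```python
import math
import numpy as np

class ShuffledBatches:
    # Same idea as the shuffled Generator above, minus the keras Sequence base class
    def __init__(self, x_set, y_set, batch_size=4, seed=0):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size
        self.indices = np.arange(self.x.shape[0])
        self.rng = np.random.default_rng(seed)

    def __len__(self):
        return math.ceil(self.x.shape[0] / self.batch_size)

    def __getitem__(self, idx):
        inds = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        return self.x[inds], self.y[inds]

    def on_epoch_end(self):
        self.rng.shuffle(self.indices)

x = np.arange(12)
y = x * 10
gen = ShuffledBatches(x, y, batch_size=4)

epoch1 = np.concatenate([gen[i][0] for i in range(len(gen))])
gen.on_epoch_end()  # fit_generator calls this at the end of each epoch
epoch2 = np.concatenate([gen[i][0] for i in range(len(gen))])

# Every epoch covers all 12 samples exactly once, just in a (potentially) different order
print(sorted(epoch1.tolist()) == sorted(epoch2.tolist()) == list(range(12)))
```

Because indexing x and y with the same `inds` array keeps samples and labels aligned, shuffling never breaks the sample/label pairing.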

Then the model converged nicely. Credit to fculinovic.
