I am trying to train a 2D convolutional LSTM to make categorical predictions based on video data. However, I seem to be running into a problem with my output layer:

"ValueError: Error when checking target: expected dense_1 to have 5 dimensions, but got array with shape (1, 1939, 9)"

My current model is based on the ConvLSTM2D example provided by the Keras Team. I believe the error above is the result of me misunderstanding that example and its underlying principles.

Data

I have an arbitrary number of videos, each containing an arbitrary number of frames. Each frame is 135x240x1 (the colour channel comes last). This gives an input shape of (None, None, 135, 240, 1), where the two "None" values are the batch size and the number of time steps, respectively. If I train on a single video with 1052 frames, my input shape becomes (1, 1052, 135, 240, 1).

For each frame, the model should predict a value between 0 and 1 for each of 9 classes. This means my output shape is (None, None, 9). If I train on a single video with 1052 frames, this shape becomes (1, 1052, 9).
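
As a concrete sanity check, here is what the x and y arrays for a single 1052-frame video would look like (dummy data, shapes only):

import numpy as np

# One hypothetical 1052-frame video: grayscale 135x240 frames, 9 scores per frame.
frames = 1052
x = np.zeros((1, frames, 135, 240, 1), dtype=np.float32)  # (batch, time, height, width, channels)
y = np.zeros((1, frames, 9), dtype=np.float32)            # (batch, time, classes)

print(x.shape)  # (1, 1052, 135, 240, 1)
print(y.shape)  # (1, 1052, 9)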

Model

Layer (type)                 Output Shape              Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 135, 240, 40 59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
dense_1 (Dense)              (None, None, 135, 240, 9) 369
=================================================================
Total params: 290,769
Trainable params: 290,529
Non-trainable params: 240

Source code
from keras.models import Sequential
from keras.layers import ConvLSTM2D, BatchNormalization, Dense

classes = 9  # one prediction per class, per frame

model = Sequential()

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        input_shape=(None, 135, 240, 1),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(Dense(
        units=classes,
        activation='softmax'
))
model.compile(
        loss='categorical_crossentropy',
        optimizer='adadelta'
)
model.fit_generator(generator=training_sequence)

Traceback
Epoch 1/1
Traceback (most recent call last):
  File ".\lstm.py", line 128, in <module>
    main()
  File ".\lstm.py", line 108, in main
    model.fit_generator(generator=training_sequence)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\models.py", line 1253, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 2244, in fit_generator
    class_weight=class_weight)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 1884, in train_on_batch
    class_weight=class_weight)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 1487, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 113, in _standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_1 to have 5 dimensions, but got array with shape (1, 1939, 9)

With the batch size set to 1, an example input shape is (1, 1389, 135, 240, 1). That shape matches the requirements I described above, so I believe my Keras Sequence subclass (training_sequence in the source code) is correct.
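
My actual generator is longer, but in spirit it is a minimal Sequence that yields one whole video per batch (the class and attribute names here are placeholders, not my real code):

from keras.utils import Sequence
import numpy as np

class VideoSequence(Sequence):  # hypothetical stand-in for my training_sequence
    """Yields one video per batch: x is (1, frames, 135, 240, 1), y is (1, frames, 9)."""

    def __init__(self, videos, labels):
        self.videos = videos  # list of arrays shaped (frames, 135, 240, 1)
        self.labels = labels  # list of arrays shaped (frames, 9)

    def __len__(self):
        return len(self.videos)

    def __getitem__(self, idx):
        x = np.expand_dims(self.videos[idx], axis=0)  # add batch dim -> (1, frames, 135, 240, 1)
        y = np.expand_dims(self.labels[idx], axis=0)  # add batch dim -> (1, frames, 9)
        return x, y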

I suspect the problem is caused by going straight from BatchNormalization() to Dense(). After all, the traceback indicates that the problem occurs in dense_1 (the final layer). However, I don't want to lead anyone astray, so please take my suspicion with a grain of salt.
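
If that suspicion is right, the mismatch comes from Dense only transforming the last axis: applied to a 5D tensor it returns a 5D tensor, while my target is only 3D. A quick sketch that reproduces the shape (not my training code):

from keras.layers import Input, Dense
from keras.models import Model

# Dense acts on the last axis only, so a 5D input stays 5D.
frames_in = Input(shape=(None, 135, 240, 40))   # (batch, time, height, width, channels)
scores = Dense(9)(frames_in)
print(Model(frames_in, scores).output_shape)    # (None, None, 135, 240, 9) -- 5 dimensions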

Edit (March 27, 2018)

After reading this thread about a similar model, I changed my final ConvLSTM2D layer so that its return_sequences argument is set to False instead of True. I also added a GlobalAveragePooling2D layer before my Dense layer. The updated model is below:
Layer (type)                 Output Shape              Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 135, 240, 40 59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, 135, 240, 40)      115360
_________________________________________________________________
batch_normalization_3 (Batch (None, 135, 240, 40)      160
_________________________________________________________________
global_average_pooling2d_1 ( (None, 40)                0
_________________________________________________________________
dense_1 (Dense)              (None, 9)                 369
=================================================================
Total params: 290,769
Trainable params: 290,529
Non-trainable params: 240
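
For reference, the tail of the model now looks roughly like this (earlier layers unchanged; excerpt only):

from keras.layers import ConvLSTM2D, BatchNormalization, GlobalAveragePooling2D, Dense

# ...first two ConvLSTM2D + BatchNormalization blocks as before...

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=False))      # changed from True: only the last time step is returned
model.add(BatchNormalization())

model.add(GlobalAveragePooling2D())   # added: collapses the 135x240 spatial grid -> (None, 40)
model.add(Dense(units=9, activation='softmax'))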

Here is the new traceback:
Traceback (most recent call last):
  File ".\lstm.py", line 131, in <module>
    main()
  File ".\lstm.py", line 111, in main
    model.fit_generator(generator=training_sequence)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\models.py", line 1253, in fit_generator
    initial_epoch=initial_epoch)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 2244, in fit_generator
    class_weight=class_weight)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 1884, in train_on_batch
    class_weight=class_weight)
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 1487, in _standardize_user_data
    exception_prefix='target')
  File "C:\Users\matth\Anaconda3\envs\capstone-gpu\lib\site-packages\keras\engine\training.py", line 113, in _standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (1, 1034, 9)

I printed the x and y shapes during this run. x is (1, 1034, 135, 240, 1) and y is (1, 1034, 9). This narrows the problem down: it appears to be with the y data rather than the x data. Specifically, the Dense layer does not like the time dimension. However, I am not sure how to fix this.

Edit (March 28, 2018)

Yu-Yang's solution worked. For anyone with a similar problem who wants to see what the final model looks like, here is the summary:
Layer (type)                 Output Shape              Param #
=================================================================
conv_lst_m2d_1 (ConvLSTM2D)  (None, None, 135, 240, 40 59200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_2 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
conv_lst_m2d_3 (ConvLSTM2D)  (None, None, 135, 240, 40 115360
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 135, 240, 40 160
_________________________________________________________________
average_pooling3d_1 (Average (None, None, 1, 1, 40)    0
_________________________________________________________________
reshape_1 (Reshape)          (None, None, 40)          0
_________________________________________________________________
dense_1 (Dense)              (None, None, 9)           369
=================================================================
Total params: 290,769
Trainable params: 290,529
Non-trainable params: 240

And the source code:
from keras.models import Sequential
from keras.layers import ConvLSTM2D, BatchNormalization, AveragePooling3D, Reshape, Dense

model = Sequential()

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        input_shape=(None, 135, 240, 1),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(ConvLSTM2D(
        filters=40,
        kernel_size=(3, 3),
        padding='same',
        return_sequences=True))
model.add(BatchNormalization())

model.add(AveragePooling3D((1, 135, 240)))
model.add(Reshape((-1, 40)))
model.add(Dense(
        units=9,
        activation='sigmoid'))

model.compile(
        loss='categorical_crossentropy',
        optimizer='adadelta'
)
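
A quick shape check with a dummy clip confirms that the model now emits one 9-way vector per frame:

import numpy as np

# Dummy 20-frame clip; only the shapes matter here.
x = np.zeros((1, 20, 135, 240, 1), dtype=np.float32)
print(model.predict(x).shape)  # (1, 20, 9)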

Best Answer

If you want to make a prediction for every frame, you should definitely set return_sequences=True on your last ConvLSTM2D layer.

As for the ValueError about the target shape, replace the GlobalAveragePooling2D() layer with AveragePooling3D((1, 135, 240)) followed by Reshape((-1, 40)), so that the output shape is compatible with your target array.
