This post explains how to fix the error "LSTM and CNN: ValueError: Error when checking target: expected time_distributed_1 to have 3 dimensions, but got array with shape (400, 256)"; hopefully it is a useful reference for anyone hitting the same problem.

Problem Description

I want to apply a CNN and an LSTM to my data, and I am only using a small subset of it: my training data has shape (400, 50) and my test data has shape (200, 50). With the CNN alone the model works without any errors; I only get errors once I add the LSTM layers:

model = Sequential()
model.add(Conv1D(filters=8,
                 kernel_size=16,
                 padding='valid',
                 activation='relu',
                 strides=1, input_shape=(50,1)))
model.add(MaxPooling1D(pool_size=2,strides=None, padding='valid', input_shape=(50,1))) # strides=None means strides=pool_size
model.add(Conv1D(filters=8,
                 kernel_size=8,
                 padding='valid',
                 activation='relu',
                 strides=1))
model.add(MaxPooling1D(pool_size=2,strides=None, padding='valid',input_shape=(50,1)))
model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2, recurrent_dropout=0.2)) # 32 LSTM units
model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2,recurrent_dropout=0.2))
model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2,recurrent_dropout=0.2))
model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2,recurrent_dropout=0.2))
model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2,recurrent_dropout=0.2))
model.add(TimeDistributed(Dense(256, activation='softmax')))

# # # 4. Compile model
print('########################### Compilation of the model ######################################')
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
print('###########################Fitting the model ######################################')
# # # # # 5. Fit model on training data
x_train = x_train.reshape((400,50,1))
print(x_train.shape) # (400,50,1)
x_test = x_test.reshape((200,50,1))
print(x_test.shape) # (200,50,1)
model.fit(x_train, y_train, batch_size=100, epochs=100,verbose=0)
print(model.summary()) 
# # # # # 6. Evaluate model on test data
score = model.evaluate(x_test, y_test, verbose=0)
print (score)

Here is the error:

Traceback (most recent call last):
  File "CNN_LSTM_Based_Attack.py", line 156, in <module>
    model.fit(x_train, y_train, batch_size=100, epochs=100,verbose=0)
  File "/home/doc/.local/lib/python2.7/site-packages/keras/models.py", line 853, in fit
    initial_epoch=initial_epoch)
  File "/home/doc/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1424, in fit
    batch_size=batch_size)
  File "/home/doc/.local/lib/python2.7/site-packages/keras/engine/training.py", line 1304, in _standardize_user_data
    exception_prefix='target')
  File "/home/doc/.local/lib/python2.7/site-packages/keras/engine/training.py", line 127, in _standardize_input_data
    str(array.shape))
ValueError: Error when checking target: expected time_distributed_1 to have 3 dimensions, but got array with shape (400, 256)

You can find the whole summary of the model here (I am new to LSTMs; this is the first time I have used one):

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv1d_1 (Conv1D)            (None, 35, 8)             136
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 17, 8)             0
_________________________________________________________________
dropout_1 (Dropout)          (None, 17, 8)             0
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 10, 8)             520
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 5, 8)              0
_________________________________________________________________
dropout_2 (Dropout)          (None, 5, 8)              0
_________________________________________________________________
lstm_1 (LSTM)                (None, 5, 32)             5248
_________________________________________________________________
lstm_2 (LSTM)                (None, 5, 32)             8320
_________________________________________________________________
lstm_3 (LSTM)                (None, 5, 32)             8320
_________________________________________________________________
lstm_4 (LSTM)                (None, 5, 32)             8320
_________________________________________________________________
lstm_5 (LSTM)                (None, 5, 32)             8320
_________________________________________________________________
time_distributed_1 (TimeDist (None, 5, 256)            8448
=================================================================
Total params: 47,632
Trainable params: 47,632
Non-trainable params: 0
_________________________________________________________________

When I replace these lines of code:

model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2, recurrent_dropout=0.2)) # 32 LSTM units
model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2,recurrent_dropout=0.2))
model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2,recurrent_dropout=0.2))
model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2,recurrent_dropout=0.2))
model.add(LSTM(32, return_sequences=True,
              activation='tanh', recurrent_activation='hard_sigmoid',
              dropout=0.2,recurrent_dropout=0.2))
model.add(TimeDistributed(Dense(256, activation='softmax')))

with just this single line:

model.add(LSTM(26, activation='tanh'))

then it works perfectly fine.

I would be grateful if you could help me.

Recommended Answer

LSTM layers expect input of shape (samples, time steps, features). When stacking LSTMs you should set return_sequences=True on the intermediate layers: each one then outputs a tensor of shape (samples, time steps, units), which lets the stack fit together. On the last LSTM layer, set return_sequences=False if you only want to predict one step ahead (i.e. the next value in the sequence/time series); if you don't, the model will predict the same number of time steps as there are in the input. You can of course also predict a different number of steps (e.g. given 50 past observations, predict the next 10), but that is a little tricky in Keras.
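
To make the shape bookkeeping concrete, here is a minimal sketch (assuming the same standalone Keras 2.x setup as in the question; with tf.keras only the import paths change). The Conv/MaxPool stack from the question shrinks the 50 input steps down to 5 (50 → 35 → 17 → 10 → 5), and return_sequences then decides whether the LSTM emits all 5 steps or only the last one:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, LSTM

demo = Sequential()
demo.add(Conv1D(8, 16, activation='relu', input_shape=(50, 1)))  # -> (None, 35, 8)
demo.add(MaxPooling1D(2))                                        # -> (None, 17, 8)
demo.add(Conv1D(8, 8, activation='relu'))                        # -> (None, 10, 8)
demo.add(MaxPooling1D(2))                                        # -> (None, 5, 8)

demo.add(LSTM(32, return_sequences=True))    # one output per time step
print(demo.output_shape)                     # (None, 5, 32)

demo.add(LSTM(32, return_sequences=False))   # only the last time step
print(demo.output_shape)                     # (None, 32)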

In your case the Conv/MaxPool layers output 5 "time steps", and you have return_sequences=True on the last LSTM layer, so your "y" must have shape (samples, 5, 256). Otherwise, set return_sequences=False on the last layer and drop TimeDistributed, since you are only predicting one time step ahead.
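
And a sketch of that single-step option: return_sequences=False on the final LSTM, a plain Dense instead of TimeDistributed, and a 2-D target such as the (400, 256) array from the question. The layer sizes simply mirror the question's code; the categorical_crossentropy loss is only a suggestion for a 256-way softmax, so keep your own loss if the 256 outputs mean something else:

from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, LSTM, Dense

model = Sequential()
model.add(Conv1D(filters=8, kernel_size=16, activation='relu', input_shape=(50, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(filters=8, kernel_size=8, activation='relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(LSTM(32, return_sequences=True, dropout=0.2, recurrent_dropout=0.2))
model.add(LSTM(32, return_sequences=False, dropout=0.2, recurrent_dropout=0.2))  # last LSTM: no sequences
model.add(Dense(256, activation='softmax'))  # no TimeDistributed for a 2-D target

model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.output_shape)  # (None, 256) -- matches y_train of shape (400, 256)
# model.fit(x_train, y_train, batch_size=100, epochs=100)  # x_train: (400, 50, 1)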

