I'm trying to build a multi-input, multi-output model with the Keras functional API, following their example code,
but I get this error:


  ValueError: Input 0 is incompatible with layer lstm_54: expected
  ndim=3, found ndim=4


The error is raised when creating the lstm_out layer. Here is the code:

def build_model(self):
    main_input = Input(shape=(self.seq_len, 1), name='main_input')
    #seq_len = 50, vocab_len = 1000
    x = Embedding(output_dim=512, input_dim=self.vocab_len()+1, input_length=self.seq_len)(main_input)

    # A LSTM will transform the vector sequence into a single vector,
    # containing information about the entire sequence
    lstm_out = LSTM(50)(x)
    self.auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)

    auxiliary_input = Input(shape=(self.seq_len,1), name='aux_input')
    x = concatenate([lstm_out, auxiliary_input])

    # We stack a deep densely-connected network on top
    x = Dense(64, activation='relu')(x)
    x = Dense(64, activation='relu')(x)
    x = Dense(64, activation='relu')(x)

    # And finally we add the main logistic regression layer
    main_output = Dense(1, activation='sigmoid', name='main_output')(x)

    self.model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, self.auxiliary_output])

    print(self.model.summary())
    self.model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              loss_weights=[1., 0.2])


I thought the problem was in the Embedding layer, but I read in the Keras Embedding documentation that input_dim should equal the vocabulary size + 1.

I don't understand exactly why I'm getting this error, what is wrong with input_dim, or how to fix it.

Best Answer

The input to Embedding should be a 2D tensor of shape (batch_size, sequence_length). In your snippet, the main_input you feed it is a 3D tensor, so Embedding produces a 4D output, which the LSTM rejects. To fix this, change the following lines:

main_input = Input(shape=(self.seq_len, 1), name='main_input')
<...>
auxiliary_input = Input(shape=(self.seq_len,1), name='aux_input')


to:

main_input = Input(shape=(self.seq_len, ), name='main_input')
<...>
auxiliary_input = Input(shape=(self.seq_len, ), name='aux_input')


That should resolve the dimension mismatch.
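For reference, here is a minimal, self-contained sketch of the corrected model. The concrete values of seq_len and vocab_len are illustrative stand-ins for the original class attributes, and the optional input_length argument is omitted; everything else follows the fix above:

```python
# Sketch of the corrected build: 2D inputs of shape (batch, seq_len) let
# Embedding emit the 3D tensor (batch, seq_len, 512) that LSTM expects.
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, concatenate
from tensorflow.keras.models import Model

seq_len, vocab_len = 50, 1000  # illustrative values, not from the original class

main_input = Input(shape=(seq_len,), name='main_input')       # 2D: (batch, 50)
x = Embedding(output_dim=512, input_dim=vocab_len + 1)(main_input)  # 3D: (batch, 50, 512)
lstm_out = LSTM(50)(x)                                        # 2D: (batch, 50)
auxiliary_output = Dense(1, activation='sigmoid', name='aux_output')(lstm_out)

auxiliary_input = Input(shape=(seq_len,), name='aux_input')   # 2D: (batch, 50)
x = concatenate([lstm_out, auxiliary_input])                  # 2D: (batch, 100)
x = Dense(64, activation='relu')(x)
main_output = Dense(1, activation='sigmoid', name='main_output')(x)

model = Model(inputs=[main_input, auxiliary_input],
              outputs=[main_output, auxiliary_output])
model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              loss_weights=[1., 0.2])
```

With both inputs 2D, the concatenation also lines up: lstm_out is (batch, 50) and auxiliary_input is (batch, 50), so concatenate yields (batch, 100) without further reshaping.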

Regarding python - Keras functional API gives the error "expected ndim=3, found ndim=4", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/53460392/
