This article explains how to resolve "ValueError: Input 0 is incompatible with layer layer_1: expected ndim=3, found ndim=2", and should be a useful reference for anyone hitting the same problem.

Problem Description

I am trying to build a text summarizer using word embeddings and an encoder-decoder architecture. This is my first shot at Keras and I cannot understand why layer_1 requires ndim=3; I am not able to figure this out. Below is my code:

vocab_size = 16828
n_embeddings = 200

def model_builder(embeds):
        model = keras.Sequential()
        model.add(Embedding(weights=[embeds], name="embedding_1", input_dim=vocab_size,
                            output_dim=n_embeddings))
        for i in range(3):
            lstm = LSTM(rnn_size, name="layer_%s" %(i))
            model.add(lstm)
            model.add(Dropout(prob, name="drop_%s" %(i)))
        model.add(Dense())
        model.add(Activation('softmax', name="activation"))
        return model

rnn_size = 200
prob = 0.5
encoder = model_builder(embedding)
encoder.compile(loss='categorical_crossentropy', optimizer='rmsprop')
encoder.save_weights('embeddings.pkl', overwrite=True)

I would really appreciate your help. Let me know if you need any other information. Thank you in advance.

P.S. The Keras backend is TensorFlow.

Recommended Answer

Your problem lies in these lines:

for i in range(3):
        lstm = LSTM(rnn_size, name="layer_%s" %(i))
        model.add(lstm)
        model.add(Dropout(prob, name="drop_%s" %(i)))

By default an LSTM returns only the last step of its predictions, so the data loses its sequential nature. In your example, after the first iteration the LSTM outputs a single 2-D vector instead of a 3-D sequence, and that is why the error is raised when the next LSTM (which expects ndim=3) receives it.

In order to fix that, try:

for i in range(2):
        lstm = LSTM(rnn_size, name="layer_%s" %(i), return_sequences=True)
        model.add(lstm)
        model.add(Dropout(prob, name="drop_%s" %(i)))
lstm = LSTM(rnn_size, name="layer_2", return_sequences=False)  # a distinct name avoids clashing with "layer_1" from the loop
model.add(lstm)
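
As a quick sanity check, the following minimal sketch (the input_length of 50 is just an assumed example value, and the model name check is arbitrary) prints the output rank with and without return_sequences:

from keras.models import Sequential
from keras.layers import Embedding, LSTM

check = Sequential()
check.add(Embedding(input_dim=16828, output_dim=200, input_length=50))  # input_length=50 is an assumed example
check.add(LSTM(200, return_sequences=True))
print(check.output_shape)   # (None, 50, 200) -> 3-D, safe to feed into another LSTM
check.add(LSTM(200, return_sequences=False))
print(check.output_shape)   # (None, 200)     -> 2-D, only the last step remains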

Another thing I've noticed is that you are using Dense incorrectly. You should provide the number of output neurons:

model.add(Dense(nb_of_output_neurons))
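
Putting both fixes together, a corrected model_builder could look like the sketch below. This is a reconstruction rather than the answerer's exact code: it assumes the output layer predicts over the full vocabulary (vocab_size neurons) and that a fixed input_length (max_len) is known for the padded inputs.

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dropout, Dense, Activation

vocab_size = 16828
n_embeddings = 200
rnn_size = 200
prob = 0.5
max_len = 50  # assumed sequence length; set it to the padded length of your inputs

def model_builder(embeds):
    model = Sequential()
    model.add(Embedding(weights=[embeds], name="embedding_1",
                        input_dim=vocab_size, output_dim=n_embeddings,
                        input_length=max_len))
    # Every LSTM except the last must return full sequences so the next
    # LSTM still receives 3-D input (batch, timesteps, units).
    for i in range(2):
        model.add(LSTM(rnn_size, name="layer_%s" % i, return_sequences=True))
        model.add(Dropout(prob, name="drop_%s" % i))
    model.add(LSTM(rnn_size, name="layer_2", return_sequences=False))
    model.add(Dropout(prob, name="drop_2"))
    # Dense needs an explicit number of output neurons; vocab_size is assumed here.
    model.add(Dense(vocab_size))
    model.add(Activation('softmax', name="activation"))
    return model

With these changes, model.summary() shows 3-D outputs flowing between the stacked LSTMs and a single 2-D vector going into the Dense layer, which is exactly what the error message was asking for.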

Cheers.

This concludes the article on "ValueError: Input 0 is incompatible with layer layer_1: expected ndim=3, found ndim=2"; hopefully the recommended answer above helps.
