Problem Description
When creating a Sequential model in Keras, I understand you provide the input shape in the first layer. Does this input shape then make an implicit input layer?
For example, the model below explicitly specifies 2 Dense layers, but is this actually a model with 3 layers consisting of one input layer implied by the input shape, one hidden dense layer with 32 neurons, and then one output layer with 10 possible outputs?
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential([
    Dense(32, input_shape=(784,)),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])
Recommended Answer
Well, it actually is an implicit input layer indeed, i.e. your model is an example of a "good old" neural net with three layers - input, hidden, and output. This is more explicitly visible in the Keras Functional API (check the example in the docs), in which your model would be written as:
from keras.models import Model
from keras.layers import Input, Dense

inputs = Input(shape=(784,))                  # input layer
x = Dense(32, activation='relu')(inputs)      # hidden layer
outputs = Dense(10, activation='softmax')(x)  # output layer
model = Model(inputs, outputs)
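As a quick check (a minimal sketch, continuing from the functional model built above and assuming the standalone keras package, i.e. Keras 2.x), model.summary() makes that input layer visible:

model.summary()
# In Keras 2.x the summary of this functional model starts with an
# InputLayer whose output shape is (None, 784), followed by the two
# Dense layers - the input layer is listed explicitly here, whereas a
# Sequential model's summary begins at the first Dense layer.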
Actually, this implicit input layer is the reason why you have to include an input_shape argument only in the first (explicit) layer of the model in the Sequential API - in subsequent layers, the input shape is inferred from the output of the previous ones (see the comments in the source code of core.py).
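A minimal sketch of this inference (assuming Keras 2.x, where layers expose input_shape and output_shape properties): only the first Dense layer declares input_shape, and the shapes of the later layers are filled in automatically:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(32, activation='relu', input_shape=(784,)),  # input_shape declared only here
    Dense(10, activation='softmax'),                    # (None, 32) inferred from the layer above
])

for layer in model.layers:
    print(layer.name, layer.input_shape, layer.output_shape)
# Prints something like:
#   dense_1 (None, 784) (None, 32)
#   dense_2 (None, 32) (None, 10)
# Note that model.layers holds only the explicit Dense layers; the
# implicit input layer built from input_shape is not among them.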
You may also find the documentation on tf.contrib.keras.layers.Input enlightening.