Question
A Keras model can be used as a Tensorflow function on a Tensor, through the functional API, as described here.
So we can do this:
import tensorflow as tf
from keras.models import Sequential
from keras.layers import InputLayer, Dense

a = tf.placeholder(dtype=tf.float32, shape=(None, 784))
model = Sequential()
model.add(InputLayer(input_tensor=a, input_shape=(None, 784)))
model.add(Dense(32, activation='relu'))
model.add(Dense(10, activation='softmax'))
output = model.output
which is a tensor:
<tf.Tensor 'dense_24/Softmax:0' shape=(?, 10) dtype=float32>
But this also works without any InputLayer:
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

a = tf.placeholder(dtype=tf.float32, shape=(None, 784))
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
output = model(a)
This works, and output has the same shape as before:
<tf.Tensor 'sequential_9/dense_22/Softmax:0' shape=(?, 10) dtype=float32>
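The snippets above use TF 1.x placeholders; the same behavior of the second form can be sketched with tf.keras in eager mode (the package paths below assume TensorFlow 2.x, which is an assumption, not what the question used):

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# The second form above: no InputLayer, shape given to the first Dense.
model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))

# Calling the model on a plain tensor yields a tensor of shape (batch, 10).
x = tf.zeros((5, 784))
output = model(x)
print(output.shape)  # (5, 10)
```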
I assume the first form permits:
- to explicitly attach the inputs and outputs as attributes of the model (of the same names), so we can reuse them elsewhere, for example with other TF ops;
- to transform the tensors given as inputs into Keras inputs, with additional metadata (such as _keras_history, as stated in the source code).
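As a rough illustration of the first point, here is a sketch in tf.keras (TF 2.x paths, whereas the question uses standalone Keras with a TF 1.x placeholder): once an InputLayer is present, model.input and model.output are attached automatically and can be reused with other TF ops.

```python
import tensorflow as tf
from tensorflow.keras.layers import InputLayer, Dense
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(InputLayer(input_shape=(784,)))  # note: tf.keras takes the shape without the batch dim
model.add(Dense(32, activation='relu'))
model.add(Dense(10, activation='softmax'))

# inputs and outputs are attached as attributes of the model...
inp, out = model.input, model.output
# ...so they can be reused elsewhere, e.g. fed into other TF ops:
log_probs = tf.math.log(out + 1e-9)
log_model = tf.keras.Model(inputs=inp, outputs=log_probs)
```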
But this is not something we cannot do with the second form, so is there a special usage of InputLayer (and, a fortiori, Input), apart from multiple inputs?

Moreover, InputLayer is tricky because it uses input_shape differently from other Keras layers: we specify the batch size (None here), which is not usually the case...
Answer
InputLayer seems to have several uses:
- First, it allows you to give pure TensorFlow tensors as is, without specifying their shape. E.g. you could have written
model.add(InputLayer(input_tensor=a))
This is nice for several obvious reasons, among them less duplication.
- Second, they allow you to write non-sequential networks with a single input, e.g.
      a
     / \
    /   \
   /     \
conv1   conv2
  |       |
Without InputLayer you would need to explicitly feed conv1 and conv2 the same tensor, or create an arbitrary identity layer on top of the model. Neither is quite pleasing.
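A minimal sketch of such a single-input, two-branch network with the functional Input (the layer sizes and kernel widths here are illustrative assumptions, not from the answer):

```python
from tensorflow.keras.layers import Input, Conv2D, concatenate
from tensorflow.keras.models import Model

# One input feeding two parallel conv branches, as in the diagram above.
a = Input(shape=(28, 28, 1))
conv1 = Conv2D(8, 3, padding='same', activation='relu')(a)
conv2 = Conv2D(8, 5, padding='same', activation='relu')(a)
merged = concatenate([conv1, conv2])  # channels: 8 + 8 = 16
model = Model(inputs=a, outputs=merged)
```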
- Finally, they remove the arbitrary distinction between "layers that are also inputs" and "normal layers". If you use InputLayer, you can write code with a clear distinction between which layer is the input and which layers do something with it. This improves code readability and makes refactoring easier: for example, replacing the first layer becomes just as easy as replacing any other layer, and you don't need to think about input_shape.
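For instance, with an explicit InputLayer the first hidden layer carries no input_shape argument, so swapping it is a one-line change (a sketch in tf.keras, where InputLayer takes the shape without the batch dimension):

```python
from tensorflow.keras.layers import InputLayer, Dense
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(InputLayer(input_shape=(784,)))
# The first "real" layer is replaceable like any other; no input_shape on it.
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
```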
That concludes this article on "What is the advantage of using an InputLayer (or Input) in a Keras model with TensorFlow tensors?". We hope the recommended answer is helpful.