I load an autoencoder from a saved file and print its structure like this:
autoencoder = load_model("autoencoder_mse1.h5")
autoencoder.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_8 (InputLayer)             (None, 19)            0
____________________________________________________________________________________________________
dense_43 (Dense)                 (None, 16)            320         input_8[0][0]
____________________________________________________________________________________________________
dense_44 (Dense)                 (None, 16)            272         dense_43[0][0]
____________________________________________________________________________________________________
dense_45 (Dense)                 (None, 2)             34          dense_44[0][0]
____________________________________________________________________________________________________
dense_46 (Dense)                 (None, 16)            48          dense_45[0][0]
____________________________________________________________________________________________________
dense_47 (Dense)                 (None, 16)            272         dense_46[0][0]
____________________________________________________________________________________________________
dense_48 (Dense)                  (None, 19)           323         dense_47[0][0]
====================================================================================================
Total params: 1269
____________________________________________________________________________________________________
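(For reference, each Dense layer's parameter count here is inputs × units + units, e.g. 19 × 16 + 16 = 320 for dense_43 and 16 × 2 + 2 = 34 for the 2-unit bottleneck, which adds up to the 1269 total.)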
The first four layers, including the InputLayer, make up the encoder part. I would like to know whether there is a quick way to grab just those four layers. So far the only solution I have come up with is:
encoder = Sequential()
encoder.add(Dense(16, input_dim=19, weights=autoencoder.layers[1].get_weights()))
and then adding the other two layers by hand. I was hoping there is a more efficient way to extract the first four layers, especially since the .summary() method already spits out a layer-by-layer summary.
EDIT 1 (possible solution):
I have found the following solution, but I am hoping for something more efficient (less code).
encoder = Sequential()
for i, l in enumerate(autoencoder.layers[1:]):
    if i == 0:
        encoder.add(Dense(input_dim=data.shape[1], output_dim=l.output_dim,
                          activation="relu", weights=l.get_weights()))
    else:
        encoder.add(Dense(output_dim=l.output_dim, activation="relu",
                          weights=l.get_weights()))
    if l.output_dim == 2:
        break
Best Answer
Try this and let me know whether it works:
# To get the first four layers
model.layers[0:4]
# To get the input shape of a layer
model.layers[layer_of_interest_index].input_shape
# To get the output shape of a layer
model.layers[layer_of_interest_index].output_shape
# To get the weight matrices of a layer
model.layers[layer_of_interest_index].get_weights()
Hope this helps.
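A shorter alternative (just a sketch, assuming the Keras functional API and that the 2-unit bottleneck is autoencoder.layers[3] when the InputLayer is counted) is to cut the loaded model directly at the bottleneck:
from keras.models import Model, load_model

autoencoder = load_model("autoencoder_mse1.h5")
# Sketch: reuse the loaded model's input tensor and stop the graph at the
# 2-unit bottleneck layer (index 3, counting the InputLayer).
encoder = Model(autoencoder.input, autoencoder.layers[3].output)
encoder.summary()  # should list input_8, dense_43, dense_44 and dense_45
This reuses the trained weights in place, so there is no need to copy them layer by layer with get_weights().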
Regarding python - Keras: get the first n layers, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/40734745/