I have defined my MLP in the code below. I want to extract the values of layer_2.

```python
from keras.layers import Input, Dense, Dot
from keras.models import Model
from keras import optimizers

def gater(self):
    dim_inputs_data = Input(shape=(self.train_dim[1],))
    dim_svm_yhat = Input(shape=(3,))
    layer_1 = Dense(20, activation='sigmoid')(dim_inputs_data)
    layer_2 = Dense(3, name='layer_op_2', activation='sigmoid', use_bias=False)(layer_1)
    layer_3 = Dot(1)([layer_2, dim_svm_yhat])
    out_layer = Dense(1, activation='tanh')(layer_3)
    # note: Keras expects the keyword arguments inputs=/outputs= here
    model = Model(inputs=[dim_inputs_data, dim_svm_yhat], outputs=out_layer)
    adam = optimizers.Adam(lr=0.01)
    model.compile(loss='mse', optimizer=adam, metrics=['accuracy'])
    return model
```

Suppose the output of layer_2 is the matrix below:

```
0.1 0.7 0.8
0.1 0.8 0.2
0.1 0.5 0.5
...
```

I would like the following to be fed into layer_3 instead:

```
0 0 1
0 1 0
0 1 0
```

Basically, I want the first maximum value in each row converted to 1 and all the others to 0. How can this be achieved in Keras?

Solution

Who decides the range of output values?

The output range of any layer in a neural network is decided by the activation function used for that layer. For example, if you use tanh as your activation function, your output values are restricted to [-1, 1], and they are continuous: every input in [-inf, +inf] on the x-axis gets mapped to [-1, +1] on the y-axis. Understanding this mapping is very important.

What you should do is add a custom activation function that restricts your values to a step function, i.e. either 1 or 0 over [-inf, +inf], and apply it to that layer.

How do I know which function to use?

You need to create a y = some_function that satisfies all your needs (the input-to-output mapping) and convert it to Python code, like this one:

```python
from keras import backend as K

def binaryActivationFromTanh(x, threshold=0.0):
    # the default threshold lets Keras call this with a single tensor argument
    # squash [-inf, +inf] to [-1, 1]; you can skip this step if your
    # threshold is meant to be compared against values in [-inf, +inf]
    activated_x = K.tanh(x)
    binary_activated_x = activated_x > threshold
    # cast the boolean tensor to the Keras default float type,
    # otherwise downstream layers cannot consume it
    binary_activated_x = K.cast(binary_activated_x, K.floatx())
    return binary_activated_x
```

After making your custom activation function, you can use it like:

```python
x = Input(shape=(1000,))
y = Dense(10, activation=binaryActivationFromTanh)(x)
```

Now test the values and see whether you get what you expected (a quick check is sketched after the answer). You can then drop this piece into a bigger neural network.

I strongly discourage adding new layers just to restrict your outputs, unless the layer exists solely for activation (like keras.layers.LeakyReLU).
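As a quick sanity check of binaryActivationFromTanh before wiring it into a model, you can evaluate it on a small constant tensor. This is a minimal sketch with hypothetical input values, assuming the function as defined above (including the cast to floatx):

```python
import numpy as np
from keras import backend as K

# one row of three hypothetical pre-activations
sample = K.constant(np.array([[-2.0, 0.1, 3.0]]))
out = binaryActivationFromTanh(sample)
print(K.eval(out))  # tanh maps the inputs to roughly [-0.96, 0.10, 0.995],
                    # so thresholding at 0 yields [[0. 1. 1.]]
```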
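Note that the thresholded activation above binarizes each unit independently, whereas the question asks for a one-hot of the row-wise maximum. A minimal sketch of that variant using only Keras backend ops is below (the name onehot_max_activation is mine, not from the original answer). Be aware that argmax has no useful gradient, so the layers below it would stop learning; hard selections like this are usually applied only at inference time, or relaxed with a softmax during training:

```python
from keras import backend as K

def onehot_max_activation(x):
    # x: 2-D tensor of shape (batch, units); returns a float tensor of the
    # same shape with a 1 at each row's maximum and 0 elsewhere
    # (on ties, TensorFlow's argmax typically picks the lowest index,
    # though this is not guaranteed)
    num_units = K.int_shape(x)[-1]
    return K.one_hot(K.argmax(x, axis=-1), num_units)

# hypothetical usage in the gater model from the question:
# layer_2 = Dense(3, name='layer_op_2',
#                 activation=onehot_max_activation, use_bias=False)(layer_1)
```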