Problem description
I am working on language modelling, and the vocabulary is large, so I want to use sampled_softmax_loss from TensorFlow. The problem is that the weights and biases passed as arguments to the sampled_softmax_loss function do not seem to be trainable (their values don't change after training).
So I guess I should add them to the computation graph built automatically by the Keras Model, but I have spent a lot of time on this and still haven't found a proper way to do so.
So, once again: I want to add external trainable tf.Variables to the Keras computation graph. Does anyone know a way to do this?
My model (head and tail)
from keras.layers import Input, Embedding, Dense
from keras.models import Model

input_sentence = Input(shape=(INPUT_LENGTH,), dtype='int32')
# Embedding initialized from a pretrained matrix and fine-tuned further
words = Embedding(embedding_matrix.shape[0], embedding_matrix.shape[1],
                  weights=[embedding_matrix], trainable=True)(input_sentence)
...
context = Dense(256, activation='tanh')(context)
model = Model(inputs=input_sentence, outputs=context, name=name)
Loss
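The loss below references two external variables, transposed_W and b, created outside the model. A minimal sketch of how they might be defined (shapes assumed from the surrounding code; sampled_softmax_loss expects class weights of shape [num_classes, dim], i.e. the transpose of a usual output projection, hence the name):

import tensorflow as tf

# Assumed shapes: class weights [num_classes, dim], biases [num_classes];
# dim matches the 256-unit Dense output above.
transposed_W = tf.Variable(
    tf.truncated_normal([OUTPUT_COUNT + 1, 256], stddev=0.05),
    name='sampled_softmax_W')
b = tf.Variable(tf.zeros([OUTPUT_COUNT + 1]), name='sampled_softmax_b')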
def softmax_fine_loss(labels, logits, transposed_W=None, b=None):
    # Map over the batch: per example, sampled softmax across timesteps.
    # (Tuple-unpacking lambdas are Python 2 only; index the pair instead.)
    res = tf.map_fn(
        lambda args: tf.nn.sampled_softmax_loss(
            transposed_W, b, args[0], args[1],
            num_sampled=1000, num_classes=OUTPUT_COUNT + 1),
        (labels, logits), dtype=tf.float32)
    return res
loss = lambda labels, logits: softmax_fine_loss(labels, logits, transposed_W=transposed_W, b=b)
model_truncated.compile(optimizer=optimizer, loss=loss, sample_weight_mode='temporal')
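With sample_weight_mode='temporal' and the map over the batch above, the targets are expected to carry one integer class id per timestep, i.e. shape (batch, INPUT_LENGTH, 1). A hypothetical training call (x_train, y_train, and mask stand in for your own data):

# Hypothetical: y_train holds integer word ids of shape (batch, INPUT_LENGTH);
# the trailing axis is added so each timestep's label has shape (1,),
# matching the [*, num_true] labels that sampled_softmax_loss expects.
model_truncated.fit(x_train, y_train[..., None],
                    sample_weight=mask,  # per-timestep weights ('temporal')
                    batch_size=32, epochs=5)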
I have finally found a workaround
Let's say we need to train weights W and biases b with our model.
So the workaround is to just add them to one of the trainable layers of our model.
model.layers[-1].trainable_weights.extend([W, b])
Then we can compile the model
model.compile(...)
It is extremely important to add the variables to a trainable layer. For example, I have experimented with a Sequential model, and adding [W, b] to an Activation layer does not make them actually trainable.
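Putting it together, here is a minimal self-contained sketch of the workaround (toy shapes and hypothetical names, written against TF 1.x / standalone Keras, not the exact code from the question):

import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model

NUM_CLASSES = 10000   # toy vocabulary size
DIM = 256             # matches the final Dense layer above

# External variables: sampled_softmax_loss wants [num_classes, dim] weights.
W = tf.Variable(tf.truncated_normal([NUM_CLASSES, DIM], stddev=0.05))
b = tf.Variable(tf.zeros([NUM_CLASSES]))

inp = Input(shape=(DIM,))
out = Dense(DIM, activation='tanh')(inp)
model = Model(inputs=inp, outputs=out)

# The workaround: register the external variables on a trainable layer
# *before* compiling, so the optimizer also creates update ops for them.
model.layers[-1].trainable_weights.extend([W, b])

def loss_fn(labels, logits):
    return tf.nn.sampled_softmax_loss(W, b, labels, logits,
                                      num_sampled=100, num_classes=NUM_CLASSES)

# Feed integer class ids (one per example) as the target tensor.
labels_ph = tf.placeholder(tf.int64, shape=(None, 1))
model.compile(optimizer='adam', loss=loss_fn, target_tensors=[labels_ph])

After compiling, training (via model.fit or model.train_on_batch) updates W and b together with the layer weights.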