I have a number of items in groups of different sizes. In each group, one (known) item is the "correct" one. A function assigns a score to every item, which yields a flat vector of item scores, plus vectors giving the offset where each group starts in that flat vector and each group's size. I want to softmax the scores within each group to get item probabilities, then sum the logs of the probabilities assigned to the correct answers. Below is a simpler version that just returns the score of the correct answer in each group, with no softmax or log.
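For reference, the full objective described above (per-group softmax, then sum of log-probabilities of the correct items) can be sketched in plain NumPy; the array names mirror the Theano variables used below:

```python
import numpy as np

def grouped_log_likelihood(preds, group_sizes, offsets, correct):
    """Sum of log softmax-probabilities of the correct item in each group."""
    total = 0.0
    for size, offset, ans in zip(group_sizes, offsets, correct):
        scores = preds[offset:offset + size]
        e = np.exp(scores - scores.max())  # shift for numerical stability
        probs = e / e.sum()
        total += np.log(probs[ans])
    return total

# same sample data as the script below
preds = np.array([0.1, 0.7, 0.3, 0.05, 0.3, 0.3, 0.3])
print(grouped_log_likelihood(preds, [4, 3], [0, 4], [1, 2]))
```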
import numpy
import theano
import theano.tensor as T
from theano.printing import Print
def scoreForCorrectAnswer(groupSize, offset, correctAnswer, preds):
    # for each group, this will get called with the size of
    # the group, the offset of where the group begins in the
    # predictions vector, and which item in that group is correct
    relevantPredictions = preds[offset:offset + groupSize]
    ans = Print("CorrectAnswer")(correctAnswer)
    return relevantPredictions[ans]
groupSizes = T.ivector('groupSizes')
offsets = T.ivector('offsets')
x = T.fvector('x')
W = T.vector('W')
correctAnswers = T.ivector('correctAnswers')
# for this simple example, we'll just score the items by
# element-wise product with a weight vector
predictions = x * W
(values, updates) = theano.map(fn=scoreForCorrectAnswer,
                               sequences=[groupSizes, offsets, correctAnswers],
                               non_sequences=[predictions])
func = theano.function([groupSizes, offsets, correctAnswers, W, x], [values])
sampleInput = numpy.array([0.1,0.7,0.3,0.05,0.3,0.3,0.3], dtype='float32')
sampleW = numpy.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], dtype='float32')
sampleOffsets = numpy.array([0,4], dtype='int32')
sampleGroupSizes = numpy.array([4,3], dtype='int32')
sampleCorrectAnswers = numpy.array([1,2], dtype='int32')
data = func(sampleGroupSizes, sampleOffsets, sampleCorrectAnswers, sampleW, sampleInput)
print data
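As a sanity check, the same computation in plain NumPy (no Theano required) shows what the map returns on this sample data: since W is all ones, predictions equals x, and the correct-answer scores are predictions[0 + 1] and predictions[4 + 2]:

```python
import numpy as np

x = np.array([0.1, 0.7, 0.3, 0.05, 0.3, 0.3, 0.3], dtype='float32')
W = np.ones(7, dtype='float32')
offsets = [0, 4]
correct = [1, 2]

predictions = x * W
# score of the correct item in each group: values[0] ≈ 0.7, values[1] ≈ 0.3
values = [predictions[off + ans] for off, ans in zip(offsets, correct)]
```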
#these all three raise the same exception (see below)
gW1 = T.grad(cost=T.sum(values), wrt=W)
gW2 = T.grad(cost=T.sum(values), wrt=W, disconnected_inputs='warn')
gW3 = T.grad(cost=T.sum(values), wrt=W, consider_constant=[groupSizes,offsets])
This computes the output correctly, but when I try to take the gradient with respect to the parameter W, I get (paths abbreviated):
Traceback (most recent call last):
File "test_scan_for_stackoverflow.py", line 37, in <module>
gW = T.grad(cost=T.sum(values), wrt=W)
File "Theano-0.6.0rc2-py2.7.egg/theano/gradient.py", line 438, in grad
outputs, wrt, consider_constant)
File "Theano-0.6.0rc2-py2.7.egg/theano/gradient.py", line 698, in _populate_var_to_app_to_idx
account_for(output)
File "Theano-0.6.0rc2-py2.7.egg/theano/gradient.py", line 694, in account_for
account_for(ipt)
File "Theano-0.6.0rc2-py2.7.egg/theano/gradient.py", line 669, in account_for
connection_pattern = _node_to_pattern(app)
File "Theano-0.6.0rc2-py2.7.egg/theano/gradient.py", line 554, in _node_to_pattern
connection_pattern = node.op.connection_pattern(node)
File "Theano-0.6.0rc2-py2.7.egg/theano/scan_module/scan_op.py", line 1331, in connection_pattern
ils)
File "Theano-0.6.0rc2-py2.7.egg/theano/scan_module/scan_op.py", line 1266, in compute_gradient
known_grads={y: g_y}, wrt=x)
File "Theano-0.6.0rc2-py2.7.egg/theano/gradient.py", line 511, in grad
handle_disconnected(elem)
File "Theano-0.6.0rc2-py2.7.egg/theano/gradient.py", line 497, in handle_disconnected
raise DisconnectedInputError(message)
theano.gradient.DisconnectedInputError: grad method was asked to compute
the gradient with respect to a variable that is not part of the
computational graph of the cost, or is used only by a
non-differentiable operator: groupSizes[t]
Now, groupSizes is constant, so there is no reason to need any gradient with respect to it. Normally you could handle this either by suppressing the DisconnectedInputError or by telling T.grad to treat groupSizes as constant (see the last lines of the example script). But there doesn't seem to be any way to pass that information down to the inner T.grad call inside the ScanOp's gradient computation. Am I missing something? Is there some way to make the gradient computation work through a ScanOp?
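As an aside, one way to sidestep the scan (and hence its inner gradient) entirely is to pad every group to the maximum group size and work on a 2-D matrix with a mask, so only differentiable indexing is involved. This is a workaround sketch, not from the original post, shown here in plain NumPy:

```python
import numpy as np

preds = np.array([0.1, 0.7, 0.3, 0.05, 0.3, 0.3, 0.3])
offsets = np.array([0, 4])
group_sizes = np.array([4, 3])
correct = np.array([1, 2])

max_size = group_sizes.max()
# Build a (num_groups, max_size) index matrix; out-of-range slots repeat
# a valid index and are masked out below.
cols = np.arange(max_size)[None, :]
idx = offsets[:, None] + np.minimum(cols, group_sizes[:, None] - 1)
mask = cols < group_sizes[:, None]

padded = preds[idx]                       # (2, 4) matrix of group scores
padded = np.where(mask, padded, -np.inf)  # padding gets zero softmax weight
e = np.exp(padded - padded.max(axis=1, keepdims=True))
probs = e / e.sum(axis=1, keepdims=True)
log_lik = np.log(probs[np.arange(len(correct)), correct]).sum()
```

The same padding-and-mask trick expressed with Theano tensor ops would give a graph whose gradient with respect to W never touches the integer inputs.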
Best Answer
This was indeed a Theano bug as of mid-February 2013 (0.6.0rc-2). As of this posting, it has been fixed in the development version on github.
About python - disconnected inputs in the gradient of a Theano Scan Op: a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/16426641/