Commit 2d85bb2a authored by Razvan Pascanu

Fixed the computation of Lop

The previous formula was wrong: it does not work when the inputs are matrices, because the result depends on which axis y is computed along. I believe this formulation is much better.
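The fix relies on the identity that the L-operator (vector-Jacobian product) v^T J equals the gradient of the scalar (v * y).sum() with respect to x, which is shape-agnostic. A minimal NumPy sketch of this identity, using the hypothetical elementwise map y = x**2 (not taken from the patch) and a finite-difference check:

```python
import numpy as np

# NumPy stand-in for the Theano test: verify that the gradient of
# (v * y).sum() w.r.t. x reproduces the vector-Jacobian product v^T J.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))   # matrix-valued input
v = rng.standard_normal((3, 4))   # the "point" v of the L-operator

# For the elementwise map y = x**2 the Jacobian is diagonal with
# entries 2*x, so v^T J reduces to the elementwise product v * 2 * x.
vjp_analytic = v * 2 * x

def weighted_sum(x):
    # Scalar objective whose gradient should equal v^T J.
    return (v * x ** 2).sum()

# Central finite differences of the scalar w.r.t. every entry of x.
eps = 1e-6
num_grad = np.zeros_like(x)
for idx in np.ndindex(x.shape):
    xp, xm = x.copy(), x.copy()
    xp[idx] += eps
    xm[idx] -= eps
    num_grad[idx] = (weighted_sum(xp) - weighted_sum(xm)) / (2 * eps)

# The match holds entry for entry, with no dependence on which axis
# y would be reduced over -- the point of the commit.
assert np.allclose(num_grad, vjp_analytic, atol=1e-4)
```

This is why the patch can replace the per-row scan with a single `TT.grad((self.v*y).sum(), self.mx)` call.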
parent daef1d0c
@@ -130,9 +130,8 @@ class test_RopLop(unittest.TestCase):
         vv = numpy.asarray(self.rng.uniform(size=out_shape), theano.config.floatX)
         yv = TT.Lop(y, self.mx, self.v)
         lop_f = function([self.mx, self.v], yv)
-        sy, _ = theano.scan( lambda i,y,x,v: (TT.grad(y[i]*v[i],x))[i],
-                             sequences = TT.arange(y.shape[0]),
-                             non_sequences = [y,self.mx,self.v])
+        sy = TT.grad((self.v*y).sum(), self.mx)
         scan_f = function([self.mx, self.v], sy)
...