Commit 200babca authored by Pascal Lamblin

Merge pull request #4443 from nouiz/grad_abs

Change the grad of abs to prevent relu from generating nan in fast_compile
......@@ -25,6 +25,20 @@ Python requires that *__len__* returns an integer, yet it cannot be done as Thea
This error message cannot be made more explicit because the relevant aspects of Python's
internals cannot be modified.
Output slight numerical difference
----------------------------------
Sometimes when you compare the output of Theano using different
Theano flags, Theano versions, CPU and GPU or with other software like
NumPy, you will see small numerical differences.
This is normal: floating point numbers are approximations of real
numbers, so computing ``a + (b + c)`` versus ``(a + b) + c`` can give
slightly different results. For more details, see `What
Every Computer Scientist Should Know About Floating-Point Arithmetic
<https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html>`_.
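The non-associativity mentioned above is easy to demonstrate in plain Python (a small illustrative sketch, independent of Theano):

```python
# Floating-point addition is not associative: (a + b) + c can
# differ from a + (b + c) by a rounding error on the order of
# the machine epsilon.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c
right = a + (b + c)

print(left == right)       # False: the two orders disagree
print(abs(left - right))   # a tiny difference, around 1e-16
```

This is exactly the kind of discrepancy to expect when comparing Theano output across flags, versions, or devices.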
Faster gcc optimization
-----------------------
......
......@@ -2150,6 +2150,8 @@ class Abs(UnaryScalarOp):
            else:
                return [x.zeros_like()]

        if x.type in float_types:
            return gz * sgn(x),
        return gz * x / abs(x),  # formula works for complex and real

    def c_code(self, node, name, inputs, outputs, sub):
......
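The effect of this change can be illustrated outside Theano with plain NumPy (a sketch of the arithmetic, not Theano's actual code path): at ``x == 0`` the old formula ``gz * x / abs(x)`` evaluates ``0/0`` to ``nan``, while the new ``gz * sgn(x)`` yields ``0``.

```python
import numpy as np

x = np.array([-2.0, 0.0, 3.0])
gz = np.ones_like(x)  # incoming gradient, all ones for illustration

# Old formula: 0/0 at x == 0 produces nan.
with np.errstate(invalid="ignore", divide="ignore"):
    old_grad = gz * x / np.abs(x)

# New formula: sign(0) == 0, so no nan is produced.
new_grad = gz * np.sign(x)

print(old_grad)  # [-1. nan  1.]
print(new_grad)  # [-1.  0.  1.]
```

Away from zero the two formulas agree for real inputs; only the behavior at zero changes, which is what was producing ``nan`` through ``relu`` in ``fast_compile``.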
......@@ -453,6 +453,16 @@ def test_grad_inrange():
    utt.assert_allclose(f(7, 1, 5), [0, 0, 0])


def test_grad_abs():
    a = theano.tensor.fscalar("a")
    b = theano.tensor.nnet.relu(a)
    c = theano.grad(b, a)
    f = theano.function([a], c, mode=theano.Mode(optimizer=None))
    # Theano currently returns 0.5, but it is not guaranteed that
    # this won't change in the future.
    ret = f(0.)
    assert ret == 0.5, ret

# Testing of Composite is done in tensor/tests/test_opt.py
# in test_fusion, TestCompositeCodegen
......
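The ``0.5`` asserted by the test follows from how ``relu`` is expressed: for the default slope it amounts to ``0.5 * (x + abs(x))``, so with ``d|x|/dx = sgn(x)`` the derivative at zero is ``0.5 * (1 + sgn(0)) = 0.5``. A hedged NumPy sketch of that arithmetic (``relu_grad`` is a hypothetical helper, not Theano API):

```python
import numpy as np

def relu_grad(x):
    # relu(x) = 0.5 * (x + abs(x)); differentiating with
    # d|x|/dx = sign(x) gives 0.5 * (1 + sign(x)).
    return 0.5 * (1.0 + np.sign(x))

print(relu_grad(0.0))   # 0.5, matching the test's expectation
print(relu_grad(-2.0))  # 0.0
print(relu_grad(3.0))   # 1.0
```

With the old ``x / abs(x)`` gradient, the same expression at zero would have propagated ``nan`` instead of ``0.5``.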