Commit b82aee77 authored by Pascal Lamblin

Fixes to gradient.

Do not create a complex tensor of zeros and then cast it to floatX; create the floatX zeros directly.
Parent 567d195a
@@ -1777,7 +1777,7 @@ class Sum(CAReduceDtype):
         out = self(*inp)
         if out.dtype.find('int') != -1:
-            return [x.zeros_like().astype(theano.config.floatX)]
+            return [theano.tensor.zeros_like(x, dtype=theano.config.floatX)]
         gz, = grads
         gz = as_tensor_variable(gz)
@@ -1891,8 +1891,11 @@ class Prod(CAReduceDtype):
         out = self(*inp)
-        if out.dtype[0:3] in ('int', 'uin'):
-            return [prod_in.zeros_like().astype(theano.config.floatX)]
+        if (out.dtype in discrete_dtypes or
+                self.acc_dtype in discrete_dtypes):
+            # There is an int conversion in the way
+            return [theano.tensor.zeros_like(prod_in,
+                                             dtype=theano.config.floatX)]
         # Prepare the broadcasting that is used everywhere to broadcast
         # over the original groups (ie. broadcast over the elements of a given
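The change replaces a two-step pattern (allocate zeros in the input's dtype, then cast the result) with a single allocation directly in the target dtype. A minimal NumPy sketch of the same idea, under the assumption that NumPy's `zeros_like` with a `dtype` argument behaves analogously to `theano.tensor.zeros_like(x, dtype=...)`:

```python
import numpy as np

# An integer-typed input, standing in for `x` / `prod_in` in the diff.
x = np.arange(6, dtype='int64').reshape(2, 3)

# Old pattern: allocate int64 zeros, then cast -> two allocations.
old = np.zeros_like(x).astype('float32')

# New pattern: allocate float32 zeros directly -> one allocation.
new = np.zeros_like(x, dtype='float32')

# Both yield the same floatX-typed zero gradient.
assert old.dtype == new.dtype == np.dtype('float32')
assert (old == new).all()
```

Beyond avoiding the intermediate tensor, allocating in the target dtype keeps the symbolic graph simpler: there is no cast node for the optimizer to fold away.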