- 29 Oct 2012, 1 commit

  Committed by Olivier Delalleau
  Fix test for when floatX=float32

- 27 Oct 2012, 2 commits

  Committed by lamblin
  Borrow=True is dangerous if one output destroys another

  Committed by Pascal Lamblin

- 26 Oct 2012, 6 commits

  Committed by Razvan Pascanu

  Committed by lamblin
  Re-add part of the dtype constraint on output gradients

  Committed by Pascal Lamblin

  Committed by nouiz
  Batched dot22
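dot22 is the name Theano uses internally for its optimized 2-D by 2-D matrix product; a batched variant applies that product independently along a leading batch dimension. A minimal NumPy sketch of those semantics (an illustration of the idea, not Theano's implementation):

```python
import numpy as np

# Sketch of batched dot22 semantics: a 2-D x 2-D matrix product applied
# independently for each item along a leading batch dimension.
rng = np.random.default_rng(0)
a = rng.standard_normal((5, 3, 4))  # batch of 5 (3x4) matrices
b = rng.standard_normal((5, 4, 2))  # batch of 5 (4x2) matrices

batched = np.einsum("bij,bjk->bik", a, b)  # batched product, shape (5, 3, 2)

# Equivalent to looping an ordinary dot over the batch:
looped = np.stack([a[i] @ b[i] for i in range(5)])
assert np.allclose(batched, looped)
```

The batched form avoids the Python-level loop, which is the point of providing it as a single op.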

  Committed by Razvan Pascanu
  This is a temporary fix until we fix the behaviour of borrow=True in the general case.
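The hazard motivating this fix can be illustrated outside Theano. A minimal NumPy sketch (plain NumPy views, not Theano's borrow machinery) of why borrow=True is unsafe when one output destroys storage aliased by another:

```python
import numpy as np

# With borrow=True, an output may be a view into storage that another
# output later destroys in place.
x = np.arange(4.0)

borrowed = x[:2]   # first "output": a view that borrows x's buffer
x[:] = 0.0         # second "output" destroys the shared buffer in place

# The borrowed view silently reflects the destructive update:
print(borrowed)    # -> [0. 0.]  (the first output's values are lost)
```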

  Committed by nouiz
  Re-add (and update) the bug fix description.

- 25 Oct 2012, 17 commits

  Committed by Pascal Lamblin

  Committed by lamblin
  Error message for unaligned data

  Committed by lamblin
  Document Python memory usage

  Committed by nouiz
  C version of the view op

  Committed by Pascal Lamblin

  Committed by Pascal Lamblin

  Committed by Razvan Pascanu
  Fixes after code review

  Committed by Pascal Lamblin

  Committed by Pascal Lamblin

  Committed by Pascal Lamblin

  Committed by Pascal Lamblin

  Committed by Pascal Lamblin

  Committed by Frederic

  Committed by Frederic
  INTERFACE CHANGE: Disable the use of unaligned tensors in all cases. Previously, we supported this for Python code.

  Committed by Frederic

  Committed by Caglar

  Committed by Caglar

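The interface change above rejects unaligned tensors even from Python code. At the NumPy level, alignment is exposed through the `flags.aligned` attribute; a small sketch (plain NumPy, not Theano's actual check) of how an unaligned array can arise from an offset buffer:

```python
import numpy as np

# 33 zero bytes: enough for four float64 values starting at offset 0 or 1.
raw = bytes(8 * 4 + 1)

aligned = np.frombuffer(raw, dtype=np.float64, count=4, offset=0)
unaligned = np.frombuffer(raw, dtype=np.float64, count=4, offset=1)

# The second array's data pointer is off by one byte, so NumPy reports
# it as unaligned (on typical platforms where float64 needs 8-byte
# alignment).
print(aligned.flags.aligned)    # True
print(unaligned.flags.aligned)  # False
```

Rejecting such arrays up front avoids C code that assumes aligned pointers reading garbage or crashing.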
- 24 Oct 2012, 7 commits

  Committed by Caglar

  Committed by Frederic

  Committed by nouiz
  Added Python 3 support to setup.py

  Committed by Frederic Bastien

  Committed by Frederic Bastien

  Committed by Frederic Bastien

  Committed by Frederic Bastien

- 23 Oct 2012, 5 commits

- 18 Oct 2012, 2 commits

  Committed by lamblin
  Fix a bug in scan

  Committed by Pascal Lamblin
  In order to avoid expanding memory usage and computation in the part of the graph that computes gradients, I propose the following conventions, which re-instate some of the constraints that previously existed on the dtype of gradients:

  - When calling some_op.grad(inputs, output_grads), each variable in the output_grads list, if it is an actual numeric variable (and not, for instance, a DisconnectedType or NullType), should have the same dtype as the corresponding output variable.
  - Moreover, if one of the output variables is of a discrete dtype (int or uint), then the corresponding output gradient (if it is not a special case like NullType) should be zero.

  This is implemented in theano.grad, so an Op's grad method does not have to be changed; it can once again rely on the fact that, if an output gradient has a dtype, it will be the same as that of the corresponding output variable.
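The two conventions above can be sketched without Theano itself. The helper below, `coerce_output_grad`, is a hypothetical NumPy illustration of the rules, not part of theano.grad's API:

```python
import numpy as np

def coerce_output_grad(output, output_grad):
    """Hypothetical helper mirroring the convention above: a numeric
    output gradient gets the output's dtype, and an output with a
    discrete (int/uint) dtype receives a zero gradient."""
    if output_grad is None:  # stands in for DisconnectedType/NullType
        return None
    if output.dtype.kind in "iu":  # discrete output: gradient is zero
        return np.zeros(output.shape, dtype=output.dtype)
    # Numeric output: enforce the same-dtype rule.
    return output_grad.astype(output.dtype)

# A float32 output keeps a float32 gradient, even if the incoming
# gradient was computed in float64.
g = coerce_output_grad(np.ones(3, dtype="float32"),
                       np.ones(3, dtype="float64"))
assert g.dtype == np.dtype("float32")

# An int64 output receives a zero gradient.
g_int = coerce_output_grad(np.arange(3, dtype="int64"), np.ones(3))
assert (g_int == 0).all()
```

Enforcing this centrally in theano.grad keeps individual Op.grad implementations simple: they never see an output gradient whose dtype disagrees with the output.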