- 25 January 2012, 12 commits

- Committed by nouiz:
  Fixed some tests broken due to exception verbosity.
- Committed by Olivier Delalleau:
  This commit also includes a few PEP8 fixes.
- Committed by David Warde-Farley
- Committed by David Warde-Farley
- Committed by David Warde-Farley:
  GPU properties.
- Committed by Nouiz
- Committed by Nouiz
- Committed by Razvan Pascanu:
  Improved scan error message. I think we can't really use iter(x) as suggested by David, because we want to iterate over the items.
- Committed by Olivier Delalleau:
  Two improvements:
  - An explicit error is raised if the lambda expression used in scan returns something that is not made of Theano variables (which may happen, for instance, when someone returns a constant value).
  - An error is always raised when failing to parse the return value of the lambda expression.
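The first check above can be sketched in plain Python. This is a hypothetical illustration, not Theano's actual scan code: the names `Variable` and `validate_scan_outputs` are invented stand-ins for the real symbolic type and parsing logic.

```python
# Hypothetical sketch of the validation described above; the names
# here are illustrative stand-ins, not Theano's real API.

class Variable:
    """Stand-in for a symbolic Theano variable."""

def validate_scan_outputs(fn, *args):
    """Call fn and check that every returned item is a Variable.

    Raises an explicit TypeError when the lambda returns something
    that is not made of symbolic variables (e.g. a plain constant),
    and a ValueError when the return value cannot be parsed at all.
    """
    result = fn(*args)
    if result is None:
        raise ValueError("Could not parse the return value of the "
                         "lambda expression passed to scan.")
    # Accept either a single variable or a list/tuple of variables.
    outputs = result if isinstance(result, (list, tuple)) else [result]
    for i, out in enumerate(outputs):
        if not isinstance(out, Variable):
            raise TypeError(
                "Output %d of the scan lambda is not a symbolic "
                "variable (got %r); returning constants is not "
                "supported." % (i, out))
    return list(outputs)
```

With this sketch, `validate_scan_outputs(lambda x: x, Variable())` succeeds, while `validate_scan_outputs(lambda x: 0, Variable())` raises the explicit TypeError instead of failing later with an obscure message.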
- Committed by Nouiz
- Committed by Nouiz
- Committed by Nouiz
- 24 January 2012, 10 commits

- Committed by David Warde-Farley:
  Fixed gh-356: dtype of tensor.sum().
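For context on the dtype issue, small-integer sums are normally accumulated in a wider type so they do not overflow; a minimal illustration of this behaviour using NumPy rather than Theano, so the sketch stays self-contained:

```python
import numpy as np

# Summing many int8 values would overflow the int8 range (max 127),
# so NumPy accumulates the sum in (at least) the default platform
# integer rather than in int8 itself.
x = np.ones(1000, dtype=np.int8)
total = x.sum()

print(int(total))   # 1000, not a wrapped-around 8-bit value
print(total.dtype)  # a wider integer dtype than int8
```

Theano's tensor.sum() mirrors this kind of upcasting behaviour, which is what the gh-356 fix concerned.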
- Committed by David Warde-Farley:
  Fixed some references to recently moved code.
- Committed by Olivier Delalleau
- Committed by Olivier Delalleau:
  This is to accommodate changes from f9ca8f9d.
- Committed by Olivier Delalleau
- Committed by Olivier Delalleau:
  Gradient code is moved from tensor/tensor_grad.py to theano/gradient.py. This makes it work with sparse variables. This commit was originally written by Arnaud Bergeron; I re-authored it to avoid a big merge in the repo history.
- Committed by Arnaud Bergeron:
  Code for sparse was stolen from Yann Dauphin.
- Committed by nouiz:
  Make the mlp_test test a more sensible case.
- Committed by Pascal Lamblin:
  The previous test used settings where we removed either the "local_track_shape_i" or the "local_shape_to_shape_i" optimization. These are unlikely settings; what we test now is the complete removal of the "ShapeOpt" optimization.
- Committed by Ian Goodfellow
- 23 January 2012, 15 commits

- Committed by nouiz:
  Fixed grad of grad of scan.
- Committed by nouiz:
  Fixed tests with the script run_tests_in_batch.py.
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu:
  Not sure it makes the file any more readable, but at least I've tried.
- Committed by Razvan Pascanu:
  Otherwise the numeric-differentiation gradient quickly becomes numerically unstable.
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu:
  The index goes from 0 to n_mit_mot-1 for mit_mot sequences.
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu:
  The main bug was that gradients were represented as shared variables. Now we represent them as sit_sot sequences of which only the last step is used (hence the savemem optimization does the memory clean-up). The advantage is that gradients with respect to sit_sot are well defined.
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu:
  When collecting shapes from the outputs of the inner function, we should keep track that mit_mot arguments have multiple outputs per argument.
- Committed by Razvan Pascanu:
  The error was raised if either the dtype or the ndim didn't match, but the error message did not display the ndims.
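A minimal sketch of that error-message improvement. The helper name `check_type` is hypothetical and illustrative only, not Theano's actual code:

```python
def check_type(expected_dtype, expected_ndim, dtype, ndim):
    """Raise a TypeError whose message reports both the dtype and the
    ndim, so that a pure ndim mismatch is still diagnosable.

    Hypothetical helper sketching the fix described above; not the
    real Theano check.
    """
    if dtype != expected_dtype or ndim != expected_ndim:
        raise TypeError(
            "Type mismatch: expected dtype=%s with ndim=%d, but got "
            "dtype=%s with ndim=%d"
            % (expected_dtype, expected_ndim, dtype, ndim))
```

For example, `check_type('float64', 2, 'float64', 3)` now mentions the mismatching ndim in its message rather than only showing the (matching) dtypes.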
- 22 January 2012, 1 commit

- Committed by Olivier Delalleau:
  Added missing __init__.py so that tests can be imported by nose. All tests now pass under Windows with Python 2.4.
- 21 January 2012, 2 commits

- Committed by nouiz:
  Fixed max/argmax gradient tests in float32.
- Committed by Olivier Delalleau