- 25 Jan, 2012 11 commits
-
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
It seems that even the last fix of the generated random numbers didn't do the trick, the reason being the same numeric instability. I've changed the operation to a sum, which, when applied recursively, is less problematic than multiplying numbers.
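A minimal sketch of the numeric argument (illustration only, not Theano code, with hypothetical values): repeatedly multiplying small float32 numbers underflows to zero, while repeatedly summing the same numbers stays well scaled.

```python
import numpy as np

# Two hundred small values; the product shrinks by a factor of 1e-3 at
# every step and quickly leaves the representable float32 range, while
# the sum only grows linearly.
vals = np.full(200, 1e-3, dtype=np.float32)

prod = np.float32(1.0)
total = np.float32(0.0)
for v in vals:
    prod *= v    # underflows to 0.0 after a few steps
    total += v   # stays well scaled

print(prod)   # 0.0 once the product underflows
print(total)  # ~0.2
```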
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
This test fails because of small numbers (the gradients are especially susceptible to them). Increasing the range we sample from does not guarantee that no seed exists for which the sampled numbers (or a few entries) are small enough to fall into that unstable region.
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
These tests are from the scan sandbox, which I want to change anyway.
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
Improved the scan error message. I think we can't really use iter(x) as suggested by David, because we want to iterate over the items.
-
由 Olivier Delalleau (Committed by Olivier Delalleau)
Two improvements:
- An explicit error is raised if the lambda expression used in scan returns something that is not made of Theano variables (which may happen, for instance, when someone returns a constant value).
- An error is always raised when failing to parse the return value of the lambda expression.
-
- 24 Jan, 2012 9 commits
-
-
由 David Warde-Farley (Committed by David Warde-Farley)
Fixed gh-356: dtype of tensor.sum()
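As a hedged illustration of why the dtype chosen for a sum matters (this is NumPy, not Theano, code, and only analogous to the gh-356 issue): summing a narrow integer dtype with an accumulator of the same width overflows, so the result dtype cannot simply mirror the input dtype.

```python
import numpy as np

x = np.array([100, 100, 100], dtype=np.int8)

# NumPy upcasts the accumulator for small integer inputs, so the sum
# is exact even though each element is int8.
print(x.sum())               # 300

# Forcing an int8 accumulator makes the running total wrap around:
# 100 + 100 wraps to -56, then -56 + 100 gives 44.
print(x.sum(dtype=np.int8))  # 44
```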
-
由 David Warde-Farley (Committed by David Warde-Farley)
Fixed some references to recently moved code.
-
由 Olivier Delalleau (Committed by Olivier Delalleau)
-
由 Olivier Delalleau (Committed by Olivier Delalleau)
This is to accommodate changes from f9ca8f9d.
-
由 Olivier Delalleau (Committed by Olivier Delalleau)
-
由 Olivier Delalleau (Committed by Olivier Delalleau)
Gradient code moved from tensor/tensor_grad.py to theano/gradient.py; this makes it work with sparse variables. This commit was originally written by Arnaud Bergeron; I re-authored it to avoid a big merge in the repo history.
-
由 Arnaud Bergeron (Committed by Arnaud Bergeron)
Code for sparse was stolen from Yann Dauphin.
-
由 nouiz (Committed by nouiz)
Make the mlp_test test a more sensible case.
-
由 Pascal Lamblin (Committed by Pascal Lamblin)
The previous test exercised settings where either the "local_track_shape_i" or the "local_shape_to_shape_i" optimization was removed. These are unlikely settings; what we test now is the complete removal of the "ShapeOpt" optimization.
-
- 23 Jan, 2012 15 commits
-
-
由 nouiz (Committed by nouiz)
Fixed grad of grad of scan
-
由 nouiz (Committed by nouiz)
Fixed tests with the script run_tests_in_batch.py
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
Not sure it makes the file any more readable, but at least I've tried.
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
Otherwise the numeric-differentiation gradient quickly becomes numerically unstable.
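A sketch of why numeric differentiation is fragile (generic illustration code, not Theano's gradient checker): central finite differences subtract two nearly equal function values, so in low precision the cancellation noise can swamp the true gradient.

```python
import numpy as np

def numeric_grad(f, x, eps):
    # Central-difference approximation of the derivative of f at x.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

xs64 = np.linspace(0.5, 1.5, 101)
xs32 = xs64.astype(np.float32)

# Same formula, same step size, two precisions.
g64 = numeric_grad(np.exp, xs64, 1e-6)
g32 = numeric_grad(np.exp, xs32, np.float32(1e-6))

true64 = np.exp(xs64)
true32 = np.exp(xs32.astype(np.float64))

worst64 = np.max(np.abs(g64 - true64) / true64)
worst32 = np.max(np.abs(g32.astype(np.float64) - true32) / true32)

# float64 stays accurate; in float32 the subtraction cancels almost
# all significant digits and the estimate degrades by orders of magnitude.
print(worst64, worst32)
```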
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
The index goes from 0 to n_mit_mot - 1 for mit_mot sequences.
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
The main bug was that gradients were represented as shared variables. Now we represent them as sit_sot sequences of which only the last step is used (hence the savemem optimization does the memory cleanup). The advantage is that gradients with respect to sit_sot are well defined.
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
When collecting shapes from the outputs of the inner function, we should keep in mind that mit_mot arguments have multiple outputs per argument.
-
由 Razvan Pascanu (Committed by Razvan Pascanu)
The error was raised if either the dtype or the ndim didn't match, but the error message did not display the ndims.
-
- 22 Jan, 2012 1 commit
-
-
由 Olivier Delalleau (Committed by Olivier Delalleau)
Added a missing __init__.py so that tests can be imported by nose. All tests now pass under Windows with Python 2.4.
-
- 21 Jan, 2012 4 commits
-
-
由 nouiz (Committed by nouiz)
Fixed max/argmax gradient tests in float32
-
由 Olivier Delalleau (Committed by Olivier Delalleau)
-
由 lamblin (Committed by lamblin)
Fix how we compute the number of elements in the compile dir.
-
由 nouiz (Committed by nouiz)
Work around a bug in numpy 1.6 with ufunc.reduce
-