- 25 January 2012, 16 commits
- Committed by Pascal Lamblin
- Committed by Pascal Lamblin
- Committed by Pascal Lamblin
  Also use textwrap.dedent to avoid adding indentation to the final C code.
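For context, `textwrap.dedent` strips the whitespace prefix common to all lines, so a C template can be written indented inside the Python source without that indentation leaking into the emitted C. A minimal sketch of the idea (the `c_support_code` name is illustrative, not taken from this commit):

```python
import textwrap

def c_support_code():
    # The triple-quoted template is indented to match the surrounding
    # Python code; dedent removes the common leading whitespace so the
    # generated C source starts at column 0.
    return textwrap.dedent("""
        int add(int a, int b) {
            return a + b;
        }
        """)

print(c_support_code())
```

Without the dedent, every line of the template would carry the Python-level indentation into the generated file.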
- Committed by Pascal Lamblin
- Committed by Pascal Lamblin
- Committed by Pascal Lamblin
- Committed by David Warde-Farley
- Committed by David Warde-Farley
- Committed by David Warde-Farley
  GPU properties
- Committed by Nouiz
- Committed by Nouiz
- Committed by Razvan Pascanu
  Improved the scan error message. I think we can't really use iter(x) as suggested by David, because we want to iterate over the items.
- Committed by Olivier Delalleau
  Two improvements: an explicit error is raised if the lambda expression used in scan returns something that is not made of Theano variables (which may happen, for instance, when someone returns a constant value); and an error is always raised when the return value of the lambda expression cannot be parsed.
- Committed by Nouiz
- Committed by Nouiz
- Committed by Nouiz
- 24 January 2012, 10 commits
- Committed by David Warde-Farley
  Fixed gh-356: dtype of tensor.sum()
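For context, the NumPy behaviour that dtype handling in sum must match: summing a narrow integer dtype accumulates in a wider integer, so the result does not wrap around at the narrow type's limit. A hedged NumPy illustration (this is not the Theano fix itself, just the semantics in question), assuming NumPy is available:

```python
import numpy as np

# int8 tops out at 127, yet the sum is accumulated in a wider
# integer dtype, so 100 + 100 + 100 gives 300 rather than wrapping.
a = np.array([100, 100, 100], dtype=np.int8)
s = a.sum()
print(s, s.dtype)  # 300, with a dtype wider than int8
```

The exact accumulator dtype is platform-dependent (the default integer type), which is why only the value, not the dtype name, is checked here.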
- Committed by David Warde-Farley
  Fixed some references to recently moved code
- Committed by Olivier Delalleau
- Committed by Olivier Delalleau
  This is to accommodate changes from f9ca8f9d.
- Committed by Olivier Delalleau
- Committed by Olivier Delalleau
  Gradient code is moved from tensor/tensor_grad.py to theano/gradient.py, which makes it work with sparse variables. This commit was originally written by Arnaud Bergeron; I re-authored it to avoid a big merge in the repo history.
- Committed by Arnaud Bergeron
  Code for sparse was stolen from Yann Dauphin.
- Committed by nouiz
  Make mlp_test test a more sensible case.
- Committed by Pascal Lamblin
  The previous test exercised settings where we removed either the "local_track_shape_i" or the "local_shape_to_shape_i" optimization. These are unlikely settings; what we test now is the complete removal of the "ShapeOpt" optimization.
- Committed by Ian Goodfellow
- 23 January 2012, 14 commits
- Committed by nouiz
  Fixed grad of grad of scan
- Committed by nouiz
  Fixed tests with the script run_tests_in_batch.py
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
  Not sure it makes the file any more readable, but at least I've tried.
- Committed by Razvan Pascanu
  Otherwise the numeric-differentiation gradient quickly becomes numerically unstable.
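The instability mentioned here is the usual finite-difference trade-off: a step that is too large gives truncation error, while one that is too small loses precision to cancellation in the subtraction. A generic central-difference sketch of the idea (this is illustrative, not the actual gradient-verification code from this commit):

```python
def central_diff(f, x, h=1e-5):
    # Central finite difference: truncation error is O(h^2), but for
    # very small h the subtraction f(x+h) - f(x-h) suffers catastrophic
    # cancellation in floating point, which is the instability noted above.
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = lambda x: x ** 3
approx = central_diff(f, 2.0)  # exact derivative is 3 * 2**2 = 12
print(approx)
```

With h = 1e-5 the estimate agrees with 12 to about 1e-10; pushing h down to 1e-12 would make the cancellation error dominate instead.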
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
  The index goes from 0 to n_mit_mot - 1 for mit_mot sequences.
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
  The main bug was that gradients were represented as shared variables. Now we represent them as sit_sot sequences of which only the last step is used (so the savemem optimization does the memory cleanup). The advantage is that gradients with respect to sit_sot are well defined.
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
- Committed by Razvan Pascanu
  When collecting shapes from the outputs of the inner function, we should keep in mind that mit_mot arguments have multiple outputs for one argument.