* The few remaining uses of tag.shape and Join.vec_length have been removed. (Frederic)
* The .value attribute of shared variables has been removed; use shared.set_value()
  or shared.get_value() instead. (Frederic)
* Theano config option "home" is not used anymore as it was redundant with "base_compiledir".
If you use it, Theano will now raise an error. (Olivier D.)
* scan interface changes: (Razvan Pascanu)
- The use of `return_steps` for specifying how many entries of the output
to return has been removed. Instead, apply a subtensor to the output
returned by scan to select a certain slice.
- The inner function (that scan receives) should return its outputs and
updates following this order:
[outputs], [updates], [condition].
One can skip any of the three if not used, but the order has to stay unchanged.
Interface bug fixes:
* Rop in some cases should have returned a list of one Theano variable, but returned the variable itself. (Razvan)
New deprecation (will be removed in Theano 0.6, warning generated if you use them):
* The string modes FAST_RUN_NOGC and STABILIZE are no longer accepted (they were accepted only by theano.function()). Use Mode(linker='c|py_nogc') or Mode(optimizer='stabilize') instead.
* tensor.shared() renamed to tensor._shared(). You probably want to call theano.shared() instead! (Olivier D.)
* tensor.grad(cost, wrt) now returns an object of the "same type" as wrt
  (list/tuple/TensorVariable).
New features:
* Added 1D advanced indexing support to inc_subtensor and set_subtensor. (James Bergstra)
* tensor.{zeros,ones}_like now support the dtype parameter, as in numpy. (Frederic)
* Added configuration flag "exception_verbosity" to control the verbosity of exceptions (Ian)
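For instance, the flag can be set through the THEANO_FLAGS environment variable or in the .theanorc file (the script name below is illustrative):

```shell
# One-off, on the command line:
THEANO_FLAGS=exception_verbosity=high python my_script.py

# Or permanently, in ~/.theanorc:
#   [global]
#   exception_verbosity = high
```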
* theano-cache list: list the content of the theano cache (Frederic)
* theano-cache unlock: remove the Theano lock (Olivier)
* tensor.ceil_int_div to compute ceil(a / float(b)) (Frederic)
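The new op computes the integer ceiling of a division; the same quantity can be computed in plain Python, shown here only to pin down the semantics:

```python
import math

def ceil_int_div(a, b):
    """Ceiling of a / b for positive integers, using integer ops only."""
    return -(-a // b)

assert ceil_int_div(7, 2) == math.ceil(7 / float(2)) == 4
assert ceil_int_div(6, 3) == 2
```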
* MaxAndArgMax.grad now works with any axis (The op supports only 1 axis) (Frederic)
* used by tensor.{max,min,max_and_argmax}
* tensor.{all,any} (Razvan)
* tensor.roll, as in numpy. (Matthew Rocklin, David Warde-Farley)
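Since the new tensor.roll follows numpy semantics, numpy.roll shows the expected behaviour (numpy used here purely for illustration):

```python
import numpy as np

x = np.arange(5)     # [0 1 2 3 4]
y = np.roll(x, 2)    # rotate right by two positions: [3 4 0 1 2]
```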
* Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
* IfElse now allows a list/tuple as the result of the if/else branches.
  (They must have the same length and corresponding types.) (Razvan)
* Argmax output dtype is now int64 instead of int32. (Olivier)
* Added the element-wise operation arccos. (Ian)
* Added sparse dot with dense grad output. (Yann Dauphin)
New Optimizations:
* Some sparse dot cases are now optimized to Usmm and UsmmCscDense. (Yann)
* AdvancedSubtensor1 reuses preallocated memory if available (scan, c|py_nogc linker). (Frederic)
* Note: theano.dot and theano.sparse.structured_dot() always had a gradient with the same sparsity pattern as the inputs.
  The new theano.sparse.dot() has a dense gradient for all inputs.
* tensor_variable.size (as numpy): the product of the shape elements. (Olivier)
* sparse_variable.size (as scipy): the number of stored values. (Olivier)
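The .size attribute mirrors numpy, where it equals the product of the shape elements (numpy shown for illustration):

```python
import numpy as np

x = np.zeros((2, 3, 4))
assert x.size == 2 * 3 * 4  # product of the shape elements
```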
* GpuAdvancedSubtensor1 supports broadcasted dimensions.
* Allow fusion of elemwise when the scalar op needs support code. (James)
* Better opt that lifts transpose around dot. (James)
Bug fixes (the result changed):
* On CPU, if the convolution had received explicit shape information, it was not checked at runtime.
  This caused wrong results if the input shape was not the one expected. (Frederic, reported by Sander Dieleman)
* Scan grad when the input of scan has sequences of different lengths. (Razvan, reported by Michael Forbes)
* Scan.infer_shape now works correctly when working with a condition for the number of loops.
  In the past, it returned n_steps as the length, which is not always true. (Razvan)
* Division by zero in verify_grad. This hid a bug in the grad of Images2Neibs. (James)
* theano.sandbox.neighbors.Images2Neibs grad was returning a wrong value. The grad is now disabled and raises an error. (Frederic)
* Theoretical bug: in some cases GPUSum could have returned a bad value.
  We were not able to reproduce this problem.
  * Patterns affected ({0,1}*nb dim, 0 = no reduction on this dim, 1 = reduction on this dim):
    01, 011, 0111, 010, 10, 001, 0011, 0101 (Frederic)