* A few remaining uses of tag.shape and Join.vec_length have been removed. (Frederic)
* The .value attribute of shared variables has been removed; use shared.set_value()
  or shared.get_value() instead (see the first sketch after this list). (Frederic)
* The Theano config option "home" is not used anymore, as it was redundant with
  "base_compiledir" (see the config sketch after this list). If you use it, Theano will now raise an error. (Olivier D.)
* scan interface changes: (Razvan Pascanu)
    - The use of `return_steps` for specifying how many entries of the output
      to return has been removed. Instead, apply a subtensor to the output
      returned by scan to select a certain slice (see the scan sketch after this list).
    - The inner function (that scan receives) should return its outputs and
      updates following this order:
      [outputs], [updates], [condition].
      One can skip any of the three if not used, but the order must stay unchanged.
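A minimal sketch of the get_value()/set_value() replacement for the removed .value attribute (variable names are illustrative):

    import numpy
    import theano

    s = theano.shared(numpy.zeros((2, 2)))
    v = s.get_value()       # instead of: v = s.value
    s.set_value(v + 1)      # instead of: s.value = v + 1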
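A sketch of setting the compilation cache location through "base_compiledir" now that "home" is gone (the path is a placeholder):

    # in ~/.theanorc
    [global]
    base_compiledir = /tmp/theano_cache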
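A minimal scan sketch replacing `return_steps` with a subtensor (the doubling loop is illustrative):

    import theano
    import theano.tensor as T

    x = T.vector('x')
    results, updates = theano.scan(fn=lambda prev: prev * 2,
                                   outputs_info=x, n_steps=5)
    last_two = results[-2:]  # select a slice instead of using return_steps
    f = theano.function([x], last_two, updates=updates)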
Interface bug fixes:
* In some cases, Rop should have returned a list of one Theano variable, but returned the variable itself. (Razvan)
Interface features removed (were deprecated):
* The string modes FAST_RUN_NOGC and STABILIZE are no longer accepted. They were accepted only by theano.function(). Use Mode(linker='c|py_nogc') or Mode(optimizer='stabilize') instead (see the Mode sketch after this list).
* tensor.grad(cost, wrt) now returns an object of the "same type" as wrt
  (list/tuple/TensorVariable).
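A minimal Mode sketch replacing the removed string modes (the graph is illustrative):

    import theano
    import theano.tensor as T
    from theano.compile.mode import Mode

    x = T.dscalar('x')
    y = x ** 2
    f = theano.function([x], y, mode=Mode(linker='c|py_nogc'))     # was FAST_RUN_NOGC
    g = theano.function([x], y, mode=Mode(optimizer='stabilize'))  # was STABILIZE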
New deprecations (will be removed in Theano 0.6; a warning is generated if you use them):
* tensor.shared() renamed to tensor._shared(). You probably want to call theano.shared() instead! (Olivier D.)
New features:
* Added 1d advanced indexing support to inc_subtensor and set_subtensor; see the subtensor sketch after this list. (James Bergstra)
* tensor.{zeros,ones}_like now support the dtype param, as numpy does (see the sketch after this list). (Fred)
* Added the config flag "exception_verbosity" to control the verbosity of exceptions (see the example after this list). (Ian G.)
* theano-cache list: lists the content of the Theano cache. (Fred)
* tensor.ceil_int_div (FB)
* MaxAndArgMax.grad now works with any axis (the op itself supports only 1 axis). (FB)
    * Used by tensor.{max,min,max_and_argmax}; see the grad sketch after this list.
* tensor.{all,any} (RP)
* tensor.roll, as numpy (see the sketch after this list). (Matthew Rocklin, DWF)
* Theano on Windows now works. Still experimental. (Sebastian Urban)
* IfElse now allows a list/tuple as the result of the if/else branches. (RP)
    * The two branches must have the same length and corresponding types; see the IfElse sketch after this list.
* argmax output dtype is now int64. (OD)
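A minimal sketch of the 1d advanced indexing support in set_subtensor/inc_subtensor (variable names are illustrative):

    import theano.tensor as T

    x = T.vector('x')
    idx = [0, 2]                    # 1d advanced (integer-list) index
    y = T.set_subtensor(x[idx], 0)  # x with positions 0 and 2 set to 0
    z = T.inc_subtensor(x[idx], 1)  # x with positions 0 and 2 incremented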
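A sketch of the new dtype param of tensor.{zeros,ones}_like:

    import theano.tensor as T

    x = T.matrix('x')                    # floatX matrix
    z = T.zeros_like(x, dtype='int64')   # zeros with a different dtype
    o = T.ones_like(x, dtype='float32')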
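For example, to get a detailed traceback for one run (my_script.py is a placeholder):

    THEANO_FLAGS='exception_verbosity=high' python my_script.py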
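A minimal grad sketch for max over one axis (which uses MaxAndArgMax internally):

    import theano.tensor as T

    x = T.matrix('x')
    m = T.max(x, axis=1)     # reduction over a single axis
    g = T.grad(m.sum(), x)   # the grad now works for any single axis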
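A sketch of tensor.roll, mirroring numpy.roll:

    import theano.tensor as T

    x = T.vector('x')
    r = T.roll(x, 2)   # circularly shift the elements by two positions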
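A minimal IfElse sketch with tuple branches (assuming the theano.ifelse module path):

    import theano.tensor as T
    from theano.ifelse import ifelse

    a = T.scalar('a')
    b = T.scalar('b')
    # Both branches are tuples of the same length with corresponding types.
    lo, hi = ifelse(T.lt(a, b), (a, b), (b, a))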
New Optimizations:
* AdvancedSubtensor1 reuses preallocated memory if available (scan, c|py_nogc linker). (Fred)
* tensor_variable.size (as in numpy): the product of the shape elements. (OD)
* sparse_variable.size (as in scipy): the number of stored values (see the size sketch after this list). (OD)
* dot22 and dot22scalar now work with complex dtypes. (Fred)
* Documented how to wrap an existing Python function (from numpy, scipy, ...) in Theano. (Fred)
* Added arccos. (IG)
* Sparse dot with full output. (Yann Dauphin)
    * Optimized to Usmm and UsmmCscDense in some cases. (YD)
    * Note: theano.dot and sparse.dot return a structured_dot grad.
* Generate Gemv/Gemm more often. (JB)
* Scan moves computation outside the inner loop when that removes it entirely from the inner loop. (RP)
* Scan optimizations are now applied earlier; this allows other optimizations to be applied afterwards. (FB, RP, GD)
* exp(x) * sigmoid(-x) is now correctly optimized to the more stable form sigmoid(x).
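A minimal sketch of the new size attributes (the sparse part assumes SciPy is installed):

    import theano.sparse
    import theano.tensor as T

    x = T.matrix('x')
    n = x.size                      # product of the shape elements, as in numpy

    s = theano.sparse.csr_matrix('s')
    m = s.size                      # number of stored values, as in scipy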
GPU:
* GpuAdvancedSubtensor1 supports broadcasted dimensions.
Bugs fixed:
* On the CPU, if a convolution received explicit shape information, it was not checked at runtime. This caused wrong results if the input shape was not the expected one. (Fred, reported by Sander Dieleman)
* Fixed Scan grad when scan's inputs include sequences of different lengths. (RP, reported by Michael Forbes)
* Scan.infer_shape now works correctly with a condition on the number of loops. It used to return n_steps as the shape, which is not always true. (RP)
* Theoretical bug: in some cases GPUSum could have returned a bad value. We were not able to reproduce the error.
    * Affected patterns ({0,1}^nb_dim, where 0 means no reduction on that dim and 1 means reduction on it):
      01, 011, 0111, 010, 10, 001, 0011, 0101. (FB)
* Fixed a division by zero in verify_grad. This hid a bug in the grad of Images2Neibs. (JB)
* theano.sandbox.neighbors.Images2Neibs grad was returning a wrong value. The grad is now disabled and raises an error. (FB)
Crashes fixed:
* T.mean crash at graph-building time. (Ian G.)
* "Interactive debugger" crash fix (Ian, Fred)
* "Interactive Debugger" renamed to "Using Test Values"
* Do not call gemm with strides of 0; some BLAS implementations refuse it. (PL)
* Optimization crash with gemm and complex. (Fred)
* GPU crash with elemwise. (Fred)
* Compilation crash with amdlibm and the GPU. (Fred)
* IfElse crash. (Fred)
* Execution crash fix in AdvancedSubtensor1 on 32-bit computers. (PL)
* GPU compilation crash on Mac OS X. (OD)
* GPU compilation crash on Mac OS X. (Fred)
* Support for OSX Enthought Python Distribution 7.x (Graham Taylor, OD)
* Crash when the subtensor inputs had 0 dimensions and the outputs had 0 dimensions.
* Crash when the step of a subtensor was not 1, in conjunction with some optimizations.