* Fixed the scan gradient dtype issue. In 0.6rc1, some upcasts were inserted. (Razvan P.)
* Now grad() will do as before 0.6rc1 for float, i.e. the grad dtype will be the same as the inputs inside the graph. If you ask for the direct grad, it will return the computed dtype. (Pascal L.)
Wrong results fixes:
* Fixed a case where Scan did not return the right results. (Razvan P., reported by Jeremiah L.)
This happened if you had a state with only negative taps and the output of the state was a function of some sequence.
If you had multiple states, there was no problem.
* Fixed bug in Scan with multiple outputs,
where one output would sometimes overwrite another one. (Razvan P.)
* Clip.grad treated the gradient with respect to the clipping boundary as always 0. (Ian G.)
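After this fix, the gradient with respect to a clipping boundary is nonzero wherever that boundary is active. A plain-NumPy finite-difference sketch (illustrative only, not Theano code) of the expected behavior for the upper bound:

```python
import numpy as np

# Illustrative sketch (plain NumPy, not Theano): the subgradient of
# clip(x, lo, hi) with respect to the upper bound hi is 1 wherever the
# bound is active (x > hi), and 0 elsewhere -- not identically 0.
x = np.array([-2.0, 0.5, 3.0])
lo, hi = -1.0, 1.0

def sum_clip(hi_val):
    return np.clip(x, lo, hi_val).sum()

eps = 1e-6
numeric = (sum_clip(hi + eps) - sum_clip(hi - eps)) / (2 * eps)
analytic = float((x > hi).sum())  # one element (3.0) hits the upper bound
print(numeric, analytic)  # both close to 1.0
```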
Interface changes:
* We no longer support unaligned ndarrays in Python code. (Frederic B.)
We did not support them in C code, and supporting them in Python code made
the detection harder.
* Now we only officially support scipy 0.7.2 and numpy 1.5.0. (Frederic B.)
We weren't and aren't testing with older versions.
* The theano.sparse.SparseType is available even when scipy is not installed. (Frederic B.)
* Fixed issue where members of the consider_constant grad parameter
were treated differently from Constant variables. (Ian G.)
* Removed the parameter g_cost from theano.grad(). (Ian G.)
Use the new, more powerful parameter known_grads instead.
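What known_grads expresses, sketched in plain NumPy (illustrative only): instead of starting backpropagation from a scalar cost, you supply a precomputed gradient on an intermediate variable and the chain rule carries it back to the inputs.

```python
import numpy as np

# Plain-NumPy sketch of the known_grads idea: a precomputed gradient
# g_y on an intermediate variable y is propagated back by hand.
# Here y = x ** 2, so dC/dx = g_y * dy/dx = g_y * 2 * x.
x = np.array([1.0, 2.0, 3.0])
g_y = np.array([0.1, 0.1, 0.1])  # assumed known gradient of the cost w.r.t. y
g_x = g_y * 2 * x                # chain rule through y = x ** 2
print(g_x)  # [0.2 0.4 0.6]
```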
NumPy interface support:
* theano.tensor.where is an alias for theano.tensor.switch to support NumPy semantics. (Ian G.)
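The shared semantics are those of numpy.where: an elementwise three-way select. Shown here with plain NumPy (illustration only):

```python
import numpy as np

# theano.tensor.switch(cond, a, b) follows the same elementwise rule as
# numpy.where: pick from a where cond is true, from b elsewhere.
cond = np.array([True, False, True])
a = np.array([1, 2, 3])
b = np.array([10, 20, 30])
out = np.where(cond, a, b)
print(out)  # [ 1 20  3]
```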
* TensorVariable objects now have dot, argmin, argmax, clip, conj, repeat, trace, std, round,
ravel and argsort methods and the real and imag properties, like numpy.ndarray objects.
The functionality was already available in Theano. (abalkin)
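These methods mirror the numpy.ndarray interface, so method-style code such as the following (plain NumPy here, for illustration) can now be written the same way on symbolic variables:

```python
import numpy as np

# The new TensorVariable methods follow the numpy.ndarray interface.
x = np.array([[3.0, 1.0], [2.0, 4.0]])
print(x.argmax())        # 3 (flattened index of the largest element)
print(x.clip(2.0, 3.0))  # values limited to the range [2, 3]
print(x.ravel())         # [3. 1. 2. 4.]
print(x.real)            # real part, a property rather than a method
```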
Speed-ups:
* A C version of the SoftMax op. (Razvan P.)
There was already C code for the softmax-with-bias case.
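For reference, the computation the op performs, sketched in NumPy (the C version speeds this up; it does not change the result):

```python
import numpy as np

# A numerically stable row-wise softmax, as the SoftMax op computes it.
def softmax(x):
    z = x - x.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

p = softmax(np.array([[1.0, 2.0, 3.0]]))
print(p.sum())  # each row sums to 1
```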
* Faster GpuIncSubtensor (Ian G.)
* Faster copy on the GPU for 4d tensors. (Ian G.)
* The fix of flatten's infer_shape re-enables an optimization. (Pascal L.)
The bug was introduced in 0.6rc1.
* Enable inc_subtensor on the GPU when updating it with a float64 dtype. (Ian G.)
It was causing an optimization warning.
* Make DeepCopy reuse preallocated memory. (Frederic B.)
* Move the convolution to the GPU when the image shape and logical image shape differ. (Frederic B.)
* C code for the View Op (Razvan P., Pascal L.)
New Features:
* Added a monitoring mode "MonitorMode" as a debugging tool. (Olivier D.)
* Allow integer axes when keepdims==True (Jeremiah Lowin)
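The keepdims semantics, shown with plain NumPy (for illustration): reduced axes are kept as size-1 dimensions, so the result still broadcasts against the input.

```python
import numpy as np

# keepdims=True keeps the reduced axis as a size-1 dimension.
x = np.arange(6.0).reshape(2, 3)
s = x.sum(axis=1, keepdims=True)
print(s.shape)        # (2, 1) instead of (2,)
print((x / s).shape)  # broadcasts cleanly against x: (2, 3)
```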
* Added erfinv and erfcinv ops. (Jey Kottalam)
* Added tensor.batched_dot(). (Caglar Gulcehre)
It uses scan behind the scenes, but makes doing this easier.
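The batched-dot contract, sketched in plain NumPy (illustration only): for 3d inputs, result[i] is dot(A[i], B[i]) for each item i of the leading (batch) dimension.

```python
import numpy as np

# result[i] = dot(A[i], B[i]) over the leading batch dimension.
A = np.arange(12.0).reshape(2, 2, 3)
B = np.arange(12.0).reshape(2, 3, 2)
out = np.einsum('bij,bjk->bik', A, B)          # batched matrix product
ref = np.array([A[i].dot(B[i]) for i in range(2)])  # per-item dots
print(np.allclose(out, ref))  # True
```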
* theano.get_constant_value(x) (Frederic B.)
This does some constant folding to try to convert x into an int.
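The idea of constant folding, sketched in a few lines of plain Python (illustrative only, not Theano internals): evaluate an expression tree whose leaves are all constants, so a symbolic expression like (2 * 3) + 1 reduces to the plain int 7.

```python
import operator

# Fold an expression tree of constants down to a single int.
# Nodes are ('+' or '*', lhs, rhs) tuples; leaves are plain ints.
OPS = {'+': operator.add, '*': operator.mul}

def fold(node):
    if isinstance(node, int):   # a constant leaf
        return node
    op, left, right = node
    return OPS[op](fold(left), fold(right))

print(fold(('+', ('*', 2, 3), 1)))  # 7
```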