@@ -4,7 +4,7 @@ Updates in the Trunk since the last release:
https://github.com/Theano/Theano/wiki/Devnews
git log rel-0.6rc1.. | grep --color=auto -i merge | less
done up to PR 1074
=============
...
@@ -22,16 +22,17 @@ Highlight:
* Crash fix
Committers for this rc2 only:
Razvan Pascanu
Pascal Lamblin
Frederic Bastien
Ian Goodfellow
Jeremiah Lowin
Caglar Gulcehre
Jey Kottalam
Matthew Rocklin
abalkin
Regression in 0.6rc1 fixed:
* Fix the scan gradient dtype issue. In 0.6rc1, some upcasts were inserted. (Razvan P.)
* Now grad() behaves as it did before 0.6rc1, i.e. the grad dtype will be the same as the inputs inside the graph. If you ask for the direct grad, it will return the computed dtype. (Pascal L.)
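The dtype regression above can be illustrated with plain NumPy (a sketch for illustration only; this is not Theano code, and the variable names are made up):

```python
import numpy as np

# 0.6rc1 behaviour: a float64 term inserted into the gradient graph
# silently upcasts the whole gradient.
x = np.ones(3, dtype=np.float32)
g = x * np.ones(3, dtype=np.float64)   # product becomes float64
print(g.dtype)                          # float64

# Restored behaviour: the grad keeps the dtype of the inputs in the graph.
g_fixed = g.astype(x.dtype)
print(g_fixed.dtype)                    # float32
```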
...
@@ -50,6 +51,11 @@ Interface change:
* Now we only officially support scipy 0.7.2 and numpy 1.5.0 (Frederic B.)
  We weren't and aren't testing with older versions.
* The theano.sparse.SparseType is available even when scipy is not (Frederic B.)
* Fixes an issue where members of the consider_constant grad parameter
  were treated differently from Constant variables. (Ian G.)
* Remove the parameter g_cost to theano.grad(). (Ian G.)
  Use the new, more powerful parameter known_grads instead.
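The idea behind known_grads is the chain rule with an injected upstream gradient: instead of differentiating a scalar cost, you supply dcost/dy for an intermediate variable y and the rest is chained through. A minimal NumPy sketch of that one step (illustrative names, not Theano code):

```python
import numpy as np

# Suppose y = x**2 and dcost/dy is already "known" (computed elsewhere,
# or a custom upstream gradient) -- the role known_grads={y: ...} plays.
x = np.array([1.0, 2.0, 3.0])
known_grad_y = np.array([0.1, 0.1, 0.1])

# Chain rule: dcost/dx = dcost/dy * dy/dx, with dy/dx = 2*x for y = x**2.
grad_x = known_grad_y * 2.0 * x
print(grad_x)   # [0.2 0.4 0.6]
```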
Speed up:
* A C version of the SoftMax op (Razvan P.)
...
@@ -65,6 +71,7 @@ Speed up:
* C code for the View Op (Razvan P., Pascal L.)
New Feature:
* Added a monitoring mode "MonitorMode" as a debugging tool. (Olivier D.)
* Allow integer axes when keepdims==True (Jeremiah Lowin)
* Add erfinv and erfcinv op. (Jey Kottalam)
* Added tensor.batched_dot(). (Caglar Gulcehre)
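Several of these features have compact analogues in plain NumPy and the Python stdlib; the sketch below is illustrative only (none of it is Theano code):

```python
import math
import numpy as np

# keepdims==True with an integer axis: the reduced axis is kept with length 1.
x = np.arange(6).reshape(2, 3)
s = x.sum(axis=1, keepdims=True)
print(s.shape)                          # (2, 1)

# batched_dot semantics: one matrix product per leading batch entry,
# the same contraction np.matmul performs on 3-D arrays.
a = np.ones((4, 2, 3))
b = np.ones((4, 3, 5))
c = np.matmul(a, b)
print(c.shape)                          # (4, 2, 5)

# erfinv sketch: bisect the monotonic math.erf
# (the stdlib has erf but no inverse).
def erfinv(y, lo=-6.0, hi=6.0):
    for _ in range(80):                 # interval shrinks below 1e-20
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if math.erf(mid) < y else (lo, mid)
    return (lo + hi) / 2.0

print(round(math.erf(erfinv(0.5)), 6))  # 0.5
```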
...
@@ -102,7 +109,12 @@ Crash Fix:
* Fix import problem on PiCloud (Jeremiah Lowin)
* You need to use the c|py linker with the default
  environment. Otherwise, you need to create your own environment.
* Fix a crash during optimization when we take a subtensor of a constant with a non-constant index. (Ian G.)
* Better handling of, and better error messages for, gradients on integers. (Ian G.)
* Fixes a crash where Scan assumed all TypeErrors raised by the grad function were due to undefined gradients (Ian G.)
Other:
* Doc typo fixes, Doc updates, Better error messages: Olivier D., David W.F., Frederic B., James B., Matthew Rocklin, Ian G., abalkin.