Commit 7c49a1ef authored by Olivier Delalleau

Typos / format / other minor fixes to NEWS.txt

Parent c3d22042
TODO for final release:
- Re-write this NEWS.txt file!
Modifications in the trunk since the 0.4.1 release (August 12th, 2011) up to December 5th, 2011

Upgrading to Theano 0.5 is recommended for everyone, but you should first make
sure that your code does not raise deprecation warnings with Theano 0.4.1.
Otherwise, in one case the results can change. In other cases, the warnings are
turned into errors (see below for details).
Important changes:
* Moved to github: https://github.com/Theano/Theano/
* Old trac tickets moved to assembla tickets: https://www.assembla.com/spaces/theano/tickets
* Theano vision: https://deeplearning.net/software/theano/introduction.html#theano-vision (Many people)
* Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
* See the Interface changes.

Interface Behavior Change (was deprecated and generated a warning since Theano 0.3, released Nov. 23rd, 2010):
* The current default value of the parameter axis of
  theano.{max,min,argmax,argmin,max_and_argmax} is now the same as
  numpy: None, i.e. operate on all dimensions of the tensor. (Frédéric Bastien, Olivier Delalleau)
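As a sketch of what the new default means, the same axis=None semantics can be illustrated with numpy itself (illustration only, not Theano code):

```python
import numpy as np

x = np.array([[1, 5], [3, 2]])

# With axis=None (the new Theano default, matching numpy),
# the reduction operates on all dimensions of the tensor:
assert np.max(x) == 5      # scalar maximum over the flattened array
assert np.argmax(x) == 1   # index into the flattened array

# Passing an explicit axis still reduces along that axis only:
assert np.max(x, axis=0).tolist() == [3, 5]
```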
Interface Features Removed (most were deprecated):
* The string modes FAST_RUN_NOGC and STABILIZE are not accepted anymore. They were accepted only by theano.function().
  Use Mode(linker='c|py_nogc') or Mode(optimizer='stabilize') instead.
* tensor.grad(cost, wrt) now always returns an object of the "same type" as wrt
  (list/tuple/TensorVariable). (Ian Goodfellow, Olivier)
* The few remaining tag.shape and Join.vec_length have been removed. (Frederic)
* The .value attribute of shared variables is removed; use shared.set_value()
  or shared.get_value() instead. (Frederic)
* Theano config option "home" is not used anymore as it was redundant with "base_compiledir".
  If you use it, Theano will now raise an error. (Olivier D.)
* scan interface changes: (Razvan Pascanu)
  - The use of `return_steps` for specifying how many entries of the output
    to return has been removed. Instead, apply a subtensor to the output
    returned by scan to select a certain slice.
  - The inner function (that scan receives) should return its outputs and
    updates following this order:
    [outputs], [updates], [condition].
    One can skip any of the three if not used, but the order has to stay unchanged.
Interface bug fixes:
* Rop in some cases should have returned a list of one Theano variable, but returned the variable itself. (Razvan)

New deprecations (will be removed in Theano 0.6; a warning is generated if you use them):
* tensor.shared() renamed to tensor._shared(). You probably want to call theano.shared() instead! (Olivier D.)
New features:
* Added 1D advanced indexing support to inc_subtensor and set_subtensor (James Bergstra)
* tensor.{zeros,ones}_like now support the dtype param as in numpy (Frederic)
* Added configuration flag "exception_verbosity" to control the verbosity of exceptions (Ian)
* theano-cache list: list the content of the theano cache (Frederic)
* theano-cache unlock: remove the Theano lock (Olivier)
* tensor.ceil_int_div to compute ceil(a / float(b)) (Frederic)
* MaxAndArgMax.grad now works with any axis (the op supports only 1 axis) (Frederic)
  * used by tensor.{max,min,max_and_argmax}
* tensor.{all,any} (Razvan)
* tensor.roll as in numpy (Matthew Rocklin, David Warde-Farley)
* Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
* IfElse now allows a list/tuple as the result of the if/else branches.
  * They must have the same length and corresponding types (Razvan)
* Argmax output dtype is now int64 instead of int32. (Olivier)
* Added the element-wise operation arccos. (Ian)
* sparse dot with full grad output. (Yann Dauphin)
* Optimized to Usmm and UsmmCscDense in some cases (Yann)
* Note: theano.dot and sparse.dot return a structured_dot grad.
  This means the grad returned has the same sparsity pattern as the inputs.
* GpuAdvancedSubtensor1 supports broadcasted dimensions. (Frederic)
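The documented semantics of tensor.ceil_int_div above, ceil(a / float(b)), can be sketched in plain Python (an illustrative reimplementation, not Theano code):

```python
import math

def ceil_int_div(a, b):
    # Illustrative reimplementation of the documented semantics:
    # ceil(a / float(b)). For positive ints this equals -(-a // b).
    return int(math.ceil(a / float(b)))

assert ceil_int_div(7, 2) == 4   # 3.5 rounds up to 4
assert ceil_int_div(6, 2) == 3   # exact division is unchanged
assert ceil_int_div(7, 2) == -(-7 // 2)  # integer-only equivalent
```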
New optimizations:
* AdvancedSubtensor1 reuses preallocated memory if available (scan, c|py_nogc linker) (Frederic)
* tensor_variable.size (as numpy) computes the product of the shape elements. (Olivier)
* sparse_variable.size (as scipy) computes the number of stored values. (Olivier)
* dot22, dot22scalar work with complex. (Frederic)
* Generate Gemv/Gemm more often. (James)
* Remove scan when all computations can be moved outside the loop. (Razvan)
* scan optimizations are done earlier. This allows other optimizations to be applied. (Frederic, Guillaume, Razvan)
* exp(x) * sigmoid(-x) is now correctly optimized to the more stable form sigmoid(x). (Olivier)
* Added Subtensor(Rebroadcast(x)) => Rebroadcast(Subtensor(x)) optimization. (Guillaume)
* Made the optimization process faster. (James)
* Allow fusion of elemwise when the scalar op needs support code. (James)
* Better opt that lifts transpose around dot. (James)
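The exp(x) * sigmoid(-x) rewrite listed above is an algebraic identity — exp(x) / (1 + exp(x)) = 1 / (1 + exp(-x)) = sigmoid(x) — and the rewritten form is numerically stable for large x. A numpy sketch of why (illustration only, not Theano code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = 5.0
naive = np.exp(x) * sigmoid(-x)      # mathematically equal to sigmoid(x)
assert np.allclose(naive, sigmoid(x))

# For large x the naive form overflows (exp(1000) -> inf) while the
# optimized form stays finite:
with np.errstate(over='ignore'):
    big = 1000.0
    naive_big = np.exp(big) * sigmoid(-big)
assert not np.isfinite(naive_big)    # inf * 0 -> nan
assert sigmoid(big) == 1.0           # stable form is exact here
```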
Bug fixes (the result changed):
* On CPU, if the convolution had received explicit shape information, it was not checked at runtime.
  This caused wrong results if the input shape was not the one expected. (Frederic, reported by Sander Dieleman)
* Scan grad when the input of scan has sequences of different lengths. (Razvan, reported by Michael Forbes)
* Scan.infer_shape now works correctly when working with a condition for the number of loops.
  In the past, it returned n_steps as the length, which is not always true. (Razvan)
* Theoretical bug: in some cases GPUSum could return a bad value.
  We were not able to reproduce this problem.
  * patterns affected ({0,1}*nb dim, 0 no reduction on this dim, 1 reduction on this dim):
    01, 011, 0111, 010, 10, 001, 0011, 0101 (Frederic)
* div by zero in verify_grad. This hid a bug in the grad of Images2Neibs. (James)
* theano.sandbox.neighbors.Images2Neibs grad was returning a wrong value.
  The grad is now disabled and returns an error. (Frederic)
Crashes fixed:
* T.mean crash at graph building time. (Ian)
* "Interactive debugger" crash fix. (Ian, Frederic)
* Do not call gemm with strides of 0; some BLAS implementations refuse it. (Pascal Lamblin)
* Support for OSX Enthought Python Distribution 7.x. (Graham Taylor, Olivier)
* When the subtensor inputs had 0 dimensions and the outputs 0 dimensions. (Frederic)
* Crash when the step to subtensor was not 1 in conjunction with some optimization. (Frederic, reported by Olivier Chapelle)
* Fix dot22scalar cast of integer scalars (Justin Bayer, Frédéric, Olivier)
Known bugs:
* CAReduce with nan in inputs doesn't return the correct output (`Ticket <https://www.assembla.com/spaces/theano/tickets/763>`_).
  * This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.
* If you take the grad of the grad of scan, you can get wrong results in some cases.
Sandbox:
* cvm interface more consistent with current linker. (James)
* vm linker has a callback parameter. (James)
* review/finish/doc: diag/extract_diag. (Arnaud Bergeron, Frederic, Olivier)
* review/finish/doc: AllocDiag/diag. (Arnaud, Frederic, Guillaume)
* review/finish/doc: MatrixInverse, matrix_inverse. (Razvan)
* review/finish/doc: matrix_dot. (Razvan)
* review/finish/doc: sparse sum. (Valentin Bisson)

Sandbox New features (not enabled by default):
* CURAND_RandomStreams for uniform and normal (not picklable, GPU only) (James)

Documentation:
* Updates to install doc on MacOS. (Olivier)
* Updates to install doc on Windows. (David, Olivier)
* Added how to use scan to loop with a condition as the number of iterations. (Razvan)
* Added how to wrap an existing python function (in numpy, scipy, ...) in Theano. (Frederic)
* Refactored the GPU installation doc of Theano. (Olivier)
Others:
* Better error messages in many places. (David, Ian, Frederic, Olivier)
* PEP8 fixes. (Many people)
* New min_informative_str() function to print a graph. (Ian)
* Fix catching of exceptions. (Sometimes we caught interrupts) (Frederic, David, Ian, Olivier)
* Better support for utf-8 strings. (David)
* Fix pydotprint with a function compiled with a ProfileMode (Frederic)
  * Was broken with the change to the profiler.
* Warning when people have old cache entries. (Olivier)
* More tests for join on the GPU and CPU. (Frederic)
* Don't request to load the GPU module by default in the scan module. (Razvan)
* Fixed some import problems.
* Filtering update. (James)
* The buildbot now raises optimization errors instead of just printing a warning. (Frederic)
* On Windows, the default compiledir changed to be local to the computer/user and not transferred with roaming profile. (Sebastian Urban)
Reviewers (alphabetical order):
* David, Frederic, Ian, James, Olivier, Razvan