Commit c9ad36d1 authored by Olivier Delalleau

Updated NEWS.txt and doc/NEWS.txt

- Removed TODO from NEWS.txt (since it has been updated already)
- Copied NEWS.txt to doc/NEWS.txt (just a basic copy, no need to review that part)
Parent 8eacd6bf
Modifications in the trunk since the 0.4.1 release (August 12th, 2011) up to December 5th, 2011
Upgrading to Theano 0.5 is recommended for everyone, but you should first make
sure that your code does not raise deprecation warnings with Theano 0.4.1.
Otherwise, in one case the results can change. In other cases, the warnings are
turned into errors (see below for details).

Important changes:
* Moved to github: http://github.com/Theano/Theano/
* Old trac tickets moved to assembla tickets: http://www.assembla.com/spaces/theano/tickets
* Theano vision: http://deeplearning.net/software/theano/introduction.html#theano-vision (Many people)
* Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
* See the Interface changes.
Interface Behavior Change (was deprecated and generated a warning since Theano 0.3, released Nov. 23rd, 2010):
* The current default value of the parameter axis of
  theano.{max,min,argmax,argmin,max_and_argmax} is now the same as
  numpy: None, i.e. operate on all dimensions of the tensor. (Frédéric Bastien, Olivier Delalleau)
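The new default matches numpy, where axis=None means the reduction runs over the flattened tensor. A numpy sketch of the behavior these Theano functions now default to (numpy stands in for the Theano ops here):

```python
import numpy as np

a = np.array([[1, 9], [4, 3]])

# axis=None (the new default): operate on all dimensions at once,
# i.e. on the flattened array [1, 9, 4, 3], as numpy does.
flat_argmax = np.argmax(a)   # position of the maximum in the flat array
flat_max = np.max(a)

# Passing an explicit axis keeps the old per-dimension behavior.
col_max = np.max(a, axis=0)
```

Code that relied on the old Theano default should now pass the axis explicitly.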
Interface Features Removed (most were deprecated):
* The string modes FAST_RUN_NOGC and STABILIZE are no longer accepted. They were accepted only by theano.function().
  Use Mode(linker='c|py_nogc') or Mode(optimizer='stabilize') instead.
* tensor.grad(cost, wrt) now always returns an object of the "same type" as wrt
  (list/tuple/TensorVariable). (Ian Goodfellow, Olivier)
* The few remaining uses of tag.shape and Join.vec_length have been removed. (Frederic)
* The .value attribute of shared variables is removed; use shared.set_value()
  or shared.get_value() instead. (Frederic)
* The Theano config option "home" is not used anymore, as it was redundant with "base_compiledir".
  If you use it, Theano will now raise an error. (Olivier D.)
* scan interface changes: (Razvan Pascanu)
  - The use of `return_steps` for specifying how many entries of the output
    to return has been removed. Instead, apply a subtensor to the output
    returned by scan to select a certain slice.
  - The inner function (that scan receives) should return its outputs and
    updates following this order:
    [outputs], [updates], [condition].
    One can skip any of the three if not used, but the order has to stay unchanged.
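With `return_steps` gone, keeping only the last few entries of a scan output is an ordinary slice of what scan returns. A numpy stand-in for the pattern (no real scan here; the array merely plays the role of scan's full per-step output):

```python
import numpy as np

# Stand-in for the full per-step output returned by scan
# (one entry per iteration; in Theano this would be a symbolic variable).
all_steps = np.arange(10)

# Equivalent of the removed `return_steps=3`: slice the returned output.
last_three = all_steps[-3:]
```

In Theano the same `[-3:]` subtensor is applied to the symbolic output, and the optimizer can then avoid storing the unused steps.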
Interface bug fixes:
* Rop in some cases should have returned a list of one Theano variable, but returned the variable itself. (Razvan)

New deprecations (will be removed in Theano 0.6; a warning is generated if you use them):
* tensor.shared() renamed to tensor._shared(). You probably want to call theano.shared() instead! (Olivier D.)
New features:
* Added 1D advanced indexing support to inc_subtensor and set_subtensor. (James Bergstra)
* tensor.{zeros,ones}_like now support the dtype param as numpy. (Frederic)
* Added configuration flag "exception_verbosity" to control the verbosity of exceptions. (Ian)
* theano-cache list: list the content of the theano cache. (Frederic)
* theano-cache unlock: remove the Theano lock. (Olivier)
* tensor.ceil_int_div to compute ceil(a / float(b)). (Frederic)
* MaxAndArgMax.grad now works with any axis (the op supports only 1 axis). (Frederic)
  * Used by tensor.{max,min,max_and_argmax}.
* tensor.{all,any} (Razvan)
* tensor.roll as numpy. (Matthew Rocklin, David Warde-Farley)
* Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
* IfElse now allows a list/tuple as the result of the if/else branches.
  * They must have the same length and corresponding types. (Razvan)
* Argmax output dtype is now int64 instead of int32. (Olivier)
* Added the element-wise operation arccos. (Ian)
* Added sparse dot with dense grad output. (Yann Dauphin)
  * Optimized to Usmm and UsmmCscDense in some cases. (Yann)
  * Note: theano.dot and theano.sparse.structured_dot() always had a gradient with the same sparsity pattern as the inputs.
    The new theano.sparse.dot() has a dense gradient for all inputs.
* GpuAdvancedSubtensor1 supports broadcasted dimensions. (Frederic)

New optimizations:
* AdvancedSubtensor1 reuses preallocated memory if available (scan, c|py_nogc linker). (Frederic)
* tensor_variable.size (as numpy) computes the product of the shape elements. (Olivier)
* sparse_variable.size (as scipy) computes the number of stored values. (Olivier)
* dot22, dot22scalar work with complex. (Frederic)
* Generate Gemv/Gemm more often. (James)
* Remove scan when all computations can be moved outside the loop. (Razvan)
* scan optimization done earlier. This allows other optimizations to be applied. (Frederic, Guillaume, Razvan)
* exp(x) * sigmoid(-x) is now correctly optimized to the more stable form sigmoid(x). (Olivier)
* Added Subtensor(Rebroadcast(x)) => Rebroadcast(Subtensor(x)) optimization. (Guillaume)
* Made the optimization process faster. (James)
* Allow fusion of elemwise when the scalar op needs support code. (James)
* Better opt that lifts transpose around dot. (James)
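Several of the items above deliberately mirror numpy semantics, and one rests on an algebraic identity. A plain numpy/math sketch of what the Theano counterparts compute (the numpy names stand in for the Theano ones; `ceil_int_div` below is an illustrative reimplementation, not Theano's code):

```python
import math
import numpy as np

a = np.arange(6).reshape(2, 3)

# tensor_variable.size: product of the shape elements, as in numpy.
size = a.size                                  # 2 * 3 == 6

# tensor.roll as numpy: shift elements, wrapping around the end.
rolled = np.roll(np.array([1, 2, 3, 4]), 1)    # -> [4, 1, 2, 3]

# ceil_int_div(a, b) computes ceil(a / float(b)); one pure-integer way:
def ceil_int_div(p, q):
    return -(-p // q)                          # floor of the negation

# exp(x) * sigmoid(-x) == sigmoid(x), because
# exp(x) / (1 + exp(x)) == 1 / (1 + exp(-x)); the right-hand side
# is the numerically stabler form the optimizer now produces.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

lhs = math.exp(0.7) * sigmoid(-0.7)
rhs = sigmoid(0.7)
```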
Bug fixes (the result changed):
* On CPU, if the convolution had received explicit shape information, it was not checked at runtime.
  This caused wrong results if the input shape was not the one expected. (Frederic, reported by Sander Dieleman)
* Scan grad when the input of scan has sequences of different lengths. (Razvan, reported by Michael Forbes)
* Scan.infer_shape now works correctly when working with a condition for the number of loops.
  In the past, it returned n_steps as the length, which is not always true. (Razvan)
* Theoretical bug: in some cases we could have GPUSum return a bad value.
  We were not able to reproduce this problem.
  * Patterns affected ({0,1}*nb dim, 0 no reduction on this dim, 1 reduction on this dim):
    01, 011, 0111, 010, 10, 001, 0011, 0101 (Frederic)
* Division by zero in verify_grad. This hid a bug in the grad of Images2Neibs. (James)
* theano.sandbox.neighbors.Images2Neibs grad was returning a wrong value.
  The grad is now disabled and returns an error. (Frederic)
Crashes fixed:
* T.mean crash at graph building time. (Ian)
* "Interactive debugger" crash fix. (Ian, Frederic)
* "Interactive Debugger" renamed to "Using Test Values".
* Do not call gemm with strides 0; some blas refuse it. (Pascal Lamblin)
* Optimization crash with gemm and complex. (Frederic)
* GPU crash with elemwise. (Frederic)
* Compilation crash with amdlibm and the GPU. (Frederic)
* IfElse crash. (Frederic)
* Execution crash fix in AdvancedSubtensor1 on 32 bit computers. (Pascal)
* GPU compilation crash on MacOS X. (Olivier)
* Support for OSX Enthought Python Distribution 7.x. (Graham Taylor, Olivier)
* Crash when the subtensor inputs had 0 dimensions and the outputs 0 dimensions. (Frederic)
* Crash when the step to subtensor was not 1 in conjunction with some optimization. (Frederic, reported by Olivier Chapelle)
* Fix dot22scalar cast of integer scalars. (Justin Bayer, Frédéric, Olivier)
Known bugs:
* CAReduce with nan in inputs doesn't return the good output (`Ticket <https://www.assembla.com/spaces/theano/tickets/763>`_).
  * This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.
* If you do grad of grad of scan, you can have wrong results in some cases.
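For reference, the expected behavior the CAReduce bug violates is the usual IEEE one: a nan in the input propagates through the reduction. Numpy serves as the reference semantics here, not as the buggy code path:

```python
import numpy as np

x = np.array([1.0, np.nan, 3.0])

# A nan input should propagate through max/mean/prod/sum,
# as it does in numpy's reductions.
reduced = np.max(x)
```

Until the ticket is fixed, code that may feed nan into these reductions should not rely on the Theano result.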
Sandbox:
* cvm interface more consistent with current linker. (James)
* vm linker has a callback parameter. (James)
* review/finish/doc: diag/extract_diag. (Arnaud Bergeron, Frederic, Olivier)
* review/finish/doc: AllocDiag/diag. (Arnaud, Frederic, Guillaume)
* review/finish/doc: MatrixInverse, matrix_inverse. (Razvan)
* review/finish/doc: matrix_dot. (Razvan)
* review/finish/doc: det (determinant) op. (Philippe Hamel)
* review/finish/doc: Cholesky determinant op. (David)
* review/finish/doc: ensure_sorted_indices. (Li Yao)
* review/finish/doc: spectral_radius_bound. (Xavier Glorot)
* review/finish/doc: sparse sum. (Valentin Bisson)
Sandbox New features (not enabled by default):
* CURAND_RandomStreams for uniform and normal (not picklable, GPU only). (James)
Documentation:
* Many updates. (Many people)
* Updates to install doc on MacOS. (Olivier)
* Updates to install doc on Windows. (David, Olivier)
* Added how to use scan to loop with a condition as the number of iterations. (Razvan)
* Added how to wrap in Theano an existing python function (in numpy, scipy, ...). (Frederic)
* Refactored GPU installation of Theano. (Olivier)
Others:
* Better error messages in many places. (David, Ian, Frederic, Olivier)
* PEP8 fixes. (Many people)
* New min_informative_str() function to print graphs. (Ian)
* Fix catching of exceptions (sometimes we caught interrupts). (Frederic, David, Ian, Olivier)
* Better support for utf strings. (David)
* Fix pydotprint with a function compiled with a ProfileMode. (Frederic)
  * Was broken with changes to the profiler.
* Warning when people have old cache entries. (Olivier)
* More tests for join on the GPU and CPU. (Frederic)
* Don't request to load the GPU module by default in scan module. (Razvan)
* Fixed some import problems.
* Filtering update. (James)
* The buildbot now raises optimization errors instead of just printing a warning. (Frederic)
* On Windows, the default compiledir changed to be local to the computer/user and not transferred with roaming profile. (Sebastian Urban)
Reviewers (alphabetical order):
* David, Frederic, Ian, James, Olivier, Razvan