Modifications in the trunk since the 0.4.1 release (August 12th, 2011) up to December 5th, 2011

If you have already updated to 0.5rc1, you are highly encouraged to update to
0.5rc2: it contains more bug fixes and speed optimizations, but also a small
interface change for the sum of [u]int* dtypes.

Upgrading to Theano 0.5rc2 is recommended for everyone, but you should first make
sure that your code does not raise deprecation warnings with Theano 0.4.1.
Otherwise, in one case the results can change. In other cases, the warnings are
turned into errors (see below for details).
Highlights:
* Moved to github: http://github.com/Theano/Theano/
* Old Trac tickets moved to Assembla tickets: http://www.assembla.com/spaces/theano/tickets
* Theano vision: http://deeplearning.net/software/theano/introduction.html#theano-vision (Many people)
* Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
* Faster dot() calls: new/better direct calls to CPU and GPU ger, gemv, gemm and dot(vector, vector). (James, Frédéric, Pascal)
* C implementation of Alloc. (James, Pascal)
* theano.grad() now also works with sparse variables. (Arnaud)
* Macros to compute the Jacobian/Hessian with theano.tensor.{jacobian,hessian} (see the sketch after this list). (Razvan)
* See the Interface changes.
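A minimal sketch of the Jacobian/Hessian helpers mentioned above. This is a hedged example, assuming the theano.tensor.{jacobian,hessian} entry points named in this list; exact signatures may differ slightly:

    import theano
    import theano.tensor as T

    x = T.dvector('x')
    y = x ** 2                    # element-wise square, a vector expression
    J = T.jacobian(y, x)          # matrix of d y_i / d x_j
    cost = (x ** 2).sum()         # hessian expects a scalar cost
    H = T.hessian(cost, x)

    f = theano.function([x], [J, H])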
Interface Behavior Changes (the old behavior was deprecated and generated a warning since Theano 0.3, released Nov. 23rd, 2010):
* The default value of the axis parameter of
theano.{max,min,argmax,argmin,max_and_argmax} is now the same as
in numpy: None, i.e. operate on all dimensions of the tensor. (Frédéric Bastien, Olivier Delalleau)
* The output dtype of sum for inputs with dtype [u]int* is now always [u]int64.
You can specify the output dtype with the new dtype parameter of sum;
that dtype is also the one used for the accumulation.
Previous Theano versions gave no warning about this change.
The consequence is that the sum is done in a dtype with more precision than before,
so it could be slower, but it is more resistant to overflow.
This new behavior is the same as numpy's. (Olivier, Pascal)
(See the example after this list.)
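A small, hedged illustration of the two behavior changes above (axis defaulting to None, and the new dtype parameter of sum); it only sketches the intended usage:

    import numpy
    import theano
    import theano.tensor as T

    x = T.imatrix('x')

    # axis now defaults to None, as in numpy: reduce over all dimensions.
    global_max = T.max(x)              # same as T.max(x, axis=None)
    col_max = T.max(x, axis=0)         # unchanged when axis is given explicitly

    # Sums of [u]int* inputs are now accumulated and returned in [u]int64;
    # the new dtype parameter selects the accumulation/output dtype.
    s_default = T.sum(x)               # int64, more resistant to overflow
    s_int32 = T.sum(x, dtype='int32')  # explicitly keep the narrower dtype

    f = theano.function([x], [global_max, col_max, s_default, s_int32])
    print(f(numpy.ones((3, 4), dtype='int32')))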
Interface Features Removed (most were deprecated):
* Theano config option "home" is not used anymore as it was redundant with "base_compiledir".
If you use it, Theano will now raise an error. (Olivier D.)
* scan interface changes: (Razvan Pascanu)
* The use of `return_steps` for specifying how many entries of the output
to return has been removed. Instead, apply a subtensor to the output
returned by scan to select a certain slice (see the sketch after this list).
* The inner function (that scan receives) should return its outputs and
updates following this order:
[outputs], [updates], [condition].
One can skip any of the three if not used, but the order has to stay unchanged.
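A hedged sketch of the new scan conventions described above (the inner function returning its outputs first, and slicing the result instead of using return_steps):

    import theano
    import theano.tensor as T

    k = T.iscalar('k')
    a = T.dvector('a')

    # The inner function returns its outputs first; updates and condition
    # are simply omitted here because they are not used.
    def step(prev):
        return prev * 2

    results, updates = theano.scan(fn=step, outputs_info=a, n_steps=k)

    # return_steps is gone: take a subtensor of the full output instead,
    # e.g. keep only the last step.
    last = results[-1]
    f = theano.function([a, k], last, updates=updates)
    # f([1., 2.], 3) -> [8., 16.]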
New deprecation (will be removed in Theano 0.6; a warning is generated if you use it):
* tensor.shared() renamed to tensor._shared(). You probably want to call theano.shared() instead! (Olivier D.)
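A one-line hedged reminder of the replacement:

    import numpy
    import theano

    # Use theano.shared() rather than the deprecated tensor.shared().
    w = theano.shared(numpy.zeros((3, 3), dtype='float32'), name='w')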
Scan fixes:
* Computing the grad of a function of the grad of scan (reported by ?, fixed by Razvan)
Before: it crashed most of the time, but could also return wrong values with a wrong number of dimensions (so a visible bug).
Now: does the right thing.
* Gradient with respect to outputs using multiple taps (reported by Timothy, fixed by Razvan)
Before: it used to return wrong values.
Now: does the right thing.
Note: the reported case of this bug happened in conjunction with the save-memory optimization of scan, which raises run-time errors (see the save-memory item below). So if you did not manually disable that memory optimization, you are fine as long as you did not manually request multiple taps.
* Rop of the gradient of scan (reported by Timothy and Justin Bayer, fixed by Razvan)
Before: compilation error when computing the R-op.
Now: does the right thing.
* Save-memory optimization of scan (reported by Timothy and Nicolas BL, fixed by Razvan)
Before: in certain corner cases it used to result in a runtime shape error.
Now: does the right thing.
* Scan grad when the input of scan has sequences of different lengths. (Razvan, reported by Michael Forbes)
* Scan.infer_shape now works correctly when working with a condition for the number of loops.
In the past, it returned n_steps as the length, which is not always true. (Razvan)
* Scan.infer_shape crash fix. (Reported by ?, fixed by Razvan)
New features:
* AdvancedIncSubtensor grad defined and tested (Justin Bayer)
* Adding 1D advanced indexing support to inc_subtensor and set_subtensor (James Bergstra)
* tensor.{zeros,ones}_like now support the dtype parameter, as in numpy. (Frederic)
* Added configuration flag "exception_verbosity" to control the verbosity of exceptions (Ian)
* Note: theano.dot and theano.sparse.structured_dot() always had a gradient with the same sparsity pattern as the inputs.
The new theano.sparse.dot() has a dense gradient for all inputs.
* GpuAdvancedSubtensor1 supports broadcasted dimensions. (Frederic)
* TensorVariable.zeros_like() and SparseVariable.zeros_like()
* theano.sandbox.cuda.cuda_ndarray.cuda_ndarray.device_properties() (Frederic)
* theano.sandbox.cuda.cuda_ndarray.cuda_ndarray.mem_info() returns the free and total GPU memory. (Frederic)
* New Theano flag compiledir_format. It keeps the same default as before: compiledir_%(platform)s-%(processor)s-%(python_version)s. (Josh Bleecher Snyder)
* We also support the "theano_version" substitution.
* IntDiv C code (faster, and allows this elemwise to be fused with other elemwises). (Pascal)
* Internal filter_variable mechanism in Type. (Pascal, Ian)
* Ifelse works on sparse variables.
* Using GPU shared variables is now more transparent with the theano.function updates and givens parameters.
* Added a_tensor.transpose(axes); axes is optional. (James)
* theano.tensor.transpose(a_tensor, kwargs): the kwargs used to be ignored; they are now used as the axes (see the sketch after this list).
* a_CudaNdarray_object[*] = int now works. (Frederic)
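A hedged sketch of a few of the tensor features above (transpose with optional axes, and the dtype parameter of zeros_like); exact keyword handling may differ slightly:

    import theano.tensor as T

    x = T.tensor3('x')

    # a_tensor.transpose(axes): axes is optional; without it all dims are reversed.
    y1 = x.transpose()               # reverse all dimensions
    y2 = x.transpose((2, 0, 1))      # explicit permutation

    # theano.tensor.transpose no longer ignores the keyword arguments:
    y3 = T.transpose(x, axes=(2, 0, 1))

    # tensor.{zeros,ones}_like now accept a dtype parameter, as in numpy.
    z = T.zeros_like(x, dtype='int32')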
New optimizations:
Bug fixes (the result changed):
* On CPU, if the convolution had received explicit shape information, it was not checked at runtime.
This caused wrong results if the input shape was not the expected one. (Frederic, reported by Sander Dieleman)
* Theoretical bug: in some cases GPUSum could return a bad value.
We were not able to reproduce this problem.
* Patterns affected ({0,1} * nb dim, where 0 means no reduction on this dim and 1 means reduction on this dim):
Crashes fixed:
* Support for OSX Enthought Python Distribution 7.x. (Graham Taylor, Olivier)
* When the subtensor inputs had 0 dimensions and the outputs 0 dimensions. (Frederic)
* Crash when the step to subtensor was not 1 in conjunction with some optimization. (Frederic, reported by Olivier Chapelle)
* Fix dot22scalar cast of integer scalars (Justin Bayer, Frédéric, Olivier)
* Fix runtime crash in gemm, dot22. (Frederic B.)
* Fix on 32-bit computers: make sure all shapes are int64. (Olivier)
* Fix to deque on python 2.4. (Olivier)
* Fix crash when not using C code (or when using DebugMode, which is not used by default) with numpy 1.6*. Numpy has a bug in the ufunc.reduce reduction code that makes it crash. (Pascal)
Known bugs:
* CAReduce with NaN in the inputs does not return the correct output (`Ticket <https://www.assembla.com/spaces/theano/tickets/763>`_).
* This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.
* If you take the grad of a grad of scan, we now raise an error during the construction of the graph. In the past, you could get wrong results in some cases, or an error at run time.
* Scan can raise an IncSubtensor error at run time (no wrong result possible). The current workaround is to disable an optimization with this Theano flag: "optimizer_excluding=scanOp_save_mem".
* If you have multiple optimizations to disable, you must separate them with ":" (see the example after this list).
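A hedged example of the workaround above, either from the environment or by building a Mode in Python (my_script.py, opt1 and opt2 are placeholders):

    # From the shell:
    #   THEANO_FLAGS="optimizer_excluding=scanOp_save_mem" python my_script.py
    #   THEANO_FLAGS="optimizer_excluding=opt1:opt2" python my_script.py   # several, ":"-separated
    # Or equivalently in Python, by excluding the optimization from the mode:
    import theano

    mode = theano.compile.get_default_mode().excluding('scanOp_save_mem')
    # f = theano.function([...], [...], mode=mode)   # pass the mode explicitly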
Sandbox:
* cvm interface more consistent with current linker. (James)
* Now all tests pass with the linker=cvm flags.
* vm linker has a callback parameter. (James)
* review/finish/doc: diag/extract_diag. (Arnaud Bergeron, Frederic, Olivier)
* review/finish/doc: AllocDiag/diag. (Arnaud, Frederic, Guillaume)
* review/finish/doc: ensure_sorted_indices. (Li Yao)
* review/finish/doc: spectral_radius_bound. (Xavier Glorot)
* review/finish/doc: sparse sum. (Valentin Bisson)
* review/finish/doc: Remove0 (Valentin)
* review/finish/doc: SquareDiagonal (Eric)
Sandbox New features (not enabled by default):
* CURAND_RandomStreams for uniform and normal (not picklable, GPU only) (James)
* New sandbox.linalg.ops.pinv(pseudo-inverse) op (Razvan)
Documentation:
* Many updates. (Many people)
* Updates to install doc on MacOS. (Olivier)
* Updates to install doc on Windows. (David, Olivier)
* Doc on the Rop function (Ian)
* Added documentation on how to use scan to loop with a condition determining the number of iterations. (Razvan)
* Added documentation on how to wrap an existing Python function (numpy, scipy, ...) in Theano. (Frederic)
* Refactored GPU installation of Theano. (Olivier)
Others:
* Better error messages in many places. (Many people)
* PEP8 fixes. (Many people)
* Add a warning about a numpy bug with subtensors of more than 2**32 elements. (TODO: make the warning more explicit)
* Added Scalar.ndim=0 and ScalarSharedVariable.ndim=0 (simplifies code). (Razvan)
* New min_informative_str() function to print graphs. (Ian)
* Fix catching of exceptions. (Sometimes we caught interrupts.) (Frederic, David, Ian, Olivier)
* Better support for UTF strings. (David)
* Warning when people have old cache entries. (Olivier)
* More tests for join on the GPU and CPU. (Frederic)
* Don't request to load the GPU module by default in scan module. (Razvan)
* Fixed some import problems. (Frederic and others)
* Filtering update. (James)
* On Windows, the default compiledir changed to be local to the computer/user and not transferred with roaming profile. (Sebastian Urban)
* New Theano flag "on_shape_error". Defaults to "warn" (same as the previous behavior):
it prints a warning when an error occurs while inferring the shape of some apply node.
The other accepted value is "raise", to raise an error when this happens (see the example after this list). (Frederic)
* The buildbot now raises optimization/shape errors instead of just printing a warning. (Frederic)
* Better PyCUDA tests. (Frederic)
* check_blas.py now accepts the shape and the number of iterations as parameters. (Frederic)
* Fix an optimization warning when the ShapeOpt optimization is disabled (it is enabled by default). (Frederic)
* More internal verification of what each op.infer_shape returns. (Frederic, James)
* Argmax output dtype is now int64. (Olivier)
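A hedged example of the "on_shape_error" flag mentioned in this list (my_script.py is a placeholder):

    # From the shell:
    #   THEANO_FLAGS="on_shape_error=raise" python my_script.py
    # or at run time:
    import theano
    theano.config.on_shape_error = 'raise'   # default is 'warn'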
Reviewers (alphabetical order):
* David, Frederic, Ian, James, Olivier, Razvan