Commit 8fcb9eef authored by notoraptor

Release 0.10.0beta2

Parent 9b3f6351
...@@ -167,6 +167,7 @@ Markus Roth <markus.roth@herr-biber.de> Markus Roth <mail@rothmark.us>
Mathieu Germain <mathieu.germain@gmail.com> Mathieu Germain <mathieu.germain2@usherbrooke.ca>
Mehdi Mirza <memirzamo@gmail.com> Mehdi Mirza <memimo@users.noreply.github.com>
Mehdi Mirza <memirzamo@gmail.com> memimo <memirzamo@gmail.com>
Mohammed Affan <affanv14@gmail.com> affan <affanv14@gmail.com>
Mohammed Affan <affanv14@gmail.com> affanv14 <affanv14@gmail.com>
Moslem Kazemi <moslemk@gmail.com> Moslem Kazemi <moslemk@users.noreply.github.com>
Moslem Kazemi <moslemk@gmail.com> Mo <moslemk@gmail.com>
......
...@@ -5,6 +5,187 @@
Old Release Notes
=================
Theano 0.10.0beta1 (9th of August, 2017)
========================================
This release contains a lot of bug fixes, improvements and new features to prepare the upcoming release candidate.
We recommend that every developer updates to this version.
Highlights:
- Moved Python 3.* minimum supported version from 3.3 to 3.4
- Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
- Theano now internally uses ``sha256`` instead of ``md5`` to work on systems that forbid ``md5`` for security reasons
- Removed old GPU backend ``theano.sandbox.cuda``. New backend ``theano.gpuarray`` is now the official GPU backend
- Support more debuggers for ``PdbBreakpoint``
- Scan improvements
- Speed up Theano scan compilation and gradient computation
- Added meaningful message when missing inputs to scan
- Speed up graph toposort algorithm
- Faster C compilation by massively using a new interface for op params
- Faster optimization step
- Documentation updated and more complete
- Many bug fixes, crash fixes and warning improvements
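The md5-to-sha256 change in the highlights concerns internal digests such as compilation-cache keys; a minimal sketch of the idea (the helper name below is hypothetical, not Theano's actual function):

```python
import hashlib

def module_hash(source_code):
    # Derive a cache key for generated C source. Using sha256 instead of
    # md5 keeps this working on FIPS-enabled systems where md5 is forbidden.
    return hashlib.sha256(source_code.encode("utf-8")).hexdigest()
```

The digest is deterministic, so the same source always maps to the same cache entry.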
A total of 65 people contributed to this release since 0.9.0, see list below.
Interface changes:
- Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
and ``AllocDiag`` (set a vector as a diagonal of an empty array)
- Renamed ``MultinomialWOReplacementFromUniform`` to ``ChoiceFromUniform``
- Removed or deprecated Theano flags:
- ``cublas.lib``
- ``cuda.enabled``
- ``enable_initial_driver_test``
- ``gpuarray.sync``
- ``home``
- ``lib.cnmem``
- ``nvcc.*`` flags
- ``pycuda.init``
- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
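The semantics of the two merged diagonal ops can be illustrated with NumPy's ``np.diag``, which plays both roles (a sketch of the behaviour only, not the Theano code):

```python
import numpy as np

# ExtractDiag-like behaviour: pull a diagonal out of a matrix as a vector.
m = np.arange(9).reshape(3, 3)
diag_vec = np.diag(m)            # the main diagonal of m

# AllocDiag-like behaviour: set a vector as the diagonal of a fresh array.
diag_mat = np.diag(np.array([1, 2, 3]))   # 3x3 matrix, zeros off-diagonal
```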
Convolution updates:
- Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN importation
- Implemented separable convolutions
- Implemented grouped convolutions
GPU:
- Prevent GPU initialization when not required
- Added disk caching option for kernels
- Added method ``my_theano_function.sync_shared()`` to help synchronize GPU Theano functions
- Added useful stats for GPU in profile mode
- Added Cholesky op based on ``cusolver`` backend
- Added GPU ops based on `magma library <http://icl.cs.utk.edu/magma/software/>`_:
SVD, matrix inverse, QR, cholesky and eigh
- Added ``GpuCublasTriangularSolve``
- Added atomic addition and exchange for ``long long`` values in ``GpuAdvancedIncSubtensor1_dev20``
- Support log gamma function for all non-complex types
- Support GPU SoftMax in both OpenCL and CUDA
- Support offset parameter ``k`` for ``GpuEye``
- ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU
- Better cuDNN support
- Official support for ``v5.*`` and ``v6.*``
- Better support and loading on Windows and Mac
- Support cuDNN v6 dilated convolutions
- Support cuDNN v6 reductions
- Added new Theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
to help configure Theano when CUDA and cuDNN can not be found automatically.
- Updated ``float16`` support
- Added documentation for GPU float16 ops
- Support ``float16`` for ``GpuGemmBatch``
- Started to use ``float32`` precision for computations that don't support ``float16`` on GPU
New features:
- Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
- Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from ``scipy.special``.
- Added Scaled Exponential Linear Unit (SELU) activation
- Added sigmoid_binary_crossentropy function
- Added tri-gamma function
- Added modes ``half`` and ``full`` for ``Images2Neibs`` ops
- Implemented gradient for ``AbstractBatchNormTrainGrad``
- Implemented gradient for matrix pseudoinverse op
- Added new prop `replace` for ``ChoiceFromUniform`` op
- Added new prop ``on_error`` for CPU ``Cholesky`` op
- Added new Theano flag ``deterministic`` to help control how Theano optimizes certain ops that have deterministic versions.
  Currently used for subtensor Ops only.
- Added new Theano flag ``cycle_detection`` to speed up the optimization step by reducing time spent in inplace optimizations
- Added new Theano flag ``check_stack_trace`` to help check the stack trace during the optimization process
- Added new Theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
Others:
- Added deprecation warning for the softmax and logsoftmax vector case
- Added a warning to announce that a C++ compiler will become mandatory in the next Theano release ``0.11``
Other more detailed changes:
- Removed useless warning when profile is manually disabled
- Added tests for abstract conv
- Added options for `disconnected_outputs` to Rop
- Removed ``theano/compat/six.py``
- Removed ``COp.get_op_params()``
- Support of list of strings for ``Op.c_support_code()``, to help not duplicate support codes
- Macro names provided for array properties are now standardized in both CPU and GPU C codes
- Started to move C code files into separate folder ``c_code`` in every Theano module
- Many improvements for Travis CI tests (with better splitting for faster testing)
- Many improvements for Jenkins CI tests: daily testing on Mac and Windows in addition to Linux
Committers since 0.9.0:
- Frederic Bastien
- Arnaud Bergeron
- amrithasuresh
- João Victor Tozatti Risso
- Steven Bocco
- Pascal Lamblin
- Mohammed Affan
- Reyhane Askari
- Alexander Matyasko
- Simon Lefrancois
- Shawn Tan
- Thomas George
- Faruk Ahmed
- Zhouhan LIN
- Aleksandar Botev
- jhelie
- xiaoqie
- Tegan Maharaj
- Matt Graham
- Cesar Laurent
- Gabe Schwartz
- Juan Camilo Gamboa Higuera
- AndroidCloud
- Saizheng Zhang
- vipulraheja
- Florian Bordes
- Sina Honari
- Vikram
- erakra
- Chiheb Trabelsi
- Shubh Vachher
- Daren Eiri
- Gijs van Tulder
- Laurent Dinh
- Mohamed Ishmael Diwan Belghazi
- mila
- Jeff Donahue
- Ramana Subramanyam
- Bogdan Budescu
- Ghislain Antony Vaillant
- Jan Schlüter
- Xavier Bouthillier
- fo40225
- Aarni Koskela
- Adam Becker
- Adam Geitgey
- Adrian Keet
- Adrian Seyboldt
- Andrei Costinescu
- Anmol Sahoo
- Chong Wu
- Holger Kohr
- Jayanth Koushik
- Jenkins
- Lilian Besson
- Lv Tao
- Michael Manukyan
- Murugesh Marvel
- NALEPA
- Ubuntu
- Zotov Yuriy
- dareneiri
- lrast
- morrme
- yikang
Theano 0.9.0 (20th of March, 2017)
==================================
...@@ -1365,7 +1546,7 @@ Interface Change:
New Interface (reuse existing functionality):
* tensor_var.sort() as a shortcut for theano.tensor.sort. (Jeremiah Lowin)
  We were already doing this for argsort.
* Add theano.tensor.take() and a_tensor_var.take() to support NumPy syntax. (abalkin)
* Add a_tensor_variable.{dot,std,argmin,argmax,argsort,clip,conj,conjugate,repeat,round,trace,real,imag}. (abalkin)
New debug feature:
...@@ -1403,7 +1584,7 @@ Crash Fixes:
* Make CrossentropySoftmax1HotWithBiasDx and CrossentropySoftmaxArgmax1HotWithBias support uint* dtype. (Frederic B., reported by Mark Fenner)
* Fix GpuSoftmax and GpuSoftmaxWithBias crash on GTX285. (Frederic B.)
* Fix crash due to a race condition when importing theano. (Ian G.)
* Fix crash from path problem with `theano-nose --batch`. (Abalkin)
* Fix crash with tensor.roll(Var, iscalar). (Frederic B., reported by Jeremiah Lowin)
* Fix compilation crash with llvm on Mac. (Abalkin)
* Fix the grad of Scan that wrongly reported no connection between cost and parameters. (Razvan P.)
...@@ -2268,7 +2449,7 @@ Optimizations:
* IncSubtensor(x, zeros, idx) -> x
* SetSubtensor(x, x[idx], idx) -> x (when x is a constant)
* subtensor(alloc,...) -> alloc
* Many new scan optimizations
* Lower scan execution overhead with a Cython implementation
* Removed scan double compilation (by using the new Op.make_thunk mechanism)
...@@ -2331,10 +2512,10 @@ Deprecation (will be removed in Theano 0.5, warning generated if you use them):
  [outputs], [updates], [condition]. One can skip any of the three if not
  used, but the order has to stay unchanged.
* tensor.grad(cost, wrt) will return an object of the "same type" as wrt
  (list/tuple/TensorVariable).
  * Currently tensor.grad returns a list when the wrt is a list/tuple of
    more than 1 element.
Deprecated in 0.4.0 (reminder, warning generated if you use them):
...@@ -2445,7 +2626,7 @@ Bugs fixed:
  guaranteed to be of the specified dtype instead of potentially being of a
  higher-precision dtype.
* The perform() method of DownsampleFactorMax did not give the right result
  when reusing output storage. This happens only if you use the Theano flags
  'linker=c|py_nogc' or manually specify the mode to be 'c|py_nogc'.
Crash fixed:
......
...@@ -3,182 +3,61 @@ Release Notes
=============

Theano 0.10.0beta2 (25th of August, 2017)
=========================================

This release contains new features, improvements and bug fixes to prepare the upcoming release candidate.
We recommend that every developer updates to this version.

Highlights:
- Support NumPy ``1.13``
- Faster optimization step with new destroy handler
- Updated documentation
- Bug fixes, crash fixes and warning improvements

A total of 14 people contributed to this release since 0.10.0beta1, see list below.

Convolution updates:
- Added 3D separable convolutions
- Added 3D grouped convolutions
- Removed old ``conv3d`` interface
- Deprecated old ``conv2d`` interface
- Updated ``conv`` documentation

GPU:
- Added a meta-optimizer to select the fastest GPU implementations for convolutions
- cuDNN:
    - Official support for ``v6.*`` and ``v7.*``; support for ``v5.*`` will be removed in the next release
    - Added spatial transformation operation based on cuDNN
    - Updated and improved caching system for runtime-chosen cuDNN convolution algorithms
    - Support cuDNN v7 tensor core operations for convolutions
    - Restricted cuDNN reductions to contiguous inputs

New features:
- Added ``tensor6()`` and ``tensor7()`` in ``theano.tensor`` module
- Added boolean indexing for sub-tensors
- Added covariance matrix function ``theano.tensor.cov``
- Added new Theano flag ``pickle_test_value`` to help disable pickling test values

Other more detailed changes:
- Moved all C code files into separate folder ``c_code`` in every Theano module
- Improvements for Jenkins tests

Committers since 0.10.0beta1:
- João Victor Tozatti Risso
- Mohammed Affan
- Frederic Bastien
- Reyhane Askari
- Steven Bocco
- Gijs van Tulder
- Boris Fomitchev
- Arnaud Bergeron
- Joseph Paul Cohen
- Dzmitry Bahdanau
- Yikang Shen
- Faruk Ahmed
- Shawn Tan
- wyjw
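Two of the 0.10.0beta2 additions listed above, boolean indexing for sub-tensors and ``theano.tensor.cov``, mirror NumPy semantics; a small NumPy sketch of the expected behaviour (illustrative only, not the Theano implementation):

```python
import numpy as np

# Boolean indexing: select elements with a boolean mask,
# as the new sub-tensor support mirrors NumPy behaviour.
v = np.array([3, -1, 4, -1, 5])
positives = v[v > 0]

# Covariance matrix: np.cov treats each row as a variable by default;
# theano.tensor.cov is the symbolic counterpart of this computation.
data = np.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0]])
c = np.cov(data)   # 2x2 covariance matrix of the two row-variables
```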
...@@ -18,6 +18,7 @@ TODO: better Theano conv doc
# NB: The following notes contain info since 0.9.0.
Highlights:
- Support NumPy ``1.13``
- Moved Python 3.* minimum supported version from 3.3 to 3.4
- Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
- Theano now internally uses ``sha256`` instead of ``md5`` to work on systems that forbid ``md5`` for security reasons
...@@ -31,7 +32,7 @@ Highlights:
- Speed up graph toposort algorithm
- Faster C compilation by massively using a new interface for op params
- Faster optimization step with new destroy handler
- Documentation updated and more complete
- Many bug fixes, crash fixes and warning improvements
...@@ -54,11 +55,15 @@ Interface changes:
- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
Convolution updates:
- Removed old ``conv3d`` interface
- Deprecated old ``conv2d`` interface
- Updated ``conv`` documentation
- Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN importation
- Implemented separable convolutions for 2D and 3D
- Implemented grouped convolutions for 2D and 3D
GPU:
- Added a meta-optimizer to select the fastest GPU implementations for convolutions
- Prevent GPU initialization when not required
- Added disk caching option for kernels
- Added method ``my_theano_function.sync_shared()`` to help synchronize GPU Theano functions
...@@ -74,11 +79,13 @@ GPU:
- ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU
- Better cuDNN support
    - Official support for ``v6.*`` and ``v7.*``, support for ``v5.*`` will be removed in next release
    - Added spatial transformation operation based on cuDNN
    - Updated and improved caching system for runtime-chosen cuDNN convolution algorithms
    - Support cuDNN v7 tensor core operations for convolutions
    - Better support and loading on Windows and Mac
    - Support cuDNN v6 dilated convolutions
    - Support cuDNN v6 reductions for contiguous inputs
    - Added new Theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
      to help configure Theano when CUDA and cuDNN can not be found automatically.
...@@ -89,6 +96,9 @@ GPU:
- Started to use ``float32`` precision for computations that don't support ``float16`` on GPU
New features:
- Added ``tensor6()`` and ``tensor7()`` in ``theano.tensor`` module
- Added boolean indexing for sub-tensors
- Added covariance matrix function ``theano.tensor.cov``
- Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
- Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from ``scipy.special``.
- Added Scaled Exponential Linear Unit (SELU) activation
...@@ -104,6 +114,7 @@ New features:
- Added new Theano flag ``cycle_detection`` to speed up the optimization step by reducing time spent in inplace optimizations
- Added new Theano flag ``check_stack_trace`` to help check the stack trace during the optimization process
- Added new Theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
- Added new Theano flag ``pickle_test_value`` to help disable pickling test values
Others:
- Added deprecation warning for the softmax and logsoftmax vector case
...@@ -117,11 +128,41 @@ Other more detailed changes:
- Removed ``COp.get_op_params()``
- Support of list of strings for ``Op.c_support_code()``, to help not duplicate support codes
- Macro names provided for array properties are now standardized in both CPU and GPU C codes
- Moved all C code files into separate folder ``c_code`` in every Theano module
- Many improvements for Travis CI tests (with better splitting for faster testing)
- Many improvements for Jenkins CI tests: daily testing on Mac and Windows in addition to Linux
ALL THE PRS BELOW HAVE BEEN CHECKED
* https://github.com/Theano/Theano/pull/6301
* https://github.com/Theano/Theano/pull/6333
* https://github.com/Theano/Theano/pull/6341
* https://github.com/Theano/Theano/pull/6332
* https://github.com/Theano/Theano/pull/6319
* https://github.com/Theano/Theano/pull/6302
* https://github.com/Theano/Theano/pull/6300
* https://github.com/Theano/Theano/pull/6323
* https://github.com/Theano/Theano/pull/6324
* https://github.com/Theano/Theano/pull/5817
* https://github.com/Theano/Theano/pull/6312
* https://github.com/Theano/Theano/pull/6061
* https://github.com/Theano/Theano/pull/6305
* https://github.com/Theano/Theano/pull/6059
* https://github.com/Theano/Theano/pull/6315
* https://github.com/Theano/Theano/pull/6295
* https://github.com/Theano/Theano/pull/6252
* https://github.com/Theano/Theano/pull/6267
* https://github.com/Theano/Theano/pull/6207
* https://github.com/Theano/Theano/pull/6309
* https://github.com/Theano/Theano/pull/6307
* https://github.com/Theano/Theano/pull/6000
* https://github.com/Theano/Theano/pull/6293
* https://github.com/Theano/Theano/pull/6292
* https://github.com/Theano/Theano/pull/6299
* https://github.com/Theano/Theano/pull/6143
* https://github.com/Theano/Theano/pull/6296
* https://github.com/Theano/Theano/pull/6280
* https://github.com/Theano/Theano/pull/6289
* https://github.com/Theano/Theano/pull/6285
* https://github.com/Theano/Theano/pull/6275
* https://github.com/Theano/Theano/pull/6218
* https://github.com/Theano/Theano/pull/6271
......
...@@ -74,7 +74,7 @@ copyright = '2008--2017, LISA lab'
# The short X.Y version.
version = '0.10'
# The full version, including alpha/beta/rc tags.
release = '0.10.0beta2'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
......
...@@ -21,6 +21,8 @@ learning/machine learning <https://mila.umontreal.ca/en/cours/>`_ classes).
News
====
* 2017/08/25: Release of Theano 0.10.0beta2, new features and many bugfixes, release candidate to come.
* 2017/08/09: Release of Theano 0.10.0beta1, many improvements and bugfixes, release candidate to come.
* Removed support for the old (device=gpu) backend. Use the new
......
...@@ -53,7 +53,7 @@ PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"]
MAJOR = 0
MINOR = 10
MICRO = 0
SUFFIX = "beta2"  # Should be blank except for rc's, betas, etc.
ISRELEASED = False
VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
......