Commit f6f45e57 authored by notoraptor, committed by GitHub

Merge pull request #6353 from notoraptor/prepare-release-0.10.0beta2

Prepare release 0.10.0beta2
...@@ -24,6 +24,9 @@ Anatoly Belikov <awbelikov@gmail.com> Anatoly Belikov <wormblood@gmail.com>
Andre Holzner <Andre.Georg.Holzner@cern.ch> Andre Holzner <holzner@pb-d-128-141-148-222.cern.ch>
Andre Holzner <Andre.Georg.Holzner@cern.ch> andreh <andreh@localhost>
Andre Holzner <Andre.Georg.Holzner@cern.ch> Andre Holzner <holzner@andres-mbp-2.fritz.box>
Andrei Costinescu <andrei.costinescu@yahoo.com> Andrei Costinescu <AndreiCostinescu@users.noreply.github.com>
Andrei Costinescu <andrei.costinescu@yahoo.com> AndreiCostinescu <andrei.costinescu@yahoo.com>
Anirudh Goyal <anirudhgoyal9119@gmail.com> AndroidCloud <anirudhgoyal9119@gmail.com>
Arjun Jain <arjunjain@gmail.com> Arjun Jain <stencilman@users.noreply.github.com>
Arnaud Bergeron <abergeron@gmail.com> <abergeron@gmail.com>
Arnaud Bergeron <abergeron@gmail.com> <bergearn@iro.umontreal.ca>
...@@ -167,7 +170,9 @@ Markus Roth <markus.roth@herr-biber.de> Markus Roth <mail@rothmark.us>
Mathieu Germain <mathieu.germain@gmail.com> Mathieu Germain <mathieu.germain2@usherbrooke.ca>
Mehdi Mirza <memirzamo@gmail.com> Mehdi Mirza <memimo@users.noreply.github.com>
Mehdi Mirza <memirzamo@gmail.com> memimo <memirzamo@gmail.com>
Mohammed Affan <affanv14@gmail.com> affan <affanv14@gmail.com>
Mohammed Affan <affanv14@gmail.com> affanv14 <affanv14@gmail.com>
Mohammed Affan <affanv14@gmail.com> Ubuntu <ubuntu@ip-172-31-58-125.ec2.internal>
Moslem Kazemi <moslemk@gmail.com> Moslem Kazemi <moslemk@users.noreply.github.com>
Moslem Kazemi <moslemk@gmail.com> Mo <moslemk@gmail.com>
Nan Rosemary Ke <rosemary.ke@west.cmu.edu> nke001 <rosemary.nan.ke@gmail.com>
...@@ -217,6 +222,8 @@ Sebastien Jean <jeasebas@iro.umontreal.ca> sebastien-j <jeasebas@iro.umontreal.c
Sebastien Jean <jeasebas@iro.umontreal.ca> sebastien-j <sebastien.jean@mail.mcgill.ca>
Simon Lefrancois <simon.lefrancois@umontreal.ca> slefrancois <simon.lefrancois@umontreal.ca>
Simon Lefrancois <simon.lefrancois@umontreal.ca> Simon Lefrancois <lefransi@iro.umontreal.ca>
Simon Lefrancois <simon.lefrancois@umontreal.ca> Jenkins <jenkins@milaburger.iro.umontreal.ca>
Simon Lefrancois <simon.lefrancois@umontreal.ca> mila <mila@earlgrey.iro.umontreal.ca>
Sina Honari <honaris@iro.umontreal.ca> SinaHonari <sina2222@gmail.com>
Sina Honari <honaris@iro.umontreal.ca> Sina Honari <honaris@eos21.iro.umontreal.ca>
Sina Honari <honaris@iro.umontreal.ca> Sina Honari <sina.honari@gmail.com>
...@@ -238,6 +245,7 @@ Vincent Dumoulin <vi.dumoulin@gmail.com> vdumoulin <vi.dumoulin@gmail.com>
Vitaliy Kurlin <vitaliykurin@gmail.com> yobibyte <vitaliykurin@gmail.com>
Vivek Kulkarni <viveksck@gmail.com> Vivek Kulkarni <vvkulkarni@cs.stonybrook.edu>
Wei Li <kuantkid@gmail.com> kuantkid <kuantkid@gmail.com>
Yikang Shen <yikang.shn@gmail.com> yikang <yikang.shn@gmail.com>
Yoshua Bengio <bengioy@iro.umontreal.ca> bengioy@bengio-mac.local <bengioy@bengio-mac.local>
Ziye Fan <fanziye.cis@gmail.com> FanZiye(t13m) <fanziye.cis@gmail.com>
Zhouhan LIN <lin.zhouhan@gmail.com> hantek <lin.zhouhan@gmail.com>
......
...@@ -5,6 +5,187 @@
Old Release Notes
=================
Theano 0.10.0beta1 (9th of August, 2017)
========================================
This release contains a lot of bug fixes, improvements and new features to prepare the upcoming release candidate.
We recommend that every developer update to this version.
Highlights:
- Moved Python 3.* minimum supported version from 3.3 to 3.4
- Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
- Theano now internally uses ``sha256`` instead of ``md5`` to work on systems that forbid ``md5`` for security reasons
- Removed old GPU backend ``theano.sandbox.cuda``. New backend ``theano.gpuarray`` is now the official GPU backend
- Support more debuggers for ``PdbBreakpoint``
- Scan improvements
- Speed up Theano scan compilation and gradient computation
- Added meaningful message when missing inputs to scan
- Speed up graph toposort algorithm
- Faster C compilation by massively using a new interface for op params
- Faster optimization step
- Documentation updated and more complete
- Many bug fixes, crash fixes and warning improvements
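The ``sha256``-for-``md5`` switch listed in the highlights can be illustrated with the standard library alone; this is a generic sketch (the cache-key bytes are hypothetical, not Theano's actual key format):

```python
# Hashing a compilation-cache key with sha256 instead of md5, so hashing
# still works on FIPS-restricted systems where md5 is disabled.
import hashlib

key = b"theano-module-cache-key-example"  # hypothetical key contents
digest = hashlib.sha256(key).hexdigest()

# sha256 hex digests are 64 characters (256 bits), vs 32 for md5.
print(len(digest))  # → 64
```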
A total of 65 people contributed to this release since 0.9.0, see list below.
Interface changes:
- Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
and ``AllocDiag`` (set a vector as a diagonal of an empty array)
- Renamed ``MultinomialWOReplacementFromUniform`` to ``ChoiceFromUniform``
- Removed or deprecated Theano flags:
- ``cublas.lib``
- ``cuda.enabled``
- ``enable_initial_driver_test``
- ``gpuarray.sync``
- ``home``
- ``lib.cnmem``
- ``nvcc.*`` flags
- ``pycuda.init``
- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
Convolution updates:
- Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN import
- Implemented separable convolutions
- Implemented grouped convolutions
GPU:
- Prevent GPU initialization when not required
- Added disk caching option for kernels
- Added method ``my_theano_function.sync_shared()`` to help synchronize GPU Theano functions
- Added useful stats for GPU in profile mode
- Added Cholesky op based on ``cusolver`` backend
- Added GPU ops based on `magma library <http://icl.cs.utk.edu/magma/software/>`_:
SVD, matrix inverse, QR, Cholesky and eigh
- Added ``GpuCublasTriangularSolve``
- Added atomic addition and exchange for ``long long`` values in ``GpuAdvancedIncSubtensor1_dev20``
- Support log gamma function for all non-complex types
- Support GPU SoftMax in both OpenCL and CUDA
- Support offset parameter ``k`` for ``GpuEye``
- ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU
- Better cuDNN support
- Official support for ``v5.*`` and ``v6.*``
- Better support and loading on Windows and Mac
- Support cuDNN v6 dilated convolutions
- Support cuDNN v6 reductions
- Added new Theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
to help configure Theano when CUDA and cuDNN can not be found automatically.
- Updated ``float16`` support
- Added documentation for GPU float16 ops
- Support ``float16`` for ``GpuGemmBatch``
- Started to use ``float32`` precision for computations that don't support ``float16`` on GPU
New features:
- Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
- Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from ``scipy.special``.
- Added Scaled Exponential Linear Unit (SELU) activation
- Added sigmoid_binary_crossentropy function
- Added tri-gamma function
- Added modes ``half`` and ``full`` for ``Images2Neibs`` ops
- Implemented gradient for ``AbstractBatchNormTrainGrad``
- Implemented gradient for matrix pseudoinverse op
- Added new prop `replace` for ``ChoiceFromUniform`` op
- Added new prop ``on_error`` for CPU ``Cholesky`` op
- Added new Theano flag ``deterministic`` to help control how Theano optimizes certain ops that have deterministic versions.
Currently used for subtensor Ops only.
- Added new Theano flag ``cycle_detection`` to speed up the optimization step by reducing time spent in inplace optimizations
- Added new Theano flag ``check_stack_trace`` to help check the stack trace during optimization process
- Added new Theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
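As a pure-math illustration of the ``sigmoid_binary_crossentropy`` feature listed above, the numerically stable formulation such a fused op typically uses is ``max(x, 0) - x*t + log(1 + exp(-|x|))``; this sketch only demonstrates the formula, not Theano's API:

```python
import math

def sigmoid_binary_crossentropy(logit, target):
    # Stable fused form: never evaluates log(sigmoid(x)) directly.
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))

def naive(logit, target):
    # Textbook form, prone to overflow/log(0) for large |logit|.
    p = 1.0 / (1.0 + math.exp(-logit))
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

# Both forms agree in the well-behaved range:
assert abs(sigmoid_binary_crossentropy(2.0, 1.0) - naive(2.0, 1.0)) < 1e-12
```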
Others:
- Added deprecation warning for the softmax and logsoftmax vector case
- Added a warning to announce that C++ compiler will become mandatory in next Theano release ``0.11``
Other more detailed changes:
- Removed useless warning when profile is manually disabled
- Added tests for abstract conv
- Added options for `disconnected_outputs` to Rop
- Removed ``theano/compat/six.py``
- Removed ``COp.get_op_params()``
- Support of list of strings for ``Op.c_support_code()``, to help not duplicate support codes
- Macro names provided for array properties are now standardized in both CPU and GPU C codes
- Started to move C code files into separate folder ``c_code`` in every Theano module
- Many improvements for Travis CI tests (with better splitting for faster testing)
- Many improvements for Jenkins CI tests: daily testing on Mac and Windows in addition to Linux
Committers since 0.9.0:
- Frederic Bastien
- Arnaud Bergeron
- amrithasuresh
- João Victor Tozatti Risso
- Steven Bocco
- Pascal Lamblin
- Mohammed Affan
- Reyhane Askari
- Alexander Matyasko
- Simon Lefrancois
- Shawn Tan
- Thomas George
- Faruk Ahmed
- Zhouhan LIN
- Aleksandar Botev
- jhelie
- xiaoqie
- Tegan Maharaj
- Matt Graham
- Cesar Laurent
- Gabe Schwartz
- Juan Camilo Gamboa Higuera
- AndroidCloud
- Saizheng Zhang
- vipulraheja
- Florian Bordes
- Sina Honari
- Vikram
- erakra
- Chiheb Trabelsi
- Shubh Vachher
- Daren Eiri
- Gijs van Tulder
- Laurent Dinh
- Mohamed Ishmael Diwan Belghazi
- mila
- Jeff Donahue
- Ramana Subramanyam
- Bogdan Budescu
- Ghislain Antony Vaillant
- Jan Schlüter
- Xavier Bouthillier
- fo40225
- Aarni Koskela
- Adam Becker
- Adam Geitgey
- Adrian Keet
- Adrian Seyboldt
- Andrei Costinescu
- Anmol Sahoo
- Chong Wu
- Holger Kohr
- Jayanth Koushik
- Jenkins
- Lilian Besson
- Lv Tao
- Michael Manukyan
- Murugesh Marvel
- NALEPA
- Ubuntu
- Zotov Yuriy
- dareneiri
- lrast
- morrme
- yikang
Theano 0.9.0 (20th of March, 2017)
==================================
......
...@@ -3,133 +3,77 @@ Release Notes
=============
Theano 0.10.0beta1 (9th of August, 2017)
========================================
This release contains a lot of bug fixes, improvements and new features to prepare the upcoming release candidate.
We recommend that every developer update to this version.
Highlights:
- Moved Python 3.* minimum supported version from 3.3 to 3.4
- Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
- Theano now internally uses ``sha256`` instead of ``md5`` to work on systems that forbid ``md5`` for security reasons
- Removed old GPU backend ``theano.sandbox.cuda``. New backend ``theano.gpuarray`` is now the official GPU backend
- Support more debuggers for ``PdbBreakpoint``
- Scan improvements
- Speed up Theano scan compilation and gradient computation
- Added meaningful message when missing inputs to scan
- Speed up graph toposort algorithm
- Faster C compilation by massively using a new interface for op params
- Faster optimization step
- Documentation updated and more complete
- Many bug fixes, crash fixes and warning improvements
A total of 65 people contributed to this release since 0.9.0, see list below.
Theano 0.10.0beta2 (7th of September, 2017)
===========================================
This release contains new features, improvements and bug fixes to prepare the upcoming release candidate.
We recommend that every developer update to this version.
Highlights:
- Support NumPy ``1.13``
- Support pygpu ``0.7``
- Added conda recipe
- Optional faster optimization step with new destroy handler
- Added documentation for RNNBlock
- Bug fixes, crash fixes, warning improvements and documentation updates
A total of 67 people contributed to this release since 0.9.0, see list below.
Interface changes:
- Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
and ``AllocDiag`` (set a vector as a diagonal of an empty array)
- Added new parameter ``target`` for MRG functions
- Renamed ``MultinomialWOReplacementFromUniform`` to ``ChoiceFromUniform``
- Removed or deprecated Theano flags:
- ``cublas.lib``
- ``cuda.enabled``
- ``enable_initial_driver_test``
- ``gpuarray.sync``
- ``home``
- ``lib.cnmem``
- ``nvcc.*`` flags
- ``pycuda.init``
- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
Convolution updates:
- Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN import
- Implemented separable convolutions
- Implemented grouped convolutions
- Added unshared convolutions
- Added 3D separable convolutions
- Added 3D grouped convolutions
- Removed old ``conv3d`` interface
- Deprecated old ``conv2d`` interface
- Updated ``conv`` documentation
GPU:
- Prevent GPU initialization when not required
- Added a meta-optimizer to select the fastest GPU implementations for convolutions
- Added disk caching option for kernels
- Added method ``my_theano_function.sync_shared()`` to help synchronize GPU Theano functions
- Added useful stats for GPU in profile mode
- Added Cholesky op based on ``cusolver`` backend
- Added GPU ops based on `magma library <http://icl.cs.utk.edu/magma/software/>`_:
SVD, matrix inverse, QR, cholesky and eigh
- Added ``GpuCublasTriangularSolve``
- Added atomic addition and exchange for ``long long`` values in ``GpuAdvancedIncSubtensor1_dev20``
- Support log gamma function for all non-complex types
- Support GPU SoftMax in both OpenCL and CUDA
- Support offset parameter ``k`` for ``GpuEye``
- ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU
- Better cuDNN support
- Official support for ``v5.*`` and ``v6.*``
- Better support and loading on Windows and Mac
- Support cuDNN v6 dilated convolutions
- Support cuDNN v6 reductions
- Added new Theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
to help configure Theano when CUDA and cuDNN can not be found automatically.
- Updated ``float16`` support
- Added documentation for GPU float16 ops
- Support ``float16`` for ``GpuGemmBatch``
- Started to use ``float32`` precision for computations that don't support ``float16`` on GPU
- cuDNN:
- Official support for ``v6.*`` and ``v7.*``, support for ``v5.*`` will be removed in next release
- Added spatial transformation operation based on cuDNN
- Updated and improved caching system for runtime-chosen cuDNN convolution algorithms
- Support cuDNN v7 tensor core operations for convolutions with runtime timed algorithms
- Restricted cuDNN reductions to contiguous inputs
- Automatic addition of cuDNN DLL path to ``PATH`` environment variable on Windows
New features:
- Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
- Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from ``scipy.special``.
- Added Scaled Exponential Linear Unit (SELU) activation
- Added sigmoid_binary_crossentropy function
- Added ``tensor6()`` and ``tensor7()`` in ``theano.tensor`` module
- Added boolean indexing for sub-tensors
- Added covariance matrix function ``theano.tensor.cov``
- Added new Theano flag ``pickle_test_value`` to help disable pickling test values
- Added tri-gamma function
- Added modes ``half`` and ``full`` for ``Images2Neibs`` ops
- Implemented gradient for ``AbstractBatchNormTrainGrad``
- Implemented gradient for matrix pseudoinverse op
- Added new prop `replace` for ``ChoiceFromUniform`` op
- Added new prop ``on_error`` for CPU ``Cholesky`` op
- Added new Theano flag ``deterministic`` to help control how Theano optimizes certain ops that have deterministic versions.
Currently used for subtensor Ops only.
- Added new Theano flag ``cycle_detection`` to speed up the optimization step by reducing time spent in inplace optimizations
- Added new Theano flag ``check_stack_trace`` to help check the stack trace during optimization process
- Added new Theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
Others:
- Added deprecation warning for the softmax and logsoftmax vector case
- Added a warning to announce that C++ compiler will become mandatory in next Theano release ``0.11``
- Kept stack trace for optimizations in new GPU backend
Other more detailed changes:
- Removed useless warning when profile is manually disabled
- Added tests for abstract conv
- Moved all C code files into separate folder ``c_code`` in every Theano module
- Improvements for Jenkins tests
- Added options for `disconnected_outputs` to Rop
- Removed ``theano/compat/six.py``
- Removed ``COp.get_op_params()``
- Support of list of strings for ``Op.c_support_code()``, to help not duplicate support codes
- Macro names provided for array properties are now standardized in both CPU and GPU C codes
- Started to move C code files into separate folder ``c_code`` in every Theano module
- Many improvements for Travis CI tests (with better splitting for faster testing)
- Many improvements for Jenkins CI tests: daily testings on Mac and Windows in addition to Linux
Committers since 0.9.0:
- Frederic Bastien
- Arnaud Bergeron
- amrithasuresh
- João Victor Tozatti Risso
- Arnaud Bergeron
- Steven Bocco
- Pascal Lamblin
- Mohammed Affan
- amrithasuresh
- Pascal Lamblin
- Reyhane Askari
- Alexander Matyasko
- Simon Lefrancois
- Shawn Tan
- Gijs van Tulder
- Thomas George
- Vikram
- Andrei Costinescu
- Faruk Ahmed
- Boris Fomitchev
- Zhouhan LIN
- Aleksandar Botev
- jhelie
...@@ -139,23 +83,24 @@ Committers since 0.9.0:
- Cesar Laurent
- Gabe Schwartz
- Juan Camilo Gamboa Higuera
- AndroidCloud
- Tim Cooijmans
- Anirudh Goyal
- Saizheng Zhang
- vipulraheja
- Florian Bordes
- Sina Honari
- Vikram
- Yikang Shen
- erakra
- Chiheb Trabelsi
- Shubh Vachher
- Daren Eiri
- Gijs van Tulder
- Joseph Paul Cohen
- Laurent Dinh
- Mohamed Ishmael Diwan Belghazi
- mila
- Jeff Donahue
- Ramana Subramanyam
- Bogdan Budescu
- Dzmitry Bahdanau
- Ghislain Antony Vaillant
- Jan Schlüter
- Xavier Bouthillier
...@@ -165,20 +110,17 @@ Committers since 0.9.0:
- Adam Geitgey
- Adrian Keet
- Adrian Seyboldt
- Andrei Costinescu
- Anmol Sahoo
- Chong Wu
- Holger Kohr
- Jayanth Koushik
- Jenkins
- Lilian Besson
- Lv Tao
- Michael Manukyan
- Murugesh Marvel
- NALEPA
- Ubuntu
- Zotov Yuriy
- dareneiri
- lrast
- morrme
- yikang
- wyjw
...@@ -5,6 +5,7 @@ DRAFT Release Notes
===================
git log -p rel-0.9.0... |grep Merge|grep '#[0123456789]' |cut -f 8 -d ' ' | sed 's\#\* https://github.com/Theano/Theano/pull/\'
git log -p rel-0.10.0beta1... |grep Merge|grep '#[0123456789]' |cut -f 8 -d ' ' | sed 's\#\* https://github.com/Theano/Theano/pull/\'
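The ``sed`` step in the pipelines above rewrites PR numbers into pull-request URLs; a minimal way to check that rewrite on one hand-fed line (using ``,`` as the sed delimiter for readability instead of the backslash used above):

```shell
# Feed a single hypothetical "#NNNN" token through the same substitution
# the pipeline applies to the git log output.
echo '#6275' | sed 's,#,* https://github.com/Theano/Theano/pull/,'
# → * https://github.com/Theano/Theano/pull/6275
```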
# Commit count per user
git shortlog -sn rel-0.9.0..
...@@ -18,6 +19,9 @@ TODO: better Theano conv doc
# NB: Following notes contain info since 0.9.0.
Highlights:
- Support NumPy ``1.13``
- Support pygpu ``0.7``
- Added conda recipe
- Moved Python 3.* minimum supported version from 3.3 to 3.4
- Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
- Theano now internally uses ``sha256`` instead of ``md5`` to work on systems that forbid ``md5`` for security reasons
...@@ -31,11 +35,15 @@ Highlights:
- Speed up graph toposort algorithm
- Faster C compilation by massively using a new interface for op params
- Faster optimization step
- Faster optimization step, with new optional destroy handler
- Documentation updated and more complete
- Added documentation for RNNBlock
- Many bug fixes, crash fixes and warning improvements
Interface changes:
- Added new parameter ``target`` for MRG functions
- Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
and ``AllocDiag`` (set a vector as a diagonal of an empty array)
- Renamed ``MultinomialWOReplacementFromUniform`` to ``ChoiceFromUniform``
...@@ -54,11 +62,17 @@ Interface changes:
- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
Convolution updates:
- Removed old ``conv3d`` interface
- Deprecated old ``conv2d`` interface
- Updated ``conv`` documentation
- Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN import
- Implemented separable convolutions
- Implemented grouped convolutions
- Added unshared convolutions
- Implemented separable convolutions for 2D and 3D
- Implemented grouped convolutions for 2D and 3D
- Automatic addition of cuDNN DLL path to ``PATH`` environment variable on Windows
GPU:
- Added a meta-optimizer to select the fastest GPU implementations for convolutions
- Prevent GPU initialization when not required
- Added disk caching option for kernels
- Added method ``my_theano_function.sync_shared()`` to help synchronize GPU Theano functions
...@@ -74,11 +88,13 @@ GPU:
- ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU
- Better cuDNN support
- Official support for ``v6.*`` and ``v7.*``, support for ``v5.*`` will be removed in next release
- Official support for ``v5.*`` and ``v6.*``
- Added spatial transformation operation based on cuDNN
- Updated and improved caching system for runtime-chosen cuDNN convolution algorithms
- Support cuDNN v7 tensor core operations for convolutions with runtime timed algorithms
- Better support and loading on Windows and Mac
- Support cuDNN v6 dilated convolutions
- Support cuDNN v6 reductions
- Support cuDNN v6 reductions for contiguous inputs
- Added new Theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
to help configure Theano when CUDA and cuDNN can not be found automatically.
...@@ -89,6 +105,9 @@ GPU: ...@@ -89,6 +105,9 @@ GPU:
- Started to use ``float32`` precision for computations that don't support ``float16`` on GPU
New features:
- Added ``tensor6()`` and ``tensor7()`` in ``theano.tensor`` module
- Added boolean indexing for sub-tensors
- Added covariance matrix function ``theano.tensor.cov``
- Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
- Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from ``scipy.special``.
- Added Scaled Exponential Linear Unit (SELU) activation
...@@ -104,8 +123,10 @@ New features:
- Added new Theano flag ``cycle_detection`` to speed up the optimization step by reducing time spent in inplace optimizations
- Added new Theano flag ``check_stack_trace`` to help check the stack trace during optimization process
- Added new Theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
- Added new Theano flag ``pickle_test_value`` to help disable pickling test values
Others:
- Kept stack trace for optimizations in new GPU backend
- Added deprecation warning for the softmax and logsoftmax vector case
- Added a warning to announce that C++ compiler will become mandatory in next Theano release ``0.11``
...@@ -117,11 +138,66 @@ Other more detailed changes:
- Removed ``COp.get_op_params()``
- Support of list of strings for ``Op.c_support_code()``, to help not duplicate support codes
- Macro names provided for array properties are now standardized in both CPU and GPU C codes
- Started to move C code files into separate folder ``c_code`` in every Theano module
- Moved all C code files into separate folder ``c_code`` in every Theano module
- Many improvements for Travis CI tests (with better splitting for faster testing)
- Many improvements for Jenkins CI tests: daily testing on Mac and Windows in addition to Linux
ALL THE PRS BELOW HAVE BEEN CHECKED
* https://github.com/Theano/Theano/pull/6384
* https://github.com/Theano/Theano/pull/6326
* https://github.com/Theano/Theano/pull/6317
* https://github.com/Theano/Theano/pull/6269
* https://github.com/Theano/Theano/pull/5688
* https://github.com/Theano/Theano/pull/6376
* https://github.com/Theano/Theano/pull/6377
* https://github.com/Theano/Theano/pull/6355
* https://github.com/Theano/Theano/pull/6373
* https://github.com/Theano/Theano/pull/6374
* https://github.com/Theano/Theano/pull/6371
* https://github.com/Theano/Theano/pull/6362
* https://github.com/Theano/Theano/pull/6368
* https://github.com/Theano/Theano/pull/6339
* https://github.com/Theano/Theano/pull/6366
* https://github.com/Theano/Theano/pull/6364
* https://github.com/Theano/Theano/pull/6349
* https://github.com/Theano/Theano/pull/6361
* https://github.com/Theano/Theano/pull/6356
* https://github.com/Theano/Theano/pull/6359
* https://github.com/Theano/Theano/pull/6286
* https://github.com/Theano/Theano/pull/6357
* https://github.com/Theano/Theano/pull/6354
* https://github.com/Theano/Theano/pull/6336
* https://github.com/Theano/Theano/pull/6351
* https://github.com/Theano/Theano/pull/6301
* https://github.com/Theano/Theano/pull/6333
* https://github.com/Theano/Theano/pull/6341
* https://github.com/Theano/Theano/pull/6332
* https://github.com/Theano/Theano/pull/6319
* https://github.com/Theano/Theano/pull/6302
* https://github.com/Theano/Theano/pull/6300
* https://github.com/Theano/Theano/pull/6323
* https://github.com/Theano/Theano/pull/6324
* https://github.com/Theano/Theano/pull/5817
* https://github.com/Theano/Theano/pull/6312
* https://github.com/Theano/Theano/pull/6061
* https://github.com/Theano/Theano/pull/6305
* https://github.com/Theano/Theano/pull/6059
* https://github.com/Theano/Theano/pull/6315
* https://github.com/Theano/Theano/pull/6295
* https://github.com/Theano/Theano/pull/6252
* https://github.com/Theano/Theano/pull/6267
* https://github.com/Theano/Theano/pull/6207
* https://github.com/Theano/Theano/pull/6309
* https://github.com/Theano/Theano/pull/6307
* https://github.com/Theano/Theano/pull/6000
* https://github.com/Theano/Theano/pull/6293
* https://github.com/Theano/Theano/pull/6292
* https://github.com/Theano/Theano/pull/6299
* https://github.com/Theano/Theano/pull/6143
* https://github.com/Theano/Theano/pull/6296
* https://github.com/Theano/Theano/pull/6280
* https://github.com/Theano/Theano/pull/6289
* https://github.com/Theano/Theano/pull/6285
* https://github.com/Theano/Theano/pull/6275
* https://github.com/Theano/Theano/pull/6218
* https://github.com/Theano/Theano/pull/6271
...
@@ -74,7 +74,7 @@ copyright = '2008--2017, LISA lab'
# The short X.Y version.
version = '0.10'
# The full version, including alpha/beta/rc tags.
-release = '0.10.0beta1'
+release = '0.10.0beta2'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
...
@@ -21,6 +21,8 @@ learning/machine learning <https://mila.umontreal.ca/en/cours/>`_ classes).
News
====
+* 2017/09/07: Release of Theano 0.10.0beta2, new features and many bugfixes, release candidate coming soon.
* 2017/08/09: Release of Theano 0.10.0beta1, many improvements and bugfixes, release candidate coming soon.
* Removed support for the old (device=gpu) backend. Use the new
...
@@ -13,7 +13,7 @@ With ``conda``
^^^^^^^^^^^^^^
If you use conda, you can directly install both theano and pygpu. Libgpuarray
-will be automatically installed as a dependency.
+will be automatically installed as a dependency of pygpu.
.. code-block:: bash
@@ -21,7 +21,7 @@ will be automatically installed as a dependency of pygpu.
.. warning::
-    Last conda packages for theano (0.9) and pygpu (0.6*) currently don't support
+    Latest conda packages for theano (``>= 0.9``) and pygpu (``>= 0.6*``) currently don't support
    Python 3.4 branch.
With ``pip``
@@ -91,6 +91,12 @@ libgpuarray
Install the latest, development version of libgpuarray following the
`Step-by-step instructions <http://deeplearning.net/software/libgpuarray/installation.html#step-by-step-install>`__.
+.. note::
+
+    Currently, you need ``libgpuarray`` version ``0.7.1``, which is not in the conda default channel.
+    But you can install it from our own channel, ``mila-udem`` (which supports only Python 2.7 and 3.5)::
+
+        conda install -c mila-udem pygpu
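A quick sanity check after following the note above can confirm that the channel's build was picked up. This is a sketch assuming a conda environment on Python 2.7 or 3.5 (the only versions the ``mila-udem`` channel supports); the exact version string printed depends on the build installed:

```shell
# Install pygpu from the mila-udem channel; libgpuarray 0.7.1 comes along as a dependency.
conda install -c mila-udem pygpu

# Verify that the package imports and report which version was installed.
python -c "import pygpu; print(pygpu.__version__)"
```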
Developer Installation
----------------------
@@ -119,5 +125,4 @@ If you encountered any trouble, head to the :ref:`troubleshooting` page.
libgpuarray
^^^^^^^^^^^
-Install the latest, development version of libgpuarray following the
-`Step-by-step instructions <http://deeplearning.net/software/libgpuarray/installation.html#step-by-step-install>`__.
+See instructions for bleeding-edge installation about ``libgpuarray``.
@@ -165,7 +165,7 @@ Note: There is no short term plan to support multi-node computation.
Theano Vision State
===================
-Here is the state of that vision as of August 9th, 2017 (after Theano 0.10.0beta1):
+Here is the state of that vision as of September 7th, 2017 (after Theano 0.10.0beta2):
* We support tensors using the `numpy.ndarray` object and we support many operations on them.
* We support sparse types by using the `scipy.{csc,csr,bsr}_matrix` object and support some operations on them.
...
@@ -53,7 +53,7 @@ PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"]
MAJOR = 0
MINOR = 10
MICRO = 0
-SUFFIX = "beta1"  # Should be blank except for rc's, betas, etc.
+SUFFIX = "beta2"  # Should be blank except for rc's, betas, etc.
ISRELEASED = False
VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
...
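The version-string assembly in the ``setup.py`` hunk above can be exercised in isolation. This is a minimal sketch reproducing the values shown in the diff:

```python
# Release metadata mirroring the setup.py hunk above.
MAJOR = 0
MINOR = 10
MICRO = 0
SUFFIX = "beta2"  # Should be blank except for rc's, betas, etc.
ISRELEASED = False

# Same %-formatting as setup.py: major.minor.micro followed by the suffix.
VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
print(VERSION)  # 0.10.0beta2
```

Blanking ``SUFFIX`` for a final release would yield ``0.10.0``, which is why the comment asks for it to be empty outside of rc's and betas.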