Commit 6815194c authored by notoraptor

Prepare beta release 0.10.0beta1.

Parent 63dc46bc
......@@ -15,6 +15,7 @@ abalkin <abalkin@enlnt.com> abalkin <abalkin>
abalkin <abalkin@enlnt.com> abalkin <serpent.speak@gmail.com>
abalkin <abalkin@enlnt.com> Alexander Belopolsky <abalkin@enlnt.com>
abalkin <abalkin@enlnt.com> Alexander Belopolsky <a@enlnt.com>
Aleksandar Botev <botevmg@gmail.com> botev <botevmg@gmail.com>
Alex Lamb <alex6200@gmail.com> AlexLamb <alex6200@gmail.com>
Alex Lamb <alex6200@gmail.com> DeathMonster666 <alex6200@gmail.com>
Alexandre de Brebisson <adbrebs@gmail.com> AdeB <adbrebs@gmail.com>
......@@ -25,6 +26,7 @@ Andre Holzner <Andre.Georg.Holzner@cern.ch> andreh <andreh@localhost>
Andre Holzner <Andre.Georg.Holzner@cern.ch> Andre Holzner <holzner@andres-mbp-2.fritz.box>
Arjun Jain <arjunjain@gmail.com> Arjun Jain <stencilman@users.noreply.github.com>
Arnaud Bergeron <abergeron@gmail.com> <abergeron@gmail.com>
Arnaud Bergeron <abergeron@gmail.com> <bergearn@iro.umontreal.ca>
<abergeron@gmail.com> <anakha@kami.(none)>
Balázs Hidasi <hidasi.balazs@gravityrd.com> Balázs <hidasib@gmail.com>
Bart van Merrienboer <bart.vanmerrienboer@gmail.com> Bart van Merriënboer <bart.vanmerrienboer@gmail.com>
......@@ -62,9 +64,14 @@ Dzmitry Bahdanau <dimabgv@gmail.com> rizar <dimabv@tut.by>
Eric Hunsberger <hunse@ctn> hunse <hunse@ctn>
Ethan Buchman <ebuchman@uoguelph.ca> ebuchman <ebuchman@uoguelph.ca>
Evelyn Mitchell <efm-github@linsomniac.com> evelynmitchell <efm-github@linsomniac.com>
Faruk Ahmed <faruk.ahmed.91@gmail.com> Faruk Ahmed <ahmedfar@bart7.iro.umontreal.ca>
Faruk Ahmed <faruk.ahmed.91@gmail.com> Faruk Ahmed <ahmedfar@elisa1.iro.umontreal.ca>
Faruk Ahmed <faruk.ahmed.91@gmail.com> Faruk Ahmed <ahmedfar@eos3.iro.umontreal.ca>
Faruk Ahmed <faruk.ahmed.91@gmail.com> Faruk Ahmed <ahmedfar@kepler2.iro.umontreal.ca>
Faruk Ahmed <faruk.ahmed.91@gmail.com> Faruk Ahmed <ahmedfar@kepler3.iro.umontreal.ca>
Faruk Ahmed <faruk.ahmed.91@gmail.com> Faruk Ahmed <ahmedfar@elisa1.iro.umontreal.ca>
Faruk Ahmed <faruk.ahmed.91@gmail.com> Faruk Ahmed <ahmedfar@ceylon.iro.umontreal.ca>
Faruk Ahmed <faruk.ahmed.91@gmail.com> Faruk Ahmed <ahmedfar@leto21.iro.umontreal.ca>
Faruk Ahmed <faruk.ahmed.91@gmail.com> Faruk Ahmed <ahmedfar@leto23.iro.umontreal.ca>
Fei Wang <fay96816@gmail.com> fay <fay96816@gmail.com>
Francesco Visin <fvisin@gmail.com> Francesco <fvisin@users.noreply.github.com>
Francesco Visin <fvisin@gmail.com> fvisin <fvisin@gmail.com>
......@@ -81,6 +88,7 @@ Frederic Bastien <nouiz@nouiz.org> Frederic <nouiz@nouiz.org>
Frederic Bastien <nouiz@nouiz.org> Frédéric Bastien <frederic.bastien@gmail.com>
Frederic Bastien <nouiz@nouiz.org> theano-bot <frederic.bastien.1@umontreal.ca>
Gennadiy Tupitsin <genichyar@genichyar.com> genichyar <genichyar@genichyar.com>
Ghislain Antony Vaillant <ghisvail@gmail.com> Ghislain Antony Vaillant <ghisvail@users.noreply.github.com>
Gokula Krishnan <gokul.uf@gmail.com> Gokul <gokul.uf@gmail.com>
Grégoire Mesnil <gregoire.mesnil@gmail.com> Grégoire <gregoire.mesnil@laposte.net>
Grégoire Mesnil <gregoire.mesnil@gmail.com> Grégoire <gregoire.mesnil@gmail.com>
......@@ -90,6 +98,7 @@ Guillaume Desjardins <guillaume.desjardins@gmail.com> desjagui@atchoum.iro.umont
Guillaume Desjardins <guillaume.desjardins@gmail.com> desjagui@opale.iro.umontreal.ca <desjagui@opale.iro.umontreal.ca>
Guillaume Desjardins <guillaume.desjardins@gmail.com> gdesjardins <devnull@localhost>
Guillaume Desjardins <guillaume.desjardins@gmail.com> tutorial/debug_faq.txt <devnull@localhost>
gw0 [http://gw.tnode.com/] <gw.2015@tnode.com> gw0 [http://gw.tnode.com/] <gw.2016@tnode.com>
Hani Almousli <hani.mousli@gmail.com> Hani <hani.mousli@gmail.com>
Hani Almousli <hani.mousli@gmail.com> HaniAlmousli <hani.mousli@gmail.com>
Huy Nguyen <huy@huyng.com> huyng <huy@huyng.com>
......@@ -118,6 +127,8 @@ Jeremiah Lowin <jlowin@lowindata.com> Jeremiah Lowin <jlowin@iHal.local>
Jeremie Tanguay <tanguaj@iro.umontreal.ca> Tanjay94 <you@yourdomain.example.com>
Jeremie Tanguay <tanguaj@iro.umontreal.ca> Jeremie Tanguay <tanguaj@bart4.iro.umontreal.ca>
Jesse Livezey <jesse.livezey@berkeley.edu> JesseLivezey <jesse.livezey@gmail.com>
João Victor Tozatti Risso <joaovictortr@gmail.com> João Victor Risso <joaovictor.risso@gmail.com>
João Victor Tozatti Risso <joaovictortr@gmail.com> João Victor Tozatti Risso <joaovictor.risso@gmail.com>
John Salvatier <jsalvatier@gmail.com> jsalvatier <jsalvatier@gmail.com>
John Salvatier <jsalvatier@gmail.com> john salvatier <jsalvatier@gmail.com>
John Schulman <john.d.schulman@gmail.com> joschu <john.d.schulman@gmail.com>
......@@ -156,6 +167,7 @@ Markus Roth <markus.roth@herr-biber.de> Markus Roth <mail@rothmark.us>
Mathieu Germain <mathieu.germain@gmail.com> Mathieu Germain <mathieu.germain2@usherbrooke.ca>
Mehdi Mirza <memirzamo@gmail.com> Mehdi Mirza <memimo@users.noreply.github.com>
Mehdi Mirza <memirzamo@gmail.com> memimo <memirzamo@gmail.com>
Mohammed Affan <affanv14@gmail.com> affanv14 <affanv14@gmail.com>
Moslem Kazemi <moslemk@gmail.com> Moslem Kazemi <moslemk@users.noreply.github.com>
Moslem Kazemi <moslemk@gmail.com> Mo <moslemk@gmail.com>
Nan Rosemary Ke <rosemary.ke@west.cmu.edu> nke001 <rosemary.nan.ke@gmail.com>
......@@ -195,6 +207,7 @@ Razvan Pascanu <r.pascanu@gmail.com> Razvan Pascanu <rman@rman-Dell-System-XPS-L
Razvan Pascanu <r.pascanu@gmail.com> Razvan Pascanu <rman@rman-pad.(none)>
Razvan Pascanu <r.pascanu@gmail.com> pascanur@simplet.iro.umontreal.ca <pascanur@simplet.iro.umontreal.ca>
Razvan Pascanu <r.pascanu@gmail.com> rman@rpad <rman@rpad>
Reyhane Askari <r.askari.hemmat@gmail.com> Reyhane Askari <ReyhaneAskari@users.noreply.github.com>
Roy Xue <xljroy@gmail.com> Lijun Xue <xljroy@gmail.com>
Ruslana Makovetsky <ruslana@cim.mcgill.ca> ruslanagit <ruslana@cim.mcgill.ca>
Sander Dieleman <sanderdieleman@gmail.com> benanne <sanderdieleman@gmail.com>
......@@ -206,9 +219,13 @@ Simon Lefrancois <simon.lefrancois@umontreal.ca> slefrancois <simon.lefrancois@u
Simon Lefrancois <simon.lefrancois@umontreal.ca> Simon Lefrancois <lefransi@iro.umontreal.ca>
Sina Honari <honaris@iro.umontreal.ca> SinaHonari <sina2222@gmail.com>
Sina Honari <honaris@iro.umontreal.ca> Sina Honari <honaris@eos21.iro.umontreal.ca>
Sina Honari <honaris@iro.umontreal.ca> Sina Honari <sina.honari@gmail.com>
Søren Kaae Sønderby <skaaesonderby@gmail.com> skaae <skaaesonderby@gmail.com>
Steven Bocco <stevenbocco@gmail.com> notoraptor <stevenbocco@gmail.com>
Steven Bocco <stevenbocco@gmail.com> notoraptor <notoraptor@users.noreply.github.com>
Steven Bocco <stevenbocco@gmail.com> Seton Steven Bocco <boccoset@leto01.iro.umontreal.ca>
Steven Bocco <stevenbocco@gmail.com> Seton Steven Bocco <boccoset@leto15.iro.umontreal.ca>
Steven Bocco <stevenbocco@gmail.com> Seton Steven Bocco <boccoset@leto51.iro.umontreal.ca>
Steven Pigeon <pigeon@iro.umontreal.ca> steven-pigeon <pigeon@iro.umontreal.ca>
Thomas George <tfjgeorge@gmail.com> Thomas George <georgeth@helios1.helios>
Valentin Bisson <valentin.bisson@umontreal.ca> onze <onzeonline@gmail.com>
......
......@@ -5,6 +5,254 @@
Old Release Notes
=================
Theano 0.9.0 (20th of March, 2017)
==================================
This is a final release of Theano, version ``0.9.0``, with a lot of
new features, interface changes, improvements and bug fixes.
We recommend that everybody update to this version.
Highlights (since 0.8.0):
- Better Python 3.5 support
- Better numpy 1.12 support
- Conda packages for Mac, Linux and Windows
- Support newer Mac and Windows versions
- More Windows integration:
- Theano scripts (``theano-cache`` and ``theano-nose``) now work on Windows
- Better support for Windows line endings in C code
- Support for spaces in paths on Windows
- Scan improvements:
- More scan optimizations, with faster compilation and gradient computation
- Support for checkpoint in scan (trade off between speed and memory usage, useful for long sequences)
- Fixed broadcast checking in scan
- Graphs improvements:
- More numerical stability by default for some graphs
- Better handling of corner cases for theano functions and graph optimizations
- More graph optimizations with faster compilation and execution
- Smaller and more readable graphs
- New GPU back-end:
- Removed warp-synchronous programming to get good results with newer CUDA drivers
- More pooling support on GPU when cuDNN isn't available
- Full support of ignore_border option for pooling
- Inplace storage for shared variables
- float16 storage
- Using PCI bus ID of graphic cards for a better mapping between theano device number and nvidia-smi number
- Fixed offset error in ``GpuIncSubtensor``
- Less C code compilation
- Added support for bool dtype
- Updated and more complete documentation
- Bug fixes related to merge optimizer and shape inference
- Lot of other bug fixes, crashes fixes and warning improvements
A total of 123 people contributed to this release since 0.8.0, see list below.
Interface changes:
- Merged ``CumsumOp/CumprodOp`` into ``CumOp``
- In MRG module:
- Replaced method ``multinomial_wo_replacement()`` with new method ``choice()``
- Random generator now tries to infer the broadcast pattern of its output
- New pooling interface
- Pooling parameters can change at run time
- Moved ``softsign`` out of sandbox to ``theano.tensor.nnet.softsign``
- Using floatX dtype when converting empty list/tuple
- ``Roll`` now makes the shift modulo the size of the axis we roll on
- ``round()`` now defaults to the same mode as NumPy: half_to_even
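The half_to_even default matches NumPy's "banker's rounding": exact .5 ties go to the nearest even integer instead of always rounding up. A quick NumPy sketch of the rounding mode itself (not Theano's API):

```python
import numpy as np

# half_to_even ("banker's") rounding: .5 ties go to the nearest
# even integer rather than always away from zero.
print(np.round(0.5))  # 0.0
print(np.round(1.5))  # 2.0
print(np.round(2.5))  # 2.0 (not 3.0)
```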
Convolution updates:
- Support of full and half modes for 2D and 3D convolutions including in ``conv3d2d``
- Allowed pooling of empty batch
- Implemented ``conv2d_transpose`` convenience function
- Multi-core convolution and pooling on CPU
- New abstract 3d convolution interface similar to the 2d convolution interface
- Dilated convolution
GPU:
- cuDNN: support version 5.1 and wrap batch normalization (2d and 3d) and RNN functions
- Multi-GPU synchronous updates (via platoon, using NCCL)
- Gemv (matrix-vector product) speed-up for special shapes
- cublas gemv workaround when we reduce on an axis with a dimension of size 0
- Warn user that some cuDNN algorithms may produce unexpected results in certain environments
for convolution backward filter operations
- ``GPUMultinomialFromUniform`` op now supports multiple dtypes
- Support for ``MaxAndArgMax`` for some axis combination
- Support for solve (using cusolver), erfinv and erfcinv
- Implemented ``GpuAdvancedSubtensor``
New features:
- ``OpFromGraph`` now allows gradient overriding for every input
- Added Abstract Ops for batch normalization that use cuDNN when available and pure Theano CPU/GPU alternatives otherwise
- Added gradient of solve, tensorinv (CPU), tensorsolve (CPU), searchsorted (CPU), DownsampleFactorMaxGradGrad (CPU)
- Added Multinomial Without Replacement
- Allowed partial evaluation of compiled function
- More Rop support
- Indexing supports ellipsis: ``a[..., 3]``, ``a[1, ..., 3]``
- Added ``theano.tensor.{tensor5,dtensor5, ...}``
- ``compiledir_format`` now supports device
- Added New Theano flag ``conv.assert_shape`` to check user-provided shapes at runtime (for debugging)
- Added new Theano flag ``cmodule.age_thresh_use``
- Added new Theano flag ``cuda.enabled``
- Added new Theano flag ``nvcc.cudafe`` to enable faster compilation and import with old CUDA back-end
- Added new Theano flag ``print_global_stats`` to print some global statistics (time spent) at the end
- Added new Theano flag ``profiling.ignore_first_call``, useful to profile the new gpu back-end
- Removed ProfileMode (use Theano flag ``profile=True`` instead)
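The new ellipsis indexing follows NumPy semantics: ``...`` expands to as many full slices as needed. Illustrated here with NumPy arrays (the symbolic Theano forms ``a[..., 3]`` and ``a[1, ..., 3]`` behave the same way):

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# `...` stands in for all the unspecified axes, so on a 3-d
# array a[..., 3] is shorthand for a[:, :, 3].
print(a[..., 3].shape)     # (2, 3)
print(a[1, ..., 3].shape)  # (3,)
```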
Others:
- Split op now has C code for CPU and GPU
- ``theano-cache list`` now includes compilation times
- Speed up argmax only on GPU (without also needing the max)
- More stack trace in error messages
- Speed up cholesky grad
- ``log(sum(exp(...)))`` is now optimized for numerical stability
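The ``log(sum(exp(...)))`` rewrite relies on the standard max-shift identity. A minimal pure-Python sketch of that identity (Theano applies it as a graph optimization, not as a user-facing function):

```python
import math

def logsumexp(xs):
    # Naive log(sum(exp(x))) overflows once any x is large;
    # subtracting the max keeps every exponent <= 0.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

# The naive form overflows here; the stable form returns 1000 + log(2).
print(logsumexp([1000.0, 1000.0]))
```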
Other more detailed changes:
- Added Jenkins (gpu tests run on pull requests in addition to daily buildbot)
- Removed old benchmark directory and other old files not used anymore
- Use of 64-bit indexing in sparse ops to allow matrices with more than 2\ :sup:`31`\ -1 elements
- Allowed more than one output to be a destructive inplace
- More support of negative axis
- Added the keepdims parameter to the norm function
- Made scan gradient more deterministic
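The ``keepdims`` parameter added to the norm function mirrors NumPy's: the reduced axis survives with size 1, so the result broadcasts against the input. A NumPy sketch of the behavior (illustration of the keyword's semantics, not Theano's own ``norm`` signature):

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)

# With keepdims=True the reduced axis is kept as size 1, so the
# per-row norms broadcast cleanly for row normalization.
norms = np.linalg.norm(x, axis=1, keepdims=True)
print(norms.shape)        # (2, 1)
print((x / norms).shape)  # (2, 3)
```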
Committers since 0.8.0:
- Frederic Bastien
- Arnaud Bergeron
- Pascal Lamblin
- Steven Bocco
- Ramana Subramanyam
- Simon Lefrancois
- Gijs van Tulder
- Benjamin Scellier
- khaotik
- Chiheb Trabelsi
- Chinnadhurai Sankar
- Cesar Laurent
- Reyhane Askari
- Mohammad Pezeshki
- Alexander Matyasko
- Alexandre de Brebisson
- Mathieu Germain
- Nan Rosemary Ke
- Pierre Luc Carrier
- Olivier Mastropietro
- Thomas George
- Saizheng Zhang
- Iulian Vlad Serban
- Francesco Visin
- Caglar
- Faruk Ahmed
- Harm de Vries
- Samira Shabanian
- Vincent Dumoulin
- Nicolas Ballas
- Jakub Sygnowski
- Jan Schlüter
- Samira Ebrahimi Kahou
- Mikhail Korobov
- Fei Wang
- Kv Manohar
- Jesse Livezey
- Kelvin Xu
- Matt Graham
- Ruslana Makovetsky
- Sina Honari
- Bryn Keller
- Ciyong Chen
- Vitaliy Kurlin
- Zhouhan LIN
- Gokula Krishnan
- Kumar Krishna Agrawal
- Ozan Çağlayan
- Vincent Michalski
- affanv14
- Amjad Almahairi
- Ray Donnelly
- Tim Cooijmans
- happygds
- mockingjamie
- Christos Tsirigotis
- Florian Bordes
- Ilya Kulikov
- RadhikaG
- Taesup (TS) Kim
- Ying Zhang
- Anton Chechetka
- Karthik Karanth
- Kirill Bobyrev
- Rebecca N. Palmer
- Yang Zhang
- Yaroslav Ganin
- Jonas Degrave
- Liwei Cai
- Lucas Beyer
- Michael Harradon
- Morgan Stuart
- Tim Gasper
- Xavier Bouthillier
- p
- texot
- Andrés Gottlieb
- Ben Poole
- Bhavishya Pohani
- Carl Thomé
- David Bau
- Dimitar Dimitrov
- Evelyn Mitchell
- Fei Zhan
- Fuchai
- Fábio Perez
- Gennadiy Tupitsin
- Gilles Louppe
- Greg Ciccarelli
- He
- Huan Zhang
- Kaixhin
- Kevin Keraudren
- Maltimore
- Marc-Alexandre Cote
- Marco
- Marius F. Killinger
- Martin Drawitsch
- Maxim Kochurov
- Micah Bojrab
- Neil
- Nizar Assaf
- Rithesh Kumar
- Rizky Luthfianto
- Robin Millette
- Roman Ring
- Sander Dieleman
- Sebastin Santy
- Shawn Tan
- Wazeer Zulfikar
- Wojciech Głogowski
- Yann N. Dauphin
- gw0 [http://gw.tnode.com/]
- hexahedria
- hsintone
- jakirkham
- joncrall
- root
- superantichrist
- tillahoffmann
- valtron
- wazeerzulfikar
- you-n-g
Theano 0.9.0rc4 (13th of March, 2017)
=====================================
......
......@@ -3,249 +3,183 @@ Release Notes
=============
Theano 0.9.0 (20th of March, 2017)
==================================
Theano 0.10.0beta1 (9th of August, 2017)
========================================
This is a final release of Theano, version ``0.9.0``, with a lot of
new features, interface changes, improvements and bug fixes.
This release contains many bug fixes, improvements and new features to prepare the upcoming release candidate.
We recommend that everybody update to this version.
We recommend that every developer updates to this version.
Highlights (since 0.8.0):
- Better Python 3.5 support
- Better numpy 1.12 support
- Conda packages for Mac, Linux and Windows
- Support newer Mac and Windows versions
- More Windows integration:
Highlights:
- Moved Python 3.* minimum supported version from 3.3 to 3.4
- Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
- Made Theano more FIPS-compliant by using ``sha256`` instead of ``md5`` where needed
- Removed old GPU backend ``theano.sandbox.cuda``. New backend ``theano.gpuarray`` is now the official GPU backend
- Support more debuggers for ``PdbBreakpoint``
- Theano scripts (``theano-cache`` and ``theano-nose``) now work on Windows
- Better support for Windows line endings in C code
- Support for spaces in paths on Windows
- Scan improvements
- Scan improvements:
- Speed up Theano scan compilation and gradient computation
- Added meaningful message when missing inputs to scan
- More scan optimizations, with faster compilation and gradient computation
- Support for checkpoint in scan (trade off between speed and memory usage, useful for long sequences)
- Fixed broadcast checking in scan
- Speed up graph toposort algorithm
- Faster compilation step through extensive use of a new interface for op params
- Faster optimization step
- Documentation updated and more complete
- Many bug fixes, crash fixes and warning improvements
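The md5-to-sha256 switch in the highlights matters because FIPS-validated OpenSSL builds reject MD5 outright. A hedged ``hashlib`` sketch of hashing a source blob into a cache key (hypothetical key derivation, not Theano's actual code path):

```python
import hashlib

source = b"int main() { return 0; }"

# sha256 stays available on FIPS-enabled systems, where md5 may be
# disabled; its digest is 64 hex characters.
key = hashlib.sha256(source).hexdigest()
print(len(key))  # 64
```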
- Graphs improvements:
- More numerical stability by default for some graphs
- Better handling of corner cases for theano functions and graph optimizations
- More graph optimizations with faster compilation and execution
- Smaller and more readable graphs
- New GPU back-end:
- Removed warp-synchronous programming to get good results with newer CUDA drivers
- More pooling support on GPU when cuDNN isn't available
- Full support of ignore_border option for pooling
- Inplace storage for shared variables
- float16 storage
- Using PCI bus ID of graphic cards for a better mapping between theano device number and nvidia-smi number
- Fixed offset error in ``GpuIncSubtensor``
- Less C code compilation
- Added support for bool dtype
- Updated and more complete documentation
- Bug fixes related to merge optimizer and shape inference
- Lot of other bug fixes, crashes fixes and warning improvements
A total of 123 people contributed to this release since 0.8.0, see list below.
A total of 65 people contributed to this release since 0.9.0, see list below.
Interface changes:
- Merged ``CumsumOp/CumprodOp`` into ``CumOp``
- In MRG module:
- Replaced method ``multinomial_wo_replacement()`` with new method ``choice()``
- Random generator now tries to infer the broadcast pattern of its output
- New pooling interface
- Pooling parameters can change at run time
- Moved ``softsign`` out of sandbox to ``theano.tensor.nnet.softsign``
- Using floatX dtype when converting empty list/tuple
- ``Roll`` now makes the shift modulo the size of the axis we roll on
- ``round()`` now defaults to the same mode as NumPy: half_to_even
- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
- Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
and ``AllocDiag`` (set a vector as a diagonal of an empty array)
- Replaced ``MultinomialWOReplacementFromUniform`` with ``ChoiceFromUniform``
- Removed or deprecated Theano flags:
- ``cublas.lib``
- ``cuda.enabled``
- ``enable_initial_driver_test``
- ``gpuarray.sync``
- ``home``
- ``lib.cnmem``
- ``nvcc.*`` flags
- ``pycuda.init``
Convolution updates:
- Support of full and half modes for 2D and 3D convolutions including in ``conv3d2d``
- Allowed pooling of empty batch
- Implemented ``conv2d_transpose`` convenience function
- Multi-core convolution and pooling on CPU
- New abstract 3d convolution interface similar to the 2d convolution interface
- Dilated convolution
- Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN import
- Implemented separable convolutions
- Implemented grouped convolutions
GPU:
- cuDNN: support version 5.1 and wrap batch normalization (2d and 3d) and RNN functions
- Multi-GPU synchronous updates (via platoon, using NCCL)
- Gemv (matrix-vector product) speed-up for special shapes
- cublas gemv workaround when we reduce on an axis with a dimension of size 0
- Warn user that some cuDNN algorithms may produce unexpected results in certain environments
for convolution backward filter operations
- ``GPUMultinomialFromUniform`` op now supports multiple dtypes
- Support for ``MaxAndArgMax`` for some axis combination
- Support for solve (using cusolver), erfinv and erfcinv
- Implemented ``GpuAdvancedSubtensor``
- Prevent GPU initialization when not required
- Added disk caching option for kernels
- Added method ``my_theano_function.sync_shared()`` to help synchronize GPU Theano functions
- Added useful stats for GPU in profile mode
- Added Cholesky op based on ``cusolver`` backend
- Added GPU ops based on `magma library <http://icl.cs.utk.edu/magma/software/>`_:
SVD, matrix inverse, QR, cholesky and eigh
- Added ``GpuAdvancedIncSubtensor``
- Added ``GpuCublasTriangularSolve``
- Added atomic addition and exchange for ``long long`` values in ``GpuAdvancedIncSubtensor1_dev20``
- Fixed C code for log gamma function, now supporting all types except complex types.
- Support GPU SoftMax in both OpenCL and CUDA
- Support offset parameter ``k`` for ``GpuEye``
- ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU
- Better cuDNN support
- Official support for versions >= ``v5.1``
- Better support and loading on Windows and Mac
- Support cuDNN v6 dilated convolutions
- Support cuDNN v6 reductions
- Added new theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
to help configure Theano when CUDA and cuDNN cannot be found automatically.
- Updated ``float16`` support
- Added documentation for GPU float16 ops
- Support ``float16`` for ``GpuGemmBatch``
- Started to avoid lifting ``float16`` computations that are not supported on GPU
New features:
- ``OpFromGraph`` now allows gradient overriding for every input
- Added Abstract Ops for batch normalization that use cuDNN when available and pure Theano CPU/GPU alternatives otherwise
- Added gradient of solve, tensorinv (CPU), tensorsolve (CPU), searchsorted (CPU), DownsampleFactorMaxGradGrad (CPU)
- Added Multinomial Without Replacement
- Allowed partial evaluation of compiled function
- More Rop support
- Indexing supports ellipsis: ``a[..., 3]``, ``a[1, ..., 3]``
- Added ``theano.tensor.{tensor5,dtensor5, ...}``
- ``compiledir_format`` now supports device
- Added New Theano flag ``conv.assert_shape`` to check user-provided shapes at runtime (for debugging)
- Added new Theano flag ``cmodule.age_thresh_use``
- Added new Theano flag ``cuda.enabled``
- Added new Theano flag ``nvcc.cudafe`` to enable faster compilation and import with old CUDA back-end
- Added new Theano flag ``print_global_stats`` to print some global statistics (time spent) at the end
- Added new Theano flag ``profiling.ignore_first_call``, useful to profile the new gpu back-end
- Removed ProfileMode (use Theano flag ``profile=True`` instead)
- Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
- Added scalar and elemwise ops for modified Bessel function of order 0 and 1 from ``scipy.special``
- Added Scaled Exponential Linear Unit (SELU) activation
- Added sigmoid_binary_crossentropy function
- Added tri-gamma function
- Added modes ``half`` and ``full`` for ``Images2Neibs`` ops
- Implemented gradient for ``AbstractBatchNormTrainGrad``
- Implemented gradient for matrix pseudoinverse op
- Added new prop ``replace`` for ``ChoiceFromUniform`` op
- Added new prop ``on_error`` for CPU ``Cholesky`` op
- Added new theano flag ``deterministic`` to help control how Theano optimizes certain ops that have deterministic versions.
Currently used for subtensor Ops only.
- Added new theano flag ``cycle_detection`` to speed up the optimization step by reducing time spent on inplace insertions
- Added new theano flag ``check_stack_trace`` to help check the stack trace during optimization process
- Added new theano flag ``cmodule.debug`` to allow a debug mode for theano C code. Currently used for cuDNN convolutions only.
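For reference, the SELU activation listed above (from Klambauer et al., 2017) is ELU scaled by fixed constants chosen so that activations self-normalize. A pure-Python sketch; the constants are the commonly cited rounded values, not copied from Theano's source:

```python
import math

# Self-normalizing constants from Klambauer et al. (2017).
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # scale * x on the positive side, a scaled ELU on the negative side.
    return SCALE * x if x > 0 else SCALE * ALPHA * (math.exp(x) - 1.0)

print(selu(0.0))            # 0.0
print(round(selu(1.0), 4))  # 1.0507
```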
Others:
- Split op now has C code for CPU and GPU
- ``theano-cache list`` now includes compilation times
- Speed up argmax only on GPU (without also needing the max)
- More stack trace in error messages
- Speed up cholesky grad
- ``log(sum(exp(...)))`` is now optimized for numerical stability
- Added deprecation warning for the softmax and logsoftmax vector case
- Added a warning to announce that a C++ compiler will become mandatory in the next Theano release, ``0.11``
Other more detailed changes:
- Added Jenkins (gpu tests run on pull requests in addition to daily buildbot)
- Removed old benchmark directory and other old files not used anymore
- Use of 64-bit indexing in sparse ops to allow matrices with more than 2\ :sup:`31`\ -1 elements
- Allowed more than one output to be a destructive inplace
- More support of negative axis
- Added the keepdims parameter to the norm function
- Made scan gradient more deterministic
Committers since 0.8.0:
- Removed useless warning when profile is manually disabled
- Added tests for abstract conv
- Added options for ``disconnected_outputs`` to Rop
- Insertion of an OutputGuard is now considered an error
- Removed ``theano/compat/six.py``
- Removed ``COp.get_op_params()``
- Support for lists of strings in ``Op.c_support_code()``, to help avoid duplicating support code
- Macro names provided for array properties are now standardized in both CPU and GPU C codes
- Started to move C code files into separate folder ``c_code`` in every Theano module
- Many improvements to Travis CI tests (with better splitting for faster testing)
- Many improvements to Jenkins CI tests: support for Mac and Windows testing, usage of Docker for better test isolation
Committers since 0.9.0:
- Frederic Bastien
- Arnaud Bergeron
- Pascal Lamblin
- amrithasuresh
- João Victor Tozatti Risso
- Steven Bocco
- Ramana Subramanyam
- Simon Lefrancois
- Gijs van Tulder
- Benjamin Scellier
- khaotik
- Chiheb Trabelsi
- Chinnadhurai Sankar
- Cesar Laurent
- Pascal Lamblin
- Mohammed Affan
- Reyhane Askari
- Mohammad Pezeshki
- Alexander Matyasko
- Alexandre de Brebisson
- Mathieu Germain
- Nan Rosemary Ke
- Pierre Luc Carrier
- Olivier Mastropietro
- Simon Lefrancois
- Shawn Tan
- Thomas George
- Saizheng Zhang
- Iulian Vlad Serban
- Francesco Visin
- Caglar
- Faruk Ahmed
- Harm de Vries
- Samira Shabanian
- Vincent Dumoulin
- Nicolas Ballas
- Jakub Sygnowski
- Jan Schlüter
- Samira Ebrahimi Kahou
- Mikhail Korobov
- Fei Wang
- Kv Manohar
- Jesse Livezey
- Kelvin Xu
- Matt Graham
- Ruslana Makovetsky
- Sina Honari
- Bryn Keller
- Ciyong Chen
- Vitaliy Kurlin
- Zhouhan LIN
- Gokula Krishnan
- Kumar Krishna Agrawal
- Ozan Çağlayan
- Vincent Michalski
- affanv14
- Amjad Almahairi
- Ray Donnelly
- Tim Cooijmans
- happygds
- mockingjamie
- Christos Tsirigotis
- Aleksandar Botev
- jhelie
- xiaoqie
- Tegan Maharaj
- Matt Graham
- Cesar Laurent
- Gabe Schwartz
- Juan Camilo Gamboa Higuera
- AndroidCloud
- Saizheng Zhang
- vipulraheja
- Florian Bordes
- Ilya Kulikov
- RadhikaG
- Taesup (TS) Kim
- Ying Zhang
- Anton Chechetka
- Karthik Karanth
- Kirill Bobyrev
- Rebecca N. Palmer
- Yang Zhang
- Yaroslav Ganin
- Jonas Degrave
- Liwei Cai
- Lucas Beyer
- Michael Harradon
- Morgan Stuart
- Tim Gasper
- Sina Honari
- Vikram
- erakra
- Chiheb Trabelsi
- Shubh Vachher
- Daren Eiri
- Gijs van Tulder
- Laurent Dinh
- Mohamed Ishmael Diwan Belghazi
- mila
- Jeff Donahue
- Ramana Subramanyam
- Bogdan Budescu
- Ghislain Antony Vaillant
- Jan Schlüter
- Xavier Bouthillier
- p
- texot
- Andrés Gottlieb
- Ben Poole
- Bhavishya Pohani
- Carl Thomé
- David Bau
- Dimitar Dimitrov
- Evelyn Mitchell
- Fei Zhan
- Fuchai
- Fábio Perez
- Gennadiy Tupitsin
- Gilles Louppe
- Greg Ciccarelli
- He
- Huan Zhang
- Kaixhin
- Kevin Keraudren
- Maltimore
- Marc-Alexandre Cote
- Marco
- Marius F. Killinger
- Martin Drawitsch
- Maxim Kochurov
- Micah Bojrab
- Neil
- Nizar Assaf
- Rithesh Kumar
- Rizky Luthfianto
- Robin Millette
- Roman Ring
- Sander Dieleman
- Sebastin Santy
- Shawn Tan
- Wazeer Zulfikar
- Wojciech Głogowski
- Yann N. Dauphin
- gw0 [http://gw.tnode.com/]
- hexahedria
- hsintone
- jakirkham
- joncrall
- root
- superantichrist
- tillahoffmann
- valtron
- wazeerzulfikar
- you-n-g
- fo40225
- Aarni Koskela
- Adam Becker
- Adam Geitgey
- Adrian Keet
- Adrian Seyboldt
- Andrei Costinescu
- Anmol Sahoo
- Chong Wu
- Holger Kohr
- Jayanth Koushik
- Jenkins
- Lilian Besson
- Lv Tao
- Michael Manukyan
- Murugesh Marvel
- NALEPA
- Ubuntu
- Zotov Yuriy
- dareneiri
- lrast
- morrme
- yikang
......@@ -11,51 +11,344 @@ git shortlog -sn rel-0.9.0..
# docker?
# docker added to jenkins buildbot
TODO: better Theano conv doc
# NB: The following notes contain info since 0.9.0.
Highlights:
- Speed up graph toposort algorithm
- Speed up Theano scan compilation and gradient computation
- Added meaningful message when missing inputs to scan
- Bug fixes related to Debug mode
- Moved Python 3.* minimum supported version from 3.3 to 3.4
- Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
- Made Theano more FIPS-compliant by using ``sha256`` instead of ``md5`` where needed
- Removed old GPU backend ``theano.sandbox.cuda``. New backend ``theano.gpuarray`` is now the official GPU backend
- Support more debuggers for ``PdbBreakpoint``
- New GPU back-end:
- Scan improvements
- Added useful stats for GPU in profile mode
- Added documentation for GPU float16 ops
- Speed up Theano scan compilation and gradient computation
- Added meaningful message when missing inputs to scan
- Speed up graph toposort algorithm
- Faster compilation step through extensive use of a new interface for op params
- Faster optimization step
- Documentation updated and more complete
- Many bug fixes, crash fixes and warning improvements
Interface changes:
- Changed ``grad()`` method to ``L_op`` in ops that need the outputs to compute gradient
- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
- Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
and ``AllocDiag`` (set a vector as a diagonal of an empty array)
- Replaced ``MultinomialWOReplacementFromUniform`` with ``ChoiceFromUniform``
- Removed or deprecated Theano flags:
- ``cublas.lib``
- ``cuda.enabled``
- ``enable_initial_driver_test``
- ``gpuarray.sync``
- ``home``
- ``lib.cnmem``
- ``nvcc.*`` flags
- ``pycuda.init``
Convolution updates:
- Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN import
- Implemented separable convolutions
- Implemented grouped convolutions
GPU:
- ...
- Prevent GPU initialization when not required
- Added disk caching option for kernels
- Added method ``my_theano_function.sync_shared()`` to help synchronize GPU Theano functions
- Added useful stats for GPU in profile mode
- Added Cholesky op based on ``cusolver`` backend
- Added GPU ops based on `magma library <http://icl.cs.utk.edu/magma/software/>`_:
SVD, matrix inverse, QR, cholesky and eigh
- Added ``GpuAdvancedIncSubtensor``
- Added ``GpuCublasTriangularSolve``
- Added atomic addition and exchange for ``long long`` values in ``GpuAdvancedIncSubtensor1_dev20``
- Fixed C code for log gamma function, now supporting all types except complex types.
- Support GPU SoftMax in both OpenCL and CUDA
- Support offset parameter ``k`` for ``GpuEye``
- ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU
- Better cuDNN support
- Official support for versions >= ``v5.1``
- Better support and loading on Windows and Mac
- Support cuDNN v6 dilated convolutions
- Support cuDNN v6 reductions
- Added new theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
to help configure Theano when CUDA and cuDNN cannot be found automatically.
- Updated ``float16`` support
- Added documentation for GPU float16 ops
- Support ``float16`` for ``GpuGemmBatch``
- Started to avoid lifting ``float16`` computations that are not supported on GPU
New features:
- Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
- Added scalar and elemwise ops for modified Bessel function of order 0 and 1 from ``scipy.special``
- Added Scaled Exponential Linear Unit (SELU) activation
- Added sigmoid_binary_crossentropy function
- Added tri-gamma function
- Added modes ``half`` and ``full`` for ``Images2Neibs`` ops
- Implemented gradient for ``AbstractBatchNormTrainGrad``
- Implemented gradient for matrix pseudoinverse op
- Added new prop ``replace`` for ``ChoiceFromUniform`` op
- Added new prop ``on_error`` for CPU ``Cholesky`` op
- Added new theano flag ``deterministic`` to help control how Theano optimizes certain ops that have deterministic versions.
Currently used for subtensor ops only.
- Added new theano flag ``cycle_detection`` to speed up the optimization step by reducing time spent on in-place insertions
- Added new theano flag ``check_stack_trace`` to help check the stack trace during the optimization process
- Added new theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
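As a rough illustration of two of the new features listed above, here is a minimal NumPy sketch of the SELU activation and a numerically stable sigmoid binary cross-entropy. These are standard reference formulations (the SELU constants come from Klambauer et al., 2017), not Theano's exact implementations:

```python
import numpy as np

# Standard SELU constants from Klambauer et al. (2017).
_SELU_ALPHA = 1.6732632423543772
_SELU_SCALE = 1.0507009873554805

def selu(x):
    # scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise.
    return _SELU_SCALE * np.where(x > 0, x, _SELU_ALPHA * np.expm1(x))

def sigmoid_binary_crossentropy(logits, targets):
    # Numerically stable form of -t*log(sigmoid(x)) - (1-t)*log(1 - sigmoid(x)):
    # max(x, 0) - x*t + log(1 + exp(-|x|))
    return np.maximum(logits, 0) - logits * targets + np.log1p(np.exp(-np.abs(logits)))
```

In Theano itself these are exposed as ops on symbolic tensors; the sketch above only shows the underlying element-wise math.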
Others:
- ...
- Added deprecation warning for the softmax and logsoftmax vector case
- Added a warning to announce that a C++ compiler will become mandatory in the next Theano release (``0.11``)
Other more detailed changes:
- Removed useless warning when profile is manually disabled
- Added tests for abstract conv
- Added ``disconnected_outputs`` option to ``Rop``
- Insertion of an OutputGuard is now considered as an error
- Removed ``theano/compat/six.py``
- Removed ``COp.get_op_params()``
- Support for lists of strings in ``Op.c_support_code()``, to avoid duplicating support code
- Macro names provided for array properties are now standardized in both CPU and GPU C codes
- Started to move C code files into separate folder ``c_code`` in every Theano module
- Many improvements to Travis CI tests (with better splitting for faster testing)
- Many improvements to Jenkins CI tests: support for Mac and Windows testing, use of Docker for better test isolation
ALL THE PRs BELOW HAVE BEEN CHECKED
* https://github.com/Theano/Theano/pull/6218
* https://github.com/Theano/Theano/pull/6271
* https://github.com/Theano/Theano/pull/6253
* https://github.com/Theano/Theano/pull/6273
* https://github.com/Theano/Theano/pull/6262
* https://github.com/Theano/Theano/pull/6214
* https://github.com/Theano/Theano/pull/6264
* https://github.com/Theano/Theano/pull/6256
* https://github.com/Theano/Theano/pull/6254
* https://github.com/Theano/Theano/pull/6220
* https://github.com/Theano/Theano/pull/5949
* https://github.com/Theano/Theano/pull/6243
* https://github.com/Theano/Theano/pull/6250
* https://github.com/Theano/Theano/pull/6225
* https://github.com/Theano/Theano/pull/6242
* https://github.com/Theano/Theano/pull/6213
* https://github.com/Theano/Theano/pull/6199
* https://github.com/Theano/Theano/pull/6209
* https://github.com/Theano/Theano/pull/6216
* https://github.com/Theano/Theano/pull/6215
* https://github.com/Theano/Theano/pull/6182
* https://github.com/Theano/Theano/pull/6194
* https://github.com/Theano/Theano/pull/6190
* https://github.com/Theano/Theano/pull/6146
* https://github.com/Theano/Theano/pull/6201
* https://github.com/Theano/Theano/pull/6150
* https://github.com/Theano/Theano/pull/6204
* https://github.com/Theano/Theano/pull/6166
* https://github.com/Theano/Theano/pull/6174
* https://github.com/Theano/Theano/pull/6205
* https://github.com/Theano/Theano/pull/6183
* https://github.com/Theano/Theano/pull/6186
* https://github.com/Theano/Theano/pull/6203
* https://github.com/Theano/Theano/pull/6161
* https://github.com/Theano/Theano/pull/6164
* https://github.com/Theano/Theano/pull/6050
* https://github.com/Theano/Theano/pull/6178
* https://github.com/Theano/Theano/pull/6180
* https://github.com/Theano/Theano/pull/6173
* https://github.com/Theano/Theano/pull/6170
* https://github.com/Theano/Theano/pull/6092
* https://github.com/Theano/Theano/pull/6163
* https://github.com/Theano/Theano/pull/6171
* https://github.com/Theano/Theano/pull/6169
* https://github.com/Theano/Theano/pull/6165
* https://github.com/Theano/Theano/pull/5914
* https://github.com/Theano/Theano/pull/5775
* https://github.com/Theano/Theano/pull/6147
* https://github.com/Theano/Theano/pull/6159
* https://github.com/Theano/Theano/pull/6156
* https://github.com/Theano/Theano/pull/6154
* https://github.com/Theano/Theano/pull/5991
* https://github.com/Theano/Theano/pull/6149
* https://github.com/Theano/Theano/pull/6151
* https://github.com/Theano/Theano/pull/6116
* https://github.com/Theano/Theano/pull/6111
* https://github.com/Theano/Theano/pull/6139
* https://github.com/Theano/Theano/pull/6097
* https://github.com/Theano/Theano/pull/6070
* https://github.com/Theano/Theano/pull/6148
* https://github.com/Theano/Theano/pull/6140
* https://github.com/Theano/Theano/pull/6138
* https://github.com/Theano/Theano/pull/5881
* https://github.com/Theano/Theano/pull/6130
* https://github.com/Theano/Theano/pull/6044
* https://github.com/Theano/Theano/pull/6060
* https://github.com/Theano/Theano/pull/6109
* https://github.com/Theano/Theano/pull/6119
* https://github.com/Theano/Theano/pull/6123
* https://github.com/Theano/Theano/pull/6117
* https://github.com/Theano/Theano/pull/6120
* https://github.com/Theano/Theano/pull/5747
* https://github.com/Theano/Theano/pull/6087
* https://github.com/Theano/Theano/pull/6108
* https://github.com/Theano/Theano/pull/6112
* https://github.com/Theano/Theano/pull/6106
* https://github.com/Theano/Theano/pull/6107
* https://github.com/Theano/Theano/pull/6105
* https://github.com/Theano/Theano/pull/6102
* https://github.com/Theano/Theano/pull/6101
* https://github.com/Theano/Theano/pull/6077
* https://github.com/Theano/Theano/pull/6085
* https://github.com/Theano/Theano/pull/6091
* https://github.com/Theano/Theano/pull/6013
* https://github.com/Theano/Theano/pull/6088
* https://github.com/Theano/Theano/pull/6069
* https://github.com/Theano/Theano/pull/6084
* https://github.com/Theano/Theano/pull/6083
* https://github.com/Theano/Theano/pull/6081
* https://github.com/Theano/Theano/pull/6072
* https://github.com/Theano/Theano/pull/6045
* https://github.com/Theano/Theano/pull/6082
* https://github.com/Theano/Theano/pull/6049
* https://github.com/Theano/Theano/pull/6076
* https://github.com/Theano/Theano/pull/6062
* https://github.com/Theano/Theano/pull/6041
* https://github.com/Theano/Theano/pull/6057
* https://github.com/Theano/Theano/pull/6055
* https://github.com/Theano/Theano/pull/6056
* https://github.com/Theano/Theano/pull/6043
* https://github.com/Theano/Theano/pull/6032
* https://github.com/Theano/Theano/pull/6030
* https://github.com/Theano/Theano/pull/5942
* https://github.com/Theano/Theano/pull/6025
* https://github.com/Theano/Theano/pull/6038
* https://github.com/Theano/Theano/pull/6034
* https://github.com/Theano/Theano/pull/6012
* https://github.com/Theano/Theano/pull/6029
* https://github.com/Theano/Theano/pull/6015
* https://github.com/Theano/Theano/pull/6027
* https://github.com/Theano/Theano/pull/6026
* https://github.com/Theano/Theano/pull/5980
* https://github.com/Theano/Theano/pull/6021
* https://github.com/Theano/Theano/pull/6022
* https://github.com/Theano/Theano/pull/6011
* https://github.com/Theano/Theano/pull/5935
* https://github.com/Theano/Theano/pull/5955
* https://github.com/Theano/Theano/pull/6009
* https://github.com/Theano/Theano/pull/5016
* https://github.com/Theano/Theano/pull/5794
* https://github.com/Theano/Theano/pull/5996
* https://github.com/Theano/Theano/pull/5923
* https://github.com/Theano/Theano/pull/5993
* https://github.com/Theano/Theano/pull/5983
* https://github.com/Theano/Theano/pull/5964
* https://github.com/Theano/Theano/pull/5940
* https://github.com/Theano/Theano/pull/5915
* https://github.com/Theano/Theano/pull/5989
* https://github.com/Theano/Theano/pull/5988
* https://github.com/Theano/Theano/pull/5987
* https://github.com/Theano/Theano/pull/5908
* https://github.com/Theano/Theano/pull/5974
* https://github.com/Theano/Theano/pull/5965
* https://github.com/Theano/Theano/pull/5960
* https://github.com/Theano/Theano/pull/5957
* https://github.com/Theano/Theano/pull/5936
* https://github.com/Theano/Theano/pull/5950
* https://github.com/Theano/Theano/pull/5948
* https://github.com/Theano/Theano/pull/5946
* https://github.com/Theano/Theano/pull/5947
* https://github.com/Theano/Theano/pull/5927
* https://github.com/Theano/Theano/pull/5944
* https://github.com/Theano/Theano/pull/5918
* https://github.com/Theano/Theano/pull/5941
* https://github.com/Theano/Theano/pull/5931
* https://github.com/Theano/Theano/pull/5937
* https://github.com/Theano/Theano/pull/5852
* https://github.com/Theano/Theano/pull/5922
* https://github.com/Theano/Theano/pull/5921
* https://github.com/Theano/Theano/pull/5902
* https://github.com/Theano/Theano/pull/5903
* https://github.com/Theano/Theano/pull/5909
* https://github.com/Theano/Theano/pull/5758
* https://github.com/Theano/Theano/pull/5778
* https://github.com/Theano/Theano/pull/5900
* https://github.com/Theano/Theano/pull/5895
* https://github.com/Theano/Theano/pull/5883
* https://github.com/Theano/Theano/pull/5896
* https://github.com/Theano/Theano/pull/5888
* https://github.com/Theano/Theano/pull/5886
* https://github.com/Theano/Theano/pull/5885
* https://github.com/Theano/Theano/pull/5873
* https://github.com/Theano/Theano/pull/5877
* https://github.com/Theano/Theano/pull/5878
* https://github.com/Theano/Theano/pull/5872
* https://github.com/Theano/Theano/pull/5870
* https://github.com/Theano/Theano/pull/5854
* https://github.com/Theano/Theano/pull/5865
* https://github.com/Theano/Theano/pull/5853
* https://github.com/Theano/Theano/pull/5850
* https://github.com/Theano/Theano/pull/5538
* https://github.com/Theano/Theano/pull/5863
* https://github.com/Theano/Theano/pull/5799
* https://github.com/Theano/Theano/pull/5859
* https://github.com/Theano/Theano/pull/5755
* https://github.com/Theano/Theano/pull/5860
* https://github.com/Theano/Theano/pull/5716
* https://github.com/Theano/Theano/pull/5842
* https://github.com/Theano/Theano/pull/5821
* https://github.com/Theano/Theano/pull/5789
* https://github.com/Theano/Theano/pull/5847
* https://github.com/Theano/Theano/pull/5735
* https://github.com/Theano/Theano/pull/5710
* https://github.com/Theano/Theano/pull/5843
* https://github.com/Theano/Theano/pull/5832
* https://github.com/Theano/Theano/pull/5814
* https://github.com/Theano/Theano/pull/5835
* https://github.com/Theano/Theano/pull/5834
* https://github.com/Theano/Theano/pull/5829
* https://github.com/Theano/Theano/pull/5785
* https://github.com/Theano/Theano/pull/5824
* https://github.com/Theano/Theano/pull/5820
* https://github.com/Theano/Theano/pull/5808
* https://github.com/Theano/Theano/pull/5815
* https://github.com/Theano/Theano/pull/5819
* https://github.com/Theano/Theano/pull/5612
* https://github.com/Theano/Theano/pull/5802
* https://github.com/Theano/Theano/pull/5796
* https://github.com/Theano/Theano/pull/5806
* https://github.com/Theano/Theano/pull/5782
* https://github.com/Theano/Theano/pull/5787
* https://github.com/Theano/Theano/pull/5774
* https://github.com/Theano/Theano/pull/5751
* https://github.com/Theano/Theano/pull/5779
* https://github.com/Theano/Theano/pull/5763
* https://github.com/Theano/Theano/pull/5746
* https://github.com/Theano/Theano/pull/5579
* https://github.com/Theano/Theano/pull/5772
* https://github.com/Theano/Theano/pull/5756
* https://github.com/Theano/Theano/pull/5769
* https://github.com/Theano/Theano/pull/5433
* https://github.com/Theano/Theano/pull/5760
* https://github.com/Theano/Theano/pull/5470
* https://github.com/Theano/Theano/pull/5759
* https://github.com/Theano/Theano/pull/5739
* https://github.com/Theano/Theano/pull/5752
* https://github.com/Theano/Theano/pull/5548
* https://github.com/Theano/Theano/pull/5749
* https://github.com/Theano/Theano/pull/5665
* https://github.com/Theano/Theano/pull/5562
* https://github.com/Theano/Theano/pull/5686
* https://github.com/Theano/Theano/pull/5718
* https://github.com/Theano/Theano/pull/5698
* https://github.com/Theano/Theano/pull/5720
* https://github.com/Theano/Theano/pull/5717
* https://github.com/Theano/Theano/pull/5715
* https://github.com/Theano/Theano/pull/5502
* https://github.com/Theano/Theano/pull/5533
......
......@@ -72,9 +72,9 @@ copyright = '2008--2017, LISA lab'
# other places throughout the built documents.
#
# The short X.Y version.
-version = '0.9'
+version = '0.10'
# The full version, including alpha/beta/rc tags.
-release = '0.9.0'
+release = '0.10.0beta1'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
......
......@@ -21,6 +21,8 @@ learning/machine learning <https://mila.umontreal.ca/en/cours/>`_ classes).
News
====
* 2017/08/09: Release of Theano 0.10.0beta1, with many improvements and bug fixes; a release candidate is coming.
* Removed support for the old (``device=gpu``) backend. Use the new
backend (``device=cuda``) for GPU computing. See `Converting to the new
gpu back end(gpuarray)
......
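For reference, selecting the new gpuarray backend can be done in a ``.theanorc`` fragment such as the following (``cuda0`` is an example device index; pick the one matching your GPU):

```ini
[global]
device = cuda0
floatX = float32
```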
......@@ -56,12 +56,12 @@ libgpuarray
:::::::::::
For the stable version of Theano you need a specific version of libgpuarray,
-that has been tagged ``v0.6.5``.
+that has been tagged ``v0.6.9``.
Download it with::
git clone https://github.com/Theano/libgpuarray.git
cd libgpuarray
-git checkout tags/v0.6.5 -b v0.6.5
+git checkout tags/v0.6.9 -b v0.6.9
and then follow the `Step-by-step instructions <http://deeplearning.net/software/libgpuarray/installation.html#step-by-step-install>`__.
......
......@@ -53,7 +53,7 @@ PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"]
MAJOR = 0
MINOR = 10
MICRO = 0
-SUFFIX = "dev1"  # Should be blank except for rc's, betas, etc.
+SUFFIX = "beta1"  # Should be blank except for rc's, betas, etc.
ISRELEASED = False
VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
......