Commit 16683490 authored by Pascal Lamblin

Merge pull request #1611 from nouiz/release

Release preparation
......@@ -8,7 +8,9 @@
# # This file is up-to-date if the command git log --format="%aN <%aE>" | sort -u
# # gives no duplicates.
Jan Schlüter <github@jan-schlueter.de> f0k <github@jan-schlueter.de>
Rami Al-Rfou' <rmyeid@gmail.com> Rami Al-Rfou <rmyeid@gmail.com>
Arnaud Bergeron <abergeron@gmail.com> <abergeron@gmail.com>
<abergeron@gmail.com> <anakha@kami.(none)>
David Warde-Farley <wardefar@iro.umontreal.ca> David Warde-Farley <dwf@cs.toronto.edu>
David Warde-Farley <wardefar@iro.umontreal.ca> David Warde Farley <dwf@cs.toronto.edu>
......@@ -25,6 +27,11 @@ Francois Savard <devnull@localhost> fsavard <devnull@localhost>
# 2 onze <onzeonline@gmail.com>
# 25 projects@lgcm <projects@lgcm>
# 1 tutorial/debug_faq.txt <devnull@localhost>
Bogdan Budescu <bbudescu@gmail.com> bbudescu <bbudescu@gmail.com>
Sebastian Berg <sebastian@sipsolutions.net> seberg <sebastian@sipsolutions.net>
Huy Nguyen <huy@huyng.com> huyng <huy@huyng.com>
Wei Li <kuantkid@gmail.com> kuantkid <kuantkid@gmail.com>
Ethan Buchman <ebuchman@uoguelph.ca> ebuchman <ebuchman@uoguelph.ca>
Frederic Bastien <nouiz@nouiz.org> Frederic Bastien <bastienf@briaree1.rqchp.qc.ca>
Frederic Bastien <nouiz@nouiz.org> Frederic Bastien <bastienf@iro.umontreal.ca>
Frederic Bastien <nouiz@nouiz.org> Frédéric Bastien <nouiz@nouiz.org>
......@@ -55,6 +62,7 @@ James Bergstra <james.bergstra@gmail.com> james@mackie <james@mackie>
James Bergstra <james.bergstra@gmail.com> james@x40.unstable <james@x40.unstable>
James Bergstra <james.bergstra@gmail.com> test_rng_mrg.py <devnull@localhost>
John Salvatier <jsalvatier@gmail.com> jsalvatier <jsalvatier@gmail.com>
John Salvatier <jsalvatier@gmail.com> john salvatier <jsalvatier@gmail.com>
Joseph Turian <turian@iro.umontreal.ca> Joseph Turian <turian@gmail.com>
Joseph Turian <turian@iro.umontreal.ca> turian@grenat.iro.umontreal.ca <turian@grenat.iro.umontreal.ca>
Joseph Turian <turian@iro.umontreal.ca> turian@lgcm <turian@lgcm>
......@@ -83,3 +91,4 @@ Sander Dieleman <sanderdieleman@gmail.com> benanne <sanderdieleman@gmail.com>
Xavier Glorot <glorotxa@iro.umontreal.ca> glorotxa <glorotxa@iro.umontreal.ca>
Xavier Glorot <glorotxa@iro.umontreal.ca> glorotxa@timide.iro.umontreal.ca <glorotxa@timide.iro.umontreal.ca>
Yoshua Bengio <bengioy@iro.umontreal.ca> bengioy@bengio-mac.local <bengioy@bengio-mac.local>
Sina Honari <honaris@iro.umontreal.ca> SinaHonari <sina2222@gmail.com>
......@@ -6,17 +6,71 @@ DRAFT Release Notes
git log -p rel-0.6rc3... |grep Merge|grep '#' |cut -f 8 -d ' ' | replace "#" "* https://github.com/Theano/Theano/pull/"
Done up to PR 1456
git shortlog -sn rel-0.6rc3..
Done up to PR 1608
* https://github.com/Theano/Theano/pull/1608
* https://github.com/Theano/Theano/pull/1591 # need info
Theano Development version
==========================
NEWS.txt:
We recommend that everybody update to this version.
Highlights:
* Python 3.3 compatibility, with buildbot tests for it.
* Full advanced indexing support.
* Better Windows 64-bit support.
* New profiler.
* Better error messages that help debugging.
* Better support for newer NumPy versions (removed useless warnings/crashes).
* Faster optimization/compilation for big graphs.
* Moved the Conv3d2d implementation into Theano.
* Better SymPy/Theano bridge: make a Theano Op from a SymPy expression and use the SymPy C code generator (see the sketch below).
* Bug fixes.
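A hedged sketch of the SymPy bridge above; it assumes SymPy's theanocode module of that era (sympy.printing.theanocode.theano_function, Matthew Rocklin's work):

    # Sketch only: build a Theano function from a SymPy expression.
    from sympy import symbols, sin
    from sympy.printing.theanocode import theano_function

    x, y = symbols('x y')
    f = theano_function([x, y], [sin(x) + y**2])  # compiles via Theano
    print(f(0.0, 2.0))  # 4.0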
Committers for this rc4 only:
Frederic Bastien
Pascal Lamblin
Arnaud Bergeron
abalkin
Olivier Delalleau
John Salvatier
Razvan Pascanu
Jeremiah Lowin
Ludwig Schmidt-Hackenberg
Vivek Kulkarni
Matthew Rocklin
Gabe Schwartz
James Bergstra
Sigurd Spieckermann
Bogdan Budescu
Mehdi Mirza
Nicolas Bouchard
Ethan Buchman
Guillaume Desjardins
Ian Goodfellow
Jason Yosinski
Sina Honari
Ben McCann
David Warde-Farley
Ilya Dyachenko
Jan Schlüter
Micky Latowicki
Yaroslav Halchenko
Alexander Belopolsky
Hannes Schulz
Huy Nguyen
Robert Kern
Sebastian Berg
Vincent Dumoulin
Wei Li
XterNalz
Installation:
* Canopy support (direct link to MKL):
......@@ -25,22 +79,33 @@ Installation:
* Anaconda instructions (Pascal L., Frederic B.)
* Doc Ubuntu 13.04 (Frederic B.)
Committers for this rc3 only:
* Better support for newer NumPy versions (removed useless warnings/crashes) (Frederic B., Huy Nguyen)
Bug fixes:
* Scan: if a scan node was cloned (by theano.clone) with different inputs, and if both the initial and the cloned nodes are used in the function being compiled, the value of the outputs of one would be replaced with the outputs of the other one. (Pascal L.)
* Sparse: Disable the optimization that introduces the CSMGradC op, as it doesn't work correctly with unsorted indices. (Frederic B.)
* Mac: Fix wrong result of GpuDownsampleFactorMaxGrad on Mac OSX. (Pascal L.)
* Mac: Auto-detect and work around a BLAS bug on Mac OS X (Pascal L.)
* Mac: Work around a bug in Mac OS X: if two compiled modules had the same name, the OS or Python did not always pick the right one, even when we used the right handle to it. (Pascal L.)
Use this hash in the Python module, and in %(nodename)s, so that different helper functions in the support code for different Ops will always have different names.
* Sparse grad: Fix ConstructSparseFromList.infer_shape (Pascal L., reported by Rami Al-Rfou')
* Fix a bug introduced in the development version after the 0.6rc3 release (Frederic B.):
a reduction that upcasts the input on no axis (e.g. calling theano.sum() on a scalar when the original dtype isn't float64 or
[u]int64) produced bad results, as we did not upcast the inputs in the code, we just copied them.
* Fix some cases of theano.clone() when we get a replacement of x that is a function of x. (Razvan P., reported by Akio Takano)
* Fix grad of Alloc when we unbroadcast the value and it isn't a scalar. (Frederic B., reported by Ian G.)
* In some cases (most cases, we think), an exception was raised in the theano.tensor.grad() method.
But in theory, bad shapes could be produced in the unbroadcasted dimensions.
New Features:
* Compilation works on ARM processors (Raspberry Pi) (Vincent Dumoulin)
* Add a numpy.random.choice wrapper to our random number generator (Sigurd Spieckermann)
* Better SymPy/Theano bridge: make a Theano Op from a SymPy expression and use the SymPy C code generator (Matthew Rocklin)
* Move the Conv3d2d implementation into Theano (James Bergstra, Frederic B., Pascal L.)
* First version of the new GPU back-end available (Arnaud Bergeron, Frederic B.)
* Not all Ops have been converted to this new back-end.
To use it, set the Theano flag device=cudaN or device=openclN, where N is an integer.
* Python 3.3 compatible (abalkin, Gabe Schwartz, Frederic B., Pascal L.)
* A new profiler (Frederic B.)
The new profiler can now profile memory via the Theano flag profile_memory=True (see the sketch below).
ProfileMode can no longer profile memory, and prints a message about it.
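A minimal sketch of the new profiler, assuming the standard `profile` argument of theano.function; memory profiling is turned on with the Theano flags (e.g. THEANO_FLAGS=profile=True,profile_memory=True):

    # Sketch only: profile a Theano function; stats print at interpreter exit.
    import numpy
    import theano
    import theano.tensor as T

    x = T.matrix('x')
    f = theano.function([x], T.exp(x).sum(), profile=True)
    f(numpy.random.rand(5, 5).astype(theano.config.floatX))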
......@@ -88,6 +153,10 @@ New Features:
* Make GpuCrossentropySoftmaxArgmax1HotWithBias and GpuCrossentropySoftmax1HotWithBiasDx work for bigger inputs (Frederic B., reported by Ryan Price)
* Finish theano.sparse.basic.true_dot and move it out of the sandbox (Nicolas Bouchard, Frederic B.)
And document all sparse dot variants.
* Implement the mode ignore_borders for GpuImages2Neibs (Frederic B.)
* Make many reduction algorithms accept a scalar numpy.ndarray as the axis (Jeremiah Lowin)
* Allow numpy.asarray(cuda_ndarray, dtype=...) (Frederic B.)
* theano-cache cleanup now removes cached modules compiled from old versions of the code. (Frederic B.)
Interface Deprecation (a warning is printed):
......@@ -97,20 +166,38 @@ Interface Deprecation (a warning is printed):
Deprecate the old interface for this. (Frederic B.)
Interface Changes:
* Interface change: subtensor and take are not in tensor.basic anymore. They were available from tensor.* and are still available from there. (Frederic B., Matthew Rocklin)
* This lowers the basic.py size to 191k, under the 200k limit for GitHub search.
* Add -m32 or -m64 in the module cache key and add the python bitwidth in the compiledir path. (Pascal L.)
* mrg.normal now has a mandatory size parameter; it was crashing with the default value of None (see the sketch after this list). (Olivier D.)
* Remove the deprecated passing of multiple modes to theano function. (Frederic B.)
* Change the FunctionGraph Features interface of the {on_prune(),on_import()} callbacks to take a reason. (Frederic B.)
* FunctionGraph now clones the input graph by default. (Frederic B.)
* A parameter allows skipping this clone.
* This was needed to speed up compilation.
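A small sketch of the now-mandatory size parameter of mrg.normal, assuming the sandbox MRG_RandomStreams class:

    # Sketch only: `size` must now be passed explicitly.
    from theano.sandbox.rng_mrg import MRG_RandomStreams

    srng = MRG_RandomStreams(seed=1234)
    x = srng.normal(size=(2, 2))  # OK: explicit size
    # srng.normal()               # used to crash with the default size=None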
New Interface (reuses existing functionality):
* Add hostname as a var in compiledir_format (Frederic B.)
* Add a new Theano flag: compute_test_value_opt. It takes the same values as compute_test_value. It enables compute_test_value during Theano optimization; this is only useful for debugging Theano optimizations. Also small changes to some optimizations so they work correctly in that setup (see the sketch after this list). (Frederic B.)
* Add the value pdb to the Theano flag: compute_test_value and compute_test_value_opt. (Frederic B.)
* Add the Theano flag: optimizer_verbose. Default False. When True, we print all the optimizations being applied. (Frederic B.)
* Add Op.c_init_code() to allow running code when the C module is imported (Pascal L.)
* Allow theano.tensor.ones(3) to accept a scalar, not just a list of scalars, like numpy.ones (Jeremiah Lowin)
* Make the memory profiler print the FLOPS used for the ops that know how to compute it. (Frederic B.)
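A short sketch of the compute_test_value machinery that compute_test_value_opt extends to the optimization phase; the tag.test_value mechanism is the documented one:

    # Sketch only: test values are computed eagerly while building the graph.
    import numpy
    import theano
    import theano.tensor as T

    theano.config.compute_test_value = 'raise'  # or 'warn', 'pdb'
    x = T.vector('x')
    x.tag.test_value = numpy.ones(3, dtype=theano.config.floatX)
    y = (x ** 2).sum()  # y.tag.test_value is computed right away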
New debug features:
Speed-ups:
* Optimizer speed up. (Frederic B.)
* Fix warning/detection problem with newer llvm versions on Mac. (Pascal L., reported by Jeremiah Lowin and Chris Fonnesbeck)
* Allow pickling of more Ops to allow reusing their compiled code (Pascal L., Frederic B.)
* Optimize more cases of dot22 and scalar when we can't make a gemm (Pascal L., Frederic B.)
* Speed up GpuJoin with c code (Ludwig Schmidt-Hackenberg, Frederic B.)
* Faster GpuAdvancedIncSubtensor1 on Fermi GPU (and up) on matrix. (Vivek Kulkarni)
* Faster GpuAdvancedIncSubtensor1 in some cases on all GPUs (Vivek Kulkarni)
* Implemented c_code for AdvancedSubtensor1 (abalkin)
* Add the equivalent of -march=native to the g++ command line. (Frederic B., Pascal L.)
* Speed up compilation with Scan (Jan Schlüter)
* Merge more Scan nodes together (Pascal L., Yao Li).
* Add MakeVector.c_code (Frederic B.)
* Add Shape.c_code (Frederic B.)
* Optimize Elemwise when all the inputs are in Fortran order (Frederic B.)
......@@ -122,8 +209,22 @@ Speed-ups:
* A fix that removes a local_setsubtensor_of_allocs optimization warning and enables it in that case. (Frederic B., reported by John Salvatier)
* Make inv_as_solve optimization work (Matthew Rocklin)
Crash/no return fixes:
* Fix a shape crash inserted by a Scan optimization. The gradient of some recursive scans made the PushOutSeqScan optimization insert a crash during the execution of a Theano function. (Frédéric B., reported by Hugo Larochelle)
* Fix command not returning with recent mingw64 on Windows (Pascal L., reported by many people)
* Fix infinite loop related to Scan on the GPU. (Pascal L.)
* Fix infinite loop when the compiledir is full. (Frederic B.)
* Fix a shape cycle crash in the optimizer (Pascal L., Frédéric B., reported by Cho KyungHyun)
* Fix MRG normal to accept generating scalars. (Pascal L.)
* Fix some GPU compilation issue on Mac (John Yani, Frédéric B.)
* Fix crash when building symbolic random variables with a mix of symbolic and numeric scalars in the "size" parameter. (Pascal L., reported by Wu Zhen Zhou)
* Make some Op.grad() implementations not return None (Pascal L.)
* Crash fix in the grad of Elemwise involving a DisconnectedType (Pascal L., reported by Thomas Wiecki)
* Fix local_gpu_multinomial optimization handling of broadcast information. (Frederic B., reported by Caglar)
* Fix crash with change introduced in NumPy 1.7.1 (Pascal L., reported by Thomas Wiecki)
* Fix compilation failure with complex numbers (Pascal L., reported by autumncat)
* Fix GPU reduction on all dimensions of a 4d tensor. (Frederic B., reported by Arjun Jain)
* Fix crash for a combination of grad of dot and dimshuffle when only one of the inputs for a corresponding dimension was known to be broadcastable. (Frederic B., reported by Micky Latowicki)
* AdvancedSubtensor1: allow broadcasted index vector. (Frederic B., reported by Jeremiah Lowin)
* Fix compute_test_value for ifelse (Olivier D., reported by Bitton Tenessi)
* Fix import error with some versions of NumPy (Olivier D.)
......@@ -131,14 +232,15 @@ Crash fixes:
* Fix compute_test_value for a non_sequence when calling the gradient of Scan (Pascal L., reported by Bitton Tenessi).
* Crash fix in Scan following interface change in 0.6rc2 (Razvan P.)
* Crash fix on Scan (Razvan P.)
* Crash fix on Scan (Pascal L., reported by Sina Honari and Sigurd)
* Fix crash in Scan gradient related to compute_test_value (Frederic B., reported by Bitton Tenessi)
* Fix a Scan optimization warning/error depending on the Theano flags (Frederic B.)
* Fixed crash for unimplemented elemwise gradient (Olivier D., reported by Michael McNeil Forbes)
* Fix crash in the elemwise python code for some big shape with power of 2. (Sina Honari, Pascal L.)
* Fix compile and import errors on Windows including for the GPU. (Bogdan Budescu)
* Fix GPU compilation on Windows (XterNalz)
* Fix local_abs_merge optimization crash (Pascal L., reported by Jeremiah Lowin)
* Fix import theano crash when g++ isn't there (Olivier D.)
* Fix crash related to rebuild of Theano graph (Pascal L., reported by Divine Eguzouwa)
* Fix crash during compilation (David Warde-Farley)
* Crash fix in the grad of GPU op in corner case (Pascal L.)
......@@ -148,22 +250,24 @@ Crash fixes:
* Fix crash with keepdims and negative axis (Frederic B., reported by David W.-F.)
* Fix crash of theano.[sparse.]dot(x,y) when x or y is a vector. (Frederic B., reported by Zsolt Bitvai)
* Fix optimization crash/disabling with ifelse on the GPU (Frederic B., reported by Ryan Price)
* Fix crash in optimization involving dot22 (Pascal L., reported by @micklat)
* Prevent shape optimizations from introducing cycles in the graph (Frederic Bastien, Pascal Lamblin, reported by Kyunghyun Cho)
Others:
* Update/Fixes/Typo/pep8 documentation and/or tutorial (Olivier D., Frederic B., Yaroslav Halchenko, Micky Latowicki, Ben McCann, Jason Yosinski, reported by Arnaud Bergeron)
* Doc how to make a sparse Op. (Frederic B.)
* Doc compatibility guide (abalkin)
* Fix problem in remove_constants_and_unused_inputs_scan (useless warning and possible slowdown) (Pascal L.)
* Fix Rop of dot. (Razvan P., reported by Jeremiah Lowin)
* Raise better error related to pydot bug. (Frederic B., reported by Jason Yosinski and Ludwig Schmidt-Hackenberg)
* Fix to Theano tutorial examples. (reported by Ilya Dyachenko)
* Fix SharedVar.value property to make it raise an exception (Frederic B., reported by Drew Duncan)
* Fix verification with compute_test_value in grad() (Frederic B.)
* Theano flags are now evaluated lazily, only if requested (Frederic B.)
* Fix tests when g++ is not available (Frederic B.)
* Doc the MPI and load Ops (Frederic B.)
* Add manual instructions for OpenBLAS on Ubuntu (Jianri Li)
* Doc fixes (Yaroslav Halchenko)
* Better/more error messages (Frederic B., Pascal L., Ian Goodfellow)
* More doc (Frederic B.)
* Fix Error reporting with GpuConv (Frederic B., reported by Heng Luo and Nicolas Pinto)
* Update BLAS compilation doc on windows to use OpenBLAS (Olivier D.)
* The infer_shape tester method now warns if the shape values could hide errors. (Frederic B.)
* Travis-CI now runs with SciPy the tests that need it (Frederic B.)
* Export some functions that work on CudaNdarray for Windows (Frederic B.)
......@@ -189,674 +293,3 @@ Others:
Todo for the final release:
* update the NEWS.txt file.
=============
Release Notes
=============
Theano 0.6rc3 (February 14th, 2013)
===================================
Highlights:
* Windows related fixes.
* Speed-ups.
* Crash fixes.
* A few small interface changes.
* GPU memory leak fix.
* A few corner-case fixes of no consequence.
* More Theano determinism
* tensor.{dot,tensordot} more complete/faster/GPU-friendly.
* tensor.tensordot now supports Rop/Lop
* tensor.dot supports n-dimensional inputs, as in NumPy
* To support more NumPy syntax:
* Add theano.tensor.take()
* Add a_tensor_variable.{sort,dot,std,argmin,argmax,argsort,clip,conj,conjugate,repeat,round,trace,real,imag,take}
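A quick sketch of a few of the new NumPy-style variable methods listed above:

    # Sketch only: ndarray-like methods on tensor variables.
    import theano.tensor as T

    x = T.matrix('x')
    a = x.argmax(axis=0)        # like numpy.ndarray.argmax
    c = x.clip(0, 1)            # like numpy.ndarray.clip
    t = x.take([0, 2], axis=1)  # like numpy.take
    d = x.dot(x.T)              # like numpy.ndarray.dot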
Committers for this rc3 only:
Frederic Bastien
Ian Goodfellow
Pascal Lamblin
Jeremiah Lowin
abalkin
Olivier Delalleau
Razvan Pascanu
Rami Al-Rfou'
Vivek Kulkarni
Guillaume Desjardins
David Warde-Farley
Eric Hunsberger
Amir Elaguizy
James Bergstra
Bug fixes:
* Fix memory leak on the GPU in some corner cases with the Theano flags `allow_gc=False`. (Frederic B., reported by Jonas Gehring)
* Fix copy of random state between graphs. (Guillaume D.)
http://deeplearning.net/software/theano/tutorial/examples.html#copying-random-state-between-theano-graphs
* Fix wrong dtype in sandbox.linalg.ExtractDiag with shape of 0. (Frederic B., reported by abalkin)
* Correctly support arrays with more than 2*10e32 elements in AdvancedSubtensor1. (Abalkin)
* Fix wrong broadcast dimensions of the output of the Repeat op. (Abalkin)
We were using the input's broadcasting pattern in some cases when we shouldn't have.
* Fix theano.sandbox.linalg.eigh grad that didn't always return the right dtype. (Frederic B., Olivier D.)
New Features:
* More Theano determinism (Ian G., Olivier D., Pascal L.)
* Add and use a new class OrderedSet.
* theano.grad is now deterministic.
* Warn when the user uses a (non ordered) dictionary and this causes non-determinism in Theano.
* The Updates class was non-deterministic; replaced it with the OrderedUpdates class.
* tensor.tensordot now supports Rop/Lop (Jeremiah Lowin)
This removes the TensorDot and TensorDotGrad classes; the Dot/Elemwise ops are used instead.
* tensor.dot supports n-dimensional inputs, as in NumPy (Jeremiah Lowin) (see the sketch after this list)
Works on the GPU too.
* The Theano flag `nvcc.flags` now accepts `-ftz=true`, `--prec-div=false` and `--prec-sqrt=false` as values. (Frederic B.)
To enable all of them, use the Theano flag `nvcc.flags=--use_fast_math`.
* New op theano.sparse.ConstructSparseFromList (Rami Al-Rfou', Vivek Kulkarni)
* Make Theano work with Anaconda on Windows. (Pascal L.)
* Add tensor_var.diagonal and theano.tensor.{diag,diagonal}. (abalkin)
* AdvancedSubtensor1 can now have a sparse gradient. (Rami Al-Rfou', Vivek Kulkarni)
* Implemented GpuContiguous.grad. (Ian G.)
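A quick sketch of the n-dimensional tensor.dot mentioned above, following NumPy's semantics (sum over the last axis of the first argument and the second-to-last axis of the second):

    # Sketch only: NumPy-style n-dimensional dot.
    import theano.tensor as T

    x = T.tensor3('x')  # shape (a, b, c)
    y = T.matrix('y')   # shape (c, d)
    z = T.dot(x, y)     # shape (a, b, d); works on the GPU too, per the notes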
Interface Deprecation (a warning is printed):
* theano.misc.strutil.renderString -> render_string (Ian G.)
* Print a warning when using a dictionary where this makes Theano non-deterministic.
Interface Change:
* Raise an error when theano.shared called with a theano variable. (Frederic B.)
* Don't print warning for bug before Theano 0.5 by default. (Frederic B.)
* Theano functions now always have a name field, defaulting to None. (Frederic B.)
* A Theano function's fct.fgraph has a copy of the Theano function's name field. (Ian G.)
This is needed to allow the fgraph to know it.
* In the grad method, when asked to raise an error if there is no path between the variables, we didn't always raise an error. (Ian G.)
We returned the mathematically right answer, 0, in those cases.
* get_constant_value() renamed to get_scalar_constant_value(); it raises a new exception, tensor.basic.NotScalarConstantError. (Ian G.)
* theano.function raises an error when trying to replace inputs with the 'givens' parameter. (Olivier D.)
This was doing nothing; the error message explains what the user probably wants to do.
New Interface (reuse existing functionality):
* tensor_var.sort() as a shortcut for theano.tensor.sort. (Jeremiah Lowin)
We were already doing this for argsort.
* Add theano.tensor.take() and a_tensor_var.take() to support NumPy syntax. (abalkin)
* Add a_tensor_variable.{dot,std,argmin,argmax,argsort,clip,conj,conjugate,repeat,round,trace,real,imag}. (abalkin)
New debug feature:
* DebugMode print more info when there is an error. (Frederic B.)
* Better profiling of test time with `theano-nose --time-profile`. (Frederic B.)
* Detection of infinite loop with global optimizer. (Pascal L.)
* DebugMode.check_preallocated_output now also works on Theano function outputs. (Pascal L.)
* DebugMode will now complain when the strides of a CudaNdarray for dimensions of size 1 are not 0. (Frederic B.)
Speed-ups:
* c_code for SpecifyShape op. (Frederic B.)
* The cross-entropy optimization now works when specify_shape is used. (Pascal L.)
* The Scan optimizations ScanSaveMem and PushOutDot1 are applied more frequently. (Razvan P., reported by Abalkin)
Previously, a skipped-optimization warning was printed.
* dot(vector, vector) is now faster with some BLAS implementations. (Eric Hunsberger)
OpenBLAS and possibly others didn't call {s,d}dot internally when we called {s,d}gemv.
MKL was doing this.
* Compilation speed up: take the compiledir lock only for ops that generate c_code. (Frederic B.)
* More Scan optimizations (Razvan P.)
* Opt to make RNNs fast in Theano.
* Optimize some cases of dot, by moving them outside of Scan.
* Move some sequences outside of Scan too.
* Merge more Scan inputs, mostly a byproduct of other Scan optimizations.
* c_code for theano.sparse.AddSD. (Rami Al-Rfou', Vivek Kulkarni)
Crash Fixes:
* Fix crash about dimshuffle. (abalkin)
* Fix crash at compilation. (Olivier D.)
* Fix openmp detection. (Pascal L.)
Resulted in a crash with EPD on Windows.
* Fix for new BLAS interface in SciPy. (Olivier D.)
Fix crash with some development version of SciPy.
* GpuSum works with bigger shapes when summing on the first dim of a 3d tensor. (Frederic B., reported by Chris Currivan)
* Windows compilation crash fix. (Frederic B.)
* Make CrossentropySoftmax1HotWithBiasDx and CrossentropySoftmaxArgmax1HotWithBias support uint* dtype. (Frederic B., reported by Mark Fenner)
* Fix GpuSoftmax and GpuSoftmaxWithBias crash on GTX285. (Frederic B.)
* Fix crash due to a race condition when importing theano. (Ian G.)
* Fix crash from path problem with `theano-nose --batch`. (Abalkin)
* Fix crash with tensor.roll(Var, iscalar). (Frederic B., reported by Jeremiah Lowin)
* Fix compilation crash with llvm on Mac. (Abalkin)
* Fix the grad of Scan that wrongly reported there is no connection between the cost and the parameters. (Razvan P.)
* The infer_shape mechanism now forces broadcasted dimensions to have a shape known to be equivalent to one during compilation.
Sometimes we were not able to know this before run time, and this resulted in a crash. (Frederic B.)
* Fix compilation problems on GPU on Windows. (Frederic B.)
* Fix copy on the GPU with big shape for 4d tensor (Pascal L.)
* GpuSubtensor didn't set the stride to 0 for dimensions of size 1. This could lead to a check failing later, causing a crash. (Frederic B., reported by vmichals)
Theoretical bugfixes (bugs that won't happen with current Theano code, but that could have affected you if you messed with the internals):
* GpuContiguous, GpuAlloc, GpuDownSampleGrad and Conv2d now check the strides of preallocated outputs before using them. (Pascal L.)
* GpuDownSample and GpuDownSampleGrad didn't work correctly with negative strides in their output, due to a problem with nvcc (Pascal L., reported by abalkin?)
Others:
* Fix race condition when determining if g++ is available. (Abalkin)
* Documentation improvements. (Many people including David W-F, abalkin, Amir Elaguizy, Olivier D., Frederic B.)
* The current GPU back-end has a new function CudaNdarray_prep_output(CudaNdarray ** arr, int nd, const int * dims) (Ian G.)
=============
Release Notes
=============
Theano 0.6rc2 (November 21st, 2012)
===================================
Highlights:
* Fix for a few regressions introduced in 0.6rc1.
* A few new features.
* Speed-ups.
* Scan fixes.
* Crash fixes.
* A few small interface changes.
Committers for this rc2 only:
Razvan Pascanu
Pascal Lamblin
Frederic Bastien
Ian Goodfellow
Jeremiah Lowin
Caglar Gulcehre
Jey Kottalam
Matthew Rocklin
abalkin
Regressions in 0.6rc1 fixed:
* Fixed the Scan gradient dtype issue. In 0.6rc1, some upcasts were inserted. (Razvan P.)
* Now grad() will do as before 0.6rc1 for float, i.e. the grad dtype will be the same as the inputs inside the graph. If you ask for the direct grad, it will return the computed dtype. (Pascal L.)
Wrong results fixes:
* Fix Scan returning wrong results in some cases. (Razvan P., reported by Jeremiah L.)
This happened if you had a state with only negative taps, and the output of the state was a function of some sequence.
If you had multiple states, there was no problem.
* Fixed bug in Scan with multiple outputs,
where one output would sometimes overwrite another one. (Razvan P.)
* Clip.grad treated the gradient with respect to the clipping boundary as always 0. (Ian G.)
Interface changes:
* We no longer support unaligned ndarrays in Python code. (Frederic B.)
We did not support them in C code, and supporting them in Python code made
the detection harder.
* Now we only officially support SciPy 0.7.2 and NumPy 1.5.0 (Frederic B.)
We weren't and aren't testing with older versions.
* The theano.sparse.SparseType is available even when SciPy is not (Frederic B.)
* Fixed issue where members of consider_constant grad parameter
were treated differently from Constant variables. (Ian G.)
* Removed the parameter g_cost from theano.grad(). (Ian G.)
Use the new more powerful parameter known_grads instead.
NumPy interface support:
* theano.tensor.where is an alias for theano.tensor.switch, to support NumPy semantics (see the sketch after this list). (Ian G.)
* TensorVariable objects now have dot, argmin, argmax, clip, conj, repeat, trace, std, round,
ravel and argsort functions and the real and imag properties as numpy.ndarray objects.
The functionality was already available in Theano. (abalkin)
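A small sketch of the NumPy-flavoured interface above; tensor.where is a plain alias of tensor.switch:

    # Sketch only: NumPy-style where, plus an ndarray-like method.
    import theano.tensor as T

    cond, a, b = T.vectors('cab')
    out = T.where(cond > 0, a, b)  # same as T.switch(cond > 0, a, b)
    r = a.round()                  # method form of T.round(a)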
Speed-ups:
* A C version of the SoftMax op (Razvan P.)
There was already C code for the softmax-with-bias op.
* Faster GpuIncSubtensor (Ian G.)
* Faster copy on the GPU for 4d tensor. (Ian G.)
* The fix of flatten infer_shape re-enables an optimization (Pascal L.)
* The bug was introduced in 0.6rc1.
* Enable inc_subtensor on the GPU when updating it with a float64 dtype. (Ian G.)
It was causing an optimization warning.
* Make DeepCopy reuse preallocated memory. (Frederic B.)
* Move the convolution to the GPU when the image shape and logical image shape differ. (Frederic Bastien)
* C code for the View Op (Razvan P., Pascal L.)
New Features:
* Added a monitoring mode "MonitorMode" as a debugging tool. (Olivier D.)
* Allow integer axes when keepdims==True (Jeremiah Lowin)
* Added erfinv and erfcinv op. (Jey Kottalam)
* Added tensor.batched_dot(). (Caglar Gulcehre)
It uses scan behind the scenes, but makes doing this easier (see the sketch after this list).
* theano.get_constant_value(x) (Frederic B.)
This tries to obtain x as a constant int.
It does some constant folding to try to convert x into an int.
Used by some optimizations.
* Add theano.tensor.io.{MPIRecv,MPIRecvWait,MPISend,MPISendWait} (Matthew Rocklin)
Theano does not automatically use them. It is up to you to use them and split your computations.
* Added theano.sandbox.linalg.eig (abalkin)
* Started some support for Python3 (abalkin)
setup.py supports python3 now.
It calls 2to3 during the setup.
Python3 is not fully supported as we didn't update the C code.
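A sketch of tensor.batched_dot from the list above (as noted, it uses scan under the hood):

    # Sketch only: batched matrix products over a leading batch axis.
    import theano.tensor as T

    a = T.tensor3('a')       # shape (batch, n, m)
    b = T.tensor3('b')       # shape (batch, m, k)
    c = T.batched_dot(a, b)  # shape (batch, n, k)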
Crash Fixes:
* Fix a crash related to scan.grad due to the new mechanism. (Ian G.)
* Fix an optimization warning. Now it gets optimized. (Frederic B.)
* Fix crash introduced in 0.6rc1 in theano.grad (Ian G.)
* Fix crash introduced in 0.6rc1 in the grad of scan (Razvan P.)
* Fix crash introduced in 0.6rc1 in the grad of clip (Ian G.)
Also implement the gradient on the min/max bound.
* Fix crash in the grad of tensor.switch for int (Ian G.)
* Fix crash when mixing shared variable on the GPU and sparse dot. (Pascal L.)
* Fix crash as sometimes sparse.dot would return a dtype number that is
equivalent to, but not, the one expected. (Pascal L., reported by Rami Al-Rfou)
* Better error message (Ian G.)
* Move all sparse random functions back to the sandbox, as they don't have a state inside Theano. (Pascal L.)
They were moved outside the sandbox in 0.6rc1.
* LoadFromDisk is now allowed to support only some memmap modes. (Pascal L.)
Otherwise, this was causing errors, segmentation faults or wrong results.
* Fix import problem on PiCloud (Jeremiah Lowin)
* You need to use the c|py linker with the default
environment. Otherwise, you need to create your own environment.
* Fix a crash during optimization when we take a subtensor of a constant with a non constant index. (Ian G.)
* Better handling and error message of gradients on integer. (Ian G.)
* Fixed a crash where Scan assumed all TypeErrors raised by the grad function were due to undefined gradients (Ian G.)
Other:
* Doc typo fixes, Doc updates, Better error messages: Olivier D., David W.F., Frederic B., James B., Matthew Rocklin, Ian G., abalkin.
=============
Release Notes
=============
Theano 0.6rc1 (October 1st, 2012)
=================================
Highlights:
* Bug fixes, crash fixes, CPU and GPU speed up.
* theano_var.eval({other_var: val[,...]}) to simplify the usage of Theano (Ian G.)
* New default linker `cvm`. This is the execution engine that tells ops to run in certain orders.
It is now implemented in C and enables lazy evaluation of ifelse op.
* Faster theano.function compilation. (Pascal L., Ian G.)
* Big sparse submodule update and documentation of it. (Nicolas Bouchard)
* Use GPU asynchronous functionality (Frederic B.)
* Better Windows support.
Known bugs:
* A few crash cases that will be fixed by the final release.
Bug fixes:
* Outputs of Scan nodes could contain corrupted values: some parts of the
output would be repeated a second time, instead of the correct values.
It happened randomly, and quite infrequently, but the bug has been present
(both in Python and Cython) since April 2011. (Pascal L.)
* In Sparse sandbox, fix the grad of theano.sparse.sandbox.sp.row_scale.
It did not return the right number of elements. (Frederic B.)
* set_subtensor(x[int vector], new_value) when moved to the GPU
was transformed into inc_subtensor on the GPU. Now we have a correct
(but slow) GPU implementation.
Note 1: set_subtensor(x[slice[,...]], new_value) was working correctly
in all cases as well as all inc_subtensor.
Note 2: If your code was affected by the incorrect behavior, we now print
a warning by default (Frederic B.)
* Fixed an issue whereby config values were used as default arguments,
with those defaults then stuck at old values if the config variables were
changed during program execution. (David W-F)
* Fixed many subtle bugs involving mutable default arguments which may have
led to unexpected behavior, such as objects sharing instance variables
they were not supposed to share. (David W-F)
* Correctly record the GPU device number used when we let the driver select it.
(Frederic B.)
* Min, max with NaN in inputs did not return the right output. (Pascal L.)
* The grad of TensorDot was returning the wrong shape for some combinations of axes.
We now raise NotImplementedError in those cases. (Frederic B.)
* conv2d with subsample >2 returned wrong values. (Pascal L.)
* Fixed when mode==valid, disabled when mode==full
* The theano.sparse.CSMGrad op (generated by the grad of CSM) didn't
correctly handle unsorted inputs, or a gradient that is sparser
than the input. In those cases, a bad result was returned. This could
happen only when a sparse input of a Theano function was not
sorted. This happens for example with sparse advanced indexing from
scipy. The consequence was, most of the time, NaN in the graph.
(Yann Dauphin)
* The optimized version of theano.sparse._dot(CSC matrix, dense), UsmmCSCDense, didn't
correctly handle non-contiguous inputs/outputs. (Pascal L.)
* Fix a corner case in CVM updates. (Pascal L.)
This happened if, after optimization, the update to a shared variable was the shared variable itself.
The CVM was not used by default.
* Fix the view_map of sparse.Transpose and sparse.sandbox.sp.RowScale. (Frederic B.)
This probably didn't cause problems, as only the UsmmCscDense op
(used to call Usmm with a CSC matrix) could interfere with them.
Deprecation:
* Deprecated the Module class (Ian G.)
This was a predecessor of SharedVariable with a less pythonic philosophy.
Interface changes:
* Now the base version requirements are numpy >= 1.5.0 and the optional scipy >= 0.7.2.
* In Theano 0.5, we removed the deprecated sharedvar.value property.
Now we raise an error if you access it. (Frederic B.)
* theano.function does not accept duplicate inputs, so function([x, x], ...)
does not work anymore. (Pascal L.)
* theano.function now raises an error if some of the provided inputs are
not part of the computational graph needed to compute the output, for
instance, function([x, y], [y]). You can use the kwarg
``on_unused_input={'raise', 'warn', 'ignore'}`` to control this.
(Pascal L.)
* New Theano flag "on_unused_input" that defines the default value of the
previous point. (Frederic B.)
* tensor.alloc() now raises an error at graph build time
when we try to create fewer dimensions than the number of dimensions
the provided value has. In the past, the error was at run time.
(Frederic B.)
* Remove theano.Value and related stuff (Ian G.)
This was a test of what ended up as SharedVariable.
* Renamed Env to FunctionGraph, and object attribute "env" to "fgraph" (Ian G.)
Deprecation warning printed when you try to access the "env" attribute.
* Renamed the FunctionGraph.nodes attribute to FunctionGraph.apply_nodes (Ian G.)
* Warn when we don't correctly handle a parameter in the Theano flag `nvcc.flags`
(Frederic B.)
* Do not reorder the user flags passed to the compiler. They get set after other flags. (Frederic B.)
* Make setuptools optional (Ilan Schnell)
* We warn when a user tries to use an old GPU with which Theano is untested.
This could cause crashes and will also be very slow. (Frederic B.)
* Make theano.grad able to differentiate between not implemented, undefined and disconnected grads.
Op.grad functions should return theano.gradient.{grad_not_implemented,grad_undefined} or
something of DisconnectedType (Ian G.)
* Make theano.grad expect to always receive a float or undefined
gradient, and enforce that ops with integer output values always
return 0. (Ian G.)
New memory output contract (was mentioned in the release notes of Theano 0.5):
* Now the output memory received can be preallocated by other code.
In the past, it was always the previous output that an Apply node allocated.
This means the shape and strides can be different from previous calls,
and there can be links to this memory at other places.
It also means a node could receive preallocated output that is not c_contiguous,
but we don't do that now. (Pascal L.)
* New Theano flags to test this DebugMode.check_preallocated_output (Pascal L.)
* Updated a few ops to respect this contract (Pascal L.)
New Features:
* GPU scan now works (does not crash) when there is a mixture of float32 and other dtypes.
* theano_var.eval({other_var: val[,...]}) to simplify the usage of Theano (Ian G.) (see the sketch after this list)
* debugprint new param ids=["CHAR", "id", "int", ""]
This makes the printed identifier a unique char, the Python id, a
unique int, or omits it. We changed the default to "CHAR"
as this is more readable. (Frederic B.)
* debugprint new param stop_on_name=[False, True]. If True, we don't print
anything below an intermediate variable that has a name. Defaults to False.
(Frederic B.)
* debugprint no longer prints the "|" symbol in a column after the last input. (Frederic B.)
* If you use Enthought Python Distribution (EPD) now we use its blas
implementation by default. (Frederic B., Graham Taylor, Simon McGregor)
* MRG random now raises an error with a clear message when the passed shape
contains dimensions with a bad value like 0. (Frederic B., reported by Ian G.)
* "CudaNdarray[*] = ndarray" works in more cases (Frederic B.)
* "CudaNdarray[*] += ndarray" works in more cases (Frederic B.)
* We add dimensions to CudaNdarray to automatically broadcast more frequently.
(Frederic B.)
* New theano flag cmodule.warn_no_version. Default False. If True,
will print a warning when compiling one or more Op with C code that
can't be cached because there is no c_code_cache_version() function
associated to at least one of those Ops. (Frederic B.)
* CPU alloc now always generates C code (Pascal L.)
* C code reuses preallocated outputs (only done by Scan) (Pascal L.)
* Garbage collection of intermediate results during Theano function calls
for Ops with C code (Pascal L.)
* Theano flag compiledir_format now supports the parameter "numpy_version" and "g++". (Frederic B.)
* Theano GPU variables, shared variables and constants now support <, <=,
> and >=, similarly to those not on the GPU.
* AdvancedIncSubtensor now supports the set_instead_of_inc parameter. (Eric L.)
* Added Advanced Indexing support to inc_subtensor and set_subtensor. (Eric L.)
* theano.tensor.{any,all,std,var,mean,prod,sum,argmin,argmax,min,max,max_and_argmax}
have a new parameter, keepdims (Eric L.)
This allows broadcasting the result correctly against the input data, e.g. to normalize it.
* The Updates objects now check that the keys are SharedVariables when we pass them
to the __init__ function. (Pascal L.)
* Set a Theano Variable name on transposed ops when the input has one (Frederic B.)
* The cvm linker now supports garbage collection (enabled by default). (James B., Arnaud B., Pascal L.)
* The cvm linker is now the default linker.
This makes the "loop" around the execution of apply nodes run in C, which lowers the overhead.
* theano_variable[numpy.newaxis] is now supported (James B.)
* Enable ifelse on the GPU. (Frederic B.)
* Correctly support numpy.memmap everywhere (Pascal L.)
We had partial support for them before. Just use normal tensor operations
on them and it should work.
But be careful not to exhaust your computer's memory! (we always generate normal ndarrays)
* Add an optimization that stabilizes log(softmax(x)). (Ian G.)
* Re-enable the Images2Neibs grad. It was not broken, the problem was how we tested it. (Frederic B.)
* If `theano_fn.trust_input` is set to True, do not check if the inputs are good
when calling the theano function. (Frederic B.)
* Add theano.tensor.blas.gem{m,v} as shortcuts.
* theano.grad(..., add_names=True). False for the old
behavior. Otherwise it tries to name the grad variables. (Ian G.)
* theano-nose (Pascal L.)
A wrapper around nosetests that adds needed extensions.
* --profile-time option, to print time spent in each test (Eric L.)
* --batch option, to allow to run tests in batch to lower memory requirement.
* m = mean(log(1 - sigm(x)))
x - scalar * theano.grad(m, x)
There is a stabilization optimization for this.
Now it is applied more frequently. (Pascal L.)
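The theano_var.eval shortcut from the list above, as a quick sketch; it compiles (and caches) a function behind the scenes:

    # Sketch only: .eval() as a shortcut around theano.function.
    import theano.tensor as T

    x = T.dscalar('x')
    y = T.dscalar('y')
    print((x + y).eval({x: 1.0, y: 2.5}))  # 3.5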
New Op/functions:
* Added element-wise operation theano.tensor.{GammaLn,Psi} (John Salvatier, Nicolas Bouchard)
* Added element-wise operation theano.tensor.{arcsin,arctan,arccosh,arcsinh,arctanh,exp2,arctan2} (Nicolas Bouchard)
* Added element-wise operations theano.tensor.{gamma,conj,complex_from_polar,expm1,deg2rad,rad2deg,trunc} (Nicolas Bouchard)
* Added theano.tensor.argsort that wraps numpy.argsort (Hani Almousli).
* Added theano.tensor.diff that wraps numpy.diff (Nicolas B.)
* Added theano.tensor.bincount that wraps numpy.bincount (Nicolas B., Pascal L, Frederic B.)
* Added theano.tensor.squeeze (Nicolas B.) (see the sketch after this list)
This removes broadcasted dimensions from the variable.
Theano-esque version of numpy.squeeze.
* Added theano.tensor.repeat that wraps numpy.repeat (Nicolas B., Pascal L.)
* Added theano.tensor.bartlett that wraps numpy.bartlett (Eric L.)
* Added theano.tensor.fill_diagonal that wraps numpy.fill_diagonal (Eric L., Frederic B.)
* Added tensor.square as an alias for tensor.sqr, as in NumPy (Ian G.)
* Added the theano.tensor.load(path, dtype, broadcastable, mmap_mode=None) op,
which allows loading a .npy file in a Theano graph (Matthew Rocklin)
* theano.sandbox.linalg.kron.py:Kron op. (Eric L.)
Kronecker product
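A short sketch of a few of the NumPy wrappers listed above:

    # Sketch only: new NumPy-wrapping ops.
    import theano.tensor as T

    v = T.vector('v')
    s = T.argsort(v)    # wraps numpy.argsort
    r = T.repeat(v, 3)  # wraps numpy.repeat
    x = T.row('x')      # first dimension is broadcastable
    q = T.squeeze(x)    # drops the broadcastable dimension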
Speed up:
* CPU convolutions are now parallelized (Frederic B.)
By default, all cores/hyper-threads are used.
To control it, use the `OMP_NUM_THREADS=N` environment variable, where N is the number of
parallel threads to use. By default it is equal to the number of CPU cores/hyper-threads
that you have.
There is a new Theano flag `openmp` to allow/disallow openmp ops.
If your BLAS library is parallelized, this flag won't affect it, but the
env variable will.
* Remove a corner case causing duplicated dot22/gemm in the graph. (Frederic B., Ian G.)
* Enable fusion of elemwise ops that have the same clients multiple times. (Frederic B.)
* New optimization: Remove reduction over broadcastable dimensions (James B., Frederic B.)
* Faster theano.function compilation. (Pascal L., Ian G.)
* Remove GPU transfer around specify_shape op. (Frederic B.)
* Implemented/tested MANY op.infer_shape methods (Eric Larsen)
This allows Theano to make better shape inference.
* Implement Solve.infer_shape (Matthew Rocklin)
* Scan memory optimizations now work more frequently. (Razvan P.)
There was a warning printed by the subtensor optimization in those cases.
* Faster rng_mrg Python code. (mostly used for tests) (Frederic B.)
Speed up GPU:
* Convolution on the GPU now checks the generation of the card to make
it faster in some cases (especially medium/big output images) (Frederic B.)
* We had hardcoded 512 as the maximum number of threads per block. Newer cards
support up to 1024 threads per block.
* Faster GpuAdvancedSubtensor1, GpuSubtensor, GpuAlloc (Frederic B.)
* We now pass the GPU architecture to nvcc when compiling (Frederic B.)
* Now we use the GPU function async feature by default. (Frederic B.)
Set the environment variable `CUDA_LAUNCH_BLOCKING` to `1` to disable this
for profiling or debugging.
* Faster creation of CudaNdarray objects (Frederic B.)
* Now some Max reductions are implemented on the GPU. (Ian G.)
Sparse Sandbox graduate (moved from theano.sparse.sandbox.sp):
* sparse.remove0 (Frederic B., Nicolas B.)
* sparse.sp_sum(a, axis=None) (Nicolas B.)
* bugfix: the non-structured grad was returning a structured grad.
* sparse.{col_scale,row_scale,ensure_sorted_indices,clean} (Nicolas B.)
* sparse.{diag,square_diagonal} (Nicolas B.)
Sparse:
* Support for uint* dtypes.
* Implement theano.sparse.mul(sparse1, sparse2) when both inputs don't
have the same sparsity pattern. (Frederic B.)
* New Ops: sparse.{expm1,deg2rad,rad2deg,trunc} (Nicolas B.)
* New Ops: sparse.{sqrt,sqr,log1p,floor,ceil,sgn,round_half_to_even} (Nicolas B.)
* New Ops: sparse.{arctanh,tanh,arcsinh,sinh,arctan,arcsin,tan,sin} (Nicolas B.)
* New functions: structured_{add,exp,log,pow,minimum,maximum,sigmoid} (Yann D., Nicolas B.)
* Optimized ops: StructuredAddSV, StructuredAddSVCSR (inserted automatically)
* New Op: sparse.mul_s_v multiplication of sparse matrix by broadcasted vector (Yann D.)
* New Op: sparse.Cast() (Yann D., Nicolas B.)
* Add sparse_variable.astype() and theano.sparse.cast() and
theano.sparse.{b,w,i,l,f,d,c,z}cast(), as their tensor equivalents (Nicolas B.) (see the sketch after this list)
* Op class: SamplingDot (Yann D., Nicolas B.)
* Optimized version: SamplingDotCsr, StructuredDotCSC
* Optimizations to insert the optimized version: local_sampling_dot_csr, local_structured_add_s_v
* New Ops: sparse.{Multinomial,Poisson,Binomial} (Yann D., Nicolas B.)
* Implement the CSMProperties grad method (Yann Dauphin)
* Move optimizations to theano/sparse/opt.py (Nicolas B.)
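A hedged sketch of the sparse additions above; it assumes sparse.csr_matrix builds a symbolic CSR variable like its tensor counterparts, and that the structured_* functions apply only to the nonzero entries (consistent with the names):

    # Sketch only: symbolic sparse variables and the new ops.
    import theano.sparse as sparse

    x = sparse.csr_matrix('x')
    y = sparse.cast(x, 'float32')  # sparse counterpart of tensor cast
    z = sparse.structured_exp(x)   # exp on the nonzero entries only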
New flags:
* `profile=True` flag now prints the sum of all printed profiles. (Frederic B.)
* It works with the linkers vm/cvm (default).
* Also print compile time, optimizer time and linker time.
* Also print a summary by op class.
* new flag "profile_optimizer" (Frederic B.)
when profile=True, will also print the time spent in each optimizer.
Useful to find optimization bottlenecks.
* new flag "cmodule.remove_gxx_opt" (Frederic B.)
If True, will remove the -O* parameters passed to g++.
This is useful for debugging modules compiled by Theano in gdb.
The parameter -g is passed by default to g++.
* new flag cmodule.compilation_warning
if True, will print compilation warnings.
* new flag `allow_gc` (Frederic B.)
When False, do not garbage collect intermediate results when they are no longer needed.
This uses more memory, but allocates memory less frequently, so it is faster (see the command-line sketch after this list).
* new flag `vm.lazy` (Frederic B.)
Useful only for the vm linkers. When lazy is None,
auto-detect if lazy evaluation is needed and use the appropriate
version. If lazy is True/False, force the version used between
Loop/LoopGC and Stack.
* new flag `cxx`. This is the C++ compiler to use. If empty do not compile C code. (Frederic B.)
* New flag `print_active_device` that defaults to True. (Matthew R.)
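A hedged one-line example of combining some of the flags above on the command line (my_script.py is a placeholder):

    THEANO_FLAGS=profile=True,profile_optimizer=True,allow_gc=False python my_script.py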
Documentation:
* Added in the tutorial documentation on how to extend Theano.
This explains how to make a Theano Op from a Python function.
http://deeplearning.net/software/theano/tutorial/extending_theano.html
(Frederic B.)
* New installation instructions for Windows using EPD (Pascal L.)
* New installation on Windows by using a Linux VM from ContinuumIO (Frederic B.)
* Revisions of Theano tutorial and addition of exercises to it. (Eric L.)
* New tutorial on Sparse variables. (Nicolas B., Sebastien Lemieux, Frederic Bastien)
http://www.deeplearning.net/software/theano/tutorial/sparse.html
* Installation documentation for CentOS6 (Frederic B.)
* Installation documentation for Ubuntu (with GPU) (Frederic B., Matthias Zoehrer)
* Doc typo fixes, Doc updates, Better error messages: Olivier D., David W.F., Frederic B., James B., Matthew Rocklin, Ian G.
* Python Memory Management tutorial (Steven Pigeon, Olivier D.)
Proposal:
* Math framework for complex gradients (Pascal L.)
Internal changes:
* Define new exceptions MissingInputError and UnusedInputError, and use them
in theano.function, instead of TypeError and ValueError. (Pascal L.)
* Better handling of bitwidth and max values of integers and pointers
across platforms (Pascal L.)
* Made a few Ops with C code versioned to reduce compilation time.
(Frederic B, Pascal L.)
* Better deletion of files in the compiledir (Frederic B.)
* Safer import on sort op (Nicolas Pinto)
* hash_from_dict for elemwise ops (Frederic B.)
* Renamed BadCLinkerOutput into BadThunkOutput. (PL)
* tensor.utils.shape_of_variables (Matthew R.)
* Add the numpy abi version and g++/nvcc version in the key of compiled code. (Frederic B.)
* env.replace_all_validate_remove (Frederic B.)
This allows global optimizer to ensure it removed some nodes from the graph.
This is a generic way to catch errors that would otherwise duplicate
computation.
* It was used for GEMM and Scan optimization (Frederic B., Razvan P.)
* Fix how exception are raised in GPU code (James B.)
* Made code respect pep8: Olivier D., Frederic B., Pascal L., Nicolas Bouchard, Eric Larsen and others.
* TensorType and CudaNdarrayType now have a value_zeros method that calls CudaNdarray.zeros or
numpy.zeros with the right dtype. (Pascal L., Olivier D.)
This allows the same code to work with both types.
* Renamed FunctionGraph.extend function to FunctionGraph.attach_feature. (Ian G.)
* New exception MissingGXX when we try to compile but there is no cxx compiler. (Frederic B.)
* New fct theano.gof.utils.give_variables_names(...) that gives unique names to variables. (Matthew R.)
* Use the new NumPy C-API most of the time, for later NumPy releases. (Frederic B.)
* New theano.gof.sched.sort_apply_nodes() that will allow other execution ordering. (Matthew R.)
* New attribute sort_schedule_fn, a way to specify a scheduler to use. (Matthew R.)
Crash Fix:
* Fix import conflict name (usaar33, Frederic B.)
* This makes Theano work with PiCloud.
* Do not try to use the BLAS library when blas.ldflags is manually set to an
empty string (Frederic B., Pascal L.)
* When importing theano on a computer without GPU with the Theano
flags 'device' or 'init_gpu_device' set to gpu* (Frederic B., reported by Luo Heng)
* Optimization printed a useless error when scipy was not available. (Frederic B.)
* GPU conv crash/slowdown on newer hardware (James B.)
* Better error handling in GPU conv (Frederic B.)
* GPU optimization that moves element-wise Ops to the GPU. Crash happened in
a particular execution order of this optimization and the
element-wise fusion optimization when upcasting some inputs to
float32 (to compute them on the GPU).
(Frederic B., reported by Sander Dieleman)
* GpuReshape in some particular case when the input is not contiguous
(Frederic B., reported by Sander Dieleman)
* GpuSoftmaxWithBias with shape (0, N) with N > 1.
(Frederic B., reported by Razvan P.)
* Fix crash under 64-bit Windows, when taking subtensors of the form a[n:]
(Pascal L., reported by Simon McGregor)
* Fixed issue with the MaxAndArgmax Op not properly preserving broadcastable
dimensions, which could typically result in optimization crashes (Olivier D.)
* Fixed crash when concatenating some arrays with specific broadcasting
patterns (Olivier D.)
* Work around a known issue with nvcc 4.1 on MacOS X. (Graham Taylor)
* In advanced indexing, if some inputs are constant, no need to call constant(...)
on their value any more. (Pascal L., reported by John Salvatier)
* Fix crash on the GPU when GpuSubtensor didn't set the right stride
when the result tensor had a dimension with size of 1. (Pascal L.,
reported by Graham T.)
* Fix scan crash that made it not run on the GPU in one case. (Guillaume D.)
* Don't crash when taking the grad of a random state again (Razvan P.)
* GpuDownsampleFactorMax and its grad with input dimensions 0 and 1 bigger than 65535.
(Frederic B., reported by Gabe Schwartz)
* Potential crash due to parallel compilation when importing theano.sandbox.cuda
(Olivier D.)
* Crash fix on python 2.4 with slicing. (Pascal L.)
* grad of argmin and argmax (Razvan P.)
* Don't compute the Rop for shared variables with updates (mostly random).
We don't use them and they caused crashes. (Razvan P.)
* MaxArgmax.grad() when one of the gradients it receives is None. (Razvan P., reported by Mark Fenner)
* Fix crash of GpuSum when some dimensions shape was 0. (Frederic B.)
Tests:
* Use less memory (Olivier D.) (fix crash on 32-bit computers)
* Fix test with Theano flag "blas.ldflags=". (Frederic B., Pascal L.)
* Fix crash with advanced subtensor and numpy constant.
* Fix random test crashes due to random values. (Pascal L.)
* Always introduce Alloc node when calling alloc and let the optimizer remove them if needed.
This allows DebugMode to catch some shape error. (Pascal L.)
* DebugMode now checks the view_map for all types of Theano variables.
It previously checked only variables of tensor type. (Frederic B.)
Others:
* Remove python warning for some python version. (Gabe Schwartz)
* Remove useless fill ops in fast_compile mode to make the graph more readable. (Frederic B.)
* Remove GpuOuter as it is a subset of the new GpuGer (Frederic B.)
* Now we use http://travis-ci.org/ to run all CPU tests (without SciPy)
with the default mode on all Pull Requests.
This should make the trunk more stable. (Frederic B.)
* Our nightly buildbot now checks on python 2.4 (Frederic B.)
This should make the trunk work on it more frequently.
Other thanks:
* blaxill reported an error introduced into the trunk.
New stuff that will probably be reworked/removed before the release:
* Better PyCUDA sharing of the GPU context. (fix crash at exit) (Frederic B.)
TODO: there is still a crash at exit!
......@@ -1001,6 +1001,10 @@ instructions in :ref:`windows_bleeding_edge`.
Windows installer for AnacondaCE
################################
.. note::
This doesn't work with current Anaconda. Help is needed to repair this.
If you installed AnacondaCE, the simplest way to install and configure
Theano is to download and execute this `Windows installer
for Theano on AnacondaCE for Windows
......
......@@ -201,7 +201,10 @@ def shared(value, name=None, strict=False, allow_downcast=None, **kwargs):
' using \'theano.shared(..., borrow=True)\'',)
raise
raise TypeError('No suitable SharedVariable constructor could be found',
raise TypeError('No suitable SharedVariable constructor could be found.'
' Are you sure all kwargs are supported?'
' We do not support the parameter dtype or type.'
' value="%s". parameters="%s"' %
(value, kwargs))
shared.constructors = []
......
......@@ -17,7 +17,7 @@ class T_OpFromGraph(unittest.TestCase):
x, y, z = T.matrices('xyz')
e = x + y * z
op = OpFromGraph([x, y, z], [e], mode='FAST_RUN')
f = op(x, y, z) - op(y, z, x) #(1+3*5=array of 16) - (3+1*5=array of 8)
f = op(x, y, z) - op(y, z, x) # (1+3*5=array of 16) - (3+1*5=array of 8)
fn = function([x, y, z], f)
xv = numpy.ones((2, 2), dtype=config.floatX)
yv = numpy.ones((2, 2), dtype=config.floatX)*3
......@@ -47,7 +47,7 @@ class T_OpFromGraph(unittest.TestCase):
def test_grad(self):
x, y, z = T.matrices('xyz')
e = x + y * z
op = OpFromGraph([x, y, z], [e], mode='FAST_RUN', grad_depth = 2)
op = OpFromGraph([x, y, z], [e], mode='FAST_RUN', grad_depth=2)
f = op(x, y, z)
f = f - T.grad(T.sum(f), y)
fn = function([x, y, z], f)
......
......@@ -322,6 +322,11 @@ class Chi2SF(BinaryScalarOp):
"""
Compute (1 - chi2_cdf(x))
i.e. chi2 p-value (chi2 'survival function')
C code is provided in the Theano_lgpl repository.
This makes it faster.
https://github.com/Theano/Theano_lgpl.git
"""
@staticmethod
......@@ -334,269 +339,6 @@ class Chi2SF(BinaryScalarOp):
else:
super(Chi2SF, self).impl(x, k)
def c_support_code(self):
return(
"""
//For GPU support
#ifdef __CUDACC__
#define DEVICE __device__
#else
#define DEVICE
#endif
#ifndef _CHI2FUNCDEFINED
#define _CHI2FUNCDEFINED
/*----------------------------------------------------------------------
File : gamma.c
Contents: computation of the (incomplete/regularized) gamma function
Author : Christian Borgelt
History : 2002.07.04 file created
2003.05.19 incomplete Gamma function added
2008.03.14 more incomplete Gamma functions added
2008.03.15 table of factorials and logarithms added
2008.03.17 gamma distribution functions added
----------------------------------------------------------------------*/
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <float.h>
#include <math.h>
/*----------------------------------------------------------------------
Preprocessor Definitions
----------------------------------------------------------------------*/
#define LN_BASE 2.71828182845904523536028747135 /* e */
#define SQRT_PI 1.77245385090551602729816748334 /* \sqrt(\pi) */
#define LN_PI 1.14472988584940017414342735135 /* \ln(\pi) */
#define LN_SQRT_2PI 0.918938533204672741780329736406
/* \ln(\sqrt(2\pi)) */
#define EPSILON 2.2204460492503131e-16
#define EPS_QTL 1.4901161193847656e-08
#define MAXFACT 170
#define MAXITER 1024
#define TINY (EPSILON *EPSILON *EPSILON)
/*----------------------------------------------------------------------
Table of Factorials/Gamma Values
----------------------------------------------------------------------*/
DEVICE static double _facts[MAXFACT+1] = { 0 };
DEVICE static double _logfs[MAXFACT+1];
DEVICE static double _halfs[MAXFACT+1];
DEVICE static double _loghs[MAXFACT+1];
/*----------------------------------------------------------------------
Functions
----------------------------------------------------------------------*/
DEVICE static void _init (void)
{ /* --- init. factorial tables */
int i; /* loop variable */
double x = 1; /* factorial */
_facts[0] = _facts[1] = 1; /* store factorials for 0 and 1 */
_logfs[0] = _logfs[1] = 0; /* and their logarithms */
for (i = 1; ++i <= MAXFACT; ) {
_facts[i] = x *= i; /* initialize the factorial table */
_logfs[i] = log(x); /* and the table of their logarithms */
}
_halfs[0] = x = SQRT_PI; /* store Gamma(0.5) */
_loghs[0] = 0.5*LN_PI; /* and its logarithm */
for (i = 0; ++i < MAXFACT; ) {
_halfs[i] = x *= i-0.5; /* initialize the table for */
_loghs[i] = log(x); /* the Gamma function of half numbers */
} /* and the table of their logarithms */
} /* _init() */
/*--------------------------------------------------------------------*/
#if 0
double logGamma (double n)
{ /* --- compute ln(Gamma(n)) */
double s; /* = ln((n-1)!), n \in IN */
assert(n > 0); /* check the function argument */
if (_facts[0] <= 0) _init(); /* initialize the tables */
if (n < MAXFACT +1 +4 *EPSILON) {
if (fabs( n -floor( n)) < 4 *EPSILON)
return _logfs[(int)floor(n)-1];
if (fabs(2*n -floor(2*n)) < 4 *EPSILON)
return _loghs[(int)floor(n)];
} /* try to get the value from a table */
s = 1.000000000190015 /* otherwise compute it */
+ 76.18009172947146 /(n+1)
- 86.50532032941677 /(n+2)
+ 24.01409824083091 /(n+3)
- 1.231739572450155 /(n+4)
+ 0.1208650972866179e-2 /(n+5)
- 0.5395239384953e-5 /(n+6);
return (n+0.5) *log((n+5.5)/LN_BASE) +(LN_SQRT_2PI +log(s/n) -5.0);
} /* logGamma() */
#else /*--------------------------------------------------------------*/
DEVICE double logGamma (double n)
{ /* --- compute ln(Gamma(n)) */
double s; /* = ln((n-1)!), n \in IN */
assert(n > 0); /* check the function argument */
if (_facts[0] <= 0) _init(); /* initialize the tables */
if (n < MAXFACT +1 +4 *EPSILON) {
if (fabs( n -floor( n)) < 4 *EPSILON)
return _logfs[(int)floor(n)-1];
if (fabs(2*n -floor(2*n)) < 4 *EPSILON)
return _loghs[(int)floor(n)];
} /* try to get the value from a table */
s = 0.99999999999980993227684700473478 /* otherwise compute it */
+ 676.520368121885098567009190444019 /(n+1)
- 1259.13921672240287047156078755283 /(n+2)
+ 771.3234287776530788486528258894 /(n+3)
- 176.61502916214059906584551354 /(n+4)
+ 12.507343278686904814458936853 /(n+5)
- 0.13857109526572011689554707 /(n+6)
+ 9.984369578019570859563e-6 /(n+7)
+ 1.50563273514931155834e-7 /(n+8);
return (n+0.5) *log((n+7.5)/LN_BASE) +(LN_SQRT_2PI +log(s/n) -7.0);
} /* logGamma() */
#endif
/*----------------------------------------------------------------------
Use Lanczos' approximation
\Gamma(n+1) = (n+\gamma+0.5)^(n+0.5)
* e^{-(n+\gamma+0.5)}
* \sqrt{2\pi}
* (c_0 +c_1/(n+1) +c_2/(n+2) +...+c_n/(n+k) +\epsilon)
and exploit the recursion \Gamma(n+1) = n *\Gamma(n) once,
i.e., compute \Gamma(n) as \Gamma(n+1) /n.
For the choices \gamma = 5, k = 6, and c_0 to c_6 as defined
in the first version, it is |\epsilon| < 2e-10 for all n > 0.
Source: W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery
Numerical Recipes in C - The Art of Scientific Computing
Cambridge University Press, Cambridge, United Kingdom 1992
pp. 213-214
For the choices gamma = 7, k = 8, and c_0 to c_8 as defined
in the second version, the value is slightly more accurate.
----------------------------------------------------------------------*/
DEVICE double Gamma (double n)
{ /* --- compute Gamma(n) = (n-1)! */
assert(n > 0); /* check the function argument */
if (_facts[0] <= 0) _init(); /* initialize the tables */
if (n < MAXFACT +1 +4 *EPSILON) {
if (fabs( n -floor( n)) < 4 *EPSILON)
return _facts[(int)floor(n)-1];
if (fabs(2*n -floor(2*n)) < 4 *EPSILON)
return _halfs[(int)floor(n)];
} /* try to get the value from a table */
return exp(logGamma(n)); /* compute through natural logarithm */
} /* Gamma() */
/*--------------------------------------------------------------------*/
DEVICE static double _series (double n, double x)
{ /* --- series approximation */
int i; /* loop variable */
double t, sum; /* buffers */
sum = t = 1/n; /* compute initial values */
for (i = MAXITER; --i >= 0; ) {
sum += t *= x/++n; /* add one term of the series */
if (fabs(t) < fabs(sum) *EPSILON) break;
} /* if term is small enough, abort */
return sum; /* return the computed factor */
} /* _series() */
/*----------------------------------------------------------------------
series approximation:
P(a,x) = \gamma(a,x)/\Gamma(a)
\gamma(a,x) = e^-x x^a \sum_{n=0}^\infty (\Gamma(a)/\Gamma(a+1+n)) x^n
Source: W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery
Numerical Recipes in C - The Art of Scientific Computing
Cambridge University Press, Cambridge, United Kingdom 1992
formula: pp. 216-219
The factor exp(n *log(x) -x) is added in the functions below.
----------------------------------------------------------------------*/
DEVICE static double _cfrac (double n, double x)
{ /* --- continued fraction approx. */
int i; /* loop variable */
double a, b, c, d, e, f; /* buffers */
b = x+1-n; c = 1/TINY; f = d = 1/b;
for (i = 1; i < MAXITER; i++) {
a = i*(n-i); /* use Lentz's algorithm to compute */
d = a *d +(b += 2); /* consecutive approximations */
if (fabs(d) < TINY) d = TINY;
c = b +a/c;
if (fabs(c) < TINY) c = TINY;
d = 1/d; f *= e = d *c;
if (fabs(e-1) < EPSILON) break;
} /* if factor is small enough, abort */
return f; /* return the computed factor */
} /* _cfrac() */
/*----------------------------------------------------------------------
continued fraction approximation:
P(a,x) = 1 -\Gamma(a,x)/\Gamma(a)
\Gamma(a,x) = e^-x x^a (1/(x+1-a- 1(1-a)/(x+3-a- 2*(2-a)/(x+5-a- ...))))
Source: W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery
Numerical Recipes in C - The Art of Scientific Computing
Cambridge University Press, Cambridge, United Kingdom 1992
formula: pp. 216-219
Lentz's algorithm: p. 171
The factor exp(n *log(x) -x) is added in the functions below.
----------------------------------------------------------------------*/
DEVICE double lowerGamma (double n, double x)
{ /* --- lower incomplete Gamma fn. */
assert((n > 0) && (x > 0)); /* check the function arguments */
return _series(n, x) *exp(n *log(x) -x);
} /* lowerGamma() */
/*--------------------------------------------------------------------*/
DEVICE double upperGamma (double n, double x)
{ /* --- upper incomplete Gamma fn. */
assert((n > 0) && (x > 0)); /* check the function arguments */
return _cfrac(n, x) *exp(n *log(x) -x);
} /* upperGamma() */
/*--------------------------------------------------------------------*/
DEVICE double GammaP (double n, double x)
{ /* --- regularized Gamma function P */
assert((n > 0) && (x >= 0)); /* check the function arguments */
if (x <= 0) return 0; /* treat x = 0 as a special case */
if (x < n+1) return _series(n, x) *exp(n *log(x) -x -logGamma(n));
return 1 -_cfrac(n, x) *exp(n *log(x) -x -logGamma(n));
} /* GammaP() */
// ebuchman: this function is equivalent to scipy.stats.chi2.sf;
// it is the p-value (survival function) of a chi2 distribution
DEVICE double Chi2SF (double k, double x)
{
return 1 - GammaP(k/2., x/2.);
}
#endif
""")
def c_code(self, node, name, inp, out, sub):
x, k = inp
z, = out
if node.inputs[0].type in float_types:
dtype = 'npy_' + node.outputs[0].dtype
return """%(z)s =
(%(dtype)s)Chi2SF(%(k)s, %(x)s);""" % locals()
raise NotImplementedError('only floating point is implemented')
def __eq__(self, other):
return type(self) == type(other)
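As the ebuchman comment notes, Chi2SF(k, x) equals scipy.stats.chi2.sf(x, k); both are the regularized upper incomplete gamma function Q(k/2, x/2) = 1 - GammaP(k/2, x/2). A quick cross-check, assuming SciPy is installed:

    import numpy
    from scipy import special, stats

    x, k = 7.5, 3.0
    q = special.gammaincc(k / 2.0, x / 2.0)   # Q(k/2, x/2) = 1 - GammaP(k/2, x/2)
    assert numpy.allclose(q, stats.chi2.sf(x, k))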
......
......@@ -128,10 +128,15 @@ import logging
import os
import sys
import time
import warnings
import numpy
import numpy.distutils
import numpy.distutils.system_info
try:
import numpy.distutils.__config__
except ImportError:
pass
from theano.configparser import config, AddConfigVar, StrParam
from theano.gof import (utils, Op, view_roots, DestroyHandler,
......@@ -156,8 +161,26 @@ _logger = logging.getLogger('theano.tensor.blas')
# Otherwise, we give an optimization warning for no reason in some cases.
def default_blas_ldflags():
try:
if hasattr(numpy.distutils, '__config__'):
# If the old private interface is available, use it, as it
# doesn't print information to the user.
blas_info = numpy.distutils.__config__.blas_opt_info
else:
# We need to catch warnings, as in some cases NumPy prints
# messages that we don't want the user to see, like this:
"""
SOMEPATH/Canopy_64bit/User/lib/python2.7/site-packages/numpy/distutils/system_info.py:564: UserWarning: Specified path /home/vagrant/src/master-env/lib is invalid.
warnings.warn('Specified path %s is invalid.' % d)
"""
# Even so, not all of the printed output can be suppressed.
with_context = warnings.catch_warnings(record=True)
with_context.__enter__()
try:
blas_info = numpy.distutils.system_info.get_info("blas_opt")
finally:
with_context.__exit__(None, None, None)
# If we are in an EPD installation, mkl is available
if "EPD" in sys.version:
use_unix_epd = True
if sys.platform == 'win32':
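The explicit __enter__/__exit__ calls above behave the same as a with block; a sketch of the equivalent with form:

    import warnings
    import numpy.distutils.system_info

    with warnings.catch_warnings(record=True):
        # record=True collects the warnings in a list instead of printing them
        blas_info = numpy.distutils.system_info.get_info("blas_opt")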
......
......@@ -1075,10 +1075,12 @@ class ShapeFeature(object):
# an element of o_shapes is either None or a tuple
# elements of the tuple can be either strings, or ints
if len(o_shapes) != len(node.outputs):
raise Exception('len(o_shapes) = '
+ str(len(o_shapes))
+ ' != len(node.outputs) = '
+ str(len(node.outputs)))
raise Exception(
    'The infer_shape method for the Op "%s" returned a list '
    'with the wrong number of elements: len(o_shapes) = %d '
    '!= len(node.outputs) = %d' % (str(node.op),
                                   len(o_shapes),
                                   len(node.outputs)))
# Ensure shapes are in 'int64'. This is to make sure the assert
# found in the `local_useless_subtensor` optimization does not fail.
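For context, the contract this check enforces: an Op's infer_shape must return a list with exactly one shape tuple per output. A minimal hypothetical sketch:

    # Method of a hypothetical single-output Op (illustration only):
    def infer_shape(self, node, input_shapes):
        # must return one shape tuple per output of the node;
        # here the single output is shaped like the first input
        return [input_shapes[0]]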
......
......@@ -188,7 +188,11 @@ def safe_make_node(op, *inputs):
def makeTester(name, op, expected, checks=None, good=None, bad_build=None,
bad_runtime=None, grad=None, mode=None, grad_rtol=None,
eps=1e-10, skip=False, test_memmap=True):
eps=1e-10, skip=False, test_memmap=True, check_name=True):
"""
:param check_name:
Set to False only for testers that are not defined inside
Theano itself; it skips the test-name verification in setUp.
"""
if checks is None:
checks = {}
if good is None:
......@@ -206,12 +210,14 @@ def makeTester(name, op, expected, checks=None, good=None, bad_build=None,
_bad_build, _bad_runtime, _grad = bad_build, bad_runtime, grad
_mode, _grad_rtol, _eps, skip_ = mode, grad_rtol, eps, skip
_test_memmap = test_memmap
_check_name = check_name
class Checker(unittest.TestCase):
op = staticmethod(_op)
expected = staticmethod(_expected)
checks = _checks
check_name = _check_name
good = _good
bad_build = _bad_build
bad_runtime = _bad_runtime
......@@ -223,7 +229,8 @@ def makeTester(name, op, expected, checks=None, good=None, bad_build=None,
def setUp(self):
# Verify that the test's name is correctly set.
# Some tests reuse it outside this module.
eval(self.__class__.__module__ + '.' + self.__class__.__name__)
if self.check_name:
eval(self.__class__.__module__ + '.' + self.__class__.__name__)
# We keep a list of temporary files created in add_memmap_values,
# to remove them at the end of the test.
......
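With the new flag, a tester defined outside Theano's own test modules can opt out of the name check in setUp (which eval()s the module-qualified class name and would fail for external modules). A hypothetical usage sketch, with my_op and my_ref standing in for a real Op and its reference implementation:

    # Hypothetical usage -- my_op, my_ref and the test data are illustrative:
    MyOpTester = makeTester(
        name='MyOpTester',
        op=my_op,                      # the Op under test (assumed defined)
        expected=my_ref,               # reference implementation
        good={'basic': [numpy.ones((2, 2), dtype=config.floatX)]},
        check_name=False)              # skip the eval()-based name check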