Commit b3dfea4c authored by nouiz

Merge pull request #1227 from nouiz/fix_test

Fix test and update NEWS.txt
......@@ -7,16 +7,16 @@ Release Notes
Theano in the development version since 0.6rc2
==============================================
up to merged PR gh-1225
Highlights:
* Speed-ups.
* Crash fixes.
* A few small interface changes.
* GPU memory leak fix.
* A few corner case fixes without user impact.
* More Theano determinism
* tensor.{dot,tensordot} more complete/faster/GPU friendly.
* tensor.tensordot now supports Rop/Lop
* tensor.dot supports n-dimensional inputs, as NumPy does
* To support more NumPy syntax:
......@@ -24,9 +24,24 @@ Highlights:
* Add a_tensor_variable.{sort,dot,std,argmin,argmax,argsort,clip,conj,conjugate,repeat,round,trace,real,imag,take}
Committers for this rc2 only:
Frederic Bastien
Ian Goodfellow
Pascal Lamblin
Jeremiah Lowin
abalkin
Olivier Delalleau
Razvan Pascanu
Rami Al-Rfou
Vivek Kulkarni
Guillaume Desjardins
David Warde-Farley
Eric Hunsberger
Amir Elaguizy
James Bergstra
Bug fix:
* Fix memory leak on the GPU in some corner cases with the Theano flag `allow_gc=False`. (Frederic B., reported by Jonas Gehring)
* Fix copy of random state between graphs. (Guillaume D.)
http://deeplearning.net/software/theano/tutorial/examples.html#copying-random-state-between-theano-graphs
* Fix wrong dtype in sandbox.linalg.ExtractDiag with shape of 0. (Frederic B., reported by abalkin)
......@@ -38,10 +53,9 @@ Bug fix:
New Features:
* More Theano determinism (Ian G., Olivier D., Pascal L.)
* Add and use a new class OrderedSet.
* theano.grad is now deterministic.
* Warn when the user uses a dictionary where this causes non-determinism in Theano.
* The Updates class was non-deterministic; it was replaced with the OrderedUpdates class.
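The ordered-container idea behind OrderedSet/OrderedUpdates can be sketched in plain Python. This is an illustrative toy, not Theano's actual implementation; `OrderedSetSketch` is a hypothetical name:

```python
from collections import OrderedDict

class OrderedSetSketch:
    """Toy set that preserves insertion order, so iteration is
    deterministic across runs -- the property the Theano classes
    above are meant to guarantee."""
    def __init__(self, iterable=()):
        self._data = OrderedDict((item, None) for item in iterable)

    def add(self, item):
        # setdefault keeps the original position if the item exists
        self._data.setdefault(item, None)

    def __contains__(self, item):
        return item in self._data

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

s = OrderedSetSketch(["w2", "w1"])
s.add("b")
s.add("w1")  # already present: its position is unchanged
```

Iterating over `s` always yields `["w2", "w1", "b"]`, whereas a plain set gives no ordering guarantee, which is exactly the source of non-determinism described above.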
* tensor.tensordot now supports Rop/Lop (Jeremiah Lowin)
This removes the classes TensorDot and TensorDotGrad; the Dot/Elemwise ops are used instead.
* tensor.dot supports n-dimensional inputs, as NumPy does (Jeremiah Lowin)
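The NumPy semantics these changes follow can be checked directly (a NumPy illustration, not Theano code): n-dimensional dot contracts the last axis of the first argument with the first axis of a 2-d second argument, which is equivalent to a reshape, a matrix dot, and a reshape back — which is why plain Dot ops suffice to express tensordot:

```python
import numpy as np

a = np.arange(24.0).reshape(2, 3, 4)
b = np.arange(20.0).reshape(4, 5)

# n-dimensional dot: contract a's last axis with b's first axis.
nd_dot = np.dot(a, b)  # shape (2, 3, 5)

# The same contraction as reshape -> matrix dot -> reshape back.
via_matmul = a.reshape(6, 4).dot(b).reshape(2, 3, 5)

# tensordot with axes=1 computes the same contraction.
via_tensordot = np.tensordot(a, b, axes=1)
```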
......@@ -52,22 +66,23 @@ New Features:
* Make Theano work with Anaconda on Windows. (Pascal L.)
* Add tensor_var.diagonal and theano.tensor.{diag,diagonal}. (abalkin)
* AdvancedSubtensor1 can now have a sparse gradient. (Rami Al-Rfou', Vivek Kulkarni)
* Implemented GpuContiguous.grad. (Ian G.)
Interface Deprecation (a warning is printed):
* theano.misc.strutil.renderString -> render_string (Ian G.)
* Print a warning when using a dictionary where this makes Theano non-deterministic.
Interface Change:
* Raise an error when theano.shared is called with a Theano variable. (Frederic B.)
* Don't print warnings for bugs from before Theano 0.5 by default. (Frederic B.)
* Theano functions now always have a name field, defaulting to None. (Frederic B.)
* The Theano function's fct.fgraph has a copy of the Theano function's name field. (Ian G.)
This is needed to allow the fgraph to know it.
* In the grad method, when asked to raise an error if there is no path between the variables, we did not always raise one. (Ian G.)
We returned the mathematically correct answer 0 in those cases.
* get_constant_value() was renamed to get_scalar_constant_value() and raises a new exception, tensor.basic.NotScalarConstantError. (Ian G.)
* theano.function raises an error when trying to replace inputs with the given parameter. (Olivier D.)
This was doing nothing; the error message explains what the user probably wants to do.
New Interface (reuse existing functionality):
* tensor_var.sort() as a shortcut for theano.tensor.sort. (Jeremiah Lowin)
......@@ -80,6 +95,7 @@ New debug feature:
* Better profiling of test time with `theano-nose --time-profile`. (Frederic B.)
* Detection of infinite loops with global optimizers. (Pascal L.)
* DebugMode.check_preallocated_output now also works on Theano function outputs. (Pascal L.)
* DebugMode now complains when the strides of CudaNdarray dimensions of size 1 aren't 0. (Frederic B.)
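The stride convention being checked can be illustrated with NumPy (CudaNdarray is Theano's GPU array type; the NumPy view below is only an analogy): along an axis backed by a single element, a stride of 0 makes every index alias the same memory, which is how broadcasting works:

```python
import numpy as np

row = np.arange(3.0)

# Broadcasting a length-3 vector to shape (4, 3) creates a view whose
# first axis has stride 0: all 4 "rows" alias the same 3 elements.
x = np.broadcast_to(row, (4, 3))
```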
Speed-ups:
* c_code for SpecifyShape op. (Frederic B.)
......@@ -87,7 +103,7 @@ Speed-ups:
* The Scan optimizations ScanSaveMem and PushOutDot1 are applied more frequently. (Razvan P., reported by abalkin)
Previously, a skipped-optimization warning was printed.
* dot(vector, vector) is now faster with some BLAS implementations. (Eric Hunsberger)
OpenBLAS and possibly others didn't call {s,d}dot internally when we called {s,d}gemv; MKL did.
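The relationship being exploited, shown here with NumPy purely as an illustration: a vector-vector dot is the special case of gemv with a 1-row matrix, so a BLAS that dispatches that case to {s,d}dot is faster:

```python
import numpy as np

v = np.arange(5.0)
w = np.ones(5)

as_dot = v.dot(w)                  # the {s,d}dot form of the product
as_gemv = v.reshape(1, -1).dot(w)  # the 1-row gemv form, same result
```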
* Compilation speed-up: take the compiledir lock only for ops that generate c_code. (Frederic B.)
* More Scan optimizations (Razvan P.)
......@@ -116,9 +132,12 @@ Crash Fixes:
* The infer_shape mechanism now forces broadcasted dimensions to have a shape known to be equivalent to one during compilation.
Sometimes we could not know this before run time, and this resulted in a crash. (Frederic B.)
* Fix compilation problems on GPU on Windows. (Frederic B.)
* Fix copy on the GPU with big shape for 4d tensor (Pascal L.)
* GpuSubtensor didn't set the stride to 0 for dimensions of size 1. This could lead to a later check failing, causing a crash. (Frederic B., reported by vmichals)
Theoretical bugfixes (bugs that won't happen with current Theano code, but could have affected you if you messed with the internals):
* GpuContiguous, GpuAlloc, GpuDownSampleGrad, Conv2d now check the preallocated output strides before using them. (Pascal L.)
* GpuDownSample and GpuDownSampleGrad didn't work correctly with negative strides in their output, due to a problem with nvcc. (Pascal L., reported by abalkin?)
Others:
* Fix race condition when determining if g++ is available. (Abalkin)
......
......@@ -2523,7 +2523,11 @@ class test_shapeoptimizer(unittest.TestCase):
rng = numpy.random.RandomState(utt.fetch_seed())
a = shared(rng.rand(*shp).astype(config.floatX))
out = self.max_pool_c01b(a, 1, 1, 1)
# max_pool_c01b uses -inf, which would trigger a DebugMode error.
mode = copy.copy(theano.compile.get_default_mode())
mode.check_isfinite = False
f = theano.function([], out, mode=mode)
f()
def test_local_track_shape_i(self):
......
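The copy-before-mutate pattern in the test above (so the shared default mode is not modified) can be sketched without Theano; `Mode` here is a hypothetical stand-in for the object returned by theano.compile.get_default_mode():

```python
import copy

class Mode:
    """Hypothetical stand-in for a shared default configuration object."""
    check_isfinite = True

default_mode = Mode()

# Copy first, then mutate only the copy: the shared default is untouched.
mode = copy.copy(default_mode)
mode.check_isfinite = False
```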