Commit 2ef1058b authored by Pascal Lamblin

Merge pull request #1613 from nouiz/crash_fixes

Crash fixes
......@@ -3,7 +3,7 @@
language: python
python:
- "2.5"
- "2.6"
# - "2.7"
# - "3.2"
# command to install dependencies
......
......@@ -19,7 +19,7 @@ Theano Development version
NEWS.txt:
We recommand everybody to update to this version.
We recommend that everybody update to this version.
Highlights:
* Python 3.3 compatibility with buildbot test for it.
......@@ -27,7 +27,7 @@ Highlights:
* Better Windows 64 bit support.
* New profiler.
* Better error messages that help debugging.
* Better support of newer NumPy version (remove useless warning/crash).
* Better support for newer NumPy versions (remove useless warning/crash).
* Faster optimization/compilation for big graphs.
* Moved the Conv3d2d implementation into Theano.
* Better SymPy/Theano bridge: make a Theano op from a SymPy expression and use the SymPy C code generator.
......@@ -43,34 +43,38 @@ Olivier Delalleau
John Salvatier
Razvan Pascanu
Jeremiah Lowin
Ludwig Schmidt-Hackenberg
Ludwig Schmidt-Hackenberg +
Vivek Kulkarni
Matthew Rocklin
Gabe Schwartz
James Bergstra
Sigurd Spieckermann
Bogdan Budescu
Mehdi Mirza
Sigurd Spieckermann +
Bogdan Budescu +
Mehdi Mirza +
Nicolas Bouchard
Ethan Buchman
Ethan Buchman +
Guillaume Desjardins
Ian Goodfellow
Jason Yosinski
Sina Honari
Ben McCann
Sina Honari +
Ben McCann +
David Warde-Farley
Ilya Dyachenko
Jan Schlüter
Micky Latowicki
Yaroslav Halchenko
Ilya Dyachenko +
Jan Schlüter +
Micky Latowicki +
Yaroslav Halchenko +
Alexander Belopolsky
Hannes Schulz
Huy Nguyen
Robert Kern
Sebastian Berg
Vincent Dumoulin
Wei Li
XterNalz
Hannes Schulz +
Huy Nguyen +
Robert Kern +
Sebastian Berg +
Vincent Dumoulin +
Wei Li +
XterNalz +
A total of 36 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
Installation:
* Canopy support (direct link to MKL):
......@@ -83,7 +87,7 @@ Installation:
Bug fixes:
* Scan: if a scan node was cloned (by theano.clone) with different inputs, and if both the initial and the cloned nodes are used in the function being compiled, the value of the outputs of one would be replaced with the outputs of the other one. (Pascal L.)
* Sparse: Disable the optimization that introduce the CSMGradC op as it don't work correctly with unsorted indices. (Frederic B.)
* Sparse: Disable the optimization that introduce the CSMGradC op as it doesn't work correctly with unsorted indices. (Frederic B.)
* Mac: Fix wrong result of GpuDownsampleFactorMaxGrad on Mac OSX. (Pascal L.)
* Mac: Auto-Detect and work around a bug in BLAS on MacOS X (Pascal L.)
* Mac: Work around bug in MacOS X. If 2 compiled modules had the same name, the OS or Python was not always the right one even when we used the right handle to it. (Pascal L.)
......@@ -93,8 +97,8 @@ Bug fixes:
Reduction that upcasts the input on no axis (ex: call theano.sum() on a scalar when the original dtype isn't float64 or
[u]int64). It produced bad results because we did not upcast the inputs in the code; we just copied them.
* Fix some cases of theano.clone() when we get a replacement of x that is a function of x. (Razvan P., reported by Akio Takano)
* Fix grad of Alloc when we unbroadcast the value value and it isn't a scalar. (Frederic B., reported Ian G.)
* I some cases (I think most cases), there was an exception raised in the theano.tensor.grad() method.
* Fix grad of Alloc when we unbroadcast the value and it isn't a scalar. (Frederic B., reported Ian G.)
* In some cases (I think most cases), there was an exception raised in the theano.tensor.grad() method.
But in theory, there could be bad shapes produced in the unbroadcasted dimensions.
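The upcast bug above is easiest to see with plain NumPy, whose reduction dtype rules Theano mirrors: summing a small integer dtype widens the accumulator, while float32 is left alone. A minimal sketch:

```python
import numpy as np

a = np.arange(4, dtype=np.int8)
s = a.sum()
# NumPy widens the accumulator for small integer dtypes...
assert s.dtype.itemsize > a.dtype.itemsize
# ...but leaves float32 untouched
f = np.arange(4, dtype=np.float32)
assert f.sum().dtype == np.float32
```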
New Features:
......@@ -154,7 +158,7 @@ New Features:
* Finish and move out of sandbox theano.sparse.basic.true_dot (Nicolas Bouchard, Frederic B.)
And document all sparse dot variants.
* Implement the mode ignore_borders for GpuImages2Neibs (Frederic B.)
* Make many reduction algo accept a scalar numpy.ndarray as axis (Jeremiah Lowin)
* Make many reduction functions accept a numpy scalar as axis (Jeremiah Lowin)
* Allow numpy.asarray(cuda_ndarray, dtype=...) (Frederic B.)
* theano-cache cleanup now removes old versions of cached module code. (Frederic B.)
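The reduction-axis change above mirrors NumPy's own behavior, where a NumPy integer scalar is accepted anywhere a plain int axis is. A quick illustration:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
# a NumPy scalar works as the axis argument, just like a plain int
row_sums = a.sum(axis=np.int64(1))
col_sums = a.sum(axis=np.int64(0))
assert (row_sums == np.array([3, 12])).all()
assert (col_sums == np.array([3, 5, 7])).all()
```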
......@@ -166,19 +170,19 @@ Interface Deprecation (a warning is printed):
Deprecate the old interface for this. (Frederic B.)
Interface Changes:
* Interface change subtensor and take are not in tensor.basic anymore. They where available from tensor.* and are still avail from there. (Frederic B., Matthew Rocklin)
* This lower the basic.py size to 191k, so under 200k for github search.
* Interface change subtensor and take are not in tensor.basic anymore. They were available from tensor.* and are still available from there. (Frederic B., Matthew Rocklin)
* This lowers the basic.py size to 191k, so under 200k for github search.
* Add -m32 or -m64 in the module cache key and add the python bitwidth in the compiledir path. (Pascal L.)
* The size parameter of mrg.normal is now mandatory. It was crashing with the default value of None. (Olivier D.)
* Remove the deprecated passing of multiple modes to theano function. (Frederic B.)
* Change the FunctionGraph Features interface of the on_prune()/on_import() callbacks to take a reason. (Frederic B.)
* FunctionGraph now clones the input graph by default. (Frederic B.)
* A parameter allow to don't do this clone.
* Added a parameter to optionally not do this cloning.
* This was needed to speed up compilation
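The cloning default above exists because optimizations rewrite the graph destructively; cloning first protects the caller's graph. A toy dict-based sketch (not Theano's actual FunctionGraph API):

```python
import copy

# user-built "graph" (toy stand-in for a Theano graph)
graph = {'op': 'add', 'inputs': ['x', 'y']}

# the optimizer works on a clone by default...
optimized = copy.deepcopy(graph)
optimized['op'] = 'add_inplace'   # destructive rewrite hits the clone only

assert graph['op'] == 'add'       # caller's graph is untouched
```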
New Interface (reuses existing functionality):
* Add hostname as a var in compiledir_format (Frederic B.)
* Add a new Theano flag: compute_test_value_opt. It take the same value as compute_test_value. It enable compute_test_value during Theano optimization. Only useful to debug Theano optimization. Also small changes to some optimization to work correctly in that setup. (Frederic B.)
* Add a new Theano flag: compute_test_value_opt. It takes the same values as compute_test_value. It enables compute_test_value during Theano optimization. Only useful to debug Theano optimization. Also small changes to some optimizations to work correctly in that setup. (Frederic B.)
* Add the value pdb to the Theano flag: compute_test_value and compute_test_value_opt. (Frederic B.)
* Add the Theano flag: optimizer_verbose. Default False. When True, we print all the optimizations being applied. (Frederic B.)
* Add Op.c_init_code() to allow running the code when the c cmodule is imported (Pascal L.)
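The compute_test_value machinery evaluates every new node eagerly on the test values attached to its inputs, so shape errors surface at graph-construction time rather than at run time. A toy NumPy sketch of the idea (not Theano's actual classes):

```python
import numpy as np

class Var(object):
    """Toy variable carrying an eagerly computed test value."""
    def __init__(self, test_value):
        self.test_value = np.asarray(test_value)

def dot(a, b):
    # eager check: raises ValueError immediately on a shape mismatch,
    # like Theano with compute_test_value='raise'
    return Var(np.dot(a.test_value, b.test_value))

x = Var(np.random.rand(3, 4))
y = Var(np.random.rand(4, 5))
z = dot(x, y)                    # fine: shapes align
assert z.test_value.shape == (3, 5)

bad = Var(np.random.rand(6, 5))
try:
    dot(x, bad)                  # mismatch caught at build time
    raised = False
except ValueError:
    raised = True
assert raised
```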
......@@ -188,8 +192,8 @@ New Interface (reuses existing functionality):
New debug features:
Speed-ups:
* Optimizer speed up. (Frederic B.)
* Fix warning/not detection on newer llvm version on Mac. (Pascal L., reported by Jeremiah Lowin and Chris Fonnesbeck)
* Allow pickling of more Op to allow reusing the compiled code (Pascal L., Frederic B.)
* Fix warning on newer llvm version on Mac. (Pascal L., reported by Jeremiah Lowin and Chris Fonnesbeck)
* Allow pickling of more Ops to allow reusing the compiled code (Pascal L., Frederic B.)
* Optimize more cases of dot22 and scalar when we can't make a gemm (Pascal L., Frederic B.)
* Speed up GpuJoin with c code (Ludwig Schmidt-Hackenberg, Frederic B.)
* Faster GpuAdvancedIncSubtensor1 on Fermi GPU (and up) on matrix. (Vivek Kulkarni)
......@@ -197,7 +201,7 @@ Speed-ups:
* Implemented c_code for AdvancedSubtensor1 (abalkin)
* Add the equivalent of -march=native to g++ command line. (Frederic B., Pascal L.)
* Speed up compilation with Scan (Jan Schlüter)
* Merge more Scan node together (Pascal L., Yao Li).
* Merge more Scan nodes together (Pascal L., Yao Li).
* Add MakeVector.c_code (Fred)
* Add Shape.c_code (Fred)
* Optimize Elemwise when all the inputs are fortran (Frederic B.)
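The Elemwise/Fortran speed-up above relies on the fact that an elementwise operation over all column-major (Fortran-ordered) inputs can walk memory contiguously and produce a Fortran-ordered result, as plain NumPy already does:

```python
import numpy as np

a = np.asfortranarray(np.ones((3, 4)))
b = np.asfortranarray(np.ones((3, 4)))
c = a + b                  # elementwise op, all inputs Fortran-ordered
assert np.isfortran(c)     # result keeps column-major order
assert (c == 2.0).all()
```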
......@@ -210,15 +214,16 @@ Speed-ups:
* Make inv_as_solve optimization work (Matthew Rocklin)
Crash/no return fixes:
* Fix various crashes when calling scan() with inputs specified in unusual ways. (Pascal L.)
* Fix a shape crash introduced by a Scan optimization. The gradient of some recursive Scans caused the PushOutSeqScan optimization to insert code that crashed during the execution of a Theano function. (Frédéric B., reported by Hugo Larochelle)
* Fix command not returning with recent mingw64 on Windows (Pascal L., reported by many people)
* Fix infinite loop related to Scan on the GPU. (Pascal L.)
* Fix infinite loop when the compiledir is full. (Frederic B.)
* Fix a shape cycle crash in the optimizer (Pascal L., Frédéric B., reported by Cho KyungHyun)
* Fix MRG normal now accept to generate scalar. (Pascal L.)
* Fix MRG normal() to allow it to generate scalars. (Pascal L.)
* Fix some GPU compilation issue on Mac (John Yani, Frédéric B.)
* Fix crash when building symbolic random variables with a mix of symbolic and numeric scalar in the "size" parameter. (Pascal L., Reported by Wu Zhen Zhou)
* Make some Op.grad() implemention don't return None (Pascal L.)
* Make some Op.grad() implementations not return None (Pascal L.)
* Crash fix in the grad of Elemwise involving a DisconnectedType (Pascal L, reported by Thomas Wiecki)
* Fix local_gpu_multinomial optimization handling of broadcast information. (Frederic B., reported by Caglar)
* Fix crash with change introduced in NumPy 1.7.1 (Pascal L., reported by Thomas Wiecki)
......@@ -254,21 +259,20 @@ Crash/no return fixes:
* Prevent shape optimizations from introducing cycles in the graph (Frederic Bastien, Pascal Lamblin, reported by Kyunghyun Cho)
Others:
* Update/Fixes/Typo/pep8 documentation and/or tutorial (Olivier D., Frederic B., Yaroslav Halchenko, Micky Latowicki, Ben McCann, Jason Yosinski, reported by Arnaud Bergeron)
* Update/Fixes/Typo/pep8 documentation and/or tutorial (Olivier D., David W.-F., Frederic B., Yaroslav Halchenko, Micky Latowicki, Ben McCann, Jason Yosinski, reported by Arnaud Bergeron)
* Doc how to make a sparse Op. (Frederic B.)
* Doc compatibility guide (abalkin)
* Fix a problem in remove_constants_and_unused_inputs_scan (useless warning and possible slowdown). (Pascal L.)
* Fix the R-op of dot. (Razvan P., reported by Jeremiah Lowin)
* Raise better error related to pydot bug. (Frederic B., reported by Jason Yosinski and Ludwig Schmidt-Hackenberg)
* Fix to Theano tutorial examples. (reported by Ilya Dyachenko)
* Fix SharedVar.value property to make it raise an exceptin (Frederic B., reported by Drew Duncan)
* Fix SharedVar.value property to make it raise an exception (Frederic B., reported by Drew Duncan)
* Fix verification with compute_test_value in grad() (Frederic B.)
* Theano flags are now evaluated lazily, only if requested (Frederic B.)
* Fix tests when g++ is not available (Frederic B.)
* Add manual instructions for OpenBLAS on Ubuntu (Jianri Li)
* Better/more error messages (Frederic B., Pascal L., Ian Goodfellow)
* Fix Error reporting with GpuConv (Frederic B., reported by Heng Luo and Nicolas Pinto)
* The infer_shape tester method now warns if the shape values could hide errors. (Frederic B.)
* Now travis-ci tests the parts that need SciPy with SciPy installed (Frederic B.)
* Export some functions that work on CudaNdarray for windows (Frederic B.)
* If the user specifies a -arch=sm_* value in the Theano flags for the gpu, don't add one (Frederic B., Pascal L.)
......
import os, sys, traceback, warnings
import os
import sys
import traceback
import warnings
import numpy
from nose.plugins.skip import SkipTest
......@@ -46,34 +49,33 @@ class TestComputeTestValue(unittest.TestCase):
theano.config.compute_test_value = 'raise'
x = T.matrix('x')
x.tag.test_value = numpy.random.rand(3,4).astype(config.floatX)
x.tag.test_value = numpy.random.rand(3, 4).astype(config.floatX)
y = T.matrix('y')
y.tag.test_value = numpy.random.rand(4,5).astype(config.floatX)
y.tag.test_value = numpy.random.rand(4, 5).astype(config.floatX)
# should work
z = T.dot(x,y)
z = T.dot(x, y)
assert hasattr(z.tag, 'test_value')
f = theano.function([x,y], z)
f = theano.function([x, y], z)
assert _allclose(f(x.tag.test_value, y.tag.test_value),
z.tag.test_value)
# this test should fail
y.tag.test_value = numpy.random.rand(6,5).astype(config.floatX)
y.tag.test_value = numpy.random.rand(6, 5).astype(config.floatX)
self.assertRaises(ValueError, T.dot, x, y)
finally:
theano.config.compute_test_value = orig_compute_test_value
def test_compute_flag(self):
orig_compute_test_value = theano.config.compute_test_value
try:
x = T.matrix('x')
y = T.matrix('y')
y.tag.test_value = numpy.random.rand(4,5).astype(config.floatX)
y.tag.test_value = numpy.random.rand(4, 5).astype(config.floatX)
# should skip computation of test value
theano.config.compute_test_value = 'off'
z = T.dot(x,y)
z = T.dot(x, y)
assert not hasattr(z.tag, 'test_value')
# should fail when asked by user
......@@ -99,25 +101,25 @@ class TestComputeTestValue(unittest.TestCase):
theano.config.compute_test_value = 'raise'
x = T.matrix('x')
x.tag.test_value = numpy.random.rand(3,4).astype(config.floatX)
x.tag.test_value = numpy.random.rand(3, 4).astype(config.floatX)
y = T.matrix('y')
y.tag.test_value = numpy.random.rand(4,5).astype(config.floatX)
y.tag.test_value = numpy.random.rand(4, 5).astype(config.floatX)
z = theano.shared(numpy.random.rand(5,6).astype(config.floatX))
z = theano.shared(numpy.random.rand(5, 6).astype(config.floatX))
# should work
out = T.dot(T.dot(x,y), z)
out = T.dot(T.dot(x, y), z)
assert hasattr(out.tag, 'test_value')
tf = theano.function([x,y], out)
tf = theano.function([x, y], out)
assert _allclose(
tf(x.tag.test_value, y.tag.test_value),
out.tag.test_value)
tf(x.tag.test_value, y.tag.test_value),
out.tag.test_value)
def f(x,y,z):
return T.dot(T.dot(x,y),z)
def f(x, y, z):
return T.dot(T.dot(x, y), z)
# this test should fail
z.set_value(numpy.random.rand(7,6).astype(config.floatX))
z.set_value(numpy.random.rand(7, 6).astype(config.floatX))
self.assertRaises(ValueError, f, x, y, z)
finally:
theano.config.compute_test_value = orig_compute_test_value
......@@ -128,17 +130,18 @@ class TestComputeTestValue(unittest.TestCase):
theano.config.compute_test_value = 'raise'
x = T.matrix('x')
x.tag.test_value = numpy.random.rand(3,4).astype(config.floatX)
y = theano.shared(numpy.random.rand(4,6).astype(config.floatX), 'y')
x.tag.test_value = numpy.random.rand(3, 4).astype(config.floatX)
y = theano.shared(numpy.random.rand(4, 6).astype(config.floatX),
'y')
# should work
z = T.dot(x,y)
z = T.dot(x, y)
assert hasattr(z.tag, 'test_value')
f = theano.function([x], z)
assert _allclose(f(x.tag.test_value), z.tag.test_value)
# this test should fail
y.set_value(numpy.random.rand(5,6).astype(config.floatX))
y.set_value(numpy.random.rand(5, 6).astype(config.floatX))
self.assertRaises(ValueError, T.dot, x, y)
finally:
theano.config.compute_test_value = orig_compute_test_value
......@@ -148,17 +151,18 @@ class TestComputeTestValue(unittest.TestCase):
try:
theano.config.compute_test_value = 'raise'
x = numpy.random.rand(2,3).astype(config.floatX)
y = theano.shared(numpy.random.rand(3,6).astype(config.floatX), 'y')
x = numpy.random.rand(2, 3).astype(config.floatX)
y = theano.shared(numpy.random.rand(3, 6).astype(config.floatX),
'y')
# should work
z = T.dot(x,y)
z = T.dot(x, y)
assert hasattr(z.tag, 'test_value')
f = theano.function([], z)
assert _allclose(f(), z.tag.test_value)
# this test should fail
x = numpy.random.rand(2,4).astype(config.floatX)
x = numpy.random.rand(2, 4).astype(config.floatX)
self.assertRaises(ValueError, T.dot, x, y)
finally:
theano.config.compute_test_value = orig_compute_test_value
......@@ -168,17 +172,18 @@ class TestComputeTestValue(unittest.TestCase):
try:
theano.config.compute_test_value = 'raise'
x = T.constant(numpy.random.rand(2,3), dtype=config.floatX)
y = theano.shared(numpy.random.rand(3,6).astype(config.floatX), 'y')
x = T.constant(numpy.random.rand(2, 3), dtype=config.floatX)
y = theano.shared(numpy.random.rand(3, 6).astype(config.floatX),
'y')
# should work
z = T.dot(x,y)
z = T.dot(x, y)
assert hasattr(z.tag, 'test_value')
f = theano.function([], z)
assert _allclose(f(), z.tag.test_value)
# this test should fail
x = T.constant(numpy.random.rand(2,4), dtype=config.floatX)
x = T.constant(numpy.random.rand(2, 4), dtype=config.floatX)
self.assertRaises(ValueError, T.dot, x, y)
finally:
theano.config.compute_test_value = orig_compute_test_value
......@@ -190,9 +195,9 @@ class TestComputeTestValue(unittest.TestCase):
x = T.fmatrix('x')
# Incorrect dtype (float64) for test_value
x.tag.test_value = numpy.random.rand(3,4)
x.tag.test_value = numpy.random.rand(3, 4)
y = T.dmatrix('y')
y.tag.test_value = numpy.random.rand(4,5)
y.tag.test_value = numpy.random.rand(4, 5)
self.assertRaises(TypeError, T.dot, x, y)
finally:
......@@ -205,9 +210,9 @@ class TestComputeTestValue(unittest.TestCase):
try:
config.compute_test_value = "raise"
x = T.matrix()
x.tag.test_value = numpy.zeros((2,3), dtype=config.floatX)
x.tag.test_value = numpy.zeros((2, 3), dtype=config.floatX)
y = T.matrix()
y.tag.test_value = numpy.zeros((2,2), dtype=config.floatX)
y.tag.test_value = numpy.zeros((2, 2), dtype=config.floatX)
self.assertRaises(ValueError, x.__mul__, y)
finally:
theano.config.compute_test_value = orig_compute_test_value
......@@ -250,7 +255,7 @@ class TestComputeTestValue(unittest.TestCase):
k = T.iscalar("k")
A = T.matrix("A")
k.tag.test_value = 3
A.tag.test_value = numpy.random.rand(5,3).astype(config.floatX)
A.tag.test_value = numpy.random.rand(5, 3).astype(config.floatX)
def fx(prior_result, A):
return T.dot(prior_result, A)
......@@ -259,10 +264,10 @@ class TestComputeTestValue(unittest.TestCase):
# we cannot simply use self.assertRaises()
try:
theano.scan(
fn=fx,
outputs_info=T.ones_like(A),
non_sequences=A,
n_steps=k)
fn=fx,
outputs_info=T.ones_like(A),
non_sequences=A,
n_steps=k)
assert False
except ValueError, e:
# Get traceback
......@@ -286,26 +291,26 @@ class TestComputeTestValue(unittest.TestCase):
k = T.iscalar("k")
A = T.matrix("A")
k.tag.test_value = 3
A.tag.test_value = numpy.random.rand(5,3).astype(config.floatX)
A.tag.test_value = numpy.random.rand(5, 3).astype(config.floatX)
def fx(prior_result, A):
return T.dot(prior_result, A)
self.assertRaises(ValueError,
theano.scan,
fn=fx,
outputs_info=T.ones_like(A.T),
non_sequences=A,
n_steps=k)
theano.scan,
fn=fx,
outputs_info=T.ones_like(A.T),
non_sequences=A,
n_steps=k)
# Since we have to inspect the traceback,
# we cannot simply use self.assertRaises()
try:
theano.scan(
fn=fx,
outputs_info=T.ones_like(A.T),
non_sequences=A,
n_steps=k)
fn=fx,
outputs_info=T.ones_like(A.T),
non_sequences=A,
n_steps=k)
assert False
except ValueError, e:
# The first message is for numpy before 1.6.
......@@ -338,7 +343,6 @@ class TestComputeTestValue(unittest.TestCase):
output, = outputs
output[0] = input + 1
orig_compute_test_value = theano.config.compute_test_value
try:
theano.config.compute_test_value = 'raise'
......@@ -349,9 +353,10 @@ class TestComputeTestValue(unittest.TestCase):
o = IncOnePython()(i)
# Check that the c_code function is not implemented
self.assertRaises((NotImplementedError, utils.MethodNotDefined),
o.owner.op.c_code,
o.owner, 'o', ['x'], 'z', {'fail': ''})
self.assertRaises(
(NotImplementedError, utils.MethodNotDefined),
o.owner.op.c_code,
o.owner, 'o', ['x'], 'z', {'fail': ''})
assert hasattr(o.tag, 'test_value')
assert o.tag.test_value == 4
......@@ -376,8 +381,8 @@ class TestComputeTestValue(unittest.TestCase):
# Check that the perform function is not implemented
self.assertRaises((NotImplementedError, utils.MethodNotDefined),
o.owner.op.perform,
o.owner, 0, [None])
o.owner.op.perform,
o.owner, 0, [None])
assert hasattr(o.tag, 'test_value')
assert o.tag.test_value == 4
......@@ -391,7 +396,8 @@ class TestComputeTestValue(unittest.TestCase):
orig_compute_test_value = theano.config.compute_test_value
try:
theano.config.compute_test_value = 'raise'
init_Mu1 = theano.shared(numpy.zeros((5,),dtype=config.floatX)).dimshuffle('x',0)
init_Mu1 = theano.shared(
numpy.zeros((5,), dtype=config.floatX)).dimshuffle('x', 0)
f = theano.function([], outputs=[init_Mu1])
finally:
......
from itertools import izip
import theano
import numpy
import scipy
import theano
from theano import gof, scalar, tensor
from theano.tensor import blas
from theano.sparse import (CSC, CSR, csm_properties,
......
......@@ -631,21 +631,21 @@ class Subtensor(Op):
if view_ndim:
rval = """
// Argument of the view
ssize_t xview_dims[%(view_ndim)s];
ssize_t xview_strides[%(view_ndim)s];
npy_intp xview_dims[%(view_ndim)s];
npy_intp xview_strides[%(view_ndim)s];
"""% locals()
else:
rval = """
// Argument of the view
ssize_t* xview_dims = NULL;
ssize_t* xview_strides = NULL;
npy_intp* xview_dims = NULL;
npy_intp* xview_strides = NULL;
"""
rval += """
// One more argument of the view
ssize_t xview_offset = 0;
npy_intp xview_offset = 0;
// The subtensor is created by iterating over the dimensions
// and updating stride, shape, and data pointers
......@@ -776,7 +776,7 @@ class Subtensor(Op):
@staticmethod
def helper_c_code_cache_version():
return (7,)
return (8,)
def c_code(self, node, name, inputs, outputs, sub): # DEBUG
if not isinstance(node.inputs[0].type, theano.tensor.TensorType):
......
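The helper_c_code_cache_version bump at the end of the last hunk is what forces recompilation: compiled C modules are cached under a version key, so changing the key simply makes old entries unreachable. A toy sketch of that scheme (hypothetical names, not Theano's cache code):

```python
# version-keyed module cache: bumping (7,) -> (8,) invalidates old entries
cache = {('Subtensor', (7,)): 'old compiled module'}

def get_module(op_name, version):
    key = (op_name, version)
    if key not in cache:
        cache[key] = 'freshly compiled module'   # recompile on a new key
    return cache[key]

mod = get_module('Subtensor', (8,))
assert mod == 'freshly compiled module'
assert ('Subtensor', (7,)) in cache   # stale entry is left behind, unused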