Commit f65bbe4e authored by Frederic

Fix runtime crash in the local_subtensor_of_alloc optimization when the input has a broadcastable dimension.

Parent f97fc466
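A minimal NumPy sketch of the crash this commit fixes (an assumption-laden model, not the Theano code itself: `np.broadcast_to` stands in for `tensor.alloc`). When the alloc input has a broadcastable, length-1 dimension, pushing the subtensor index onto the input directly reads out of bounds at runtime.

```python
import numpy as np

# Hypothetical model of the bug: `val` has one broadcastable
# (length-1) dimension and is broadcast to the alloc'd shape.
val = np.array([1.0])
yx = np.broadcast_to(val, (4, 7))      # stands in for alloc(val, 4, 7)

# The optimization rewrites yx[:, 3] as a subtensor of `val`; applying
# the column index 3 directly to the length-1 `val` fails at runtime.
try:
    val[3]
except IndexError:
    print("IndexError: index 3 is out of bounds for the length-1 dim")

# The fix keeps slice(None) for broadcasted dimensions instead, which
# yields the same values as slicing the alloc'd tensor.
fixed = np.broadcast_to(val[slice(None)], (4,))
print(fixed.tolist())
```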
@@ -168,6 +168,7 @@ Crashes fixed:
 * Support for OSX Enthought Python Distribution 7.x. (Graham Taylor, Olivier)
 * When the subtensor inputs had 0 dimensions and the outputs 0 dimensions. (Frederic)
 * Crash when the step to subtensor was not 1 in conjunction with some optimization. (Frederic, reported by Olivier Chapelle)
* Runtime crash related to an optimization with subtensor of alloc (reported by Razvan, fixed by Frederic)
 * Fix dot22scalar cast of integer scalars (Justin Bayer, Frédéric, Olivier)
 * Fix runtime crash in gemm, dot22. (FB)
 * Fix on 32bits computer: make sure all shapes are int64. (Olivier)
......
@@ -1842,6 +1842,12 @@ def local_subtensor_of_alloc(node):
        # If val was not copied over that dim,
        # we need to take the appropriate subtensor on it.
        if i >= n_added_dims:
            # Check that the corresponding val dimension was
            # not a broadcasted dimension.
            if (val.type.ndim > (i - n_added_dims) and
                    val.type.broadcastable[i - n_added_dims]):
                val_slices.append(slice(None))
            else:
                val_slices.append(sl)
        csl, _ = T.get_canonical_form_slice(sl, dim)
......
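The branch added above can be modeled as a small standalone helper (a sketch with hypothetical names, not Theano's actual function): given the input's ndim and broadcastable pattern, it decides which slice to push onto `val` for each output dimension.

```python
def pick_val_slices(val_ndim, val_broadcastable, n_added_dims, slices):
    # Hypothetical standalone model of the fixed logic in
    # local_subtensor_of_alloc: dimensions that alloc added outright are
    # skipped, broadcasted dimensions keep slice(None), and only
    # genuinely copied dimensions receive the subtensor slice.
    val_slices = []
    for i, sl in enumerate(slices):
        if i >= n_added_dims:
            if (val_ndim > (i - n_added_dims) and
                    val_broadcastable[i - n_added_dims]):
                val_slices.append(slice(None))   # length-1 dim: keep whole
            else:
                val_slices.append(sl)            # copied dim: slice it
    return val_slices

# A length-1 (broadcastable) vector alloc'd to a matrix: the column
# index must not be pushed onto it.
print(pick_val_slices(1, (True,), 1, (slice(None), 3)))
```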
@@ -2073,7 +2073,15 @@ class Test_alloc_zero(unittest.TestCase):
def test_local_subtensor_of_alloc():
    x = tensor.matrix('x')
    # DebugMode should detect if something goes wrong.
    # Test shape combinations of odd and even shapes.
    for shape in [(3, 5), (4, 6), (3, 8), (4, 7)]:
        xval = numpy.zeros(shape, dtype=config.floatX)
        yval = numpy.arange(shape[1], dtype=config.floatX)
        for y in [theano.shared(yval), tensor.constant([1.])]:
            # The rows of yx are copies of y
            yx = tensor.alloc(y, x.shape[0], x.shape[1])
@@ -2086,13 +2094,6 @@ def test_local_subtensor_of_alloc():
            z_vec = yx[:, 3]
            assert z_vec.ndim == 1
            for slices in [
                # results are vectors
                (slice(None), 3),
@@ -2106,8 +2107,8 @@ def test_local_subtensor_of_alloc():
                (slice(1, None, 2)),
            ]:
                z = yx.__getitem__(slices)
                f = theano.function([x], z)
                val = f(xval)
                assert xval.__getitem__(slices).shape == val.shape
......
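The equivalence the updated test asserts can be sanity-checked in plain NumPy (a sketch under the assumption that `np.broadcast_to` plays the role of `tensor.alloc` for a length-1, broadcastable `y`): slicing the broadcasted tensor must produce the same shape as slicing a dense array of the alloc'd shape.

```python
import numpy as np

# For every test shape, slicing the broadcasted tensor must give the
# same shape as slicing a dense array of that shape.
for shape in [(3, 5), (4, 6), (3, 8), (4, 7)]:
    xval = np.zeros(shape)
    y = np.array([1.0])                 # like tensor.constant([1.])
    yx = np.broadcast_to(y, shape)      # rows of yx are copies of y
    for slices in [(slice(None), 3), (slice(1, None, 2),)]:
        assert yx[slices].shape == xval[slices].shape
print("ok")
```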