Commit c111364f authored by Ricardo Vieira, committed by Ricardo Vieira

Do not manually include fast_run in test_shape_i_*

It doesn't make sense to include `fast_run` if `fast_compile` mode is being used. Some rewrites, such as the FusionOptimizer, are not compatible with `fast_compile` mode, which prevents the creation of C thunks. The FusionOptimizer has no way of knowing this is the case, and assumes it is safe to return Composites with more than 32 operands, even though that is not supported by the Python `perform` method.
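For context, a compilation mode bundles a linker (which builds the thunks) with a set of enabled rewrite tags, and `including(...)` layers extra tags onto an existing mode without changing its linker. The sketch below is a simplified stand-in, not the real `pytensor.compile.Mode` API, but it illustrates why stacking `fast_run` on a `fast_compile` base is contradictory: the rewrites change, while the linker (and its limitations, such as the 32-operand limit of the Python `perform` path) stays the same.

```python
# Simplified stand-in for how rewrite tags combine in a compilation mode.
# This Mode class is an illustration only, NOT the real pytensor API.
class Mode:
    def __init__(self, linker, tags=frozenset()):
        self.linker = linker          # e.g. "py" (Python thunks) or "c" (C thunks)
        self.tags = frozenset(tags)   # rewrite tags enabled for this mode

    def including(self, *extra):
        # Returns a new mode with additional rewrite tags enabled.
        # Note: the linker is carried over unchanged.
        return Mode(self.linker, self.tags | set(extra))


fast_compile = Mode(linker="py", tags={"fast_compile"})
mixed = fast_compile.including("fast_run")

# The linker is still the Python one, so rewrites enabled by "fast_run"
# that assume C thunks (e.g. Composites with >32 operands) would be unsafe.
print(mixed.linker)        # still "py"
print(sorted(mixed.tags))  # both tag sets are now active
```

This is why the fix below simply drops `.including("fast_run")`: the tests already run under the suite's own mode, and forcing `fast_run` rewrites on top of it can enable rewrites the Python linker cannot execute.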
Parent bc40a323
@@ -986,7 +986,7 @@ class TestSubtensor(utt.OptimizationTestMixin):
     def test_shape_i_const(self):
         # Each axis is treated independently by shape_i/shape operators
-        mode_opt = self.mode.including("fast_run")
+        mode_opt = self.mode
         data = self.shared(np.array(np.arange(5), dtype=self.dtype))
         for start in [None] + [-8, -5, -1, 0, 1, 5, 8]:
             outs = []
@@ -1004,7 +1004,7 @@ class TestSubtensor(utt.OptimizationTestMixin):
     def test_shape_i_scalar(self):
         # Each axis is treated independently by shape_i/shape operators
-        mode_opt = self.mode.including("fast_run")
+        mode_opt = self.mode
         v_data = np.array(np.arange(5), dtype=self.dtype)
         t_data = self.shared(v_data)
...