Project: testgroup / pytensor · Commits

Commit 898d146d, authored Mar 20, 2017 by Frédéric Bastien, committed by GitHub on Mar 20, 2017

Merge pull request #5698 from lamblin/fix_pep8

Fix pep8

Parents: 44f7578c, d11e2251
Showing 4 changed files with 307 additions and 291 deletions (+307 −291)
doc/extending/unittest.txt         +16  −11
theano/tensor/tests/test_basic.py  +291 −275
theano/tests/test_flake8.py        +0   −1
theano/tests/unittest_tools.py     +0   −4
doc/extending/unittest.txt

@@ -333,12 +333,12 @@ type this:
 Using Random Values in Test Cases
 ---------------------------------
 
-numpy.random is often used in unit tests to initialize large data
+``numpy.random`` is often used in unit tests to initialize large data
 structures, for use as inputs to the function or module being
 tested. When doing this, it is imperative that the random number
 generator be seeded at the beginning of each unit test. This will
 ensure that unittest behaviour is consistent from one execution to
-another (i.e always pass or always fail).
+another (i.e., always pass or always fail).
 Instead of using ``numpy.random.seed`` to do this, we encourage users to
 do the following:

@@ -351,30 +351,30 @@ do the following:
     def setUp(self):
         unittest_tools.seed_rng()
         # OR ... call with an explicit seed
-        unittest_tools.seed_rng(234234) # use only if really necessary!
+        unittest_tools.seed_rng(234234)  # use only if really necessary!
 
-The behaviour of seed_rng is as follows:
+The behaviour of ``seed_rng`` is as follows:
 
 * If an explicit seed is given, it will be used for seeding numpy's rng.
-* If not, it will use ``config.unittests.rseed`` (its default value is 666).
+* If not, it will use ``config.unittests.rseed`` (its default value is ``666``).
-* If config.unittests.rseed is set to "random", it will seed the rng with
+* If ``config.unittests.rseed`` is set to ``"random"``, it will seed the rng with
 None, which is equivalent to seeding with a random seed.
 
-The main advantage of using unittest_tools.seed_rng is that it allows
+The main advantage of using ``unittest_tools.seed_rng`` is that it allows
 us to change the seed used in the unittests, without having to manually
 edit all the files. For example, this allows the nightly build to run
-theano-nose repeatedly, changing the seed on every run (hence achieving
+``theano-nose`` repeatedly, changing the seed on every run (hence achieving
 a higher confidence that the variables are correct), while still
 making sure unittests are deterministic.
 
 Users who prefer their unittests to be random (when run on their local
-machine) can simply set ``config.unittests.rseed`` to 'random' (see
+machine) can simply set ``config.unittests.rseed`` to ``'random'`` (see
 :mod:`config`).
 
-Similarly, to provide a seed to numpy.random.RandomState, simply use:
+Similarly, to provide a seed to ``numpy.random.RandomState``, simply use:
 
 .. testcode::

@@ -382,7 +382,7 @@ Similarly, to provide a seed to numpy.random.RandomState, simply use:
     rng = numpy.random.RandomState(unittest_tools.fetch_seed())
     # OR providing an explicit seed
-    rng = numpy.random.RandomState(unittest_tools.fetch_seed(1231)) # again not recommended
+    rng = numpy.random.RandomState(unittest_tools.fetch_seed(1231))  # again not recommended

@@ -390,6 +390,11 @@ is incompatible with the method of hard-coding the baseline variables
 determined "algorithmically". Although this represents more work, the
 test suite will be better because of it.
 
+To help you check that the boundaries provided to ``numpy.random`` are
+correct and your tests will pass those corner cases, you can check
+``utt.MockRandomState``. Code using ``utt.MockRandomState`` should not
+be committed, it is just a tool to help adjust the sampling range.
+
 Creating an Op UnitTest
 =======================
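The documentation above describes Theano's `unittest_tools.seed_rng` / `fetch_seed` helpers. As a minimal self-contained sketch of the same seeding idiom, using only numpy (the `fetch_seed` below is a simplified stand-in, not Theano's implementation, and `DEFAULT_SEED` mirrors the documented default of 666):

```python
import numpy

DEFAULT_SEED = 666  # stand-in for config.unittests.rseed's documented default


def fetch_seed(seed=None):
    # Return the explicit seed if one is given, else the configured default.
    return seed if seed is not None else DEFAULT_SEED


# Two generators built from the same fetched seed produce identical draws,
# which is what makes a test suite deterministic from one run to the next.
rng_a = numpy.random.RandomState(fetch_seed())
rng_b = numpy.random.RandomState(fetch_seed())
sample_a = rng_a.uniform(size=3)
sample_b = rng_b.uniform(size=3)
```

Switching the default to a per-run value (the documented `"random"` mode) changes the draws between runs while each individual test stays internally consistent.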
theano/tensor/tests/test_basic.py

 from __future__ import absolute_import, print_function, division
+from copy import copy, deepcopy
+from functools import partial
 import itertools
 import logging
-from nose.plugins.skip import SkipTest
-from nose.tools import assert_raises
 import operator
 import os
 import sys
 from tempfile import mkstemp
 import unittest
 import warnings
-from copy import copy, deepcopy
-# Import builtin min to be able to use it after importing the tensor version.
-from theano.compat import izip
 from six import iteritems
+from six.moves import StringIO, reduce
 from six.moves import xrange
+# Import builtin min to be able to use it after importing the tensor version.
 from six.moves.builtins import min as builtin_min
+from nose.tools import assert_raises
+from nose.plugins.skip import SkipTest
 import numpy
 from numpy.testing import dec, assert_array_equal, assert_allclose
 from distutils.version import LooseVersion
-from functools import partial
 import theano
+from theano.compat import izip
 from theano.compat import PY3, exc_message, operator_div
-from six.moves import StringIO, reduce
 from theano import compile, config, function, gof, tensor, shared
 from theano.compile import DeepCopyOp
 from theano.compile.mode import get_default_mode
 from theano.scalar import autocast_float_as, autocast_float
-from theano.tensor import (_shared, wvector, bvector, argmin, max_and_argmax,
-                           cscalar, ctensor3, join,
+from theano.tensor import (wvector, bvector, argmin, max_and_argmax,
+                           cscalar, join,
                            horizontal_stack, vertical_stack, argmax,
-                           get_vector_length, fscalar, zeros_like, sum,
-                           tensor3, vector, add, addbroadcast,
+                           get_vector_length, fscalar, sum,
+                           tensor3, vector, add, addbroadcast,
                            alloc, as_tensor_variable, tensor_from_scalar,
-                           ARange, clip, constant, default, diag, diagonal,
-                           dot, batched_dot,
+                           ARange, clip, constant, default, diag,
+                           dot, batched_dot,
                            dmatrix, dscalar, dvector, eq, eye, fill, flatten,
                            inverse_permutation,
-                           tensor4, permute_row_elements, Flatten, fmatrix,
-                           fscalars, grad,
+                           tensor4, permute_row_elements, fmatrix,
+                           fscalars, grad,
                            inplace, iscalar, matrix, minimum, matrices,
                            maximum, mul, neq,
                            Reshape, row, scalar, scalars, second, smallest,
                            stack, sub, Tensor,
                            tensor_copy, tensordot, TensorType, Tri, tri,
                            tril, triu, unbroadcast,
                            var, Join, shape, MaxAndArgmax, lscalar, zvector,
                            exp, get_scalar_constant_value, ivector, reshape,
                            scalar_from_tensor, scal, iscalars, arange,
                            dscalars, fvector, imatrix, numeric_grad,
-                           opt, lvector, lmatrix, true_div, max, min, Split,
-                           roll,
+                           opt, lvector, true_div, max, min, Split,
+                           roll,
                            tile, patternbroadcast, Eye, Shape, Dot,
                            PermuteRowElements, ScalarFromTensor,
                            TensorFromScalar, dtensor4, Rebroadcast, Alloc,
                            dtensor3, SpecifyShape, Mean,
...
@@ -61,7 +64,6 @@ mode_no_scipy = get_default_mode()
 try:
     import scipy.special
     import scipy.stats
     from scipy import __version__ as scipy_version
     imported_scipy_special = True
-
 except ImportError:
     if config.mode == "FAST_COMPILE":

@@ -77,6 +79,11 @@ else:
 # Use a seeded random number generator so that unittests are deterministic
 utt.seed_rng()
 test_rng = numpy.random.RandomState(seed=utt.fetch_seed())
+# In order to check random values close to the boundaries when designing
+# new tests, you can use utt.MockRandomState, for instance:
+# test_rng = MockRandomState(0)
+# test_rng = MockRandomState(0.99999982)
+# test_rng = MockRandomState(1)
 
 if PY3:
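The comments added above point at `utt.MockRandomState`, a tool for probing the boundaries of a sampling range. A simplified stand-in illustrating the idea (this is a hypothetical sketch, not Theano's actual class) could be:

```python
import numpy


class MockRandomState(object):
    """Stand-in RNG that always returns one fixed value.

    Useful for checking that code consuming random draws handles
    boundary values (0, values just below 1, exactly 1) correctly,
    without committing such code to the test suite.
    """

    def __init__(self, value):
        self.value = value

    def uniform(self, size=None):
        # Ignore randomness entirely: return the fixed value, broadcast
        # to the requested shape when a size is given.
        if size is None:
            return self.value
        return numpy.full(size, self.value)


# Probe the upper boundary of a [0, 1) sampler:
edge_rng = MockRandomState(0.99999982)
draws = edge_rng.uniform(size=(2, 3))
```

Swapping such a stub in for `test_rng` while developing a test makes it easy to verify that the chosen sampling range really avoids the problematic corner cases.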
@@ -84,7 +91,7 @@ if PY3:
         return i
 else:
     def L(i):
-        return long(i)
+        return long(i)  # noqa for Python 3
 
 def inplace_func(inputs, outputs, mode=None, allow_input_downcast=False,

@@ -122,8 +129,8 @@ def get_numeric_subclasses(cls=numpy.number, ignore=None):
             numpy.array(0, dtype=dtype)
             rval.append(cls)
             ignore.append(dtype_num)
-    for sub in cls.__subclasses__():
-        rval += [c for c in get_numeric_subclasses(sub, ignore=ignore)]
+    for sub_ in cls.__subclasses__():
+        rval += [c for c in get_numeric_subclasses(sub_, ignore=ignore)]
     return rval
@@ -181,8 +188,8 @@ def _numpy_checker(x, y):
     # Checks if x.data and y.data have the same contents.
     # Used in DualLinker to compare C version with Python version.
     x, y = x[0], y[0]
-    if (x.dtype != y.dtype or x.shape != y.shape or
-        numpy.any(numpy.abs(x - y) > 1e-10)):
+    if (x.dtype != y.dtype or x.shape != y.shape or
+            numpy.any(numpy.abs(x - y) > 1e-10)):
         raise Exception("Output mismatch.",
                         {'performlinker': x, 'clinker': y})
@@ -363,8 +370,8 @@ def makeTester(name, op, expected, checks=None, good=None, bad_build=None,
                            " trying to make a Function") % (self.op, testname)
                 exc.args += (err_msg,)
                 raise
-            if (isinstance(self.expected, dict)
-                    and testname in self.expected):
+            if (isinstance(self.expected, dict) and
+                    testname in self.expected):
                 expecteds = self.expected[testname]
                 # with numpy version, when we print a number and read it
                 # back, we don't get exactly the same result, so we accept

@@ -393,9 +400,9 @@ def makeTester(name, op, expected, checks=None, good=None, bad_build=None,
             for i, (variable, expected) in enumerate(
                     izip(variables, expecteds)):
-                if (variable.dtype != expected.dtype or
-                    variable.shape != expected.shape or
-                    not numpy.allclose(variable, expected,
+                if (variable.dtype != expected.dtype or
+                        variable.shape != expected.shape or
+                        not numpy.allclose(variable, expected,
                                        atol=eps, rtol=eps)):
                     self.fail(("Test %s::%s: Output %s gave the wrong"
                                " value. With inputs %s, expected %s (dtype %s),"
@@ -504,8 +511,8 @@ def makeTester(name, op, expected, checks=None, good=None, bad_build=None,
                                   for shape_elem in input.shape])()
                       for input in inputs]
-            if (isinstance(self.expected, dict)
-                    and testname in self.expected):
+            if (isinstance(self.expected, dict) and
+                    testname in self.expected):
                 expecteds = self.expected[testname]
                 # with numpy version, when we print a number and read it
                 # back, we don't get exactly the same result, so we accept

@@ -587,7 +594,7 @@ def rand_ranged(min, max, shape):
 def randint_ranged(min, max, shape):
-    return test_rng.randint(min, max+1, shape)
+    return test_rng.randint(min, max + 1, shape)
 
 def randc128_ranged(min, max, shape):
@@ -608,6 +615,7 @@ def rand_of_dtype(shape, dtype):
 # Used to exclude random numbers too close to certain values
 _eps = 1e-2
 
+
 def makeBroadcastTester(op, expected, checks=None, name=None, **kwargs):
     if checks is None:
         checks = {}

@@ -635,8 +643,8 @@ def makeBroadcastTester(op, expected, checks=None, name=None, **kwargs):
     if kwargs['inplace']:
         _expected = expected
         if not isinstance(_expected, dict):
-            expected = lambda *inputs: numpy.array(_expected(*inputs),
-                                                   dtype=inputs[0].dtype)
+            def expected(*inputs):
+                return numpy.array(_expected(*inputs), dtype=inputs[0].dtype)
 
         def inplace_check(inputs, outputs):
             # this used to be inputs[0] is output[0]
@@ -681,7 +689,7 @@ _grad_broadcast_binary_normal = dict(
     row=(rand(2, 3), rand(1, 3)),
     column=(rand(2, 3), rand(2, 1)),
     # This don't work as verify grad don't support that
-    #empty=(numpy.asarray([]), numpy.asarray([1]))
+    # empty=(numpy.asarray([]), numpy.asarray([1]))
     # complex1=(randcomplex(2,3),randcomplex(2,3)),
     # complex2=(randcomplex(2,3),rand(2,3)),
     # Disabled as we test the case where we reuse the same output as the

@@ -702,8 +710,8 @@ def check_floatX(inputs, rval):
     # input.
     if (isinstance(rval, numpy.ndarray) and
             rval.dtype == 'float64' and
-        config.cast_policy == 'numpy+floatX' and
-        config.floatX == 'float32' and
+            config.cast_policy == 'numpy+floatX' and
+            config.floatX == 'float32' and
             all(x.dtype != 'float64' for x in inputs)):
         # Then we expect float32 instead of float64.
         return rval.astype('float32')
@@ -719,9 +727,9 @@ AddTester = makeBroadcastTester(
     three_inputs_same_shapes=(rand(2, 3), rand(2, 3), rand(2, 3)),
-    three_inputs_same_shapes_uint=(randuint32(2, 3), randuint32(2, 3),
-                                   randuint32(2, 3)),
+    three_inputs_same_shapes_uint=(
+        randuint32(2, 3), randuint32(2, 3), randuint32(2, 3)),
     four_inputs_broadcast=(rand(2, 3), rand(1, 3), rand(2, 1),

@@ -777,22 +785,24 @@ SwitchTester = makeBroadcastTester(
     # So we can't call verify_grad with cond 0.
     grad=dict(all_true=(numpy.asarray(1, dtype=config.floatX),
                         rand(4, 5), rand(4, 5)),
-              #false_true=(numpy.asarray(0, dtype=config.floatX),
-              #            rand(4, 5), rand(4, 5)),
-              #mixed=(randint_ranged(0, 1, (4, 5)).astype(config.floatX),
-              #       rand(4, 5), rand(4, 5))
+              # false_true=(numpy.asarray(0, dtype=config.floatX),
+              #             rand(4, 5), rand(4, 5)),
+              # mixed=(randint_ranged(0, 1, (4, 5)).astype(config.floatX),
+              #        rand(4, 5), rand(4, 5))
               ),
 )
 
-MaximumTester = makeBroadcastTester(op=maximum,
+MaximumTester = makeBroadcastTester(
+    op=maximum,
     expected=lambda *inputs: check_floatX(inputs, numpy.maximum(*inputs)),
     good=_good_broadcast_binary_normal,
     bad_build=_bad_build_broadcast_binary_normal,
     bad_runtime=_bad_runtime_broadcast_binary_normal,
     grad=_grad_broadcast_binary_normal)
 
-MaximumInplaceTester = makeBroadcastTester(op=inplace.maximum_inplace,
+MaximumInplaceTester = makeBroadcastTester(
+    op=inplace.maximum_inplace,
     expected=numpy.maximum,
     good=_good_broadcast_binary_normal,
     bad_build=_bad_build_broadcast_binary_normal,
@@ -800,14 +810,16 @@ MaximumInplaceTester = makeBroadcastTester(op=inplace.maximum_inplace,
     grad=_grad_broadcast_binary_normal,
     inplace=True)
 
-MinimumTester = makeBroadcastTester(op=minimum,
+MinimumTester = makeBroadcastTester(
+    op=minimum,
     expected=lambda *inputs: check_floatX(inputs, numpy.minimum(*inputs)),
     good=_good_broadcast_binary_normal,
     bad_build=_bad_build_broadcast_binary_normal,
     bad_runtime=_bad_runtime_broadcast_binary_normal,
     grad=_grad_broadcast_binary_normal)
 
-MinimumInplaceTester = makeBroadcastTester(op=inplace.minimum_inplace,
+MinimumInplaceTester = makeBroadcastTester(
+    op=inplace.minimum_inplace,
     expected=numpy.minimum,
     good=_good_broadcast_binary_normal,
     bad_build=_bad_build_broadcast_binary_normal,
@@ -815,7 +827,8 @@ MinimumInplaceTester = makeBroadcastTester(op=inplace.minimum_inplace,
     grad=_grad_broadcast_binary_normal,
     inplace=True)
 
-MulTester = makeBroadcastTester(op=mul,
+MulTester = makeBroadcastTester(
+    op=mul,
     expected=lambda *inputs: check_floatX(inputs,
                                           reduce(lambda x, y: x * y, inputs)),
     good=dict(three_inputs_same_shapes=(rand(2, 3), rand(2, 3), rand(2, 3)),
               four_inputs_broadcast=(rand(2, 3), rand(1, 3), rand(2, 1),
                                      rand(1, 1)),

@@ -825,7 +838,9 @@ MulTester = makeBroadcastTester(op=mul,
     grad=dict(three_inputs_same_shapes=(rand(2, 3), rand(2, 3), rand(2, 3)),
               four_inputs_broadcast=(rand(2, 3), rand(1, 3), rand(2, 1),
                                      rand(1, 1)),
               **_grad_broadcast_binary_normal))
 
-MulInplaceTester = makeBroadcastTester(op=inplace.mul_inplace,
+MulInplaceTester = makeBroadcastTester(
+    op=inplace.mul_inplace,
     expected=lambda x, y: x * y,
     good=_good_broadcast_binary_normal,
     bad_build=_bad_build_broadcast_binary_normal,

@@ -864,7 +879,7 @@ _good_broadcast_div_mod_normal_float_no_complex = dict(
                          dtype='int8'), [255, 1])],
     # This empty2 doesn't work for some tests. I don't remember why
-    #empty2=(numpy.asarray([0]), numpy.asarray([])),
+    # empty2=(numpy.asarray([0]), numpy.asarray([])),
 )
 
 if PY3:
@@ -881,7 +896,7 @@ else:
     complex1=(randcomplex(2, 3), randcomplex_nonzero((2, 3))),
     complex2=(randcomplex(2, 3), rand_nonzero((2, 3))),
     # Inplace on the first element. Must have the same type.
-    #complex3=(rand(2, 3) ,randcomplex(2, 3)),
+    # complex3=(rand(2, 3) ,randcomplex(2, 3)),
 )
 
 _good_broadcast_div_mod_normal_float = copymod(

@@ -896,13 +911,13 @@ _grad_broadcast_div_mod_normal = dict(
     scalar=(rand(2, 3), rand_nonzero((1, 1))),
     row=(rand(2, 3), rand_nonzero((1, 3))),
     column=(rand(2, 3), rand_nonzero((2, 1))),
-    #complex1=(randcomplex(2, 3), randcomplex_nonzero((2, 3))),
-    #complex2=(randcomplex(2, 3), rand_nonzero((2, 3))),
-    #complex3=(rand(2, 3), randcomplex_nonzero((2, 3))),
-    #dtype_mixup_1=(rand(2, 3), randint_nonzero(2, 3)),
-    #dtype_mixup_2=(randint_nonzero(2, 3), rand_nonzero((2, 3))),
-    #empty1=(numpy.asarray([]), numpy.asarray([1.])),
-    #empty2=(numpy.asarray([0]), numpy.asarray([])),
+    # complex1=(randcomplex(2, 3), randcomplex_nonzero((2, 3))),
+    # complex2=(randcomplex(2, 3), rand_nonzero((2, 3))),
+    # complex3=(rand(2, 3), randcomplex_nonzero((2, 3))),
+    # dtype_mixup_1=(rand(2, 3), randint_nonzero(2, 3)),
+    # dtype_mixup_2=(randint_nonzero(2, 3), rand_nonzero((2, 3))),
+    # empty1=(numpy.asarray([]), numpy.asarray([1.])),
+    # empty2=(numpy.asarray([0]), numpy.asarray([])),
 )
 
 div_grad_rtol = None
@@ -988,8 +1003,8 @@ CeilIntDivTester = makeBroadcastTester(
     good=_good_broadcast_div_mod_normal_float_no_complex,
     name='CeilIntDiv',
     # As we implement this function with neq, the gradient returned is always 0.
-    #grad=_grad_broadcast_div_mod_normal,
-    #grad_rtol=div_grad_rtol,
+    # grad=_grad_broadcast_div_mod_normal,
+    # grad_rtol=div_grad_rtol,
 )
 
 ModTester = makeBroadcastTester(

@@ -1013,7 +1028,8 @@ ModInplaceTester = makeBroadcastTester(
     grad_eps=1e-5,
     inplace=True)
 
-_good_broadcast_pow_normal_float = dict(same_shapes=(rand_ranged(1, 5, (2, 3)),
-                                                     rand_ranged(-3, 3, (2, 3))),
+_good_broadcast_pow_normal_float = dict(
+    same_shapes=(rand_ranged(1, 5, (2, 3)), rand_ranged(-3, 3, (2, 3))),
     scalar=(rand_ranged(1, 5, (2, 3)), rand_ranged(-3, 3, (1, 1))),
     row=(rand_ranged(1, 5, (2, 3)), rand_ranged(-3, 3, (1, 3))),
     column=(rand_ranged(1, 5, (2, 3)), rand_ranged(-3, 3, (2, 1))),
@@ -1027,17 +1043,17 @@ _good_broadcast_pow_normal_float = dict(same_shapes=(rand_ranged(1, 5, (2, 3)),
              numpy.asarray([], dtype=config.floatX)),
     empty3=(numpy.asarray([], dtype=config.floatX),
             numpy.asarray([], dtype=config.floatX)),
 )
-_grad_broadcast_pow_normal = dict(same_shapes=(rand_ranged(1, 5, (2, 3)),
-                                               rand_ranged(-3, 3, (2, 3))),
+_grad_broadcast_pow_normal = dict(
+    same_shapes=(rand_ranged(1, 5, (2, 3)), rand_ranged(-3, 3, (2, 3))),
     scalar=(rand_ranged(1, 5, (2, 3)), rand_ranged(-3, 3, (1, 1))),
-    row=(rand_ranged(1, 5, (2, 3)), rand_ranged(-3, 3, (1, 3))),
+    row=(rand_ranged(1, 5, (2, 3)), rand_ranged(-3, 3, (1, 3))),
     column=(rand_ranged(1, 5, (2, 3)), rand_ranged(-3, 3, (2, 1))),
-    #complex1 = (randcomplex(2,3),randcomplex(2,3)),
-    #complex2 = (randcomplex(2,3),rand(2,3)),
-    #complex3 = (rand(2,3),randcomplex(2,3)),
-    #empty1 = (numpy.asarray([]), numpy.asarray([1])),
-    #empty2 = (numpy.asarray([0]), numpy.asarray([])),
+    # complex1 = (randcomplex(2,3),randcomplex(2,3)),
+    # complex2 = (randcomplex(2,3),rand(2,3)),
+    # complex3 = (rand(2,3),randcomplex(2,3)),
+    # empty1 = (numpy.asarray([]), numpy.asarray([1])),
+    # empty2 = (numpy.asarray([0]), numpy.asarray([])),
     x_eq_zero=(numpy.asarray([0.], dtype=config.floatX),
                numpy.asarray([2.], dtype=config.floatX)
@@ -1163,69 +1179,75 @@ _grad_broadcast_unary_0_2_no_complex = dict(
 # inplace ops when the input is integer and the output is float*
 # don't have a well defined behavior. We don't test that case.
 
-AbsTester = makeBroadcastTester(op=tensor.abs_,
+AbsTester = makeBroadcastTester(
+    op=tensor.abs_,
     expected=lambda x: abs(x),
     good=_good_broadcast_unary_normal,
     grad=_grad_broadcast_unary_normal)
 
 _good_broadcast_unary_normal_abs = copy(_good_broadcast_unary_normal)
 # Can't do inplace on Abs as the input/output are not of the same type!
 del _good_broadcast_unary_normal_abs['complex']
-AbsInplaceTester = makeBroadcastTester(op=inplace.abs__inplace,
+AbsInplaceTester = makeBroadcastTester(
+    op=inplace.abs__inplace,
     expected=lambda x: numpy.abs(x),
     good=_good_broadcast_unary_normal_abs,
     grad=_grad_broadcast_unary_normal,
     inplace=True)
 
-NegTester = makeBroadcastTester(op=tensor.neg,
+NegTester = makeBroadcastTester(
+    op=tensor.neg,
     expected=lambda x: -x,
     good=_good_broadcast_unary_normal,
     grad=_grad_broadcast_unary_normal)
-NegInplaceTester = makeBroadcastTester(op=inplace.neg_inplace,
+NegInplaceTester = makeBroadcastTester(
+    op=inplace.neg_inplace,
     expected=lambda x: -x,
     good=_good_broadcast_unary_normal,
     grad=_grad_broadcast_unary_normal,
     inplace=True)
 
-SgnTester = makeBroadcastTester(op=tensor.sgn,
+SgnTester = makeBroadcastTester(
+    op=tensor.sgn,
     expected=numpy.sign,
     good=_good_broadcast_unary_normal_no_complex,
     grad=_grad_broadcast_unary_normal,)
-SgnInplaceTester = makeBroadcastTester(op=inplace.sgn_inplace,
+SgnInplaceTester = makeBroadcastTester(
+    op=inplace.sgn_inplace,
     expected=numpy.sign,
     good=_good_broadcast_unary_normal_no_complex,
     grad=_grad_broadcast_unary_normal,
     inplace=True)
 
 IntDivTester = makeBroadcastTester(
     op=tensor.int_div,
     expected=lambda x, y: check_floatX((x, y), x // y),
     good=_good_broadcast_div_mod_normal_float,
     # I don't test the grad as the output is always an integer
     # (this is not a continuous output).
-    #grad=_grad_broadcast_div_mod_normal,
+    # grad=_grad_broadcast_div_mod_normal,
 )
 
 IntDivInplaceTester = makeBroadcastTester(
     op=inplace.int_div_inplace,
     expected=lambda x, y: check_floatX((x, y), x // y),
     good=_good_broadcast_div_mod_normal_float_inplace,
     # I don't test the grad as the output is always an integer
     # (this is not a continuous output).
-    #grad=_grad_broadcast_div_mod_normal,
+    # grad=_grad_broadcast_div_mod_normal,
     inplace=True
 )
 
-CeilTester = makeBroadcastTester(op=tensor.ceil,
+CeilTester = makeBroadcastTester(
+    op=tensor.ceil,
     expected=upcast_float16_ufunc(numpy.ceil),
     good=_good_broadcast_unary_normal_no_complex,
     grad=copymod(_grad_broadcast_unary_normal_noint,
                  extra=[numpy.asarray([-2.5, -1.5, -1.51, 0.49, .98, 1.02],
                                       dtype=floatX)]))
-CeilInplaceTester = makeBroadcastTester(op=inplace.ceil_inplace,
+CeilInplaceTester = makeBroadcastTester(
+    op=inplace.ceil_inplace,
     expected=upcast_float16_ufunc(numpy.ceil),
     good=_good_broadcast_unary_normal_no_complex,
     # corner cases includes a lot of integers: points where Ceil is not

@@ -1235,12 +1257,14 @@ CeilInplaceTester = makeBroadcastTester(op=inplace.ceil_inplace,
                                      dtype=floatX)]),
     inplace=True)
 
-FloorTester = makeBroadcastTester(op=tensor.floor,
+FloorTester = makeBroadcastTester(
+    op=tensor.floor,
     expected=upcast_float16_ufunc(numpy.floor),
     good=_good_broadcast_unary_normal_no_complex,
     grad=_grad_broadcast_unary_normal_noint)
-FloorInplaceTester = makeBroadcastTester(op=inplace.floor_inplace,
+FloorInplaceTester = makeBroadcastTester(
+    op=inplace.floor_inplace,
     expected=upcast_float16_ufunc(numpy.floor),
     good=_good_broadcast_unary_normal_no_complex,
     grad=_grad_broadcast_unary_normal_noint,

@@ -1278,7 +1302,6 @@ RoundHalfAwayFromZeroTester = makeBroadcastTester(
     expected=lambda a: theano.scalar.basic.round_half_away_from_zero_vec(a),
     good=_good_broadcast_unary_normal_float_no_empty_no_complex,
     grad=_grad_broadcast_unary_normal_no_complex_no_corner_case)
-    #_good_broadcast_unary_normal_float)
 
 RoundHalfAwayFromZeroInplaceTester = makeBroadcastTester(
     op=inplace.round_half_away_from_zero_inplace,

@@ -1287,12 +1310,14 @@ RoundHalfAwayFromZeroInplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_normal_no_complex_no_corner_case,
     inplace=True)
 
-SqrTester = makeBroadcastTester(op=tensor.sqr,
+SqrTester = makeBroadcastTester(
+    op=tensor.sqr,
     expected=numpy.square,
     good=_good_broadcast_unary_normal,
     grad=_grad_broadcast_unary_normal)
-SqrInplaceTester = makeBroadcastTester(op=inplace.sqr_inplace,
+SqrInplaceTester = makeBroadcastTester(
+    op=inplace.sqr_inplace,
     expected=numpy.square,
     good=_good_broadcast_unary_normal,
     grad=_grad_broadcast_unary_normal,
@@ -1313,7 +1338,8 @@ ExpInplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_normal,
     inplace=True)
 
-Exp2Tester = makeBroadcastTester(op=tensor.exp2,
+Exp2Tester = makeBroadcastTester(
+    op=tensor.exp2,
     expected=upcast_float16_ufunc(numpy.exp2),
     good=_good_broadcast_unary_normal,
     grad=_grad_broadcast_unary_normal)

@@ -1347,7 +1373,7 @@ _good_broadcast_unary_positive = dict(
     uint8=[numpy.arange(1, 256, dtype='uint8')],
     complex=(randc128_ranged(1, 5, (2, 3)),),
     empty=(numpy.asarray([], dtype=config.floatX),),
-    )
+)
 
 _good_broadcast_unary_positive_float = copymod(
     _good_broadcast_unary_positive,

@@ -1355,7 +1381,8 @@ _good_broadcast_unary_positive_float = copymod(
 _grad_broadcast_unary_positive = dict(normal=(rand_ranged(_eps, 5, (2, 3)),),)
 
-LogTester = makeBroadcastTester(op=tensor.log,
+LogTester = makeBroadcastTester(
+    op=tensor.log,
     expected=upcast_float16_ufunc(numpy.log),
     good=_good_broadcast_unary_positive,
     grad=_grad_broadcast_unary_positive)

@@ -1366,7 +1393,8 @@ LogInplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_positive,
     inplace=True)
 
-Log2Tester = makeBroadcastTester(op=tensor.log2,
+Log2Tester = makeBroadcastTester(
+    op=tensor.log2,
     expected=upcast_float16_ufunc(numpy.log2),
     good=_good_broadcast_unary_positive,
     grad=_grad_broadcast_unary_positive)

@@ -1377,7 +1405,8 @@ Log2InplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_positive,
     inplace=True)
 
-Log10Tester = makeBroadcastTester(op=tensor.log10,
+Log10Tester = makeBroadcastTester(
+    op=tensor.log10,
     expected=upcast_float16_ufunc(numpy.log10),
     good=_good_broadcast_unary_positive,
     grad=_grad_broadcast_unary_positive)

@@ -1388,7 +1417,8 @@ Log10InplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_positive,
     inplace=True)
 
-Log1pTester = makeBroadcastTester(op=tensor.log1p,
+Log1pTester = makeBroadcastTester(
+    op=tensor.log1p,
     expected=upcast_float16_ufunc(numpy.log1p),
     good=_good_broadcast_unary_positive,
     grad=_grad_broadcast_unary_positive)

@@ -1399,7 +1429,8 @@ Log1pInplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_positive,
     inplace=True)
 
-SqrtTester = makeBroadcastTester(op=tensor.sqrt,
+SqrtTester = makeBroadcastTester(
+    op=tensor.sqrt,
     expected=upcast_float16_ufunc(numpy.sqrt),
     good=_good_broadcast_unary_positive,
     grad=_grad_broadcast_unary_positive)

@@ -1456,7 +1487,8 @@ Rad2degInplaceTester = makeBroadcastTester(
     inplace=True,
     eps=angle_eps)
 
-SinTester = makeBroadcastTester(op=tensor.sin,
+SinTester = makeBroadcastTester(
+    op=tensor.sin,
     expected=upcast_float16_ufunc(numpy.sin),
     good=_good_broadcast_unary_wide,
     grad=_grad_broadcast_unary_wide)

@@ -1484,7 +1516,8 @@ _good_broadcast_unary_arcsin_float = copymod(
 # unstable near those values
 _grad_broadcast_unary_arcsin = dict(normal=(rand_ranged(-0.9, 0.9, (2, 3)),),)
 
-ArcsinTester = makeBroadcastTester(op=tensor.arcsin,
+ArcsinTester = makeBroadcastTester(
+    op=tensor.arcsin,
     expected=upcast_float16_ufunc(numpy.arcsin),
     good=_good_broadcast_unary_arcsin,
     grad=_grad_broadcast_unary_arcsin)

@@ -1495,7 +1528,8 @@ ArcsinInplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_arcsin,
     inplace=True)
 
-CosTester = makeBroadcastTester(op=tensor.cos,
+CosTester = makeBroadcastTester(
+    op=tensor.cos,
     expected=upcast_float16_ufunc(numpy.cos),
     good=_good_broadcast_unary_wide,
     grad=_grad_broadcast_unary_wide)

@@ -1506,13 +1540,15 @@ CosInplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_wide,
     inplace=True)
 
 
 def test_py_c_match():
     a = tensor.TensorType(dtype='int8', broadcastable=(False,))()
     f = theano.function([a], tensor.arccos(a), mode='DebugMode')
     # This can fail in DebugMode
     f(numpy.asarray([1, 0, -1], dtype='int8'))
 
-ArccosTester = makeBroadcastTester(op=tensor.arccos,
+ArccosTester = makeBroadcastTester(
+    op=tensor.arccos,
     expected=upcast_float16_ufunc(numpy.arccos),
     good=_good_broadcast_unary_arcsin,
     grad=_grad_broadcast_unary_arcsin)

@@ -1536,7 +1572,8 @@ _good_broadcast_unary_tan = dict(
 _grad_broadcast_unary_tan = dict(normal=(rand_ranged(-1.5, 1.5, (2, 3)),),
                                  shifted=(rand_ranged(1.6, 4.6, (2, 3)),))
 
-TanTester = makeBroadcastTester(op=tensor.tan,
+TanTester = makeBroadcastTester(
+    op=tensor.tan,
     expected=upcast_float16_ufunc(numpy.tan),
     good=_good_broadcast_unary_tan,
     grad=_grad_broadcast_unary_tan)

@@ -1548,7 +1585,8 @@ TanInplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_tan,
     inplace=True)
 
-ArctanTester = makeBroadcastTester(op=tensor.arctan,
+ArctanTester = makeBroadcastTester(
+    op=tensor.arctan,
     expected=upcast_float16_ufunc(numpy.arctan),
     good=_good_broadcast_unary_wide,
     grad=_grad_broadcast_unary_wide)

@@ -1661,7 +1699,8 @@ ArcsinhInplaceTester = makeBroadcastTester(
     grad=_grad_broadcast_unary_normal,
     inplace=True)
 
-TanhTester = makeBroadcastTester(op=tensor.tanh,
+TanhTester = makeBroadcastTester(
+    op=tensor.tanh,
     expected=upcast_float16_ufunc(numpy.tanh),
     good=_good_broadcast_unary_normal,
     grad=_grad_broadcast_unary_normal)
@@ -2123,7 +2162,8 @@ ConjInplaceTester = makeBroadcastTester(
inplace
=
True
)
DotTester
=
makeTester
(
name
=
'DotTester'
,
DotTester
=
makeTester
(
name
=
'DotTester'
,
op
=
dot
,
expected
=
lambda
x
,
y
:
numpy
.
dot
(
x
,
y
),
checks
=
{},
...
...
@@ -2131,12 +2171,9 @@ DotTester = makeTester(name='DotTester',
correct2
=
(
rand
(
5
,
7
),
rand
(
7
,
9
)),
correct3
=
(
rand
(
5
,
7
),
rand
(
7
)),
correct4
=
(
rand
(
5
),
rand
(
5
,
7
)),
mixed1
=
(
rand
(
5
)
.
astype
(
'float32'
),
rand
(
5
,
7
)),
mixed2
=
(
rand
(
5
)
.
astype
(
'float64'
),
rand
(
5
,
7
)),
complex1
=
(
randcomplex
(
5
,
7
),
randcomplex
(
7
)),
mixed1
=
(
rand
(
5
)
.
astype
(
'float32'
),
rand
(
5
,
7
)),
mixed2
=
(
rand
(
5
)
.
astype
(
'float64'
),
rand
(
5
,
7
)),
complex1
=
(
randcomplex
(
5
,
7
),
randcomplex
(
7
)),
complex2
=
(
rand
(
5
,
7
),
randcomplex
(
7
)),
complex3
=
(
randcomplex
(
5
,
7
),
rand
(
7
)),
empty1
=
(
numpy
.
asarray
([],
dtype
=
config
.
floatX
),
...
...
@@ -2241,8 +2278,7 @@ SecondBroadcastTester = makeTester(
multi_dtype_checks
((
2
,
3
,
2
),
(
3
,
2
)),
multi_dtype_checks
((
2
,
3
,
2
),
(
2
,)),
)),
# I can't think of any way to make this fail at
# build time
# I can't think of any way to make this fail at build time
# Just some simple smoke tests
bad_runtime
=
dict
(
fail1
=
(
rand
(
5
,
4
),
rand
(
5
)),
...
...
@@ -2283,43 +2319,32 @@ AllocTester = makeBroadcastTester(
     correct01_bcast=(rand(1), numpy.int32(7)),
     correct02=(rand(), numpy.int32(4), numpy.int32(7)),
     correct12=(rand(7), numpy.int32(4), numpy.int32(7)),
     correct13=(rand(7), numpy.int32(2), numpy.int32(4), numpy.int32(7)),
     correct23=(rand(4, 7), numpy.int32(2), numpy.int32(4), numpy.int32(7)),
     correctb1=(rand(1, 7), numpy.int32(4), numpy.int32(7)),
     correctb2=(rand(1, 7), numpy.int32(2), numpy.int32(4), numpy.int32(7)),
     correctb3=(rand(7, 1), numpy.int32(7), numpy.int32(4)),
     correctb4=(rand(7, 1), numpy.int32(2), numpy.int32(7), numpy.int32(4)),
     ),
     bad_runtime=dict(
         bad_shape12=(rand(7), numpy.int32(7), numpy.int32(5)),
     ),
     bad_build=dict(
         vec=(rand(1), [numpy.int32(2)]),
         too_big32=(rand(6, 2, 4), numpy.int32(6), numpy.int32(2)),
         too_big32b=(rand(6, 2, 4), numpy.int32(6), numpy.int32(4)),
         too_big32c=(rand(6, 2, 4), numpy.int32(2), numpy.int32(4)),
         too_big32d=(rand(6, 2, 4), numpy.int32(2), numpy.int32(6)),
         too_big32e=(rand(6, 2, 4), numpy.int32(4), numpy.int32(6)),
         too_big32f=(rand(6, 2, 4), numpy.int32(4), numpy.int32(2)),
     ),
 )
 
 # Since not all inputs of Alloc are differentiable, we need different testers
 s1, s2, s3 = randint_ranged(1, 13, (3,))
 # alloc a scalar into a vector
 Alloc01GradTester = makeBroadcastTester(
     name='Alloc01GradTester',
     #op = (lambda self, x: alloc(x, s1)),
     op=(lambda x: alloc(x, s1)),
     expected=(lambda x: numpy.zeros((s1,), dtype=x.dtype) + x),
     grad=dict(
...
@@ -2332,7 +2357,6 @@ Alloc01GradTester = makeBroadcastTester(
 
 # alloc a vector into a tensor3
 Alloc13GradTester = makeBroadcastTester(
     name='Alloc13GradTester',
     #op = (lambda self, x: alloc(x, s1, s2, s3)),
     op=(lambda x: alloc(x, s1, s2, s3)),
     expected=(lambda x: numpy.zeros((s1, s2, s3), dtype=x.dtype) + x),
     grad=dict(
...
@@ -2431,7 +2455,7 @@ class TestAsTensorVariable(unittest.TestCase):
     def test_one_output(self):
         good_apply_var = ApplyDefaultTestOp(0).make_node(self.x)
-        x = as_tensor_variable(good_apply_var)
+        as_tensor_variable(good_apply_var)
 
     def test_below_zero_output(self):
         bad_apply_var = ApplyDefaultTestOp(-1).make_node(self.x)
...
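The hunk above drops an assignment whose result is never read (flake8 reports this as F841, "local variable is assigned to but never used"). A minimal sketch of the pattern, with a hypothetical `validate` helper standing in for `as_tensor_variable`:

```python
def validate(value):
    # Hypothetical stand-in: like as_tensor_variable in the test above,
    # it raises on bad input, so calling it purely for the side effect
    # (the implicit "does not raise" check) is meaningful.
    if value < 0:
        raise ValueError("negative input")
    return value * 2

# Before the cleanup the test bound an unused name (flake8 F841):
#     x = validate(3)
# After the cleanup the call stands alone; the exception check remains.
validate(3)
```

The behaviour of the test is unchanged: an exception from the call would still fail it.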
@@ -2472,7 +2496,7 @@ class TestAlloc(unittest.TestCase):
         variables = self.shared(numpy.ones((50,), dtype=self.dtype))
         idx = tensor.constant(numpy.arange(50))
-        for alloc, (subtensor, n_alloc) in zip(self.allocs, [
+        for alloc_, (subtensor, n_alloc) in zip(self.allocs, [
             # IncSubtensor1
             (some_matrix[:60], 2),
             # AdvancedIncSubtensor1
...
@@ -2487,31 +2511,31 @@ class TestAlloc(unittest.TestCase):
             fgrad = theano.function([some_vector], grad_derp,
                                     mode=self.mode)
             topo_obj = fobj.maker.fgraph.toposort()
-            #<= is needed as the GPU currently don't implement
+            # <= is needed as the GPU currently don't implement
             # AdvancedIncSubtensor. When this is the case it can be
             # replaced with ==.
-            assert numpy.sum([isinstance(node.op, type(alloc))
+            assert numpy.sum([isinstance(node.op, type(alloc_))
                               for node in topo_obj]) <= 1
             topo_grad = fgrad.maker.fgraph.toposort()
             # print subtensor
             # theano.printing.debugprint(fgrad)
-            assert numpy.sum([isinstance(node.op, type(alloc))
+            assert numpy.sum([isinstance(node.op, type(alloc_))
                               for node in topo_grad]) == n_alloc, (
-                alloc, subtensor, n_alloc, topo_grad)
+                alloc_, subtensor, n_alloc, topo_grad)
             fobj(test_params)
             fgrad(test_params)
 
     def test_alloc_output(self):
         val = tensor.constant(self.rng.randn(1, 1), dtype=self.dtype)
-        for alloc in self.allocs:
+        for alloc_ in self.allocs:
             # The output is the result of the alloc operation,
             # we do not want it to be constant-folded
-            out = alloc(val, 50, 60)
+            out = alloc_(val, 50, 60)
 
             f = theano.function([], out, mode=self.mode)
             topo = f.maker.fgraph.toposort()
-            assert numpy.sum([isinstance(node.op, type(alloc))
+            assert numpy.sum([isinstance(node.op, type(alloc_))
                               for node in topo]) == 1
             assert not isinstance(topo[0].op, DeepCopyOp)
...
@@ -2800,7 +2824,8 @@ class CastTester(unittest.TestCase):
                 self.assertRaises(TypeError,
                                   tensor.cast(inp, dtype=complex_dtype))
 
 ClipTester = makeTester(name='ClipTester',
     op=clip,
     expected=lambda x, y, z: numpy.clip(x, y, z),
     good=dict(correct1=((5 * rand(5, 5)).astype('float32'),
...
@@ -2832,8 +2857,8 @@ ClipTester = makeTester(name='ClipTester',
               correct9=(randint(0, 5).astype('uint16'),
                         numpy.array(2, dtype='uint16'),
                         numpy.array(4, dtype='uint16')),)
-    )
-# I can't think of any way to make this fail at runtime
+)
+
 
 class T_Clip(unittest.TestCase):
...
@@ -3122,7 +3147,7 @@ class T_max_and_argmax(unittest.TestCase):
         try:
             eval_outputs(max_and_argmax(n, 3))
             assert False
-        except ValueError as e:
+        except ValueError:
             pass
         finally:
             _logger.setLevel(oldlevel)
...
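Several hunks in this file make the same small fix: when the bound exception object is never used, `except ValueError as e:` becomes a bare `except ValueError:` (flake8's F841 again, this time on the exception name). A minimal illustration with a hypothetical `might_fail` helper standing in for `max_and_argmax` with a bad axis:

```python
def might_fail(axis):
    # Hypothetical stand-in: raises for an out-of-range axis, as the
    # max_and_argmax call in the hunk above does.
    if axis >= 2:
        raise ValueError("axis out of range")
    return axis

caught = False
try:
    might_fail(3)
    assert False  # should have raised
except ValueError:  # was "except ValueError as e:", but e was never used
    caught = True
```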
@@ -3135,7 +3160,7 @@ class T_max_and_argmax(unittest.TestCase):
         try:
             eval_outputs(max_and_argmax(n, -3))
             assert False
-        except ValueError as e:
+        except ValueError:
             pass
         finally:
             sys.stderr = old_stderr
...
@@ -3338,7 +3363,7 @@ class T_argmin_argmax(unittest.TestCase):
         try:
             eval_outputs(fct(n, 3))
             assert False
-        except ValueError as e:
+        except ValueError:
             pass
         finally:
             _logger.setLevel(oldlevel)
...
@@ -3352,7 +3377,7 @@ class T_argmin_argmax(unittest.TestCase):
         try:
             eval_outputs(fct(n, -3))
             assert False
-        except ValueError as e:
+        except ValueError:
             pass
         finally:
             sys.stderr = old_stderr
...
@@ -3401,7 +3426,7 @@ class T_argmin_argmax(unittest.TestCase):
         try:
             cost = argmin(n, axis=-1)
             cost.name = None
-            g = grad(cost, n)
+            grad(cost, n)
             raise Exception('Expected an error')
         except TypeError:
             pass
...
@@ -3471,7 +3496,7 @@ class T_min_max(unittest.TestCase):
         try:
             eval_outputs(fct(n, 3))
             assert False
-        except ValueError as e:
+        except ValueError:
             pass
         finally:
             _logger.setLevel(oldlevel)
...
@@ -3485,7 +3510,7 @@ class T_min_max(unittest.TestCase):
         try:
             eval_outputs(fct(n, -3))
             assert False
-        except ValueError as e:
+        except ValueError:
             pass
         finally:
             sys.stderr = old_stderr
...
@@ -3544,8 +3569,8 @@ class T_min_max(unittest.TestCase):
                 z[numpy.argmax(data, axis=axis)] += 1
             else:
                 for id, v in enumerate(argmax):
                     z[v * numpy.prod(data.shape[data.ndim - 1:axis:-1]) + id] += 1
 
             z = z.reshape(data.shape)
             assert numpy.all(max_grad_data == z)
...
@@ -3578,8 +3603,8 @@ class T_min_max(unittest.TestCase):
                 z[numpy.argmin(data, axis=axis)] += 1
             else:
                 for id, v in enumerate(argmin):
                     z[v * numpy.prod(data.shape[data.ndim - 1:axis:-1]) + id] += 1
 
             z = z.reshape(data.shape)
             assert numpy.all(min_grad_data == z)
...
@@ -3604,9 +3629,9 @@ class T_min_max(unittest.TestCase):
         # This not implemented, so we disable the test. See ticket:
         # http://www.assembla.com/spaces/theano/tickets/511
         data = rand(2, 3)
-        n = as_tensor_variable(data)
         for fct in [max_and_argmax, max, min]:
             utt.verify_grad(lambda v: fct(v, axis=[0, 1]), [data])
+        # n = as_tensor_variable(data)
         # check_grad_max(data, eval_outputs(grad(max_and_argmax(n,
         # axis=1)[0], n)),axis=1)
...
@@ -3676,15 +3701,13 @@ class T_Join_and_Split(unittest.TestCase):
         Join.debug = False
         utt.seed_rng()
         self.mode = theano.compile.get_default_mode().excluding(
             'constant_folding')
         self.join_op = Join()
         self.split_op_class = Split
         self.make_vector_op = opt.MakeVector()
         self.floatX = config.floatX
         self.hide_error = theano.config.mode not in ['DebugMode',
                                                      'DEBUG_MODE',
                                                      'FAST_COMPILE']
         self.shared = shared
 
     def eval_outputs_and_check_join(self, outputs):
...
def
test_join_scalar
(
self
):
a
=
as_tensor_variable
(
1
)
b
=
as_tensor_variable
(
2
)
try
:
s
=
join
(
0
,
a
,
b
)
except
TypeError
:
return
self
.
fail
()
self
.
assertRaises
(
TypeError
,
join
,
0
,
a
,
b
)
def
test_stack_mixed_type_constants
(
self
):
# tested only on cpu as gpu support only float32
...
...
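The hunk above replaces a five-line try/except/fail block with a single `assertRaises` call. A self-contained sketch of both styles, with a hypothetical `join` stand-in (not Theano's) that rejects scalar inputs the way `theano.tensor.join` does in this test:

```python
import unittest


def join(axis, *tensors):
    # Hypothetical stand-in: rejects scalar (non-sequence) inputs,
    # as theano.tensor.join does in the test above.
    if any(not hasattr(t, "__len__") for t in tensors):
        raise TypeError("cannot join scalars")
    return [x for t in tensors for x in t]


class TestJoinScalar(unittest.TestCase):
    def test_old_style(self):
        # The five-line pattern the PR removes.
        try:
            join(0, 1, 2)
        except TypeError:
            return
        self.fail()

    def test_new_style(self):
        # The one-line replacement with identical behaviour.
        self.assertRaises(TypeError, join, 0, 1, 2)
```

Both tests pass; `assertRaises` also produces a clearer failure message when the exception is not raised.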
@@ -3811,16 +3830,16 @@ class T_Join_and_Split(unittest.TestCase):
         s = stack([a, b], axis=-1)
         f = function([a, b], s, mode=self.mode)
         v = numpy.zeros((2, 3, 2))
         v[:, :, 0] = v1
         v[:, :, 1] = v2
         out = f(v1, v2)
         self.assertTrue(v.shape == out.shape)
         self.assertTrue(numpy.all(v == out))
 
         s = stack([a, b], axis=-2)
         f = function([a, b], s, mode=self.mode)
         v = numpy.zeros((2, 2, 3))
         v[:, 0, :] = v1
         v[:, 1, :] = v2
         out = f(v1, v2)
         self.assertTrue(v.shape == out.shape)
         self.assertTrue(numpy.all(v == out))
...
@@ -4279,7 +4298,7 @@ class T_Join_and_Split(unittest.TestCase):
         # Should raise an error if length of dimension 0 is not 1
         self.assertRaises(TypeError, a.set_value,
                           rng.rand(2, 4, 1).astype(self.floatX))
-        #self.assertRaises(TypeError, f, bad_a_val)
+        # self.assertRaises(TypeError, f, bad_a_val)
 
     def test_broadcastable_flags_many_dims_and_inputs(self):
         # Test that the right broadcastable flags get set for a join
...
@@ -4382,7 +4401,7 @@ class T_Join_and_Split(unittest.TestCase):
         x = tensor.TensorType(self.floatX, [False, False, True])()
         u = tensor.TensorType(self.floatX, [False, False, True])()
         # This line used to crash.
-        z = tensor.concatenate([x, -u], axis=2)
+        tensor.concatenate([x, -u], axis=2)
 
     def test_concatenate_same(self):
         # Test that we can concatenate the same tensor multiple time.
...
@@ -4426,6 +4445,7 @@ class T_Join_and_Split(unittest.TestCase):
                     for node in f.maker.fgraph.toposort()])
         self.assertRaises(ValueError, f)
 
+
 def test_join_inplace():
     # Test join to work inplace.
     #
...
@@ -4443,7 +4463,7 @@ def test_join_inplace():
     f = theano.function([theano.In(x, borrow=True), s],
                         theano.Out(c, borrow=True))
     data = numpy.array([3, 4, 5], dtype=theano.config.floatX)
     print(f(data, 0))
 
     if theano.config.mode not in ["DebugMode", "DEBUG_MODE"]:
         assert f(data, 0) is data
...
@@ -5058,7 +5078,7 @@ class t_dot(unittest.TestCase):
         _logger.setLevel(logging.CRITICAL)
         try:
             try:
-                tz = eval_outputs([z])
+                eval_outputs([z])
                 assert False    # should have raised exception
             except ValueError as e:
                 e0 = exc_message(e)
...
@@ -5068,8 +5088,8 @@ class t_dot(unittest.TestCase):
                     # Reported by blas or Theano.
                     e0.split()[0:2] == ['Shape', 'mismatch:'] or
                     # Reported by Theano perform
-                    e0.split()[0:4] == ['Incompatible', 'shapes', 'for',
-                                        'gemv'] or
+                    (e0.split()[0:4] == ['Incompatible', 'shapes',
+                                         'for', 'gemv']) or
                     e)
         finally:
             _logger.setLevel(oldlevel)
...
@@ -5251,7 +5271,7 @@ class T_scalarfromtensor(unittest.TestCase):
         self.assertTrue(v.shape == (), v.shape)
         tt = lscalar()
         ss = scalar_from_tensor(tt)
-        g = ss.owner.op.grad([tt], [ss])
+        ss.owner.op.grad([tt], [ss])
         fff = function([tt], ss)
         v = fff(numpy.asarray(5))
         self.assertTrue(v == 5, v)
...
@@ -5341,7 +5361,6 @@ class test_grad(unittest.TestCase):
     def test_cost_is_scalar(self):
         # grad: Test that a non-scalar cost raises a TypeError
         s = scalar()
         v = vector()
         m = matrix()
         # grad(v,...) and grad(m,...) should fail
...
@@ -5355,7 +5374,6 @@ class T_op_cache(unittest.TestCase):
     def test0(self):
         # trigger bug in ticket #162
         lr = constant(0.011)
         v = matrix()
         v.name = 'v'
         gv = fill(v / v, 1.0) / v - (fill(v / v, 1.0) * v) / (v * v)
...
@@ -5501,14 +5519,12 @@ class T_reshape(utt.InferShapeTester, utt.TestOptimizationMixin):
         # Test reshape to 1 dim
         r = a.reshape(shapes, ndim=1)
-        z = zeros_like(r)
         f = self.function([a, shapes], r)
         self.assertRaises(ValueError, f, a_val, [13])
 
         # Test reshape to 2 dim
         r = a.reshape(shapes, ndim=2)
-        z = zeros_like(r)
         f = self.function([a, shapes], r)
...
@@ -5631,16 +5647,8 @@ def test_flatten_broadcastable():
 def test_flatten_outdim_invalid():
     a = dmatrix()
-    try:
-        c = flatten(a, 3)
-        assert False
-    except ValueError:
-        pass
-    try:
-        c = flatten(a, 0)
-        assert False
-    except ValueError:
-        pass
+    assert_raises(ValueError, flatten, a, 3)
+    assert_raises(ValueError, flatten, a, 0)
 
 
 def test_is_flat():
...
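The hunk above collapses two try/except blocks into `assert_raises` calls (here the `numpy.testing` helper rather than the `unittest` method, since `test_flatten_outdim_invalid` is a plain function). The same idiom, with a hypothetical `flatten` stand-in mirroring only the outdim validation:

```python
from numpy.testing import assert_raises


def flatten(a, outdim=1):
    # Hypothetical stand-in: like theano's flatten, outdim must lie
    # between 1 and the number of dimensions (fixed to 2 here).
    ndim = 2
    if not 1 <= outdim <= ndim:
        raise ValueError("invalid outdim %s" % outdim)
    return [x for row in a for x in row]


a = [[1, 2], [3, 4]]
assert_raises(ValueError, flatten, a, 3)   # outdim too large
assert_raises(ValueError, flatten, a, 0)   # outdim too small
```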
@@ -5717,61 +5725,61 @@ def test_tile():
     k = 0
     for xtype in [vector(), matrix(), tensor3(), tensor4()]:
         x = xtype
         k = k + 1
         x_ = rng.randn(*test_shape[0:k]).astype(config.floatX)
 
         # integer:
         reps_ = 2
         f = function([x], tile(x, reps_))
         assert numpy.all(f(x_) == numpy.tile(x_, reps_))
 
         # tensor.scalar:
         reps = iscalar()
         reps_ = 2
         f = function([x, reps], tile(x, reps))
         assert numpy.all(f(x_, reps_) == numpy.tile(x_, reps_))
 
         # tensor.vector:
         reps = ivector()
         reps_ = [2] if k == 1 or k == 2 else [2, 3]
         ndim_ = k
         f = function([x, reps], tile(x, reps, ndim_))
         assert numpy.all(f(x_, reps_) == numpy.tile(x_, reps_))
 
         # list of integers:
         reps_ = [2, 3, 4]
         f = function([x], tile(x, reps_))
         assert numpy.all(f(x_) == numpy.tile(x_, reps_))
 
         # list of integers and tensor.scalars:
         d = iscalar()
         reps = [2, d, 4]
         f = function([x, d], tile(x, reps))
         reps_ = [2, 3, 4]
         assert numpy.all(f(x_, 3) == numpy.tile(x_, reps_))
 
         # reps is list, len(reps) > x.ndim, 3 cases below:
         r = [2, 3, 4, 5, 6]
         reps_ = r[:k + 1]  # len(reps_) = x.ndim+1
         # (1) ndim = None.
         f = function([x], tile(x, reps_))
         assert numpy.all(f(x_) == numpy.tile(x_, reps_))
         # (2) ndim = len(reps).
         ndim_ = len(reps_)
         f = function([x], tile(x, reps_, ndim_))
         assert numpy.all(f(x_) == numpy.tile(x_, reps_))
         # (3) ndim > len(reps)
         ndim_ = len(reps_) + 1
         f = function([x], tile(x, reps_, ndim_))
         assert numpy.all(f(x_) == numpy.tile(x_, [1] + reps_))
 
         # reps is list, ndim > x.ndim > len(reps):
         r = [2, 3, 4, 5]
         if k > 1:
             ndim_ = k + 1
             reps_ = r[:k - 1]
             f = function([x], tile(x, reps_, ndim_))
             assert numpy.all(f(x_) == numpy.tile(x_, [1, 1] + reps_))
 
         # error raising test: ndim not specified when reps is vector
         reps = ivector()
...
@@ -5787,14 +5795,14 @@ def test_tile():
         # error raising test: ndim is not None, ndim < x.ndim
         # 3 cases below (reps is list/tensor.scalar/tensor.vector):
         for reps in [[2, 3, 4], iscalar(), ivector()]:
             if k > 1:
                 ndim = k - 1
                 numpy.testing.assert_raises(ValueError, tile, x, reps, ndim)
 
         # error raising test: reps is list, len(reps) > ndim
         r = [2, 3, 4, 5, 6]
         reps = r[:k + 1]
         ndim = k
         numpy.testing.assert_raises(ValueError, tile, x, reps, ndim)
...
@@ -5803,11 +5811,12 @@ def test_tile():
         # reps_value is the real value when excuting the function.
         reps = ivector()
         r = [2, 3, 4, 5, 6, 7]
         reps_ = r[:k + 2]
         ndim_ = k + 1
         f = function([x, reps], tile(x, reps, ndim_))
         numpy.testing.assert_raises(AssertionError, f, x_, reps_)
 
 
 def test_tile_grad():
     def grad_tile(x, reps, np_x):
...
@@ -5887,8 +5896,8 @@ class TestARange(unittest.TestCase):
             assert out.dtype == config.floatX
         else:
             raise NotImplementedError(config.cast_policy)
         arg_vals = [(0, 5, 1), (2, 11, 4), (-5, 1.1, 1.2), (1.3, 2, -2.1),
                     (10, 2, 2)]
         for arg_v in arg_vals:
             start_v, stop_v, step_v = arg_v
             start_v_, stop_v_, step_v_ = numpy.asarray(arg_v,
...
@@ -5911,8 +5920,8 @@ class TestARange(unittest.TestCase):
             f = function([start, stop, step], out)
             assert out.dtype == start.type.dtype
         arg_vals = [(0, 5, 1), (2, 11, 4), (-5, 1.1, 1.2), (1.3, 2, -2.1),
                     (10, 2, 2)]
         for arg_v in arg_vals:
             start_v, stop_v, step_v = arg_v
             start_v_, stop_v_, step_v_ = numpy.asarray(arg_v,
...
@@ -6128,7 +6137,7 @@ class TestARange(unittest.TestCase):
         out = arange(0, stop, 1)
         f = function([stop], out.shape, mode=mode)
         assert len(f.maker.fgraph.toposort()) == 2
-        #[Elemwise{Cast{int64}}(stop), MakeVector(Elemwise{Cast{int64}}.0)]
+        # [Elemwise{Cast{int64}}(stop), MakeVector(Elemwise{Cast{int64}}.0)]
         if config.cast_policy == 'custom':
             assert out.dtype == 'int64'
...
@@ -6174,26 +6183,26 @@ class TestNdGrid(unittest.TestCase):
     def test_mgrid_theano_variable_numpy_equiv(self):
         nfmgrid = numpy.mgrid[0:1:.1, 1:10:1., 10:100:10.]
         nimgrid = numpy.mgrid[0:2:1, 1:10:1, 10:100:10]
         i, j, k = dscalars('i', 'j', 'k')
         l, m, n = iscalars('l', 'm', 'n')
         tfmgrid = mgrid[i:1:.1, 1:j:1., 10:100:k]
         timgrid = mgrid[l:2:1, 1:m:1, 10:100:n]
         ff = theano.function([i, j, k], tfmgrid)
         fi = theano.function([l, m, n], timgrid)
         for n, t in zip((nfmgrid, nimgrid), (ff(0, 10, 10.), fi(0, 10, 10))):
             for ng, tg in zip(n, t):
                 utt.assert_allclose(ng, tg)
 
     def test_ogrid_theano_variable_numpy_equiv(self):
         nfogrid = numpy.ogrid[0:1:.1, 1:10:1., 10:100:10.]
         niogrid = numpy.ogrid[0:2:1, 1:10:1, 10:100:10]
         i, j, k = dscalars('i', 'j', 'k')
         l, m, n = iscalars('l', 'm', 'n')
         tfogrid = ogrid[i:1:.1, 1:j:1., 10:100:k]
         tiogrid = ogrid[l:2:1, 1:m:1, 10:100:n]
         ff = theano.function([i, j, k], tfogrid)
         fi = theano.function([l, m, n], tiogrid)
         for n, t in zip((nfogrid, niogrid), (ff(0, 10, 10.), fi(0, 10, 10))):
             for ng, tg in zip(n, t):
                 utt.assert_allclose(ng, tg)
...
@@ -6305,8 +6314,8 @@ class TestPermuteRowElements(unittest.TestCase):
         # Each row of p contains a permutation to apply to the corresponding
         # row of input
         out_bis = numpy.asarray([i_row[p_row]
                                  for i_row, p_row in zip(input_val, p_val)])
         assert numpy.all(out_val == out_bis)
 
         # Verify gradient
...
@@ -6325,8 +6334,8 @@ class TestPermuteRowElements(unittest.TestCase):
         rng = numpy.random.RandomState(utt.fetch_seed())
         input_val = rng.uniform(size=(5,)).astype(config.floatX)
         p_val = numpy.asarray([rng.permutation(5) for i in range(3)],
                               dtype='int32')
         out_val = permute(input_val, p_val)
 
         # Each row of p contains a permutation to apply to the input vector
...
@@ -6357,8 +6366,8 @@ class TestPermuteRowElements(unittest.TestCase):
         # Each row of p contains a permutation to apply to each row
         # of the input tensor
         out_bis = numpy.asarray([[in_mat[0, p_row] for p_row in p_val]
                                  for in_mat in input_val])
         assert numpy.all(out_val == out_bis)
 
         # Verify gradient
...
@@ -6410,9 +6419,9 @@ class test_tensordot(unittest.TestCase):
                      [((1,), (0,)), [(4, 7), (7, 9)]],
                      [((1,), (1,)), [(4, 7), (9, 7)]],
                      [((0, 1), (0, 1)), [(4, 7), (4, 7)]],
                      # [((0, 1), (1, 0)), [(4, 7), (7, 4)]],
                      # [((1, 0), (1, 0)), [(4, 7), (4, 7)]],
                      # [((1, 0), (0, 1)), [(4, 7), (7, 4)]],
                      ]:
             c = tensordot(amat, bmat, axes)
             f3 = inplace_func([amat, bmat], c)
...
@@ -6427,9 +6436,9 @@ class test_tensordot(unittest.TestCase):
                      [((0,), (1,)), [(1, 2, 3, 4), (3, 1)]],
                      [((0,), (0,)), [(1, 2, 3, 4), (1, 3)]],
                      [((3,), (0,)), [(1, 2, 3, 4), (4, 1)]],
                      # [((3, 1), (0, 1)), [(1, 2, 3, 4), (4, 2)]],
                      # [((0, 1), (1, 0)), [(1, 2, 3, 4), (2, 1)]],
                      # [((3, 1), (1, 0)), [(1, 2, 3, 4), (2, 4)]],
                      ]:
             atens = tensor4()
             c = tensordot(atens, bmat, axes)
...
@@ -6685,7 +6694,7 @@ def test_autocast():
     # Call test functions for all possible values of `config.cast_policy`.
     for autocast_cfg in (
             'custom',
-            #'numpy', # Commented out until it is implemented properly.
+            # 'numpy', # Commented out until it is implemented properly.
             'numpy+floatX',
             ):
         config.cast_policy = autocast_cfg
...
@@ -6721,10 +6730,10 @@ def _test_autocast_custom():
     with autocast_float_as('float32'):
         assert (dvector() + 1.1).dtype == 'float64'
         assert (fvector() + 1.1).dtype == 'float32'
-        assert (fvector() + theano._asarray(1.1, dtype='float64')).dtype == \
-            'float64'
-        assert (fvector() + theano._asarray(1.1, dtype='float32')).dtype == \
-            'float32'
+        assert ((fvector() + theano._asarray(1.1, dtype='float64')).dtype ==
+                'float64')
+        assert ((fvector() + theano._asarray(1.1, dtype='float32')).dtype ==
+                'float32')
 
         assert (dvector() + 1).dtype == 'float64'
         assert (fvector() + 1).dtype == 'float32'
...
@@ -6734,10 +6743,10 @@ def _test_autocast_custom():
         assert (dvector() + 1.1).dtype == 'float64'
         assert (fvector() + 1.1).dtype == 'float64'
         assert (fvector() + 1.0).dtype == 'float64'
-        assert (fvector() + theano._asarray(1.1, dtype='float64')).dtype == \
-            'float64'
-        assert (fvector() + theano._asarray(1.1, dtype='float32')).dtype == \
-            'float32'
+        assert ((fvector() + theano._asarray(1.1, dtype='float64')).dtype ==
+                'float64')
+        assert ((fvector() + theano._asarray(1.1, dtype='float32')).dtype ==
+                'float32')
 
         assert (dvector() + 1).dtype == 'float64'
         assert (fvector() + 1).dtype == 'float32'
...
@@ -6826,16 +6835,29 @@ class test_arithmetic_cast(unittest.TestCase):
     def test_arithmetic_cast(self):
         backup_config = config.cast_policy
         dtypes = get_numeric_types(with_complex=True)
+
         # Here:
         # scalar == scalar stored as a 0d array
         # array == 1d array
         # i_scalar == scalar type used internally by Theano
-        theano_scalar = lambda dtype: tensor.scalar(dtype=str(dtype))
-        numpy_scalar = lambda dtype: numpy.array(1, dtype=dtype)
-        theano_array = lambda dtype: tensor.vector(dtype=str(dtype))
-        numpy_array = lambda dtype: numpy.array([1], dtype=dtype)
-        theano_i_scalar = lambda dtype: theano.scalar.Scalar(str(dtype))()
-        numpy_i_scalar = numpy_scalar
+        def theano_scalar(dtype):
+            return tensor.scalar(dtype=str(dtype))
+
+        def numpy_scalar(dtype):
+            return numpy.array(1, dtype=dtype)
+
+        def theano_array(dtype):
+            return tensor.vector(dtype=str(dtype))
+
+        def numpy_array(dtype):
+            return numpy.array([1], dtype=dtype)
+
+        def theano_i_scalar(dtype):
+            return theano.scalar.Scalar(str(dtype))()
+
+        def numpy_i_scalar(dtype):
+            return numpy_scalar(dtype)
+
         if config.int_division == 'int':
             # Avoid deprecation warning during tests.
             warnings.filterwarnings('ignore',
                                     message='Division of two integer',
...
@@ -6865,10 +6887,10 @@ class test_arithmetic_cast(unittest.TestCase):
                 ('i_scalar', 'i_scalar'),
                 ):
                     theano_args = list(map(eval,
                                            ['theano_%s' % c for c in combo]))
                     numpy_args = list(map(eval,
                                           ['numpy_%s' % c for c in combo]))
                     try:
                         theano_dtype = op(theano_args[0](a_type),
...
@@ -6911,8 +6933,7 @@ class test_arithmetic_cast(unittest.TestCase):
                                 # not try to prevent the scalar from
                                 # upcasting the array.
                                 array_type, scalar_type = (
                                     (a_type, b_type)[list(combo).index(arg)]
                                     for arg in ('array', 'scalar'))
                                 up_type = theano.scalar.upcast(array_type,
                                                                scalar_type)
...
@@ -7087,13 +7108,9 @@ class test_broadcast(unittest.TestCase):
 def test_len():
-    for shape in [(5,), (3, 4), (7, 4, 6)]:
-        x = tensor.tensor(dtype='floatX', broadcastable=(False,) * len(shape))
-        try:
-            len(x)
-            assert False, "Expected an error"
-        except TypeError:
-            pass
+    for shape_ in [(5,), (3, 4), (7, 4, 6)]:
+        x = tensor.tensor(dtype='floatX', broadcastable=(False,) * len(shape_))
+        assert_raises(TypeError, len, x)
 
 
 def test_mod():
...
@@ -7140,10 +7157,9 @@ def test_mod_compile():
     # compilation in the same commit.
     x = tensor.vector()
     y = tensor.vector()
-    shape = x.shape
     out = tensor.switch(tensor.eq(3 % x.shape[0], 0), y, y[:-1])
-    f = theano.function([x, y], out)
+    theano.function([x, y], out)
 
 
 def test_unalign():
...
@@ -7170,7 +7186,7 @@ def test_unalign():
         assert not b.flags.aligned
         assert numpy.allclose(out_numpy, out_theano)
         assert False
-    except TypeError as e:
+    except TypeError:
         pass
 
     a = numpy.empty((), dtype=dtype)['f1']
...
@@ -7188,7 +7204,7 @@ def test_unalign():
         assert not b.flags.aligned
         assert numpy.allclose(out_numpy, out_theano)
         assert False
-    except TypeError as e:
+    except TypeError:
         pass
...
@@ -7198,7 +7214,7 @@ def test_dimshuffle_duplicate():
     success = False
 
     try:
-        y = tensor.DimShuffle((False, ), (0, 0))(x)
+        tensor.DimShuffle((False, ), (0, 0))(x)
     except ValueError as e:
         assert str(e).find("may not appear twice") != -1
         success = True
...
@@ -7277,7 +7293,7 @@ class T_get_scalar_constant_value(unittest.TestCase):
         assert get_scalar_constant_value(s) == 3
         s = opt.Shape_i(1)(c)
         assert get_scalar_constant_value(s) == 4
         d = theano.shared(numpy.random.randn(1, 1),
                           broadcastable=(True, True))
         f = theano.tensor.basic.ScalarFromTensor()(opt.Shape_i(0)(d))
         assert get_scalar_constant_value(f) == 1
...
@@ -7353,7 +7369,7 @@ class T_as_tensor_variable(unittest.TestCase):
         new_inp = numpy.memmap(fname, dtype=inp.dtype,
                                mode='w+', shape=inp.shape)
         new_inp[...] = inp
-        x = as_tensor_variable(new_inp)
+        as_tensor_variable(new_inp)
 
     def test_empty_dtype(self):
         old = theano.config.floatX
...
@@ -7584,7 +7600,7 @@ def test_stacklists():
     result = f(1, 2, 3, 4)
     assert result.shape == (2, 2, 1)
 
-    a, b, c, d = [matrix(a) for a in 'abcd']
+    a, b, c, d = [matrix(x) for x in 'abcd']
     X = stacklists([[a, b],
                     [c, d]])
     f = function([a, b, c, d], X)
...
@@ -7622,8 +7638,8 @@ class TestSpecifyShape(unittest.TestCase):
                           if isinstance(n.op, SpecifyShape)][0].inputs[0].type,
                           self.input_type)
         f(xval)
-        for shape in [(1, 3), (2, 2), (5, 5)]:
-            xval = numpy.random.rand(*shape).astype(floatX)
+        for shape_ in [(1, 3), (2, 2), (5, 5)]:
+            xval = numpy.random.rand(*shape_).astype(floatX)
             self.assertRaises(AssertionError, f, xval)
 
     def test_bad_number_of_shape(self):
...
@@ -7646,16 +7662,16 @@ class TestSpecifyShape(unittest.TestCase):
         x = matrix()
         xval = numpy.random.rand(2, 3).astype(floatX)
-        for shape in [(),
+        for shape_ in [(),
                       (1,), (2, 3, 4)]:
-            self.assertRaises(AssertionError, specify_shape, x, shape)
+            self.assertRaises(AssertionError, specify_shape, x, shape_)
 
         f = theano.function([x, shape_vec], specify_shape(x, shape_vec),
                             mode=self.mode)
         assert isinstance([n for n in f.maker.fgraph.toposort()
                            if isinstance(n.op, SpecifyShape)][0]
                           .inputs[0].type, self.input_type)
-        self.assertRaises(AssertionError, f, xval, shape)
+        self.assertRaises(AssertionError, f, xval, shape_)
...
@@ -7911,10 +7927,11 @@ class TestInferShape(utt.InferShapeTester):
         biscal_val = randint(3, 6, size=())
         ciscal_val = randint(3, 6, size=())
         discal_val = randint(3, 6, size=())
-        self._compile_and_check([adscal, aiscal, biscal, ciscal, discal],
-                                [Alloc()(adscal, aiscal, biscal,
-                                         ciscal, discal)],
-                                [adscal_val, aiscal_val, biscal_val,
-                                 ciscal_val, discal_val], Alloc)
+        self._compile_and_check(
+            [adscal, aiscal, biscal, ciscal, discal],
+            [Alloc()(adscal, aiscal, biscal, ciscal, discal)],
+            [adscal_val, aiscal_val, biscal_val, ciscal_val, discal_val],
+            Alloc)
 
         # MaxAndArgmax,
         adtens3_val = rand(4, 5, 3)
...
@@ -8400,8 +8417,6 @@ class T_Choose(utt.InferShapeTester):
         a = tensor.scalar(dtype='float32')
         b = tensor.matrix(dtype='float32')
-        A = 3
-        B = numpy.asarray(numpy.random.rand(4, 4), dtype='float32')
         self.assertRaises(TypeError, choose, a, b)
 
     def test_numpy_compare_tuple(self):
...
@@ -8480,6 +8495,7 @@ class T_Choose(utt.InferShapeTester):
                                 # Op that should be removed from the graph.
                                 self.op_class)
 
+
 def test_allocempty():
     # Test that we allocated correctly
     f = theano.function([], AllocEmpty("float32")(2, 3))
...
theano/tests/test_flake8.py (view file @ 898d146d)
...
@@ -51,7 +51,6 @@ whitelist_flake8 = [
     "tensor/tests/test_misc.py",
     "tensor/tests/mlp_test.py",
     "tensor/tests/test_opt_uncanonicalize.py",
-    "tensor/tests/test_basic.py",
     "tensor/tests/test_blas.py",
     "tensor/tests/test_merge.py",
     "tensor/tests/test_gc.py",
...
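The hunk above removes `tensor/tests/test_basic.py` from the flake8 whitelist, so the file just cleaned up in this PR is now linted like any other. A sketch of how a whitelist gate of this kind typically works (a simplification, not the exact logic of test_flake8.py): whitelisted files are skipped, every other file must come back flake8-clean.

```python
# Simplified model of a flake8 whitelist gate.  Files still on the list
# are exempt from the style check; everything else gets linted.

whitelist_flake8 = [
    "tensor/tests/test_blas.py",   # still whitelisted
]


def files_to_check(all_files):
    # Only non-whitelisted files are passed to the linter.
    return [f for f in all_files if f not in whitelist_flake8]


checked = files_to_check([
    "tensor/tests/test_basic.py",  # removed from the whitelist by this PR
    "tensor/tests/test_blas.py",
])
```

Dropping a file from the list is what forces it to stay PEP8-clean from then on, since any regression would fail the test suite.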
theano/tests/unittest_tools.py (view file @ 898d146d)
...
@@ -120,10 +120,6 @@ class MockRandomState:
             return out + minval
         else:
             return out + maxval - 1
 
-    # Examples of use:
-    # test_rng = MockRandomState(0)
-    # test_rng = MockRandomState(0.99999982)
-    # test_rng = MockRandomState(1)
 
 class TestOptimizationMixin(object):
...