testgroup / pytensor

Commit 795be453
Merge, authored Apr 01, 2009 by Pascal Lamblin
Parents: dcf8f04a e80036df

Showing 7 changed files with 266 additions and 159 deletions
doc/images/theano-theta-117x117.png      +0 -0
doc/topics/debugmode.txt                 +22 -20
theano/compile/debugmode.py              +171 -138
theano/compile/tests/test_debugmode.py   +60 -0
theano/compile/tests/test_module.py      +4 -1
theano/scalar/basic.py                   +2 -0
theano/tensor/basic.py                   +7 -0
doc/images/theano-theta-117x117.png (new file, mode 0 → 100644, 6.3 KB)
doc/topics/debugmode.txt
...
...
@@ -39,9 +39,26 @@ Some kinds of errors can only be detected for certain input value combinations.
In the example above, there is no way to guarantee that a future call to, say,
``f(-1)`` won't cause a problem. DebugMode is not a silver bullet.

If you instantiate DebugMode using the constructor ``compile.DebugMode``
rather than the keyword ``DEBUG_MODE``, you can configure its behaviour via
constructor arguments. See :api:`DebugMode` for details.

The keyword version of DebugMode (which you get by using ``mode='DEBUG_MODE'``)
is quite strict, and can raise several different Exception types.
The following are DebugMode exceptions you might encounter:

DebugModeError
--------------

This is a generic error. All the other exceptions inherit from this one.
This error is typically not raised directly.
However, you can use ``except DebugModeError: ...`` to catch any of the more
specific types of Exception.

For detailed documentation see :api:`DebugModeError`.
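Because every specific exception inherits from ``DebugModeError``, a single ``except`` clause on the base class catches them all. A minimal standalone sketch of that pattern (with stand-in classes, not Theano's real exceptions):

```python
# Stand-in classes mirroring the hierarchy described above; these are
# illustrative only, not Theano's actual exception classes.
class DebugModeError(Exception):
    """Generic base class; typically not raised directly."""

class InvalidValueError(DebugModeError):
    """An Op produced a value inconsistent with its output Type."""

def checked_computation():
    # pretend DebugMode found a bad output value
    raise InvalidValueError("NaN in output")

# a single handler on the base class catches every specific subtype
try:
    checked_computation()
except DebugModeError as err:
    caught = type(err).__name__

assert caught == "InvalidValueError"
```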
BadCLinkerOutput
----------------
...
...
@@ -105,18 +122,6 @@ whereby we debug in DEBUG_MODE and then run the full-size jobs in FAST_RUN.
For detailed documentation see :api:`StochasticOrder`.

FloatError
----------

This happens when invalid floating-point values such as NaN and Inf are
introduced into the computations. It indicates which Op created the first
NaN. Currently this exception is never raised because the check is not being
performed, but the plan is that it will be (see ticket #320).

For detailed documentation see :api:`FloatError`.

InvalidValueError
-----------------
...
...
@@ -126,14 +131,11 @@ an output that is invalid with respect to the type of the corresponding output
variable. Like if it returned a complex-valued ndarray for a ``dscalar``
Type.

This can also be triggered when floating-point values such as NaN and Inf are
introduced into the computations. It indicates which Op created the first
NaN. These floating-point values can be allowed by passing the
``check_isfinite=False`` argument to DebugMode.

For detailed documentation see :api:`InvalidValueError`.
theano/compile/debugmode.py
...
...
@@ -170,13 +170,6 @@ class StochasticOrder(DebugModeError):
     """
     pass

-class FloatError(DebugModeError):
-    """Exception: Inf or NaN has crept into calculations
-
-    :note: See #320 for what this exception is for
-    """
-    pass
-
 class InvalidValueError(DebugModeError):
     """Exception: some Op returned an output value that is inconsistent with the Type of that output"""
     def __init__(self, r, v):
...
...
@@ -785,141 +778,153 @@ class _Linker(gof.link.LocalLinker):
            for x in no_recycling:
                x[0] = None

            # nest all this in try-finally to put storage *back* into
            # storage_map when an exception is raised
            original_storage_map_keys = [r for r in storage_map
                    if r.owner is None]
            try:
                equiv_vals = {}
                problematic = set()

                # r_vals are the true values associated with each variable in
                # the graph they should not change during the evaluation of
                # this function, even when the graph has destructive ops in it
                #
                # This dictionary is used to populate the storage_map as
                # necessary
                r_vals = {}

                # dr_vals are the values taken by variables after being
                # destroyed
                dr_vals = {}

                assert len(thunks_py) == len(order)

                # transfer the initial values from the storage_map to the
                # r_vals
                for r in storage_map:
                    if r.owner is None:
                        if storage_map[r][0] is None:
                            raise Exception('Missing input', r)
                        if not r.type.is_valid_value(storage_map[r][0]):
                            raise InvalidValueError(r, storage_map[r][0])
                        r_vals[r] = storage_map[r][0]
                        storage_map[r][0] = None

                #####
                # Precondition: the storage map is empty, transferred
                # completely to r_vals
                #####
                for r, s in storage_map.iteritems():
                    assert s[0] is None

                # compute the value of all variables
                for i, (thunk_py, thunk_c, node) in enumerate(
                        zip(thunks_py, thunks_c, order)):
                    this_node_destroyed_variables = set()

                    if thunk_py:
                        # put a copy of each input into the storage_map
                        # also, check that inputs have valid values
                        for r in node.inputs:
                            assert isinstance(r, gof.Variable)
                            assert r in r_vals
                            storage_map[r][0] = _lessbroken_deepcopy(r_vals[r])
                            if not r.type.is_valid_value(storage_map[r][0]):
                                raise InvalidValueError(r, storage_map[r][0])

                        thunk_py()

                        _check_inputs(node, storage_map, r_vals, dr_vals,
                                active_order_set, clobber_dr_vals=True)
                        _check_viewmap(node, storage_map)

                        # retrieve each output from the storage_map
                        # check output values for type-correctness
                        for r in node.outputs:
                            if not r.type.is_valid_value(storage_map[r][0]):
                                raise InvalidValueError(r, storage_map[r][0])
                            assert r not in r_vals
                            r_vals[r] = storage_map[r][0]
                            # clear the storage_map of outputs for the thunk_c
                            if thunk_c:
                                storage_map[r][0] = None

                    if thunk_c:
                        for r in node.inputs:
                            # TODO: we only need to overwrite the
                            # non-destroyed inputs
                            storage_map[r][0] = _lessbroken_deepcopy(r_vals[r])

                        thunk_c()

                        _check_inputs(node, storage_map, r_vals, dr_vals,
                                active_order_set, clobber_dr_vals=False)
                        _check_viewmap(node, storage_map)

                        for r in node.outputs:
                            # check output values for type-correctness
                            if not r.type.is_valid_value(storage_map[r][0]):
                                raise InvalidValueError(r, storage_map[r][0])

                            if r in r_vals:
                                # compares the version from thunk_py (in
                                # r_vals) to the version produced by thunk_c
                                # (in storage_map)
                                if not r.type.values_eq_approx(r_vals[r],
                                        storage_map[r][0]):
                                    raise BadCLinkerOutput(r,
                                            val_py=r_vals[r],
                                            val_c=storage_map[r][0])
                            else:
                                # retrieve each output from the storage_map
                                r_vals[r] = storage_map[r][0]
                            storage_map[r][0] = None

                    # we're done with this thunk
                    # clear everything out of the storage_map
                    for r in node.inputs:
                        storage_map[r][0] = None

                _find_bad_optimizations(order,
                        env.equivalence_tracker.reasons, r_vals)

                #####
                # Postcondition: the input and output variables are in the
                # storage map, nothing more
                #####

                # Nothing should be in storage_map after evaluating each thunk
                # (specifically the last one)
                for r, s in storage_map.iteritems():
                    assert type(s) is list
                    assert s[0] is None

                # store our output variables to their respective storage lists
                for output, storage in zip(env.outputs, output_storage):
                    storage[0] = r_vals[output]

                # transfer all inputs back to their respective storage lists
                for r in r_vals:
                    if r.owner is None:
                        if r in env.inputs:
                            assert storage_map[r] is \
                                    input_storage[env.inputs.index(r)]
                        storage_map[r][0] = r_vals[r]

                # if an input was destroyed, the destroyed value should be
                # returned
                for r in dr_vals:
                    assert dr_vals[r][0] is not None
                    if r.owner is None:
                        assert r in env.inputs
                        # HACK TO LOOK LIKE A REAL DESTRUCTIVE ACTION TOOK
                        # PLACE
                        if type(dr_vals[r][0]) is numpy.ndarray \
                                and dr_vals[r][0].dtype == storage_map[r][0].dtype \
                                and dr_vals[r][0].shape == storage_map[r][0].shape:
                            if len(dr_vals[r][0].shape):
                                storage_map[r][0][:] = dr_vals[r][0]
                            else:
                                storage_map[r][0].itemset(dr_vals[r][0])
                        else:
                            storage_map[r][0] = dr_vals[r][0]
            except:
                # put storage *back* into the storage_map so the function can
                # be called again after an error
                for r in original_storage_map_keys:
                    if storage_map[r][0] is None:
                        storage_map[r][0] = r_vals[r]
                raise
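The heart of the change above is the double-evaluation scheme: run the Python thunk and the C thunk on copies of the same inputs, then compare their outputs with an approximate equality that tolerates floating-point noise. An illustrative standalone sketch (the helpers here are hypothetical stand-ins, not Theano's actual API):

```python
import math

# Approximate equality, in the spirit of values_eq_approx above.
def values_eq_approx(a, b, tol=1e-8):
    # NaN != NaN under IEEE rules, so treat a pair of NaNs as "equal"
    if math.isnan(a) and math.isnan(b):
        return True
    return abs(a - b) <= tol

def perform_py(x):          # reference Python implementation
    return (x + 2.0) * 5.0

def perform_c(x):           # algebraically equivalent "C" implementation
    return x * 5.0 + 10.0

# evaluate both implementations on a copy of the same input and compare
val_py = perform_py(3.0)
val_c = perform_c(3.0)
if not values_eq_approx(val_py, val_c):
    raise AssertionError("BadCLinkerOutput: %r vs %r" % (val_py, val_c))
```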
...
...
@@ -961,8 +966,16 @@ class _Maker(FunctionMaker): #inheritance buys a few helper functions
        :param accept_inplace: True iff it is acceptable to have inplace operations
                               in the graph from the inputs to the outputs

        :note: this function sets TensorType.filter_checks_isfinite when
               `mode.check_isfinite` is True
        """
        # WARNING: this is a global mechanism... so it will screw up if we are
        # trying to use multiple modes at once.
        from ..tensor import TensorType  # to set filter_checks_isfinite
        TensorType.filter_checks_isfinite = mode.check_isfinite

        # Handle the case where inputs and/or outputs is a single Variable
        # (not in a list)
        unpack_single = False
        return_none = False
...
...
@@ -1182,6 +1195,12 @@ class DebugMode(Mode):
    Should we evaluate (and check) the `perform` implementations?
    """

    check_isfinite = True
    """
    Should we check for (and complain about) NaN/Inf ndarray elements?
    """

    # This function will be used to create a FunctionMaker in
    # function_module.function
    def function_maker(self, i, o, m, *args, **kwargs):
...
...
@@ -1191,18 +1210,32 @@ class DebugMode(Mode):
    def __init__(self, optimizer='fast_run', stability_patience=None,
            check_c_code=None, check_py_code=None, check_isfinite=None):
        """Initialize member variables.

        If any of these arguments (except optimizer) is not None, it overrides
        the class default.
        """
        super(DebugMode, self).__init__(optimizer=optimizer, linker=_Linker)
        if stability_patience is not None:
            self.stability_patience = stability_patience
        if check_c_code is not None:
            self.check_c_code = check_c_code
        if check_py_code is not None:
            self.check_py_code = check_py_code
        if check_isfinite is not None:
            self.check_isfinite = check_isfinite
        if not (self.check_c_code or self.check_py_code):
            raise ValueError('DebugMode has to check at least one of c and py code')

register_mode('DEBUG_MODE', DebugMode(optimizer='fast_run'))
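The rewritten constructor relies on None-sentinel keyword arguments: a class-level attribute supplies the default, and an instance attribute shadows it only when the caller passes a value. A standalone sketch of that pattern (the class here is a hypothetical stand-in, not Theano's DebugMode):

```python
# None-valued keyword arguments fall back to class-level defaults, so the
# defaults live in exactly one place and subclasses can override them.
class CheckedMode(object):
    stability_patience = 10   # class-level defaults
    check_c_code = True

    def __init__(self, stability_patience=None, check_c_code=None):
        # only shadow the class default when the caller passed a value
        if stability_patience is not None:
            self.stability_patience = stability_patience
        if check_c_code is not None:
            self.check_c_code = check_c_code

assert CheckedMode().stability_patience == 10
assert CheckedMode(stability_patience=3).stability_patience == 3
# the class default itself is untouched
assert CheckedMode.stability_patience == 10
```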
theano/compile/tests/test_debugmode.py
...
...
@@ -531,3 +531,63 @@ class Test_ViewMap(unittest.TestCase):
        # input, but guarantees correctness.
        #custom_op.view_map = {0:[0], 1:[1]}
        #f([1,2,3,4],[5,6,7,8])


class Test_check_isfinite(unittest.TestCase):
    def setUp(self):
        print 'Up'
        self.old_val = theano.tensor.TensorType.filter_checks_isfinite

    def tearDown(self):
        print 'Down'
        theano.tensor.TensorType.filter_checks_isfinite = self.old_val

    def test_check_isfinite(self):
        x = theano.tensor.dvector()
        f = theano.function([x], (x + 2) * 5, mode='DEBUG_MODE')

        # this should work
        f(numpy.log([3, 4, 5]))

        # this should raise InvalidValueError
        try:
            # insert a NaN
            f(numpy.log([3, -4, 5]))
            assert False
        except debugmode.InvalidValueError:
            pass

        # this should raise InvalidValueError
        try:
            # insert a NaN and an Inf
            f(numpy.asarray([0, 1.0, 0]) / 0)
            assert False
        except debugmode.InvalidValueError:
            pass

        # this should raise InvalidValueError
        try:
            # insert several Inf
            f(numpy.asarray([1.0, 1.0, 1.0]) / 0)
            assert False
        except debugmode.InvalidValueError:
            pass

        # this should disable the exception
        theano.tensor.TensorType.filter_checks_isfinite = False
        # insert several Inf
        f(numpy.asarray([1.0, 1.0, 1.0]) / 0)

    def test_check_isfinite_disabled(self):
        x = theano.tensor.dvector()
        f = theano.function([x], (x + 2) * 5,
                mode=debugmode.DebugMode(check_isfinite=False))

        # the DestroyMap checker should be triggered by NaN != NaN
        try:
            f(numpy.log([3, -4, 5]))
            assert False
        except debugmode.BadDestroyMap:
            pass

        # Inf should go through
        f(numpy.asarray([1.0, 1.0, 1.0]) / 0)
theano/compile/tests/test_module.py
...
...
@@ -435,6 +435,9 @@ class T_module(unittest.TestCase):
        """Test that we can manipulate the mutable, strict, etc. flags (see
        SymbolicInput) of Method inputs"""
        if default_mode == 'FAST_COMPILE':
            return
        M = Module()
        M.x = T.dvector()
        M.y = T.dvector()
...
...
@@ -598,7 +601,7 @@ def test_method_updates():
     m = M.make()
     m.f([9, 9])
     assert m.x is None
-    assert numpy.all(xval == [0, 1])
+    assert numpy.all(m.f[M.x] == [0, 1])
     # when a variable is listed explicitly and in an update, then there's a problem.
...
...
theano/scalar/basic.py
...
...
@@ -644,6 +644,8 @@ class Abs(UnaryScalarOp):
            return "%(z)s = abs(%(x)s);" % locals()
        if type in float_types:
            return "%(z)s = fabs(%(x)s);" % locals()
        if type in complex_types:
            return "%(z)s = sqrt(%(x)s.real*%(x)s.real + %(x)s.imag*%(x)s.imag);" % locals()
        #complex, other?
        raise NotImplementedError('type not supported', type)

abs_ = Abs(same_out)
...
...
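The added ``complex_types`` branch generates C that computes the modulus |z| = sqrt(re² + im²). The same computation in plain Python, as an illustrative stand-in for the generated C:

```python
import math

# |z| = sqrt(re^2 + im^2), mirroring the expression the C code branch emits
def complex_abs(re, im):
    return math.sqrt(re * re + im * im)

assert complex_abs(3.0, 4.0) == 5.0
```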
theano/tensor/basic.py
...
...
@@ -164,6 +164,11 @@ def value(x, name=None, ndim=None):
class TensorType(Type):
    """Symbolic `Type` representing a numpy.ndarray value."""

    filter_checks_isfinite = False
    """
    When this is True, strict filtering rejects data containing NaN or Inf
    entries. (Used in `DebugMode`)
    """

    def __init__(self, dtype, broadcastable, name=None):
        """Initialize self.dtype and self.broadcastable.
...
...
@@ -199,6 +204,8 @@ class TensorType(Type):
                raise TypeError("%s expected a ndarray object with dtype = %s (got %s)."
                        % (self, self.dtype, data.dtype))
            if not data.ndim == self.ndim:
                raise TypeError("%s expected a ndarray object with %s dimensions (got %s)."
                        % (self, self.ndim, data.ndim))
            if self.filter_checks_isfinite and (not numpy.all(numpy.isfinite(data))):
                raise TypeError("non-finite elements not allowed")
            return data
        else:
            data = numpy.asarray(data, dtype=self.dtype)
...
...
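The behaviour added to ``TensorType.filter`` above is simple: when the class-level flag is set, strict filtering rejects any data containing NaN or Inf. A standalone sketch of that check, without depending on Theano or NumPy:

```python
import math

# Illustrative stand-in for the filter_checks_isfinite check: when the flag
# is on, data containing NaN or Inf entries is rejected with a TypeError.
def filter_data(data, checks_isfinite):
    if checks_isfinite and not all(math.isfinite(x) for x in data):
        raise TypeError("non-finite elements not allowed")
    return data

filter_data([1.0, 2.0], checks_isfinite=True)   # accepted
try:
    filter_data([1.0, float('inf')], checks_isfinite=True)
    raised = False
except TypeError:
    raised = True
assert raised
# with the flag off, non-finite values pass through untouched
filter_data([float('nan')], checks_isfinite=False)
```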