testgroup / pytensor · Commit 5cf3761c

Authored Mar 25, 2009 by desjagui@atchoum.iro.umontreal.ca
Merge commit; parents: 1a6a156f, c55bf98d

Showing 3 changed files with 225 additions and 4 deletions:

  doc/advanced/debug_faq.txt   +84  -0
  doc/advanced/debugmode.txt   +88  -0
  theano/compile/debugmode.py  +53  -4

doc/advanced/debug_faq.txt (new file, mode 100644)
.. _debug_faq:
=========================================
Debugging Theano: FAQ and Troubleshooting
=========================================
There are many kinds of bugs that might come up in a computer program.
This page is structured as an FAQ. It should provide recipes to tackle common
problems, and introduce some of the tools that we use to find problems in our
Theano code, and even (it happens) in Theano's internals.
How do I print an intermediate value in a Function/Method?
----------------------------------------------------------
Theano provides a ``Print`` Op to do this.

.. code-block:: python

    import numpy
    import theano

    x = theano.tensor.dvector('x')
    x_printed = theano.Print('this is a very important value')(x)

    f = theano.function([x], x * 5)
    f_with_print = theano.function([x], x_printed * 5)

    # this runs the graph without any printing
    assert numpy.all(f([1, 2, 3]) == [5, 10, 15])

    # this runs the graph and prints the message along with the value
    assert numpy.all(f_with_print([1, 2, 3]) == [5, 10, 15])
Since Theano runs your program in a topological order, you won't have precise
control over the order in which multiple ``Print`` Ops are evaluated. For more
precise inspection of what is being computed where, when, and how, see
:ref:`Stepping through a compiled function with the WrapLinker`.
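To see why print order follows the graph rather than your source order, here is a toy evaluator in plain Python (none of this is Theano's machinery; all names are illustrative) that visits nodes in dependency order, so side-effects fire when a node is computed, not where it appears in the source:

```python
def topo_eval(graph, outputs):
    """Evaluate a dict-based dataflow graph depth-first (one topological order).

    graph maps name -> (list_of_input_names, function).
    """
    cache = {}

    def ev(name):
        if name in cache:
            return cache[name]
        deps, fn = graph[name]
        vals = [ev(d) for d in deps]
        cache[name] = fn(*vals)
        return cache[name]

    return [ev(o) for o in outputs]

events = []

def tap(label):
    # Stands in for Print's side-effect: record the label, pass the value through.
    def fn(v):
        events.append(label)
        return v
    return fn

graph = {
    "x": ([], lambda: 3.0),
    "late": (["x"], tap("late")),    # listed earlier in the source...
    "early": (["x"], tap("early")),
    "sum": (["early", "late"], lambda a, b: a + b),
}

# ...but "early" is visited first, because the graph walk, not the
# source order, decides when each node (and its print) runs.
assert topo_eval(graph, ["sum"]) == [6.0]
assert events == ["early", "late"]
```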
How do I step through a compiled function with the WrapLinker?
--------------------------------------------------------------
WRITEME
I wrote a new Op, and weird stuff is happening...
-------------------------------------------------
First, check the :ref:`Op Contract` and make sure you're following the rules.
Then try running your program in :ref:`debugmode`. DebugMode might catch
something that you're not seeing.
I wrote a new optimization, but it's not getting used...
---------------------------------------------------------
Remember that you have to register optimizations with the OptDb for them to be
used by the normal modes like FAST_COMPILE, FAST_RUN, and DEBUG_MODE.
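The registration requirement can be pictured with a toy optimizer database in plain Python (this is an illustrative sketch, not Theano's actual OptDb API): an optimization that is never registered is simply never applied by any mode.

```python
class ToyOptDb:
    """Minimal stand-in for an optimization database keyed by mode tags."""

    def __init__(self):
        self._opts = []  # list of (name, fn, tags)

    def register(self, name, fn, *tags):
        self._opts.append((name, fn, set(tags)))

    def query(self, tag):
        # Only registered optimizations whose tags match are returned.
        return [fn for (_, fn, tags) in self._opts if tag in tags]

db = ToyOptDb()
# Hypothetical rewrite: drop a useless "+0" from a graph (here just a string).
db.register('fold_add_zero', lambda g: g.replace('+0', ''), 'fast_run', 'fast_compile')

graph = 'x+0'
for opt in db.query('fast_run'):
    graph = opt(graph)
assert graph == 'x'
```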
I wrote a new optimization, and it changed my results even though I'm pretty sure it is correct.
------------------------------------------------------------------------------------------------
First, check the :ref:`Op Contract` and make sure you're following the rules.
Then try running your program in :ref:`debugmode`. DebugMode might catch
something that you're not seeing.
The function I compiled is too slow, what's up?
-----------------------------------------------
First, make sure you're running in FAST_RUN mode, by passing ``mode='FAST_RUN'``
to ``theano.function`` or ``theano.make``.
Second, try the Theano :ref:`profiler`. This will tell you which Apply nodes
and which Ops are eating up your CPU cycles.
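What such a profiler reports can be sketched in a few lines of plain Python (illustrative only, not Theano's profiler): time each entry in a toy evaluation order and accumulate wall time per Op, so the expensive Op stands out.

```python
import time
from collections import defaultdict

def profile(order):
    """Run a list of (op_name, thunk) pairs, accumulating wall time per Op name."""
    per_op = defaultdict(float)
    for op_name, thunk in order:
        t0 = time.perf_counter()
        thunk()
        per_op[op_name] += time.perf_counter() - t0
    return per_op

# Hypothetical evaluation order: two expensive "Dot" nodes and one cheap "Add".
order = [
    ("Dot", lambda: sum(i * i for i in range(100000))),
    ("Add", lambda: 1 + 1),
    ("Dot", lambda: sum(i * i for i in range(100000))),
]

times = profile(order)
# The dominant Op points you at what to optimize.
assert set(times) == {"Dot", "Add"}
assert times["Dot"] > times["Add"]
```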
doc/advanced/debugmode.txt (new file, mode 100644)
.. _debugmode:
===============
Using DebugMode
===============
The DebugMode evaluation mode (available via ``mode='DEBUG_MODE'``, :api:`DebugMode`)
includes a number of self-checks and assertions that can help diagnose several
kinds of programmer errors that lead to incorrect output.
Evaluating a function or method in DEBUG_MODE is much slower than in FAST_RUN or
even FAST_COMPILE, so use it during development, but not when you launch 1000
nearly-identical processes on a cluster.
DebugMode is easy to use:

.. code-block:: python

    x = theano.dvector('x')

    f = theano.function([x], 10 * x, mode='DEBUG_MODE')

    f([5])
    f([0])
    f([7])

If any problem is detected, at either call time (e.g. ``f([5])``) or compile time
(e.g. ``f = theano.function([x], 10*x, mode='DEBUG_MODE')``), DebugMode will
raise an exception describing what went wrong. None of these exceptions is
safe to ignore; talk to your local Theano guru if you can't make the exception
go away.
Some kinds of errors can only be detected for certain input value combinations.
In the example above, there is no way to guarantee that a future call to, say,
``f([-1])`` won't cause a problem. DebugMode is no silver bullet.
BadCLinkerOutput
----------------
This means that the Python and C implementations of an Op produced different
outputs for the same inputs. The bug might be in the Python implementation,
the C implementation, or both.
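A cross-check of this kind can be sketched in plain Python: run two implementations of the same operation on identical inputs and compare the results within a tolerance. The function names below are made up for illustration, and the "C" implementation is deliberately buggy only for negative inputs, echoing the point above that some errors surface only for certain input values.

```python
def py_impl(x):
    # Reference ("Python") implementation: x squared.
    return x * x

def c_impl(x):
    # Buggy stand-in for a compiled implementation:
    # wrong only when x is negative.
    return x * x if x >= 0 else -(x * x)

def values_eq_approx(a, b, tol=1e-8):
    return abs(a - b) <= tol

def check(x):
    """Mimic DebugMode's cross-check of the two implementations."""
    if not values_eq_approx(py_impl(x), c_impl(x)):
        raise ValueError("BadCLinkerOutput-style mismatch at x=%r" % x)

check(5.0)       # passes
check(0.0)       # passes
try:
    check(-1.0)  # the mismatch only shows up for this kind of input
    caught_mismatch = False
except ValueError:
    caught_mismatch = True
assert caught_mismatch
```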
BadOptimization
---------------
This happens when ... WRITEME.
BadDestroyMap
-------------
This happens when an Op's perform() or c_code() modifies an input that it wasn't
supposed to.
For detailed documentation see :api:`BadDestroyMap`.
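The underlying check can be sketched in plain Python (names are illustrative, not Theano's internals): snapshot each input before calling the Op, then verify that any input not declared as destroyed is unchanged afterwards.

```python
import copy

def run_with_destroy_check(perform, inputs, destroy_map=()):
    """Call perform(inputs); raise if it mutated an input it did not declare.

    destroy_map lists the input indices the Op is allowed to modify in place.
    """
    snapshots = [copy.deepcopy(v) for v in inputs]
    out = perform(inputs)
    for i, (before, after) in enumerate(zip(snapshots, inputs)):
        if i not in destroy_map and before != after:
            raise ValueError("BadDestroyMap-style error: input %d was modified" % i)
    return out

def bad_inc(inputs):
    inputs[0][0] += 1  # mutates input 0 in place
    return inputs[0][0]

# Declared destruction: allowed.
assert run_with_destroy_check(bad_inc, [[1]], destroy_map=(0,)) == 2

# Undeclared destruction: caught.
try:
    run_with_destroy_check(bad_inc, [[1]], destroy_map=())
    caught = False
except ValueError:
    caught = True
assert caught
```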
BadViewMap
----------
This happens when ... WRITEME.
StochasticOrder
---------------
This happens when ... WRITEME.
FloatError
----------
This happens when ... WRITEME.
InvalidValueError
-----------------
This happens when ... WRITEME.
DebugModeError
--------------
This is a generic error and not very informative on its own. You'll generally
have to look at the stack trace and then at the code to figure out why
DebugMode is complaining.
theano/compile/debugmode.py

...
@@ -366,9 +366,11 @@ def _lessbroken_deepcopy(a):
     return rval

 def _find_bad_optimizations0(order, reasons, r_vals):
-    """Use a simple algorithm to find broken optimizations. This algorithm is simple to
-    understand, but sometimes when there's a problem it identifies the wrong optimization as
-    the culprit.
+    """Use a simple algorithm to find broken optimizations.
+
+    This algorithm is simple to understand, but sometimes when there's a problem it identifies
+    the wrong optimization as the culprit.  The problem stems from the fact that results are
+    not evaluated in chronological order (looking at when they were introduced to the graph).
     """
     # iterate over variables looking for values that don't match the values of the
     # variables they replaced.  This is the sign of a broken optimization.
...
@@ -438,6 +440,53 @@ def _find_bad_optimizations1(order, reasons, r_vals):
         print first_broken_set
         raise Exception('broken')

+def _find_bad_optimizations2(order, reasons, r_vals):
+    """Use a simple algorithm to find broken optimizations.
+
+    This algorithm is simple to understand, but sometimes when there's a problem it identifies
+    the wrong optimization as the culprit.  The problem stems from the fact that results are
+    not evaluated in chronological order (looking at when they were introduced to the graph).
+    """
+    checked_variables = set()
+
+    def check_variable_norec(new_r):
+        """Verify that `r` has the same value as the results it replaces """
+        for reason, r, old_graph_str, new_graph_str in reasons[new_r]:
+            new_r_val = r_vals[new_r]
+            r_val = r_vals[r]
+            if (r.type != new_r.type) or (not r.type.values_eq_approx(r_val, new_r_val)):
+                raise BadOptimization(old_r=r,
+                        new_r=new_r,
+                        old_r_val=r_val,
+                        new_r_val=new_r_val,
+                        reason=reason,
+                        old_graph=old_graph_str,
+                        new_graph=new_graph_str)
+
+    def check_variable(r):
+        if r in checked_variables:
+            return
+        # (recursively) first check all the variables that could make r look bad:
+        for var_that_could_make_r_look_bad in \
+                [old_r for (reason, old_r, olds, news) in reasons[r]] \
+                + ([] if (None is r.owner) else r.owner.inputs):
+            check_variable(var_that_could_make_r_look_bad)
+        check_variable_norec(r)
+        checked_variables.add(r)
+
+    # iterate over variables looking for values that don't match the values of the
+    # variables they replaced.  This is the sign of a broken optimization.
+    for i, node in enumerate(order):
+        for new_r in node.outputs:
+            check_variable(new_r)
+
+_find_bad_optimizations = _find_bad_optimizations2
+
 class _EnvEvent(object):
     """A record of an event in the life of an Env.
...
@@ -819,7 +868,7 @@ class _Linker(gof.link.LocalLinker):
             #except:
             #    raise_with_op(node)

-            _find_bad_optimizations0(order, env.equivalence_tracker.reasons, r_vals)
+            _find_bad_optimizations(order, env.equivalence_tracker.reasons, r_vals)

             #####
             #  Postcondition: the input and output variables are in the storage map, nothing more
...