testgroup / pytensor · Commits · 966577ac
Commit 966577ac, authored June 16, 2016 by Simon Lefrancois; committed by GitHub on June 16, 2016.
Merge pull request #2 from abergeron/buildbot_speedup
Don't use DebugMode for the numerical gradient in verify_grad.
Parents: f0d0c92f, 2d1196c3
Showing 1 changed file with 19 additions and 9 deletions: theano/gradient.py (+19, -9).
theano/gradient.py

```diff
@@ -16,7 +16,7 @@ from theano.compat import OrderedDict, izip
 from six.moves import xrange, reduce
 from theano.gof.null_type import NullType, null_type
 from theano.gof.op import get_debug_values
-from theano.compile import ViewOp
+from theano.compile import ViewOp, FAST_RUN, DebugMode
 
 np = numpy
 __authors__ = "James Bergstra, Razvan Pascanu, Arnaud Bergeron, Ian Goodfellow"
```
```diff
@@ -1542,9 +1542,18 @@ class numeric_grad(object):
         return (max_arg, max_pos, abs_errs[max_arg], rel_errs[max_arg])
 
 
+def mode_not_debug(mode):
+    if isinstance(mode, DebugMode):
+        link, opt = mode.get_linker_optimizer()
+        return FAST_RUN.clone(optimizer=opt)
+    else:
+        return mode
+
+
 def verify_grad(fun, pt, n_tests=2, rng=None, eps=None,
                 out_type=None, abs_tol=None,
-                rel_tol=None, mode=None, cast_to_output_type=False):
+                rel_tol=None, mode=None, cast_to_output_type=False,
+                no_debug_ref=True):
     """Test a gradient by Finite Difference Method. Raise error on failure.
 
     Example:
```
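The new `mode_not_debug` helper keeps whatever optimizer the caller configured but swaps the slow self-checking linker of a DebugMode for a fast one. A minimal standalone sketch of that pattern (the `Mode`, `DebugMode`, and `FAST_RUN` objects below are simplified stand-ins for Theano's compilation modes, not the real ones):

```python
# Sketch of the mode_not_debug pattern with stand-in mode classes.

class Mode:
    def __init__(self, linker, optimizer):
        self.linker = linker
        self.optimizer = optimizer

    def get_linker_optimizer(self):
        return self.linker, self.optimizer

    def clone(self, optimizer):
        # Keep this mode's linker but swap in a different optimizer.
        return Mode(self.linker, optimizer)


class DebugMode(Mode):
    """Stand-in for a slow, self-checking compilation mode."""


# A fast baseline mode, analogous to Theano's FAST_RUN.
FAST_RUN = Mode(linker="c|py", optimizer="fast_run")


def mode_not_debug(mode):
    """Return an equivalent non-debug mode: same optimizer, fast linker."""
    if isinstance(mode, DebugMode):
        link, opt = mode.get_linker_optimizer()
        return FAST_RUN.clone(optimizer=opt)
    else:
        return mode


debug = DebugMode(linker="py_debug", optimizer="custom_opt")
fast = mode_not_debug(debug)
print(fast.linker, fast.optimizer)           # c|py custom_opt
print(mode_not_debug(FAST_RUN) is FAST_RUN)  # True
```

Non-debug modes pass through unchanged, so callers can apply the helper unconditionally.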
```diff
@@ -1581,11 +1590,8 @@ def verify_grad(fun, pt, n_tests=2, rng=None, eps=None,
     :param cast_to_output_type: if the output is float32 and
         cast_to_output_type is True, cast the random projection to
         float32. Otherwise it is float64.
-
-    :note: WARNING to unit-test writers: if `op` is a function that builds
-        a graph, try to make it a SMALL graph. Often verify grad is run
-        in debug mode, which can be very slow if it has to verify a lot of
-        intermediate computations.
+    :param no_debug_ref: Don't use DebugMode for the numerical
+        gradient function.
 
     :note: This function does not support multiple outputs. In
         tests/test_scan.py there is an experimental verify_grad that
```
```diff
@@ -1623,7 +1629,7 @@ def verify_grad(fun, pt, n_tests=2, rng=None, eps=None,
     # We allow input downcast in function, because numeric_grad works in the
     # most precise dtype used among the inputs, so we may need to cast some.
-    def function(inputs, output, name):
+    def function(inputs, output, name, mode=mode):
         f = compile.function(inputs, output, accept_inplace=True,
                              allow_input_downcast=True, mode=mode,
                              on_unused_input='ignore', name=name)
         return f
```
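The `mode=mode` default on the inner `function` helper is a standard Python idiom: the keyword default captures the enclosing `mode` once, when the `def` executes, while still letting individual call sites pass a different mode explicitly. A hedged illustration outside Theano (`make_compiler` is a hypothetical name, not part of the library):

```python
# The keyword-default binding used by the inner helper: the default is
# fixed when the def runs, but any call may override it explicitly.

def make_compiler(mode):
    def function(name, mode=mode):
        # The real code would call compile.function(...); here we just
        # report which mode would be used.
        return "compiled %s with mode=%s" % (name, mode)
    return function


function = make_compiler("DEBUG_MODE")
print(function("cost"))                   # compiled cost with mode=DEBUG_MODE
print(function("cost", mode="FAST_RUN"))  # compiled cost with mode=FAST_RUN
```

This is exactly what the patch needs: most compilations keep the user's mode, while the one building the numerical-gradient reference can override it.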
```diff
@@ -1669,7 +1675,11 @@ def verify_grad(fun, pt, n_tests=2, rng=None, eps=None,
 
     # This sum() is defined above, it's not the builtin sum.
     cost = theano.tensor.sum(t_r * o_output)
-    cost_fn = function(tensor_pt, cost, name='gradient.py cost')
+    if no_debug_ref:
+        cost_fn = function(tensor_pt, cost, name='gradient.py cost',
+                           mode=mode_not_debug(mode))
+    else:
+        cost_fn = function(tensor_pt, cost, name='gradient.py cost')
 
     symbolic_grad = grad(cost, tensor_pt,
                          disconnected_inputs='ignore')
```
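For context, the check that `verify_grad` performs is a comparison between a symbolic gradient and a finite-difference estimate; only the reference estimate is what this commit stops compiling under DebugMode. A minimal sketch of that check for a scalar function (names like `check_grad` are illustrative, not Theano's API):

```python
# Minimal finite-difference gradient check, in the spirit of verify_grad:
# compare an analytic gradient against (f(x+eps) - f(x-eps)) / (2*eps).

def numeric_grad(f, x, eps=1e-6):
    """Central-difference estimate of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)


def check_grad(f, grad_f, points, rel_tol=1e-5):
    """Raise AssertionError where grad_f disagrees with the numeric gradient."""
    for x in points:
        num = numeric_grad(f, x)
        sym = grad_f(x)
        denom = max(abs(num), abs(sym), 1e-12)
        rel_err = abs(num - sym) / denom
        assert rel_err < rel_tol, (x, num, sym, rel_err)


# Example: f(x) = x**3, whose gradient is 3*x**2.
check_grad(lambda x: x ** 3,
           lambda x: 3 * x ** 2,
           points=[-2.0, -0.5, 0.1, 1.0, 3.0])
print("gradient check passed")
```

Since the numeric side is just repeated function evaluations, running it under a self-verifying mode adds cost without adding coverage, which is the rationale for the `no_debug_ref=True` default.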