testgroup / pytensor · Commits

Commit 022c711b
Authored Aug 09, 2011 by Frederic Bastien
Reverted the interface change in grad. Added a deprecation of that case.
Parent: 11385f53
Showing 2 changed files with 34 additions and 23 deletions:

    NEWS.txt                        +8  −8
    theano/tensor/tensor_grad.py    +26 −15
NEWS.txt

 Modifications in the 0.4.1 release candidate 1(28 July 2011)
-Interface change:
-    * tensor.grad(cost, wrt) will return an object of the "same type" of wrt.
-      If wrt is a tensor variable, list or tuple, it will return the same thing.
-      This is an interface change outside the normal release number scheme. We need
-      this for pylearn2 and waiting for a future release is not a good solution.
 Know bug:
     * CAReduce with nan in inputs don't return the good output.
         * This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.
         * This is not a new bug, just a bug discovered since the last release that we didn't had time to fix.
...
@@ -25,13 +20,18 @@ Deprecation (will be removed in Theano 0.5):
     updates following this order:
     [outputs], [updates], [condition]. One can skip any of the three if not
     used, but the order has to stay unchanged.
+    * tensor.grad(cost, wrt) will return an object of the "same type" as wrt
+      (list/tuple/TensorVariable).
+        * Currently tensor.grad return a type list when the wrt is a list/tuple of
+          more then 1 element.
 Decrecated in 0.4.0:
-    * tag.shape attribute deprecated (#633)
-    * CudaNdarray_new_null is deprecated in favour of CudaNdarray_New
     * Dividing integers with / is deprecated: use // for integer division, or
       cast one of the integers to a float type if you want a float result (you may
       also change this behavior with config.int_division).
+    * tag.shape attribute deprecated (#633)
+    * CudaNdarray_new_null is deprecated in favour of CudaNdarray_New
 New features:
...
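The return-type convention this commit reverts to can be illustrated with a minimal, Theano-free sketch. The function name `grad_like` and its string "gradients" are purely hypothetical stand-ins; only the container-handling rule mirrors the behavior described in the NEWS entry: a single result is unwrapped even when `wrt` was a one-element list/tuple (with a deprecation warning), and a tuple `wrt` of more than one element still yields a list.

```python
import warnings

def grad_like(wrt):
    # Hypothetical stand-in for tensor.grad's return-type rule after this
    # commit (not the real API).
    using_list = isinstance(wrt, list)
    using_tuple = isinstance(wrt, tuple)
    if not (using_list or using_tuple):
        wrt = [wrt]
    ret = ["d/d%s" % w for w in wrt]  # placeholder "gradients"
    if len(ret) == 1:
        if using_list or using_tuple:
            # Restored 0.4.0 behavior: unwrap, but warn about Theano 0.5.
            warnings.warn("The return type of tensor.grad will change in "
                          "this case.", stacklevel=2)
        return ret[0]
    return ret  # note: a tuple wrt of >1 elements still yields a list

print(grad_like("x"))         # -> d/dx           (single value)
print(grad_like(["x"]))       # -> d/dx           (unwrapped, plus a warning)
print(grad_like(("x", "y")))  # -> ['d/dx', 'd/dy'] (a list, even for tuple input)
```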
theano/tensor/tensor_grad.py

...
@@ -150,7 +150,7 @@ def Lop(f, wrt, eval_points, consider_constant=None, warn_type=False,
     where the indices in that expression are magic multidimensional
     indices that specify both the position within a list and all
     coordinates of the tensor element in the last

-    If `wrt` is a list/tuple, then return a list/tuple with the results.
+    If `f` is a list/tuple, then return a list/tuple with the results.
     """
     if consider_constant is None:
         consider_constant = []
...
@@ -242,11 +242,12 @@ def grad(cost, wrt, g_cost=None, consider_constant=None, warn_type=False,
     :rtype: `Variable` or list/tuple of `Variable`s (depending upon `wrt`)

-    :return: symbolic expression of gradient of `cost` with respect to
-        `wrt`. If `wrt` is a list/tuple, then return a list/tuple
-        containing the gradient of `cost` wrt each element of the list.
-        If an element of `wrt` is not differentiable with respect to the
-        output, then a zero variable is returned.
+    :return: symbolic expression of gradient of `cost` with respect to `wrt`.
+        If an element of `wrt` is not differentiable with respect
+        to the output, then a zero variable is returned.
+        If `wrt` is a list/tuple, longer then 1, a list will be returned.
+        DEPRECATION: In Theano 0.5, grad will return an object of the same
+        type as `wrt`: a list/tuple or TensorVariable in all case.

     This function is a wrapper around the more general function
     `theano.gradient.grad_sources_inputs``.
...
@@ -282,7 +283,7 @@ def grad(cost, wrt, g_cost=None, consider_constant=None, warn_type=False,
     # gradient, but for now Theano needs to throw an exception, and make the
     # user aware that it does not know how to compute that gradient
     using_list = isinstance(wrt, list)
-    using_tuple = isinstance(list, tuple)
+    using_tuple = isinstance(wrt, tuple)
     if not (using_list or using_tuple):
         wrt = [wrt]
     ret = []
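The one-line change in the hunk above also fixes a genuine bug: the old line tested `isinstance(list, tuple)`, which asks whether the builtin `list` type object is a tuple. That is always False, so a tuple-valued `wrt` was never detected. A quick standalone check:

```python
# Left-hand (buggy) test: inspects the builtin `list` type object rather
# than the `wrt` argument, so it can never be True.
wrt = (1, 2)
print(isinstance(list, tuple))  # False, regardless of the input
print(isinstance(wrt, tuple))   # True: the corrected right-hand test
```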
...
@@ -307,15 +308,25 @@ def grad(cost, wrt, g_cost=None, consider_constant=None, warn_type=False,
             ret.append(zeros_like(p))

     if len(ret) == 1:
-        if using_list:
-            return ret
-        elif using_tuple:
-            return tuple(ret)
-        else:
-            return ret[0]
+        if using_list or using_tuple:
+            warnings.warn(("The return type of tensor.grad will change in this "
+                           "case. In the future grad(cost, wrt) will return an "
+                           "object of the same type as wrt. So if wrt is a "
+                           "list/tuple, list/tuple will be returned. Idem for "
+                           "TensorVariable."), stacklevel=2)
+        # TODO: when we release Theano 0.5, uncomment the following lines
+        # and remove the warning. Don't forget the line in the currently
+        # enabled else.
+        #if using_list:
+        #    return ret
+        #elif using_tuple:
+        #    return tuple(ret)
+        #else:
+        return ret[0]
     else:
-        if using_tuple:
-            return tuple(ret)
+        # if using_tuple:
+        #     return tuple(ret)
         return ret


 class numeric_grad:
...
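The `stacklevel=2` argument in the new `warnings.warn` call attributes the warning to the line that called `grad`, not to the `warn` call inside `grad` itself, so users see where in their own code the deprecated calling convention appears. A small self-contained sketch of the idiom (the name `api_func` is invented for illustration):

```python
import warnings

def api_func(x):
    # stacklevel=2 makes the warning point at the caller of api_func,
    # the same idiom as the warnings.warn added to grad in this commit.
    warnings.warn("this calling convention is deprecated",
                  DeprecationWarning, stacklevel=2)
    return x

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = api_func(41)

print(result)                       # 41
print(caught[0].category.__name__)  # DeprecationWarning
```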