Commit 022c711b authored by Frederic Bastien

Reverted the interface change in grad. Added a deprecation of that case.

parent 11385f53
Modifications in the 0.4.1 release candidate 1 (28 July 2011)
-Interface change:
- * tensor.grad(cost, wrt) will return an object of the "same type" of wrt.
-   If wrt is a tensor variable, list or tuple, it will return the same thing.
-   This is an interface change outside the normal release number scheme. We need
-   this for pylearn2 and waiting for a future release is not a good solution.
 Known bugs:
  * CAReduce with nan in inputs doesn't return the correct output.
  * This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.
  * This is not a new bug, just a bug discovered since the last release that we didn't have time to fix.
@@ -25,13 +20,18 @@ Deprecation (will be removed in Theano 0.5):
    updates following this order:
        [outputs], [updates], [condition]. One can skip any of the three if not
        used, but the order has to stay unchanged.
+ * tensor.grad(cost, wrt) will return an object of the "same type" as wrt
+   (list/tuple/TensorVariable).
+ * Currently tensor.grad returns a list when wrt is a list/tuple of
+   more than 1 element.
 Deprecated in 0.4.0:
+ * tag.shape attribute deprecated (#633)
+ * CudaNdarray_new_null is deprecated in favour of CudaNdarray_New
  * Dividing integers with / is deprecated: use // for integer division, or
    cast one of the integers to a float type if you want a float result (you may
    also change this behavior with config.int_division).
- * tag.shape attribute deprecated (#633)
- * CudaNdarray_new_null is deprecated in favour of CudaNdarray_New
 New features:
...
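The deprecation entry above announces that, from Theano 0.5 on, tensor.grad will mirror the container type of `wrt`. A minimal sketch of that type-preserving contract in plain Python (the helper `grad_like` and its string "gradients" are hypothetical stand-ins, not Theano's API):

```python
def grad_like(cost, wrt):
    """Toy stand-in for the announced 0.5 behavior: the result has the
    "same type" as wrt (list in -> list out, tuple in -> tuple out,
    single variable in -> single result out)."""
    using_list = isinstance(wrt, list)
    using_tuple = isinstance(wrt, tuple)
    if not (using_list or using_tuple):
        wrt = [wrt]
    # Pretend each "gradient" is just the string "d/d<name>".
    ret = ["d/d%s" % w for w in wrt]
    if using_list:
        return ret
    elif using_tuple:
        return tuple(ret)
    return ret[0]

print(grad_like("cost", "x"))         # d/dx
print(grad_like("cost", ["x"]))       # ['d/dx']  (list in, list out, even length 1)
print(grad_like("cost", ("x", "y")))  # ('d/dx', 'd/dy')
```

Note the length-1 list case: it is exactly the case the commit below reverts to returning a bare variable, with a warning, until 0.5.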
@@ -150,7 +150,7 @@ def Lop(f, wrt, eval_points, consider_constant=None, warn_type=False,
         where the indices in that expression are magic multidimensional
         indices that specify both the position within a list and all
         coordinates of the tensor element in the last
-        If `wrt` is a list/tuple, then return a list/tuple with the results.
+        If `f` is a list/tuple, then return a list/tuple with the results.
     """
     if consider_constant is None:
         consider_constant = []
@@ -242,11 +242,12 @@ def grad(cost, wrt, g_cost=None, consider_constant=None, warn_type=False,
     :rtype: `Variable` or list/tuple of `Variable`s (depending upon `wrt`)
-    :return: symbolic expression of gradient of `cost` with respect to
-        `wrt`. If `wrt` is a list/tuple, then return a list/tuple
-        containing the gradient of `cost` wrt each element of the list.
-        If an element of `wrt` is not differentiable with respect to the
-        output, then a zero variable is returned.
+    :return: symbolic expression of gradient of `cost` with respect to `wrt`.
+        If an element of `wrt` is not differentiable with respect
+        to the output, then a zero variable is returned.
+        If `wrt` is a list/tuple longer than 1, a list will be returned.
+        DEPRECATION: In Theano 0.5, grad will return an object of the same
+        type as `wrt`: a list/tuple or TensorVariable in all cases.

     This function is a wrapper around the more general function
     `theano.gradient.grad_sources_inputs`.
@@ -282,7 +283,7 @@ def grad(cost, wrt, g_cost=None, consider_constant=None, warn_type=False,
     # gradient, but for now Theano needs to throw an exception, and make the
     # user aware that it does not know how to compute that gradient
     using_list = isinstance(wrt, list)
-    using_tuple = isinstance(list, tuple)
+    using_tuple = isinstance(wrt, tuple)
     if not (using_list or using_tuple):
         wrt = [wrt]
     ret = []
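The removed line above is a genuine bug, not a style change: it asks whether the built-in type object `list` is an instance of `tuple`, which is never true, so tuple-valued `wrt` was silently treated as a single variable. A minimal demonstration:

```python
# The buggy check tests the *type object* `list`, not the argument `wrt`,
# so it evaluates to False no matter what the caller passed in.
wrt = ("x", "y")
buggy = isinstance(list, tuple)   # always False, regardless of wrt
fixed = isinstance(wrt, tuple)    # True for this wrt
print(buggy, fixed)               # False True
```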
@@ -307,15 +308,25 @@ def grad(cost, wrt, g_cost=None, consider_constant=None, warn_type=False,
             ret.append(zeros_like(p))
     if len(ret) == 1:
-        if using_list:
-            return ret
-        elif using_tuple:
-            return tuple(ret)
-        else:
-            return ret[0]
+        if using_list or using_tuple:
+            warnings.warn(("The return type of tensor.grad will change in this "
+                           "case. In the future grad(cost, wrt) will return an "
+                           "object of the same type as wrt. So if wrt is a "
+                           "list/tuple, list/tuple will be returned. Idem for "
+                           "TensorVariable."),
+                          stacklevel=2)
+        # TODO: when we release Theano 0.5, uncomment the following lines
+        # and remove the warning. Don't forget the line in the currently
+        # enabled else.
+        #if using_list:
+        #    return ret
+        #elif using_tuple:
+        #    return tuple(ret)
+        #else:
+        return ret[0]
     else:
-        if using_tuple:
-            return tuple(ret)
+        #if using_tuple:
+        #    return tuple(ret)
         return ret

 class numeric_grad:
...
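The warning added in the hunk above uses `stacklevel=2` so Python attributes the message to the line that called grad, not to grad's own body, which is what makes the deprecation actionable for users. A minimal sketch of the same pattern (the `grad` stub and its return value are illustrative, not Theano's implementation; the commit's actual call uses the default warning category):

```python
import warnings

def grad(cost, wrt):
    # stacklevel=2 points the warning at the caller of grad(),
    # not at this line inside grad() itself.
    warnings.warn("The return type of tensor.grad will change; "
                  "it will match the type of wrt.",
                  DeprecationWarning, stacklevel=2)
    return [0.0]

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    grad("cost", "x")
print(len(caught), caught[0].category.__name__)  # 1 DeprecationWarning
```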