Commit 8296aa0b authored by Kelvin Xu

merged theano.gradient & grad documentation, added a link

Parent 4e5b2223
......@@ -9,11 +9,11 @@
:synopsis: low-level automatic differentiation
.. moduleauthor:: LISA
Symbolic gradient is usually computed from :func:`tensor.grad`, which offers a
Symbolic gradient is usually computed from :func:`gradient.grad`, which offers a
more convenient syntax for the common case of wanting the gradient in some
expressions with respect to a scalar cost. The :func:`grad_sources_inputs`
function does the underlying work, and is more flexible, but is also more
awkward to use when :func:`tensor.grad` can do the job.
awkward to use when :func:`gradient.grad` can do the job.
.. automodule:: theano.gradient
......
......@@ -1632,31 +1632,11 @@ Linear Algebra
Gradient / Differentiation
==========================
.. function:: grad(cost, wrt, g_cost=None, consider_constant=None, warn_type=False)
Return symbolic gradients for one or more variables with respect to some
cost.
For more information about how automatic differentiation works in Theano,
see :mod:`gradient`. For information on how to implement the gradient of
a certain Op, see :func:`grad`.
:type cost: 0-d tensor variable
:type wrt: tensor variable or list of tensor variables
:type g_cost: same as type of `cost`
:type consider_constant: list of variables
:type warn_type: bool
:param cost: a scalar with respect to which we are differentiating
:param wrt: term[s] for which we want gradients
:param g_cost: the gradient on the cost
:param consider_constant: variables whose gradients will be held at 0.
:param warn_type: True will trigger warnings via the logging module when
the gradient on an expression has a different type than the original
expression
:rtype: variable or list of variables (matching `wrt`)
:returns: gradients of the cost with respect to each of the `wrt` terms
.. automodule:: theano.gradient
:members: grad
See the :ref:`gradient <libdoc_gradient>` page for complete documentation
of the gradient module.
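The docstring above promises "gradients of the cost with respect to each of the `wrt` terms", with the return matching the shape of `wrt`. A quick way to see what that contract means numerically, without Theano installed, is a central finite-difference approximation (a pure-Python sketch; `numeric_grad` is an illustrative helper, not part of Theano's API):

```python
# Approximate d(cost)/d(wrt_i) by central finite differences.
# This mimics the documented contract of grad(): one gradient per
# `wrt` term, returned in the same order as `wrt`.
def numeric_grad(cost_fn, wrt, eps=1e-6):
    grads = []
    for i in range(len(wrt)):
        hi = list(wrt)
        lo = list(wrt)
        hi[i] += eps
        lo[i] -= eps
        grads.append((cost_fn(hi) - cost_fn(lo)) / (2 * eps))
    return grads  # list matching `wrt`, as in the docstring's :rtype:

# cost = x**2 + 3*y, so d/dx = 2x and d/dy = 3
g = numeric_grad(lambda v: v[0] ** 2 + 3 * v[1], [2.0, 5.0])
# g is approximately [4.0, 3.0]
```

Theano's `grad` returns symbolic expressions rather than numbers, but a finite-difference check like this is the standard way to verify that a symbolic gradient is correct.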
.. _R_op_list:
......
......@@ -356,9 +356,21 @@ def grad(cost, wrt, consider_constant=None,
disconnected_inputs='raise', add_names=True,
known_grads=None, return_disconnected='zero'):
"""
:type cost: Scalar (0-dimensional) Variable.
Return symbolic gradients for one or more variables with respect to some
cost.
For more information about how automatic differentiation works in Theano,
see :mod:`gradient`. For information on how to implement the gradient of
a certain Op, see :func:`grad`.
:type cost: Scalar (0-dimensional) tensor variable.
May optionally be None if known_grads is provided.
:type wrt: Variable or list of Variables.
:param cost: a scalar with respect to which we are differentiating
:type wrt: Tensor variable or list of variables.
:param wrt: term[s] for which we want gradients
:type consider_constant: list of variables
:param consider_constant: a list of expressions not to backpropagate
through
......@@ -389,9 +401,10 @@ def grad(cost, wrt, consider_constant=None,
None
- 'Disconnected' : returns variables of type DisconnectedType
:rtype: Variable or list/tuple of Variables (depending upon `wrt`)
:rtype: variable or list/tuple of Variables (matching `wrt`)
:return: symbolic expression of gradient of `cost` with respect to `wrt`.
:return: symbolic expression of gradient of `cost` with respect to each
of the `wrt` terms.
If an element of `wrt` is not differentiable with respect
to the output, then a zero variable is returned.
It returns an object of same type as `wrt`: a list/tuple
......
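The docstring edited in this hunk describes reverse-mode automatic differentiation: backpropagating a gradient of 1.0 from `cost` toward each `wrt` term, with `consider_constant` blocking propagation through listed expressions. A minimal pure-Python sketch of that mechanism (not Theano's implementation; `Var`, `mul`, `add`, and this toy `grad` are all illustrative names):

```python
# Toy reverse-mode autodiff, sketching what gradient.grad computes.
class Var:
    def __init__(self, value, grad_fn=None, parents=()):
        self.value = value
        self.grad = 0.0
        self.grad_fn = grad_fn    # sends this node's gradient to its parents
        self.parents = parents

def mul(a, b):
    out = Var(a.value * b.value, parents=(a, b))
    def grad_fn(g):               # d(ab)/da = b, d(ab)/db = a
        a.grad += g * b.value
        b.grad += g * a.value
    out.grad_fn = grad_fn
    return out

def add(a, b):
    out = Var(a.value + b.value, parents=(a, b))
    def grad_fn(g):               # d(a+b)/da = d(a+b)/db = 1
        a.grad += g
        b.grad += g
    out.grad_fn = grad_fn
    return out

def grad(cost, wrt, consider_constant=()):
    # Topologically order the graph, then backpropagate from cost.
    order, seen = [], set()
    def visit(v):
        if id(v) in seen:
            return
        seen.add(id(v))
        for p in v.parents:
            visit(p)
        order.append(v)
    visit(cost)
    cost.grad = 1.0
    for v in reversed(order):
        # consider_constant: do not backpropagate through these nodes,
        # so anything reachable only through them gets gradient 0.
        if v.grad_fn is not None and v not in consider_constant:
            v.grad_fn(v.grad)
    return [w.grad for w in wrt]  # one gradient per wrt term

x, y = Var(3.0), Var(4.0)
cost = add(mul(x, x), y)          # cost = x**2 + y
print(grad(cost, [x, y]))         # [6.0, 1.0]
```

Theano does all of this symbolically on its expression graph (producing new graph nodes rather than numbers), but the traversal order and the `consider_constant` cutoff follow the same idea.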