Commit 66623c32 authored by Frederic

Small doc refactor for r op.

Parent f78fd294
...@@ -533,6 +533,8 @@ These are the functions required to work with gradient.grad().
the outputs) back to their corresponding shapes and return them as the
output of the :func:`R_op` method.
:ref:`List of ops with R op support <R_op_list>`.
Defining an Op: ``mul``
=======================
...
...@@ -20,5 +20,59 @@ function does the underlying work, and is more flexible, but is also more
awkward to use when :func:`gradient.grad` can do the job.
Gradient related functions
==========================
.. automodule:: theano.gradient
   :members:
.. _R_op_list:
List of Implemented R op
========================
See the :ref:`gradient tutorial <tutcomputinggrads>` for the R op documentation.
List of ops that support the R op:

* With tests [most are in tensor/tests/test_rop.py]
* SpecifyShape
* MaxAndArgmax
* Subtensor
* IncSubtensor (set_subtensor too)
* Alloc
* Dot
* Elemwise
* Sum
* Softmax
* Shape
* Join
* Rebroadcast
* Reshape
* Flatten
* DimShuffle
* Scan [In scan_module/tests/test_scan.test_rop]
* Without tests
* Split
* ARange
* ScalarFromTensor
* AdvancedSubtensor1
* AdvancedIncSubtensor1
* AdvancedIncSubtensor
Partial list of ops without support for R-op:
* All sparse ops
* All linear algebra ops.
* PermuteRowElements
* Tile
* AdvancedSubtensor
* TensorDot
* Outer
* Prod
* MulWithoutZeros
* ProdWithoutZeros
* CAReduce (for max, ...; done for the MaxAndArgmax op)
* MaxAndArgmax (only for matrices on axis 0 or 1)
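For intuition, the R op of each of these ops computes a Jacobian-times-vector product. Below is a minimal NumPy sketch, not Theano's actual implementation: ``rop_numeric`` is a hypothetical helper that approximates the product with a central finite difference.

```python
import numpy as np

def rop_numeric(f, x, v, eps=1e-6):
    """Approximate the R operator of f at x applied to v,
    i.e. the Jacobian-vector product J(x) @ v, via a central
    finite difference along the direction v."""
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

# Elementwise square: f(x) = x**2, so J(x) = diag(2*x) and J @ v = 2*x*v.
x = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, -1.0, 2.0])
approx = rop_numeric(lambda t: t ** 2, x, v)
exact = 2 * x * v
```

Note that the result has the shape of the op's *output*, which is the defining property of the R operator.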
...@@ -1741,55 +1741,3 @@ Gradient / Differentiation
See the :ref:`gradient <libdoc_gradient>` page for complete documentation
of the gradient module.
.. _R_op_list:
List of Implemented R op
========================
See the :ref:`gradient tutorial <tutcomputinggrads>` for the R op documentation.
List of ops that support the R op:

* With tests [most are in tensor/tests/test_rop.py]
* SpecifyShape
* MaxAndArgmax
* Subtensor
* IncSubtensor (set_subtensor too)
* Alloc
* Dot
* Elemwise
* Sum
* Softmax
* Shape
* Join
* Rebroadcast
* Reshape
* Flatten
* DimShuffle
* Scan [In scan_module/tests/test_scan.test_rop]
* Without tests
* Split
* ARange
* ScalarFromTensor
* AdvancedSubtensor1
* AdvancedIncSubtensor1
* AdvancedIncSubtensor
Partial list of ops without support for R-op:
* All sparse ops
* All linear algebra ops.
* PermuteRowElements
* Tile
* AdvancedSubtensor
* TensorDot
* Outer
* Prod
* MulWithoutZeros
* ProdWithoutZeros
* CAReduce (for max, ...; done for the MaxAndArgmax op)
* MaxAndArgmax (only for matrices on axis 0 or 1)
...@@ -238,6 +238,7 @@ array([[ 0., 0.],
as the input parameter, while the result of the *R-operator* has a shape similar
to that of the output.
:ref:`List of ops with R op support <R_op_list>`.
Hessian times a Vector
======================
...
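A Hessian-times-vector product is the R operator applied to the gradient. A hedged NumPy sketch of this idea (illustrative only; ``hessian_vector`` and the quadratic example are assumptions, not Theano code):

```python
import numpy as np

def hessian_vector(grad, x, v, eps=1e-6):
    """Approximate H(x) @ v by differencing the gradient along v;
    this is the R operator applied to the gradient function."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

# Quadratic f(x) = x.T @ A @ x has gradient (A + A.T) @ x and
# constant Hessian A + A.T, so the approximation is essentially exact.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
grad_f = lambda x: (A + A.T) @ x
x = np.array([1.0, -1.0])
v = np.array([0.3, 0.7])
hv = hessian_vector(grad_f, x, v)
exact = (A + A.T) @ v
```

The point of this formulation is that the full Hessian is never materialized; only gradient evaluations are needed.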