Commit e617dc50 authored by abergeron

Merge pull request #3456 from nouiz/nanguard

[CRASH] in NanGuardMode with PyCObject
@@ -533,6 +533,8 @@ These are the functions required to work with gradient.grad().
the outputs) back to their corresponding shapes and return them as the
output of the :func:`R_op` method.
:ref:`List of ops with R-op support <R_op_list>`.
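The `R_op` contract described above can be sketched in plain NumPy (a hypothetical `MulOp` with illustrative names, not the actual Theano class): given the inputs and one evaluation point per input, `R_op` returns the Jacobian-times-vector products, shaped like the outputs.

```python
import numpy as np

class MulOp:
    """Toy stand-in for a Theano Op computing y = x * w (illustrative)."""

    def perform(self, x, w):
        return x * w

    def R_op(self, inputs, eval_points):
        # d(x * w) = w * dx + x * dw; the result already has
        # the shape of the op's output.
        x, w = inputs
        dx, dw = eval_points
        return [w * dx + x * dw]

op = MulOp()
x = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])
dx = np.array([0.1, 0.0])   # evaluation point for x
dw = np.array([0.0, 0.2])   # evaluation point for w

# Finite-difference check of the Jacobian-times-vector product
eps = 1e-6
fd = (op.perform(x + eps * dx, w + eps * dw) - op.perform(x, w)) / eps
assert np.allclose(op.R_op([x, w], [dx, dw])[0], fd)
```

The finite-difference check is a quick way to validate any hand-written `R_op` against the perturbed forward pass.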
Defining an Op: ``mul``
=======================
......
@@ -20,5 +20,59 @@ function does the underlying work, and is more flexible, but is also more
awkward to use when :func:`gradient.grad` can do the job.
Gradient related functions
==========================
.. automodule:: theano.gradient
:members:
.. _R_op_list:
List of Implemented R op
========================
See the :ref:`gradient tutorial <tutcomputinggrads>` for the R-op documentation.

List of ops that support R-op:

* with test [most are in tensor/tests/test_rop.py]

  * SpecifyShape
  * MaxAndArgmax
  * Subtensor
  * IncSubtensor (set_subtensor too)
  * Alloc
  * Dot
  * Elemwise
  * Sum
  * Softmax
  * Shape
  * Join
  * Rebroadcast
  * Reshape
  * Flatten
  * DimShuffle
  * Scan [in scan_module/tests/test_scan.test_rop]

* without test

  * Split
  * ARange
  * ScalarFromTensor
  * AdvancedSubtensor1
  * AdvancedIncSubtensor1
  * AdvancedIncSubtensor
Partial list of ops without support for R-op:

* All sparse ops
* All linear algebra ops
* PermuteRowElements
* Tile
* AdvancedSubtensor
* TensorDot
* Outer
* Prod
* MulWithoutZeros
* ProdWithoutZeros
* CAReduce (for max, ...; done for the MaxAndArgmax op)
* MaxAndArgmax (only for matrices on axis 0 or 1)
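As a sketch of what the R-operator computes for one entry in the supported list (`Dot`), in plain NumPy with illustrative names: for y = W @ x, the R-op with respect to x applied to a direction v is the Jacobian-vector product W @ v, which can be checked by finite differences.

```python
import numpy as np

def rop_dot(W, x, v):
    """R-operator of y = W @ x with respect to x, applied to v.

    The Jacobian of W @ x w.r.t. x is W itself, so the
    Jacobian-vector product is W @ v, with the shape of y.
    """
    return W @ v

W = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([1.0, 1.0])
v = np.array([0.1, 0.0])

# Finite-difference check: (f(x + eps*v) - f(x)) / eps ≈ R-op
eps = 1e-6
fd = ((W @ (x + eps * v)) - (W @ x)) / eps
assert np.allclose(rop_dot(W, x, v), fd)
```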
@@ -1746,55 +1746,3 @@ Gradient / Differentiation
See the :ref:`gradient <libdoc_gradient>` page for complete documentation
of the gradient module.
.. _R_op_list:
List of Implemented R op
========================
See the :ref:`gradient tutorial <tutcomputinggrads>` for the R-op documentation.

List of ops that support R-op:

* with test [most are in tensor/tests/test_rop.py]

  * SpecifyShape
  * MaxAndArgmax
  * Subtensor
  * IncSubtensor (set_subtensor too)
  * Alloc
  * Dot
  * Elemwise
  * Sum
  * Softmax
  * Shape
  * Join
  * Rebroadcast
  * Reshape
  * Flatten
  * DimShuffle
  * Scan [in scan_module/tests/test_scan.test_rop]

* without test

  * Split
  * ARange
  * ScalarFromTensor
  * AdvancedSubtensor1
  * AdvancedIncSubtensor1
  * AdvancedIncSubtensor
Partial list of ops without support for R-op:

* All sparse ops
* All linear algebra ops
* PermuteRowElements
* Tile
* AdvancedSubtensor
* TensorDot
* Outer
* Prod
* MulWithoutZeros
* ProdWithoutZeros
* CAReduce (for max, ...; done for the MaxAndArgmax op)
* MaxAndArgmax (only for matrices on axis 0 or 1)
@@ -238,6 +238,7 @@ array([[ 0., 0.],
as the input parameter, while the result of the *R-operator* has a shape similar
to that of the output.
:ref:`List of ops with R-op support <R_op_list>`.
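The shape rule above can be illustrated in plain NumPy (an illustrative example, not Theano code): for f(x) = A @ x, the L-operator result is shaped like the input, while the R-operator result is shaped like the output.

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)   # f(x) = A @ x maps R^3 -> R^2
x = np.ones(3)

v_in = np.ones(3)     # direction with the shape of the input
rop = A @ v_in        # R-operator result: shape (2,), like the output

v_out = np.ones(2)    # direction with the shape of the output
lop = v_out @ A       # L-operator result: shape (3,), like the input

assert rop.shape == (A @ x).shape
assert lop.shape == x.shape
```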
Hessian times a Vector
======================
......
@@ -263,14 +263,14 @@ class NanGuardMode(Mode):
                error = True
            if big_is_error:
                err = False
-               if var.size == 0:
+               if isinstance(var, theano.gof.type.CDataType._cdata_type):
                    err = False
                elif cuda.cuda_available and isinstance(var, cuda.CudaNdarray):
                    err = (f_gpuabsmax(var.reshape(var.size)) > 1e10)
-               elif isinstance(var, theano.gof.type.CDataType._cdata_type):
-                   err = False
                elif isinstance(var, np.random.mtrand.RandomState):
                    err = False
+               elif var.size == 0:
+                   err = False
                else:
                    err = (np.abs(var).max() > 1e10)
                if err:
......
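The reordering above matters because an opaque PyCObject/CData value has no `size` attribute, so touching `var.size` before the type check is what crashed. A minimal pure-Python sketch of the fixed ordering (`CData` and `is_too_big` are illustrative stand-ins, not Theano names):

```python
import numpy as np

BIG = 1e10

class CData:
    """Illustrative stand-in for an opaque PyCObject / CDataType value.

    It deliberately has no `.size`, so any attribute access before
    the isinstance check would raise AttributeError.
    """

def is_too_big(var):
    # Order matters: the opaque-type checks must come before the
    # `var.size` check -- exactly the reordering in the diff above.
    if isinstance(var, CData):
        return False
    elif isinstance(var, np.random.RandomState):  # i.e. np.random.mtrand.RandomState
        return False
    elif var.size == 0:
        return False
    else:
        return bool(np.abs(var).max() > BIG)

assert is_too_big(CData()) is False           # no crash on opaque values
assert is_too_big(np.array([])) is False      # empty arrays are fine
assert is_too_big(np.array([1e11])) is True   # genuinely huge value flagged
```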