Commit 7b285bba authored by Frederic Bastien

Use different doc format to work around a reST limitation.

Parent 39bfada8
@@ -132,70 +132,24 @@ To get an error if Theano can not use cuDNN, use this Theano flag:

     cudnn.h must be readable by everybody.

-* :ref:`Convolution <libdoc_gpuarray_dnn_convolution>`
-* :ref:`Pooling <libdoc_gpuarray_dnn_pooling>`
-* :ref:`Batch Normalization <libdoc_gpuarray_dnn_bn>`
-* :ref:`RNN <libdoc_gpuarray_dnn_rnn>`
-* :ref:`Softmax <libdoc_gpuarray_dnn_softmax>`
-* :ref:`Internal Ops <libdoc_gpuarray_dnn_internal_ops>`
-
-dnn_present, dnn_available
-
-.. _libdoc_gpuarray_dnn_convolution:
-
-Convolution
-===========
-
-.. automodule:: theano.gpuarray.dnn
-    :noindex:
-    :members: dnn_conv, dnn_conv3d, dnn_gradweight, dnn_gradweight3d, dnn_gradinput, dnn_gradinput3d
-
-.. _libdoc_gpuarray_dnn_pooling:
-
-Pooling
-=======
-
-.. automodule:: theano.gpuarray.dnn
-    :noindex:
-    :members: dnn_pool
-
-.. _libdoc_gpuarray_dnn_bn:
-
-Batch Normalization
-===================
-
-.. automodule:: theano.gpuarray.dnn
-    :noindex:
-    :members: dnn_batch_normalization_train, dnn_batch_normalization_test
-
-.. _libdoc_gpuarray_dnn_rnn:
-
-RNN
-===
-
-Without dropout support.
-
-.. automodule:: theano.gpuarray.dnn
-    :noindex:
-    :members: RNNBlock
-
-.. _libdoc_gpuarray_dnn_softmax:
-
-Softmax
-=======
-
-You can manually use the op :class:`GpuDnnSoftmax
-<theano.gpuarray.dnn.GpuDnnSoftmax>` to use its extra feature.
-
-.. _libdoc_gpuarray_dnn_internal_ops:
-
-Internal Ops
-============
-
-.. automodule:: theano.gpuarray.dnn
-    :noindex:
-    :members: GpuDnnConvDesc, GpuDnnConv, GpuDnnConvGradW, GpuDnnConvGradI,
-              GpuDnnPoolDesc, GpuDnnPool, GpuDnnPoolGrad,
-              GpuDnnBatchNormInference, GpuDnnBatchNorm, GpuDnnBatchNormGrad,
-              GpuDnnSoftmax, GpuDnnSoftmaxGrad
+- Convolution:
+
+  - :func:`theano.gpuarray.dnn.dnn_conv`, :func:`theano.gpuarray.dnn.dnn_conv3d`.
+  - :func:`theano.gpuarray.dnn.dnn_gradweight`, :func:`theano.gpuarray.dnn.dnn_gradweight3d`.
+  - :func:`theano.gpuarray.dnn.dnn_gradinput`, :func:`theano.gpuarray.dnn.dnn_gradinput3d`.
+- Pooling:
+
+  - :func:`theano.gpuarray.dnn.dnn_pool`.
+- Batch Normalization:
+
+  - :func:`theano.gpuarray.dnn.dnn_batch_normalization_train`
+  - :func:`theano.gpuarray.dnn.dnn_batch_normalization_test`.
+- RNN:
+
+  - :class:`theano.gpuarray.dnn.RNNBlock`
+- Softmax:
+
+  - You can manually use the op :class:`GpuDnnSoftmax
+    <theano.gpuarray.dnn.GpuDnnSoftmax>` to use its extra feature.
+
+List of Implemented Operations
+==============================
+
+.. automodule:: theano.gpuarray.dnn
+    :members:
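The pooling entry above refers to :func:`theano.gpuarray.dnn.dnn_pool`, which wraps cuDNN's pooling routine on the GPU. As a rough sketch of the *semantics* only (plain NumPy, not the actual cuDNN call; the window/stride names below are illustrative, and dnn_pool also supports average pooling and 3D inputs):

```python
import numpy as np

def max_pool_2d(x, ws=(2, 2), stride=(2, 2)):
    """Max pooling over the last two axes of a (batch, channel, h, w) array.

    A NumPy sketch of what cuDNN max pooling computes; dnn_pool itself
    runs on the GPU and exposes more modes and parameters.
    """
    b, c, h, w = x.shape
    out_h = (h - ws[0]) // stride[0] + 1
    out_w = (w - ws[1]) // stride[1] + 1
    out = np.empty((b, c, out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            r, s = i * stride[0], j * stride[1]
            # each output cell is the max of one ws[0] x ws[1] window
            out[:, :, i, j] = x[:, :, r:r + ws[0], s:s + ws[1]].max(axis=(2, 3))
    return out

x = np.arange(16, dtype="float32").reshape(1, 1, 4, 4)
print(max_pool_2d(x))  # each non-overlapping 2x2 window reduced to its max
```

This mirrors the usual convention that pooling windows slide with a given stride and each window reduces to a single value per channel.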
@@ -2,7 +2,7 @@

 =======================================
 :mod:`theano.sandbox.cuda.dnn` -- cuDNN
 =======================================

 .. moduleauthor:: LISA
@@ -145,65 +145,24 @@ get an error when cuDNN can not be used with them, use this flag:

     cudnn.h must be readable by everybody.

-* :ref:`Convolution <libdoc_cuda_dnn_convolution>`
-* :ref:`Pooling <libdoc_cuda_dnn_pooling>`
-* :ref:`Batch Normalization <libdoc_cuda_dnn_bn>`
-* :ref:`RNN <libdoc_cuda_dnn_rnn>`
-* :ref:`Softmax <libdoc_cuda_dnn_softmax>`
-* :ref:`Internal Ops <libdoc_cuda_dnn_internal_ops>`
-
-.. _libdoc_cuda_dnn_convolution:
-
-Convolution
-===========
-
-.. automodule:: theano.sandbox.cuda.dnn
-    :noindex:
-    :members: dnn_conv, dnn_conv3d, dnn_gradweight, dnn_gradinput
-
-.. _libdoc_cuda_dnn_pooling:
-
-Pooling
-=======
-
-.. automodule:: theano.sandbox.cuda.dnn
-    :noindex:
-    :members: dnn_pool
-
-.. _libdoc_cuda_dnn_bn:
-
-Batch Normalization
-===================
-
-.. automodule:: theano.sandbox.cuda.dnn
-    :noindex:
-    :members: dnn_batch_normalization_train, dnn_batch_normalization_test
-
-.. _libdoc_cuda_dnn_rnn:
-
-RNN
-===
-
-`New back-end only! <libdoc_gpuarray_dnn_rnn>`_
-
-.. _libdoc_cuda_dnn_softmax:
-
-Softmax
-=======
-
-You can manually use the op :class:`GpuDnnSoftmax
-<theano.sandbox.cuda.dnn.GpuDnnSoftmax>` to use its extra feature.
-
-.. _libdoc_cuda_dnn_internal_ops:
-
-Internal Ops
-============
-
-.. automodule:: theano.sandbox.cuda.dnn
-    :noindex:
-    :members: GpuDnnConvDesc, GpuDnnConv, GpuDnnConv3d, GpuDnnConvGradW,
-              GpuDnnConv3dGradW, GpuDnnConvGradI, GpuDnnConv3dGradI,
-              GpuDnnPoolDesc, GpuDnnPool, GpuDnnPoolGrad,
-              GpuDnnBatchNormInference, GpuDnnBatchNorm, GpuDnnBatchNormGrad,
-              GpuDnnSoftmax, GpuDnnSoftmaxGrad
+- Convolution:
+
+  - :func:`theano.sandbox.cuda.dnn.dnn_conv`, :func:`theano.sandbox.cuda.dnn.dnn_conv3d`.
+  - :func:`theano.sandbox.cuda.dnn.dnn_gradweight`.
+  - :func:`theano.sandbox.cuda.dnn.dnn_gradinput`.
+- Pooling:
+
+  - :func:`theano.sandbox.cuda.dnn.dnn_pool`.
+- Batch Normalization:
+
+  - :func:`theano.sandbox.cuda.dnn.dnn_batch_normalization_train`
+  - :func:`theano.sandbox.cuda.dnn.dnn_batch_normalization_test`.
+- RNN:
+
+  - :class:`New back-end only! <theano.gpuarray.dnn.RNNBlock>`.
+- Softmax:
+
+  - You can manually use the op :class:`GpuDnnSoftmax
+    <theano.sandbox.cuda.dnn.GpuDnnSoftmax>` to use its extra feature.
+
+List of Implemented Operations
+==============================
+
+.. automodule:: theano.sandbox.cuda.dnn
+    :members:
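The batch-normalization entries above come in a train/test pair. As a hedged NumPy sketch of the math those two modes compute (the function names, the `eps` default, and the 2D layout below are illustrative assumptions, not the cuDNN API; the real ops also take learned `gamma`/`beta` tensors and handle 4D/5D layouts on the GPU):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-4):
    """'Training' mode: normalize each feature with the statistics of the
    current mini-batch (axis 0), then scale by gamma and shift by beta.
    NumPy sketch of the math only -- not the cuDNN implementation."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    out = gamma * (x - mean) / np.sqrt(var + eps) + beta
    return out, mean, var

def batch_norm_test(x, gamma, beta, mean, var, eps=1e-4):
    """'Inference' mode: normalize with precomputed (e.g. running) statistics
    instead of the current batch's."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

x = np.random.RandomState(0).randn(64, 8).astype("float32")
gamma = np.ones(8, dtype="float32")
beta = np.zeros(8, dtype="float32")
out, mean, var = batch_norm_train(x, gamma, beta)
# after training-mode normalization, each feature column is
# approximately zero-mean with unit variance
print(out.mean(axis=0).round(3), out.std(axis=0).round(2))
```

The design point the two ops encode: training mode must use batch statistics (so gradients flow through them), while inference mode reuses fixed statistics so a single example normalizes deterministically.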