Commit ca3e1d2d authored by lamblin

Merge pull request #999 from nouiz/doc

Doc
@@ -39,7 +39,7 @@ For more details:
 Use ReST for documentation
 --------------------------
-* :ref:`ReST <http://docutils.sourceforge.net/rst.html>` is standardized.
+* `ReST <http://docutils.sourceforge.net/rst.html>`__ is standardized.
   epydoc is not. trac wiki-markup is not.
   This means that ReST can be cut-and-pasted between epydoc, code, other
   docs, and TRAC. This is a huge win!
@@ -13,7 +13,7 @@
 .. toctree::
     :maxdepth: 1

-    fg
+    fgraph
     toolbox
     type
@@ -7,8 +7,8 @@
 .. note::
     Two similar implementation exists for conv2d:
-    **theano.tensor.signal.conv.conv2d** and
-    **theano.tensor.nnet.conv.conv2d**. The foremer implements a traditional
+    :func:`signal.conv2d <theano.tensor.signal.conv.conv2d>` and
+    :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`. The former implements a traditional
     2D convolution, while the latter implements the convolutional layers
     present in convolutional neural networks (where filters are 3D and pool
     over several input channels).
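The distinction drawn in this note can be sketched outside of Theano. Below is a minimal NumPy/SciPy illustration (the shapes and arrays are made up for the example, and `scipy.signal.convolve2d` stands in for Theano's ops): a plain 2D convolution versus a convolutional-layer-style convolution whose 3D filter pools over several input channels.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Traditional 2D convolution (the signal.conv2d case):
# one 2D image convolved with one 2D filter.
image = rng.standard_normal((5, 5))
filt = rng.standard_normal((3, 3))
plain = convolve2d(image, filt, mode="valid")   # shape (3, 3)

# Convolutional-layer style (the nnet.conv2d case): the filter is 3D,
# and the per-channel 2D convolutions are summed over input channels.
channels = rng.standard_normal((4, 5, 5))       # 4 input channels
filt3d = rng.standard_normal((4, 3, 3))         # one 3D filter
layer = sum(convolve2d(channels[c], filt3d[c], mode="valid")
            for c in range(4))                  # shape (3, 3)

print(plain.shape, layer.shape)
```

Both results have the same spatial shape; the difference is that the second sums contributions from every input channel, which is exactly the "filters are 3D and pool over several input channels" behaviour the note describes.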
@@ -26,6 +26,5 @@ TODO: Give examples for how to use these things! They are pretty complicated.
 - :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`.
 - :func:`conv3D <theano.tensor.nnet.Conv3D.conv3D>`.

-.. autofunction:: theano.tensor.signal.conv.conv2d
 .. autofunction:: theano.tensor.nnet.conv.conv2d
 .. autofunction:: theano.tensor.nnet.Conv3D.conv3D
@@ -52,9 +52,9 @@ variables, and then:
 \frac{\partial C}{\partial r} = \frac{\partial C}{\partial x} \frac{\partial x}{\partial r} + \frac{\partial C}{\partial y} \frac{\partial y}{\partial r}

 If we want to use an algorithm similar to gradient backpropagation,
-we can see that, here, we need to have both :math:\frac{\partial
-C}{\partial \Re t} and :math:\frac{\partial C}{\partial \Im t}, in order
-to compute :math:`\frac{\partial C}{\partial r}.
+we can see that, here, we need to have both :math:`\frac{\partial
+C}{\partial \Re t}` and :math:`\frac{\partial C}{\partial \Im t}`, in order
+to compute :math:`\frac{\partial C}{\partial r}`.
 For each variable :math:`v` in the expression graph, let us denote
 :math:`\nabla_C(v)` the *gradient* of :math:`C` with respect to
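The chain rule quoted in this hunk can be checked numerically. The following sketch uses illustrative functions `t_of_r` and `C_of_t` invented for the example (they are not from Theano): a real cost :math:`C` depends on a real variable :math:`r` only through the real and imaginary parts :math:`x, y` of a complex intermediate :math:`t`.

```python
# Illustrative choices: x = Re t = r, y = Im t = r^2, C(x, y) = x^2 + 3y.
def t_of_r(r):
    return complex(r, r * r)

def C_of_t(t):
    x, y = t.real, t.imag
    return x * x + 3.0 * y

r = 0.7

# Chain rule from the text: dC/dr = dC/dx * dx/dr + dC/dy * dy/dr,
# with dC/dx = 2x, dx/dr = 1, dC/dy = 3, dy/dr = 2r.
t = t_of_r(r)
analytic = 2.0 * t.real * 1.0 + 3.0 * 2.0 * r

# Central finite-difference estimate of dC/dr for comparison.
h = 1e-6
numeric = (C_of_t(t_of_r(r + h)) - C_of_t(t_of_r(r - h))) / (2.0 * h)

print(analytic, numeric)   # both approximately 5.6
```

The two values agree, which is the point of the passage: both :math:`\partial C / \partial \Re t` and :math:`\partial C / \partial \Im t` are needed to assemble :math:`\partial C / \partial r`.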