Commit 289ea572 authored by Frederic Bastien

Fix some documentation syntax problem that make the cron that build the doc fail.

Should allow the documentation to be updated automatically again.
Parent 555ca75c
@@ -657,8 +657,8 @@ Theano dependencies is easy, but be aware that it will take a long time
 Homebrew
 ~~~~~~~~
-There are some :ref:`instructions
-<https://github.com/samueljohn/homebrew-python>` by Samuel John on how to install
+There are some `instructions
+<https://github.com/samueljohn/homebrew-python>`__ by Samuel John on how to install
 Theano dependencies with Homebrew instead of MacPort.
@@ -1229,6 +1229,7 @@ Linear Algebra
     If an integer i, it is converted to an array containing
     the last i dimensions of the first tensor and the first
     i dimensions of the second tensor:
+
         axes = [range(a.ndim - i, b.ndim), range(i)]
     If an array, its two elements must contain compatible axes
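The integer-axes form described in this docstring can be sanity-checked with plain NumPy (an illustration of the documented behavior, not part of the diff):

```python
import numpy as np

# Integer axes=i: sum over the last i dimensions of a and the
# first i dimensions of b.
a = np.ones((3, 4, 5))
b = np.ones((4, 5, 6))

# i = 2 contracts a's trailing (4, 5) with b's leading (4, 5),
# leaving the uncontracted dimensions (3,) and (6,).
c = np.tensordot(a, b, 2)
print(c.shape)  # (3, 6)
```

With all-ones inputs every output entry is the number of summed terms, 4 * 5 = 20.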
@@ -1251,6 +1252,8 @@ Linear Algebra
     are compatible. The resulting tensor will have shape (2, 5, 6) -- the
     dimensions that are not being summed:
+
+    .. code-block:: python
 
         a = np.random.random((2,3,4))
         b = np.random.random((5,6,4,3))
@@ -1284,6 +1287,8 @@ Linear Algebra
     In an extreme case, no axes may be specified. The resulting tensor
     will have shape equal to the concatenation of the shapes of a and b:
+
+    .. code-block:: python
 
         c = np.tensordot(a, b, 0)
         print(a.shape) #(2,3,4)
         print(b.shape) #(5,6,4,3)
@@ -7,8 +7,11 @@
 .. note::
     Two similar implementation exists for conv2d:
     :func:`signal.conv2d <theano.tensor.signal.conv.conv2d>` and
-    :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`. The former implements a traditional
+    :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`.
+    The former implements a traditional
     2D convolution, while the latter implements the convolutional layers
     present in convolutional neural networks (where filters are 3D and pool
     over several input channels).
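The distinction this note draws (a plain 2D convolution versus a convolutional layer whose 3D filters sum over input channels) can be sketched in NumPy; `conv2d_valid` and the shapes are hypothetical illustrations, not Theano's implementation:

```python
import numpy as np

def conv2d_valid(img, k):
    # Plain "valid" 2-D convolution of one image with one 2-D kernel;
    # true convolution flips the kernel before sliding it.
    kh, kw = k.shape
    kf = k[::-1, ::-1]
    H = img.shape[0] - kh + 1
    W = img.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kf)
    return out

# CNN-style layer: each filter is 3-D and sums over all input channels.
x = np.random.random((3, 8, 8))      # (in_channels, H, W)
w = np.random.random((5, 3, 3, 3))   # (out_channels, in_channels, kh, kw)

y = np.stack([
    sum(conv2d_valid(x[c], w[f, c]) for c in range(x.shape[0]))
    for f in range(w.shape[0])
])
print(y.shape)  # (5, 6, 6)
```

Each output channel is the per-channel 2D convolution summed over the three input channels, which is exactly the "filters are 3D and pool over several input channels" behavior the note describes.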
@@ -7,8 +7,11 @@
 .. note::
     Two similar implementation exists for conv2d:
     :func:`signal.conv2d <theano.tensor.signal.conv.conv2d>` and
-    :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>. The former implements a traditional
+    :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`.
+    The former implements a traditional
     2D convolution, while the latter implements the convolutional layers
     present in convolutional neural networks (where filters are 3D and pool
     over several input channels).
@@ -2455,7 +2455,7 @@ class GpuIncSubtensor(tensor.IncSubtensor, GpuOp):
         :return: C code expression to make a copy of x
 
-        Base class uses PyArrayObject *, subclasses may override for
+        Base class uses `PyArrayObject *`, subclasses may override for
         different types of arrays.
         """
         return """(CudaNdarray*) CudaNdarray_Copy(%(x)s)""" % locals()
@@ -433,16 +433,14 @@ class CholeskyGrad(Op):
         return Apply(self, [x, l, dz], [x.type()])
 
     def perform(self, node, inputs, outputs):
-        """
-        Implements the "reverse-mode" gradient for the Cholesky factorization
-        of a positive-definite matrix.
-
-        References
-        ----------
+        """Implements the "reverse-mode" gradient [1]_ for the
+        Cholesky factorization of a positive-definite matrix.
+
         .. [1] S. P. Smith. "Differentiation of the Cholesky Algorithm".
                Journal of Computational and Graphical Statistics,
                Vol. 4, No. 2 (Jun.,1995), pp. 134-147
                http://www.jstor.org/stable/1390762
         """
         x = inputs[0]
         L = inputs[1]
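The reverse-mode Cholesky gradient this docstring cites (Smith 1995) can be sketched in plain NumPy and checked against finite differences. This is an illustration, not Theano's `CholeskyGrad.perform`; `phi` and `cholesky_grad` are hypothetical helper names:

```python
import numpy as np

def phi(X):
    # Lower triangle of X with the diagonal halved.
    Y = np.tril(X)
    Y[np.diag_indices_from(Y)] *= 0.5
    return Y

def cholesky_grad(L, L_bar):
    # Given A = L @ L.T and the gradient L_bar w.r.t. the factor L,
    # return the gradient w.r.t. A:  A_bar = L^{-T} phi(L^T L_bar) L^{-1}.
    P = phi(L.T @ L_bar)
    tmp = np.linalg.solve(L.T, P)           # L^{-T} P
    return np.linalg.solve(L.T, tmp.T).T    # (L^{-T} (L^{-T} P)^T)^T = L^{-T} P L^{-1}

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                 # positive definite
L = np.linalg.cholesky(A)
G = np.tril(rng.standard_normal((n, n)))    # cotangent w.r.t. L (lower-tri)

A_bar = cholesky_grad(L, G)

# Finite-difference check along a random symmetric direction E.
E = rng.standard_normal((n, n))
E = E + E.T
f = lambda M: np.sum(G * np.linalg.cholesky(M))
eps = 1e-6
fd = (f(A + eps * E) - f(A - eps * E)) / (2 * eps)
print(abs(fd - np.sum(A_bar * E)))          # should be ~0
```

The identity follows from dA = dL L^T + L dL^T, which gives L^{-1} dL = phi(L^{-1} dA L^{-T}) for lower-triangular dL; pushing the cotangent back through that map yields the closed form above.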