Commit 75b1c227 authored by hantek

fixed all warnings in doc. added the sphinx -m flag in docgen

Parent 25c0f5e5
@@ -5,7 +5,7 @@
 :mod:`shared` - defines theano.shared
 ===========================================
-.. module:: shared
+.. module:: theano.compile.sharedvalue
    :platform: Unix, Windows
    :synopsis: defines theano.shared and related classes
 .. moduleauthor:: LISA
@@ -47,7 +47,7 @@
     :type: class:`Container`
-.. autofunction:: theano.compile.sharedvalue.shared
+.. autofunction:: shared
 .. function:: shared_constructor(ctor)
...
@@ -104,7 +104,7 @@ TODO: Give examples on how to use these things! They are pretty complicated.
   as a manual replacement for nnet.conv2d.
 - :func:`GpuCorrMM <theano.sandbox.cuda.blas.GpuCorrMM>`
   This is a GPU-only 2d correlation implementation taken from
-  `caffe <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cu>`_
+  `caffe's CUDA implementation <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cu>`_
   and also used by Torch. It does not flip the kernel.
   For each element in a batch, it first creates a
@@ -122,7 +122,7 @@ TODO: Give examples on how to use these things! They are pretty complicated.
   If using it, please see the warning about a bug in CUDA 5.0 to 6.0 below.
 - :func:`CorrMM <theano.tensor.nnet.corr.CorrMM>`
   This is a CPU-only 2d correlation implementation taken from
-  `caffe <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cpp>`_
+  `caffe's cpp implementation <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cpp>`_
   and also used by Torch. It does not flip the kernel. As it provides a gradient,
   you can use it as a replacement for nnet.conv2d. For convolutions done on
   CPU, nnet.conv2d will be replaced by CorrMM. To explicitly disable it, set
...
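The hunks above repeatedly note that GpuCorrMM and CorrMM compute *correlation* and "do not flip the kernel". A minimal 1D sketch of that distinction (plain Python with hypothetical helper names, not Theano's implementation):

```python
# Correlation slides the kernel over the input as-is; convolution first
# flips the kernel. Theano's CorrMM/GpuCorrMM do the former.
def correlate(x, k):
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(len(x) - n + 1)]

def convolve(x, k):
    # convolution == correlation with a flipped kernel
    return correlate(x, k[::-1])

x = [1, 2, 3, 4]
k = [1, 0, -1]
print(correlate(x, k))  # [-2, -2]
print(convolve(x, k))   # [2, 2]
```

In 2D the same rule applies along both axes, which is why an op that skips the flip cannot be dropped in for one that performs it without adjusting the kernel.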
@@ -21,7 +21,7 @@ object for each such variable, and draw from it as necessary. We will call this
 random numbers a *random stream*.
 For an example of how to use random numbers, see
-:ref:`using_random_numbers`.
+:ref:`Using Random Numbers <using_random_numbers>`.
 Reference
...
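The "random stream" idea described above, a seeded generator object per random variable that is drawn from as needed, can be illustrated with the standard library (a hedged sketch; Theano's own RandomStreams API differs):

```python
import random

# A seeded generator object acts as a reproducible "random stream":
# two streams built from the same seed yield the same sequence of draws.
stream_a = random.Random(42)
stream_b = random.Random(42)

draws_a = [stream_a.random() for _ in range(3)]
draws_b = [stream_b.random() for _ in range(3)]
print(draws_a == draws_b)  # True
```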
@@ -89,7 +89,7 @@ The proposal is for two new ways of creating a *shared* variable:
 def shared(value, name=None, strict=False, **kwargs):
     """Return a SharedVariable Variable, initialized with a copy or reference of `value`.
-    This function iterates over constructor functions (see `shared_constructor`) to find a
+    This function iterates over constructor functions (see :func:`shared_constructor`) to find a
     suitable SharedVariable subclass.
     :note:
...
@@ -56,7 +56,7 @@ if __name__ == '__main__':
 def call_sphinx(builder, workdir, extraopts=None):
     import sphinx
     if extraopts is None:
-        extraopts = []  # '-W']
+        extraopts = ['-W']
     if not options['--cache'] and files is None:
         extraopts.append('-E')
     docpath = os.path.join(throot, 'doc')
...
@@ -27,9 +27,7 @@ functions using either of the following two options:
    :attr:`profiling.n_ops` and :attr:`profiling.min_memory_size`
    to modify the quantify of information printed.
-2. Pass the argument :attr:`profile=True` to the function
-   :func:`theano.function <function.function>`. And then call
-   :attr:`f.profile.print_summary()` for a single function.
+2. Pass the argument :attr:`profile=True` to the function :func:`theano.function <function.function>`. And then call :attr:`f.profile.print_summary()` for a single function.
    - Use this option when you want to profile not all the
      functions but one or more specific function(s).
    - You can also combine the profile of many functions:
...
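Option 1 above refers to Theano's configuration flags. Assuming the standard `THEANO_FLAGS` environment mechanism, profiling of every compiled function can be switched on without touching the script (a configuration fragment, not code from this commit):

```
THEANO_FLAGS=profile=True python my_script.py
```

Option 2 instead enables profiling per function, which is why the doc recommends it when only one or a few specific functions are of interest.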
@@ -200,20 +200,25 @@ def shared_constructor(ctor, remove=False):
 def shared(value, name=None, strict=False, allow_downcast=None, **kwargs):
-    """
-    Return a SharedVariable Variable, initialized with a copy or
+    """Return a SharedVariable Variable, initialized with a copy or
     reference of `value`.
-    This function iterates over
-    :ref:`constructor functions <shared_constructor>`
-    to find a suitable SharedVariable subclass.
-    The suitable one is the first constructor that accept the given value.
+    This function iterates over constructor functions to find a
+    suitable SharedVariable subclass. The suitable one is the first
+    constructor that accept the given value. See the documentation of
+    :func:`shared_constructor` for the definition of a contructor
+    function.
     This function is meant as a convenient default. If you want to use a
     specific shared variable constructor, consider calling it directly.
     ``theano.shared`` is a shortcut to this function.
+    .. attribute:: constructors
+        A list of shared variable constructors that will be tried in reverse
+        order.
     Notes
     -----
     By passing kwargs, you effectively limit the set of potential constructors
@@ -229,11 +234,6 @@ def shared(value, name=None, strict=False, allow_downcast=None, **kwargs):
     This parameter allows you to create for example a `row` or `column` 2d
     tensor.
-    .. attribute:: constructors
-        A list of shared variable constructors that will be tried in reverse
-        order.
     """
     try:
...
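The docstring edited above describes `shared()` trying registered constructors in reverse order and returning the result of the first one that accepts the value. A hypothetical, self-contained sketch of that dispatch pattern (plain Python; the names mirror but do not reproduce Theano's code):

```python
# Registry of constructor functions; later registrations take priority
# because shared() iterates over the list in reverse order.
constructors = []

def shared_constructor(ctor):
    constructors.append(ctor)
    return ctor

@shared_constructor
def generic_shared(value, **kwargs):
    # fallback: accepts anything
    return ('generic', value)

@shared_constructor
def scalar_shared(value, **kwargs):
    # specialized: only accepts plain scalars
    if not isinstance(value, (int, float)):
        raise TypeError('not a scalar')
    return ('scalar', value)

def shared(value, **kwargs):
    for ctor in reversed(constructors):
        try:
            return ctor(value, **kwargs)
        except TypeError:
            continue
    raise TypeError('No suitable constructor for %r' % (value,))

print(shared(3.0))     # ('scalar', 3.0) -- the most recent constructor wins
print(shared('text'))  # ('generic', 'text') -- falls back when scalar_shared rejects
```

Passing extra kwargs narrows the candidate set in the real function because constructors that do not understand them raise and are skipped, which is what the "By passing kwargs..." note refers to.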
@@ -273,7 +273,7 @@ def sp_ones_like(x):
     Returns
     -------
-    matrix
+    A sparse matrix
         The same as `x` with data changed for ones.
     """
@@ -293,7 +293,7 @@ def sp_zeros_like(x):
     Returns
     -------
-    matrix
+    A sparse matrix
         The same as `x` with zero entries for all element.
     """
@@ -1765,7 +1765,7 @@ def row_scale(x, s):
     Returns
     -------
-    matrix
+    A sparse matrix
         A sparse matrix in the same format as `x` whose each row has been
         multiplied by the corresponding element of `s`.
@@ -2070,7 +2070,7 @@ def clean(x):
     Returns
     -------
-    matrix
+    A sparse matrix
         The same as `x` with indices sorted and zeros
         removed.
@@ -2166,7 +2166,7 @@ y
     Returns
     -------
-    matrix
+    A sparse matrix
         The sum of the two sparse matrices element wise.
     Notes
@@ -2270,7 +2270,7 @@ y
     Returns
     -------
-    matrix
+    A sparse matrix
         A sparse matrix containing the addition of the vector to
         the data of the sparse matrix.
@@ -2297,7 +2297,7 @@ def add(x, y):
     Returns
     -------
-    matrix
+    A sparse matrix
         `x` + `y`
     Notes
@@ -2348,7 +2348,7 @@ def sub(x, y):
     Returns
     -------
-    matrix
+    A sparse matrix
         `x` - `y`
     Notes
@@ -2547,7 +2547,7 @@ y
     Returns
     -------
-    matrix
+    A sparse matrix
         The product x * y element wise.
     Notes
@@ -2572,7 +2572,7 @@ def mul(x, y):
     Returns
     -------
-    matrix
+    A sparse matrix
         `x` + `y`
     Notes
@@ -3720,7 +3720,7 @@ def structured_dot(x, y):
     Returns
     -------
-    matrix
+    A sparse matrix
         The dot product of `a` and `b`.
     Notes
...
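The sparse docstrings above describe operations such as "the same as `x` with data changed for ones", i.e. transforming a matrix's stored data while keeping its sparsity structure. A SciPy sketch of the `sp_ones_like` semantics (assumes `scipy` is available; this is an illustration, not Theano's symbolic op):

```python
import numpy as np
import scipy.sparse as sp

def sp_ones_like(x):
    """Return a copy of sparse matrix `x` whose stored entries are all 1."""
    y = x.copy()
    y.data = np.ones_like(y.data)  # same sparsity pattern, data replaced by ones
    return y

m = sp.csr_matrix(np.array([[0.0, 2.0], [3.0, 0.0]]))
print(sp_ones_like(m).toarray())
# [[0. 1.]
#  [1. 0.]]
```

Only the explicitly stored entries are changed, which is why the zero positions stay zero.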
@@ -461,7 +461,7 @@ class Elemwise(OpenMPOp):
     scalar.ScalarOp to get help about controlling the output type)
     Parameters
-    -----------
+    ----------
     scalar_op
         An instance of a subclass of scalar.ScalarOp which works uniquely
         on scalars.
...