Commit 75b1c227 authored by hantek

fixed all warnings in doc. added the sphinx -m flag in docgen

Parent 25c0f5e5
......@@ -5,7 +5,7 @@
 :mod:`shared` - defines theano.shared
 ===========================================
-.. module:: shared
+.. module:: theano.compile.sharedvalue
    :platform: Unix, Windows
    :synopsis: defines theano.shared and related classes
 .. moduleauthor:: LISA
......@@ -47,7 +47,7 @@
     :type: class:`Container`
-.. autofunction:: theano.compile.sharedvalue.shared
+.. autofunction:: shared
 .. function:: shared_constructor(ctor)
......
......@@ -104,7 +104,7 @@ TODO: Give examples on how to use these things! They are pretty complicated.
     as a manual replacement for nnet.conv2d.
 - :func:`GpuCorrMM <theano.sandbox.cuda.blas.GpuCorrMM>`
     This is a GPU-only 2d correlation implementation taken from
-    `caffe <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cu>`_
+    `caffe's CUDA implementation <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cu>`_
     and also used by Torch. It does not flip the kernel.
     For each element in a batch, it first creates a
......@@ -122,7 +122,7 @@ TODO: Give examples on how to use these things! They are pretty complicated.
     If using it, please see the warning about a bug in CUDA 5.0 to 6.0 below.
 - :func:`CorrMM <theano.tensor.nnet.corr.CorrMM>`
     This is a CPU-only 2d correlation implementation taken from
-    `caffe <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cpp>`_
+    `caffe's cpp implementation <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cpp>`_
     and also used by Torch. It does not flip the kernel. As it provides a gradient,
     you can use it as a replacement for nnet.conv2d. For convolutions done on
     CPU, nnet.conv2d will be replaced by CorrMM. To explicitly disable it, set
......
......@@ -21,7 +21,7 @@ object for each such variable, and draw from it as necessary. We will call this
 random numbers a *random stream*.
 For an example of how to use random numbers, see
-:ref:`using_random_numbers`.
+:ref:`Using Random Numbers <using_random_numbers>`.
 Reference
......
......@@ -89,7 +89,7 @@ The proposal is for two new ways of creating a *shared* variable:
 def shared(value, name=None, strict=False, **kwargs):
     """Return a SharedVariable Variable, initialized with a copy or reference of `value`.
-    This function iterates over constructor functions (see `shared_constructor`) to find a
+    This function iterates over constructor functions (see :func:`shared_constructor`) to find a
     suitable SharedVariable subclass.
     :note:
......
......@@ -56,7 +56,7 @@ if __name__ == '__main__':
 def call_sphinx(builder, workdir, extraopts=None):
     import sphinx
     if extraopts is None:
-        extraopts = [] # '-W']
+        extraopts = ['-W']
     if not options['--cache'] and files is None:
         extraopts.append('-E')
     docpath = os.path.join(throot, 'doc')
......
......@@ -27,9 +27,7 @@ functions using either of the following two options:
    :attr:`profiling.n_ops` and :attr:`profiling.min_memory_size`
    to modify the quantify of information printed.
-2. Pass the argument :attr:`profile=True` to the function
-   :func:`theano.function <function.function>`. And then call
-   :attr:`f.profile.print_summary()` for a single function.
+2. Pass the argument :attr:`profile=True` to the function :func:`theano.function <function.function>`. And then call :attr:`f.profile.print_summary()` for a single function.
    - Use this option when you want to profile not all the
      functions but one or more specific function(s).
    - You can also combine the profile of many functions:
......
......@@ -200,20 +200,25 @@ def shared_constructor(ctor, remove=False):
 def shared(value, name=None, strict=False, allow_downcast=None, **kwargs):
-    """
-    Return a SharedVariable Variable, initialized with a copy or
+    """Return a SharedVariable Variable, initialized with a copy or
     reference of `value`.
-    This function iterates over
-    :ref:`constructor functions <shared_constructor>`
-    to find a suitable SharedVariable subclass.
-    The suitable one is the first constructor that accept the given value.
+    This function iterates over constructor functions to find a
+    suitable SharedVariable subclass. The suitable one is the first
+    constructor that accept the given value. See the documentation of
+    :func:`shared_constructor` for the definition of a contructor
+    function.
     This function is meant as a convenient default. If you want to use a
     specific shared variable constructor, consider calling it directly.
     ``theano.shared`` is a shortcut to this function.
+
+    .. attribute:: constructors
+
+        A list of shared variable constructors that will be tried in reverse
+        order.
     Notes
     -----
     By passing kwargs, you effectively limit the set of potential constructors
......@@ -229,11 +234,6 @@ def shared(value, name=None, strict=False, allow_downcast=None, **kwargs):
     This parameter allows you to create for example a `row` or `column` 2d
     tensor.
-    .. attribute:: constructors
-
-        A list of shared variable constructors that will be tried in reverse
-        order.
-
     """
     try:
......
......@@ -273,7 +273,7 @@ def sp_ones_like(x):
     Returns
     -------
     matrix
-        A sparse matrix
+        The same as `x` with data changed for ones.
     """
......@@ -293,7 +293,7 @@ def sp_zeros_like(x):
     Returns
     -------
     matrix
-        A sparse matrix
+        The same as `x` with zero entries for all element.
     """
......@@ -1765,7 +1765,7 @@ def row_scale(x, s):
     Returns
     -------
     matrix
-        A sparse matrix
+        A sparse matrix in the same format as `x` whose each row has been
         multiplied by the corresponding element of `s`.
......@@ -2070,7 +2070,7 @@ def clean(x):
     Returns
     -------
     matrix
-        A sparse matrix
+        The same as `x` with indices sorted and zeros
         removed.
......@@ -2166,7 +2166,7 @@ y
     Returns
     -------
     matrix
-        A sparse matrix
+        The sum of the two sparse matrices element wise.
     Notes
......@@ -2270,7 +2270,7 @@ y
     Returns
     -------
     matrix
-        A sparse matrix
+        A sparse matrix containing the addition of the vector to
         the data of the sparse matrix.
......@@ -2297,7 +2297,7 @@ def add(x, y):
     Returns
     -------
     matrix
-        A sparse matrix
+        `x` + `y`
     Notes
......@@ -2348,7 +2348,7 @@ def sub(x, y):
     Returns
     -------
     matrix
-        A sparse matrix
+        `x` - `y`
     Notes
......@@ -2547,7 +2547,7 @@ y
     Returns
     -------
     matrix
-        A sparse matrix
+        The product x * y element wise.
     Notes
......@@ -2572,7 +2572,7 @@ def mul(x, y):
     Returns
     -------
     matrix
-        A sparse matrix
+        `x` * `y`
     Notes
......@@ -3720,7 +3720,7 @@ def structured_dot(x, y):
     Returns
     -------
     matrix
-        A sparse matrix
+        The dot product of `x` and `y`.
     Notes
......
......@@ -461,7 +461,7 @@ class Elemwise(OpenMPOp):
     scalar.ScalarOp to get help about controlling the output type)
     Parameters
-    -----------
+    ----------
     scalar_op
         An instance of a subclass of scalar.ScalarOp which works uniquely
         on scalars.
......