Commit 043cdaea authored by Francesco Visin

Fix multiple problems in elemwise docstrings

Parent 073b220b
@@ -8,7 +8,7 @@ Contributing
============
You want to contribute to Theano? That is great! This page explains our
workflow and some resources for doing so.
Looking for an idea for a first contribution? Check `github issue
<https://github.com/Theano/Theano/issues?q=is%3Aopen+is%3Aissue+label%3A%22Easy+fix%22>`
@@ -39,7 +39,7 @@ To get up to speed, you'll need to
.. _Sphinx: http://sphinx.pocoo.org/
.. _reStructuredText: http://docutils.sourceforge.net/rst.html
.. _Allowed docstring sections in Napoleon: https://sphinxcontrib-napoleon.readthedocs.org/en/latest/#docstring-sections
.. _NumPy documentation: http://docs.scipy.org/numpy/
.. _unittest: http://docs.python.org/library/unittest.html
.. _nose: http://nose.readthedocs.org/en/latest/
@@ -220,12 +220,57 @@ GitHub, to let them know that you have submitted a fix.
Tips for Quality Contributions
==============================

* All the code should be properly tested.

* The code should be compatible with Python 2.6 and above, as well as Python
  3.3 and above (using `six` if needed).

* All the code should respect the
  `PEP8 Code Style Guide <http://www.python.org/dev/peps/pep-0008>`_.

* The docstrings of all the classes and functions should respect the
  `PEP257 <https://www.python.org/dev/peps/pep-0257/>`_ rules and follow the
  `Numpy docstring standard
  <https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt>`_.

Each point is covered in more detail below.
Unit tests
----------
When you submit a pull request, your changes will automatically be
tested via Travis-CI. This will post the results of the tests with a
little icon next to your commit. A yellow circle means the tests are
running. A red X means the tests failed and a green circle means the
tests passed.
Just because the tests run automatically does not mean you shouldn't
run them yourself to make sure everything is all right. You can run
only the portion you are modifying to go faster, and let Travis
make sure there are no global impacts.

Also, note that Travis does not test GPU code, because there are
no GPUs on the test nodes.
To run the test suite with the default options, you can follow the
instructions of :ref:`testing_installation`.
Each night we execute all the unit tests automatically, with several
sets of options. The result is sent by email to the `theano-buildbot`_
mailing list.
For more detail, see :ref:`metadocumentation_nightly_build`.
To run all the tests with the same configuration as the buildbot, run
this script:
.. code-block:: bash

    theano/misc/do_nightly_build
This script accepts arguments that it forwards to nosetests. You can
run only some tests or enable pdb by giving the equivalent nosetests
parameters.
Setting up your Editor for PEP8
-------------------------------
@@ -260,7 +305,7 @@ To setup VIM:
#. Edit ``~/.vimrc`` and add the lines::

.. code-block:: txt

    set nocompatible " be iMproved, required
    filetype off " required
@@ -377,42 +422,70 @@ Then in your ``~/.emacs`` file, add this:
    '(flymake-errline ((((class color)) (:underline "red"))))
    '(flymake-warnline ((((class color)) (:underline "yellow")))))
Documentation and docstrings
----------------------------

* The documentation and the API documentation are generated using `Sphinx`_.

* The documentation should be written in `reStructuredText`_ and the
  docstrings of all the classes and functions should respect the
  `PEP257 <https://www.python.org/dev/peps/pep-0257/>`_ rules and follow the
  `Numpy docstring standard
  <https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt>`_.

* Split the docstrings in sections, according to the `Allowed docstring
  sections in Napoleon`_.

* To cross-reference other objects (e.g. reference other classes or methods) in
  the docstrings, use the
  `cross-referencing objects <http://www.sphinx-doc.org/en/stable/domains.html#cross-referencing-python-objects>`_
  syntax. ``:py`` can be omitted, see e.g. this
  `stackoverflow answer <http://stackoverflow.com/a/7754189>`_.

* See :ref:`metadocumentation`, for some information on how to generate the
  documentation.

A Docstring Example
~~~~~~~~~~~~~~~~~~~

Here is an example of how to add a docstring to a class.

.. testcode:: python

    import theano

    class DoubleOp(theano.Op):
        """
        Double each element of a tensor.

        Parameters
        ----------
        x : tensor
            Input tensor

        Returns
        -------
        tensor
            A tensor of the same shape and dtype as the input with all
            values doubled.

        Notes
        -----
        This is a test note.

        See Also
        --------
        :class:`~theano.tensor.elemwise.Elemwise` : You can use this to replace
        this example. Just execute `x * 2` with x being a Theano variable.

        .. versionadded:: 0.6
        """

This is how it will show up for files that we auto-list in the library
documentation:

.. automodule:: theano.misc.doubleop
    :members:
More Advanced Git Usage
......
@@ -875,66 +875,12 @@ Modify and execute to compute: numpy.add and numpy.subtract.
Modify and execute the example to return two outputs: x + y
and x - y.

.. _Documentation:

Documentation and Coding Style
------------------------------

Please always respect the :ref:`quality_contributions` or your contribution
will not be accepted.

NanGuardMode and AllocEmpty
---------------------------
......
@@ -76,18 +76,22 @@ class DimShuffle(Op):
    If True, the output will be a view of the input.
    If False (default), the output will be a copy of the input.

    Note
    ----
    If `j = new_order[i]` is an index, the output's ith dimension
    will be the input's jth dimension.
    If `new_order[i]` is `x`, the output's ith dimension will
    be 1 and Broadcast operations will be allowed to do broadcasting
    over that dimension.

    If `input.broadcastable[i] == False` then `i` must be found in new_order.
    Broadcastable dimensions, on the other hand, can be discarded.

    Note
    ----
    .. code-block:: python

        DimShuffle((False, False, False), ['x', 2, 'x', 0, 1])

    This op will only work on 3d tensors with no broadcastable
    dimensions. The first dimension will be broadcastable,
@@ -96,7 +100,9 @@ class DimShuffle(Op):
    shape (20, 30, 40), the resulting tensor will have dimensions
    (1, 40, 1, 20, 30). (AxBxC tensor is mapped to 1xCx1xAxB tensor)

    .. code-block:: python

        DimShuffle((True, False), [1])

    This op will only work on 2d tensors with the first dimension
    broadcastable.
@@ -105,17 +111,20 @@ class DimShuffle(Op):
    If the tensor has shape (1, 20), the resulting tensor will have shape
    (20, ).

    Example
    -------
    .. code-block:: python

        DimShuffle((), ['x'])  # make a 0d (scalar) into a 1d vector
        DimShuffle((False, False), [0, 1])  # identity
        DimShuffle((False, False), [1, 0])  # inverts the 1st and 2nd dimensions
        DimShuffle((False,), ['x', 0])  # make a row out of a 1d vector
                                        # (N to 1xN)
        DimShuffle((False,), [0, 'x'])  # make a column out of a 1d vector
                                        # (N to Nx1)
        DimShuffle((False, False, False), [2, 0, 1])  # AxBxC to CxAxB
        DimShuffle((False, False), [0, 'x', 1])  # AxB to Ax1xB
        DimShuffle((False, False), [1, 'x', 0])  # AxB to Bx1xA

    The reordering of the dimensions can be done with the numpy.transpose
    function.
@@ -487,15 +496,15 @@ class Elemwise(OpenMPOp):
    Examples
    --------
    >>> Elemwise(add)  # represents + on tensors (x + y)
    >>> Elemwise(add, {0: 0})  # represents the += operation (x += y)
    >>> Elemwise(add, {0: 1})  # represents += on the second argument (y += x)
    >>> Elemwise(mul)(rand(10, 5), rand(1, 5))  # the second input is completed
    >>> # along the first dimension to match the first input
    >>> Elemwise(true_div)(rand(10, 5), rand(10, 1))  # same but along the
    >>> # second dimension
    >>> Elemwise(int_div)(rand(1, 5), rand(10, 1))  # the output has size (10, 5)
    >>> Elemwise(log)(rand(3, 4, 5))

    """
@@ -1309,19 +1318,22 @@ class CAReduce(Op):
        - List of dimensions that we want to reduce
        - If None, all dimensions are reduced

    Note
    ----
    .. code-block:: python

        CAReduce(add)      # sum (ie, acts like the numpy sum operation)
        CAReduce(mul)      # product
        CAReduce(maximum)  # max
        CAReduce(minimum)  # min
        CAReduce(or_)      # any  # not lazy
        CAReduce(and_)     # all  # not lazy
        CAReduce(xor)      # an output bit is 1 if an odd number of input
                           # bits at that position were 1, and 0 if the
                           # count was even

    In order to (eventually) optimize memory usage patterns,
    CAReduce makes zero guarantees on the order in which it
    iterates over the dimensions and the elements of the
    array(s). Therefore, to ensure consistent variables, the scalar
    operation represented by the reduction must be both commutative
@@ -1707,7 +1719,7 @@ class All(CAReduce):
    """ Applies `bitwise and` to all the values of a tensor along the
    specified axis(es).

    Equivalent to `CAReduce(scalar.and\_, axis=axis)`.
    """
@@ -1739,7 +1751,7 @@ class Any(CAReduce):
    """ Applies `bitwise or` to all the values of a tensor along the
    specified axis(es).

    Equivalent to `CAReduce(scalar.or\_, axis=axis)`.
    """
@@ -1790,17 +1802,19 @@ class CAReduceDtype(CAReduce):
        It must be commutative and associative.
    axis
        * the dimension along which we want to reduce
        * list of dimensions that we want to reduce
        * if None, all dimensions are reduced
    dtype
        The dtype of the returned tensor. If None, then we use the default
        dtype which is the same as the input tensor's dtype except when:

        * the input dtype is a signed integer of precision < 64 bit, in which
          case we use int64
        * the input dtype is an unsigned integer of precision < 64 bit, in
          which case we use uint64

        This default dtype does _not_ depend on the value of "acc_dtype".
        This behavior is similar in spirit to that of numpy (except numpy
        uses the default machine integer while we always use 64 bit
@@ -1810,10 +1824,11 @@ class CAReduceDtype(CAReduce):
        The dtype of the internal accumulator.
        If None (default), we use the dtype in the list below,
        or the input dtype if its precision is higher:

        * for int dtypes, we use at least int64;
        * for uint dtypes, we use at least uint64;
        * for float dtypes, we use at least float64;
        * for complex dtypes, we use at least complex128.
    """
@@ -1942,7 +1957,7 @@ class Sum(CAReduceDtype):
    """
    Sums all the values of a tensor along the specified axis(es).

    Equivalent to `CAReduceDtype(scalar.add, axis=axis, dtype=dtype)`,
    with the difference that this defines the gradient of sum wrt its
    tensor input.
@@ -2017,7 +2032,7 @@ class Prod(CAReduceDtype):
    """
    Multiplies all the values of a tensor along the specified axis(es).

    Equivalent to `CAReduce(scalar.prod, axis=axis)`, with the
    difference that this defines the gradient of prod wrt its tensor
    input.
@@ -2066,9 +2081,10 @@ class Prod(CAReduceDtype):
    With zeros, things get more complicated. For a given group, we have 3
    cases:

    * No zeros in the group. Use previous trick.
    * If only one zero is present, then the gradient for that element is
      non-zero, but is zero for all others.
    * If more than one zero is present, then all the derivatives are zero.

    For the last two cases (with 1 or more zeros), we can't use the
......
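The three gradient cases for Prod described above can be sketched in plain numpy (`prod_grad` is a hypothetical helper for illustration, not Theano's implementation):

```python
import numpy as np

def prod_grad(x):
    """Hypothetical sketch of d(prod(x))/dx for the three cases above."""
    zeros = (x == 0)
    n_zeros = zeros.sum()
    if n_zeros == 0:
        # no zeros: grad_i = prod(x) / x_i ("previous trick")
        return np.prod(x) / x
    if n_zeros == 1:
        # one zero: the gradient is zero everywhere except at the zero's
        # position, where it equals the product of the non-zero elements
        g = np.zeros_like(x)
        g[zeros] = np.prod(x[~zeros])
        return g
    # two or more zeros: all derivatives vanish
    return np.zeros_like(x)

print(prod_grad(np.array([2.0, 3.0, 4.0])))  # [12.  8.  6.]
print(prod_grad(np.array([2.0, 0.0, 4.0])))  # [0.  8.  0.]
```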