Commit 1b387826 authored by Taesup (TS) Kim, committed by Francesco Visin

Check RST files in '/theano/doc/' to follow numpy's docstring

Parent 63990436
......@@ -219,27 +219,11 @@ GitHub, to let them know that you have submitted a fix.
Tips for Quality Contributions
==============================
Coding Style Auto Check
Coding Style
-----------------------
In Theano, we use the same coding style as the `Pylearn
<http://deeplearning.net/software/pylearn/v2_planning/API_coding_style.html>`_
project, except that we don't use the numpy docstring standard.
The principal thing to know is that we follow the
`PEP 8 <http://www.python.org/dev/peps/pep-0008/>`_ coding style.
We use git hooks provided in the project `pygithooks
<https://github.com/lumberlabs/pygithooks>`_ to validate that commits
respect pep8. This happens when each user commits, not when we
push/merge to the Theano repository. Github doesn't allow us to have
code executed when we push to the repository. So we ask all
contributors to use those hooks.
For historical reasons, not all files currently respect pep8. We decided
to fix everything incrementally, so not all files respect it yet. We
therefore strongly suggest that you use the "increment" pygithooks
config option to have a good workflow. See the pygithooks main page
for how to set it up for Theano and how to enable this option.
Please always respect the :ref:`Documentation` guidelines,
or your contribution will not be accepted.
Setting up your Editor for PEP8
......@@ -259,49 +243,49 @@ Detection of warnings and errors is done by the `pep8`_ script
errors). Syntax highlighting and general integration into Vim is done by
the `Syntastic`_ plugin for Vim.
To install flake8, simply run::
pip install flake8
You can use ``easy_install`` instead of ``pip``, and ``pep8`` instead of
``flake8`` if you prefer. The important thing is that the ``flake8`` or
``pep8`` executable ends up in your ``$PATH``.
To setup VIM:
To install Syntastic, according to its documentation, the easiest way is
to install `pathogen.vim`_ first.
#. Install flake8 (if not already installed) with::
Here's a relevant extract of pathogen.vim's installation instructions:
Install to ``~/.vim/autoload/pathogen.vim``. Or copy and paste::
mkdir -p ~/.vim/autoload ~/.vim/bundle; \
curl -so ~/.vim/autoload/pathogen.vim \
https://raw.github.com/tpope/vim-pathogen/HEAD/autoload/pathogen.vim
pip install flake8
If you don't have ``curl``, use ``wget -O`` instead.
.. note:: You can use ``easy_install`` instead of ``pip``, and ``pep8``
instead of ``flake8`` if you prefer. The important thing is that the
``flake8`` or ``pep8`` executable ends up in your ``$PATH``.
By the way, if you're using Windows, change all occurrences of ``~/.vim``
to ``~\vimfiles``.
#. Install vundle with::
Add this to your vimrc::
git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim
call pathogen#infect()
#. Edit ``~/.vimrc`` and add the lines::
Now any plugins you wish to install can be extracted to a subdirectory
under ``~/.vim/bundle``, and they will be added to the ``'runtimepath'``.
.. code-block:: python
Now, we can install Syntastic. From the installation instructions:
set nocompatible " be iMproved, required
filetype off " required
" set the runtime path to include Vundle and initialize
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
.. code-block:: bash
Plugin 'gmarik/Vundle.vim' " let Vundle manage Vundle (required!)
Plugin 'scrooloose/syntastic'
Plugin 'jimf/vim-pep8-text-width'
cd ~/.vim/bundle
git clone https://github.com/scrooloose/syntastic.git
" Syntastic settings
" You can run checkers explicitly by calling :SyntasticCheck <checker
let g:syntastic_python_checkers = ['flake8'] "use one of the following checkers:
" flake8, pyflakes, pylint, python (native checker)
let g:syntastic_enable_highlighting = 1 "highlight errors and warnings
let g:syntastic_style_error_symbol = ">>" "error symbol
let g:syntastic_warning_symbol = ">>" "warning symbol
let g:syntastic_check_on_open = 1
let g:syntastic_auto_jump = 0 "do not jump to errors when detected
Then reload vim, run ``:Helptags``, and check out ``:help syntastic.txt``.
#. Open a new vim and run ``:PluginInstall`` to automatically install the
plugins. When the installation is done, close the installation "window" with ``:q``. From now on Vim will check for PEP8 errors and highlight them whenever a file is saved.
From now on, when you save into a Python file, a syntax check will be
run, and results will be displayed using Vim's `quickfix`_ mechanism
(more precisely, a location-list). A few useful commands are:
A few useful commands
"""""""""""""""""""""
* Open the list of errors: ``:lopen``, that can be abbreviated in ``:lop``
(denoted ``:lop[en]``).
......
......@@ -876,11 +876,24 @@ Modify and execute the example to return two outputs: x + y
and x - y.
Documentation
-------------
.. _Documentation:
See :ref:`metadocumentation` for information on how to generate
the documentation.
Documentation and Coding Style
------------------------------
* The code should be compatible with Python 2.6 and above, as well as Python 3.3 and above (using 2to3 for conversion
to Python 3.x).
* All the code should respect the `PEP8 Code Style Guide <http://www.python.org/dev/peps/pep-0008>`_.
* The docstrings of all the classes and functions should respect the
`PEP257 <https://www.python.org/dev/peps/pep-0257/>`_ rules and follow the `Numpy docstring standard
<https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt>`_.
* To cross-reference other objects (e.g. reference other classes or methods) in the docstrings, use the
`cross-referencing objects <http://www.sphinx-doc.org/en/stable/domains.html#cross-referencing-python-objects>`_ syntax. ``:py`` can be omitted, see e.g. this `stackoverflow answer <http://stackoverflow.com/a/7754189>`_.
* See :ref:`metadocumentation` for information on how to generate the documentation.
Here is an example of how to add a docstring to a class.
......@@ -889,19 +902,29 @@ Here is an example how to add docstring to a class.
import theano
class DoubleOp(theano.Op):
""" Double each element of a tensor.
"""
Double each element of a tensor.
Parameters
----------
x : tensor
Input tensor
:param x: input tensor.
Returns
-------
tensor
a tensor of the same shape and dtype as the input with all
values doubled.
:return: a tensor of the same shape and dtype as the input with all
values doubled.
Notes
-----
this is a test note
:note:
this is a test note
See Also
--------
:class:`~theano.tensor.elemwise.Elemwise` : You can use this to replace
this example. Just execute `x * 2` with x being a Theano variable.
:seealso:
You can use the elemwise op to replace this example.
Just execute `x * 2` with x being a Theano variable.
.. versionadded:: 0.6
"""
......
......@@ -476,8 +476,10 @@ Here is an example showing how to use ``verify_grad`` on an Op instance:
.. testcode::
def test_flatten_outdimNone():
# Testing gradient w.r.t. all inputs of an op (in this example the op
# being used is Flatten(), which takes a single input).
"""
Testing gradient w.r.t. all inputs of an op (in this example the op
being used is Flatten(), which takes a single input).
"""
a_val = numpy.asarray([[0,1,2],[3,4,5]], dtype='float64')
rng = numpy.random.RandomState(42)
tensor.verify_grad(tensor.Flatten(), [a_val], rng=rng)
......
===================================================================
:mod:`tensor.elemwise` -- Tensor Elemwise
===================================================================
.. testsetup::
from theano.tensor.elemwise import *
.. module:: tensor.elemwise
:platform: Unix, Windows
:synopsis: Tensor Elemwise
.. moduleauthor:: LISA
.. automodule:: theano.tensor.elemwise
:members:
......@@ -23,6 +23,7 @@ They are grouped into the following sections:
shared_randomstreams
signal/index
utils
elemwise
extra_ops
io
opt
......
......@@ -5,19 +5,29 @@ import theano
class DoubleOp(theano.Op):
""" Double each element of a tensor.
"""
Double each element of a tensor.
:param x: input tensor.
Parameters
----------
x : tensor
Input tensor
:return: a tensor of the same shape and dtype as the input with all
Returns
-------
tensor
a tensor of the same shape and dtype as the input with all
values doubled.
:note:
this is a test note
Notes
-----
this is a test note
See Also
--------
:class:`~theano.tensor.elemwise.Elemwise` : You can use this to replace
this example. Just execute `x * 2` with x being a Theano variable.
:seealso:
You can use the elemwise op to replace this example.
Just execute `x * 2` with x being a Theano variable.
.. versionadded:: 0.6
"""
......
......@@ -117,8 +117,8 @@ class DimShuffle(Op):
DimShuffle((False, False), [0, 'x', 1]) -> AxB to Ax1xB
DimShuffle((False, False), [1, 'x', 0]) -> AxB to Bx1xA
The reordering of the dimensions can be done in numpy with the
transpose function.
The reordering of the dimensions can be done with the numpy.transpose
function.
Adding, subtracting dimensions can be done with reshape.
"""
......@@ -300,13 +300,14 @@ class DimShuffle(Op):
# get the copy / view of the input depending on whether we're doingi
# things inplace or not.
if self.inplace:
get_base = [
'{ PyArrayObject * %(basename)s = %(input)s', 'Py_INCREF((PyObject*)%(basename)s)']
get_base = ['{ PyArrayObject * %(basename)s = %(input)s',
'Py_INCREF((PyObject*)%(basename)s)']
else:
get_base = [('{ PyArrayObject * %(basename)s = '
'(PyArrayObject*)PyArray_FromAny((PyObject*)%(input)s,'
' NULL, 0, 0, NPY_ARRAY_ALIGNED|NPY_ARRAY_ENSURECOPY,'
' NULL)')]
get_base = [
('{ PyArrayObject * %(basename)s = '
'(PyArrayObject*)PyArray_FromAny((PyObject*)%(input)s,'
' NULL, 0, 0, NPY_ARRAY_ALIGNED|NPY_ARRAY_ENSURECOPY,'
' NULL)')]
shape_statements = ['npy_intp dimensions[%i]' % nd_out]
for i, o in enumerate(self.new_order):
......@@ -343,11 +344,13 @@ class DimShuffle(Op):
)
for i in xrange(nd_out - 2, -1, -1):
strides_statements.append(
"if (strides[%(i)s] == 0) strides[%(i)s] = strides[%(i)s+1] * dimensions[%(i)s+1]" % dict(i=str(i)))
"if (strides[%(i)s] == 0) strides[%(i)s] = strides[%(i)s+1] * "
"dimensions[%(i)s+1]" % dict(i=str(i)))
#
# PyObject* PyArray_New(PyTypeObject* subtype, int nd, npy_intp* dims, int type_num,
# npy_intp* strides, void* data, int itemsize, int flags, PyObject* obj)
# PyObject* PyArray_New(PyTypeObject* subtype, int nd, npy_intp* dims,
# int type_num, npy_intp* strides, void* data,
# int itemsize, int flags, PyObject* obj)
#
close_bracket = [
# create a new array,
......@@ -781,7 +784,8 @@ class Elemwise(OpenMPOp):
# the gradient contains a constant, translate it as
# an equivalent TensorType of size 1 and proper number of
# dimensions
res = theano.tensor.constant(numpy.asarray(r.data), dtype=r.type.dtype)
res = theano.tensor.constant(numpy.asarray(r.data),
dtype=r.type.dtype)
return DimShuffle((), ['x'] * nd, inplace=False)(res)
new_r = Elemwise(node.op, {})(
*[transform(ipt) for ipt in node.inputs])
......@@ -1127,15 +1131,20 @@ class Elemwise(OpenMPOp):
idtypes + list(real_odtypes))])
preloops = {}
for i, (loop_order, dtype) in enumerate(zip(loop_orders, dtypes)):
for i, (loop_order, dtype) in enumerate(zip(loop_orders,
dtypes)):
for j, index in enumerate(loop_order):
if index != 'x':
preloops.setdefault(j, "")
preloops[j] += ("%%(lv%(i)s)s_iter = (%(dtype)s*)(PyArray_DATA(%%(lv%(i)s)s));\n" % locals()) % sub
preloops[j] += ("%%(lv%(i)s)s_iter = (%(dtype)s*)"
"(PyArray_DATA(%%(lv%(i)s)s));\n"
% locals()) % sub
break
else: # all broadcastable
preloops.setdefault(0, "")
preloops[0] += ("%%(lv%(i)s)s_iter = (%(dtype)s*)(PyArray_DATA(%%(lv%(i)s)s));\n" % locals()) % sub
preloops[0] += ("%%(lv%(i)s)s_iter = (%(dtype)s*)"
"(PyArray_DATA(%%(lv%(i)s)s));\n"
% locals()) % sub
init_array = preloops.get(0, " ")
loop = """
......@@ -1202,7 +1211,8 @@ class Elemwise(OpenMPOp):
dtype_%(x)s& %(x)s_i = ((dtype_%(x)s*) PyArray_DATA(%(x)s))[0];
""" % locals()
if self.openmp:
contig += """#pragma omp parallel for if(n>=%d)""" % (config.openmp_elemwise_minsize)
contig += """#pragma omp parallel for if(n>=%d)
""" % (config.openmp_elemwise_minsize)
contig += """
for(int i=0; i<n; i++){
%(index)s
......@@ -1259,7 +1269,8 @@ class Elemwise(OpenMPOp):
for output in node.outputs])
version.append(self.scalar_op.c_code_cache_version_apply(scalar_node))
for i in node.inputs + node.outputs:
version.append(get_scalar_type(dtype=i.type.dtype).c_code_cache_version())
version.append(
get_scalar_type(dtype=i.type.dtype).c_code_cache_version())
version.append(('openmp', self.openmp))
if all(version):
return tuple(version)
......@@ -1304,11 +1315,10 @@ class CAReduce(Op):
CAReduce(mul) -> product
CAReduce(maximum) -> max
CAReduce(minimum) -> min
CAReduce(or_) -> any # not lazy
CAReduce(and_) -> all # not lazy
CAReduce(or) -> any # not lazy
CAReduce(and) -> all # not lazy
CAReduce(xor) -> a bit is 1 if there was an odd number of bits at
                 that position that were 1, and 0 if it was an even
                 number.
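The reductions listed above can be sketched with NumPy's ``ufunc.reduce`` (an illustration of the semantics only; the sample array is made up):

```python
import numpy as np

x = np.array([1, 0, 1, 1], dtype='int8')

assert np.add.reduce(x) == 3          # CAReduce(add) -> sum
assert np.multiply.reduce(x) == 0     # CAReduce(mul) -> product
assert np.bitwise_or.reduce(x) == 1   # CAReduce(or)  -> any
assert np.bitwise_and.reduce(x) == 0  # CAReduce(and) -> all
assert np.bitwise_xor.reduce(x) == 1  # odd number (3) of 1 bits -> 1
```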
In order to (eventually) optimize memory usage patterns,
L{CAReduce} makes zero guarantees on the order in which it
......@@ -1507,7 +1517,8 @@ class CAReduce(Op):
if hasattr(self, 'acc_dtype') and self.acc_dtype is not None:
if self.acc_dtype == 'float16':
raise theano.gof.utils.MethodNotDefined("no c_code for float16")
raise theano.gof.utils.MethodNotDefined("no c_code for "
"float16")
acc_type = TensorType(
broadcastable=node.outputs[0].broadcastable,
dtype=self.acc_dtype)
......@@ -1684,7 +1695,8 @@ for(int i=0;i<PyArray_NDIM(%(iname)s);i++){
for output in node.outputs])
version.append(self.scalar_op.c_code_cache_version_apply(scalar_node))
for i in node.inputs + node.outputs:
version.append(get_scalar_type(dtype=i.type.dtype).c_code_cache_version())
version.append(
get_scalar_type(dtype=i.type.dtype).c_code_cache_version())
if all(version):
return tuple(version)
else:
......@@ -1695,7 +1707,7 @@ class All(CAReduce):
""" Applies `bitwise and` to all the values of a tensor along the
specified axis(es).
Equivalent to CAReduce(scalar.and_, axis=axis).
Equivalent to CAReduce(scalar.and, axis=axis).
"""
......@@ -1727,7 +1739,7 @@ class Any(CAReduce):
""" Applies `bitwise or` to all the values of a tensor along the
specified axis(es).
Equivalent to CAReduce(scalar.or_, axis=axis).
Equivalent to CAReduce(scalar.or, axis=axis).
"""
......@@ -2052,13 +2064,11 @@ class Prod(CAReduceDtype):
"incoming gradient", ie. the gradient of the cost relative to the
output/product).
-----
With zeros, things get more complicated. For a given group, we have 3
cases:
* No zeros in the group. Use previous trick.
* If only one zero is present, then the gradient for that element is
non-zero, but is zero for all others.
non-zero, but is zero for all others.
* If more than one zero is present, then all the derivatives are zero.
For the last two cases (with 1 or more zeros), we can't use the
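The three cases can be sketched numerically (a NumPy sketch of the mathematics, not the actual Theano implementation; ``prod_grad`` is a hypothetical helper):

```python
import numpy as np

def prod_grad(x):
    # Gradient of prod(x) w.r.t. each element, handling zeros explicitly.
    zeros = (x == 0)
    nz = zeros.sum()
    if nz == 0:
        return np.prod(x) / x                 # no zeros: prod(x) / x_i
    if nz == 1:
        g = np.zeros_like(x, dtype='float64')
        g[zeros] = np.prod(x[~zeros])         # only the zero gets a nonzero grad
        return g
    return np.zeros_like(x, dtype='float64')  # 2+ zeros: all grads are zero

assert np.allclose(prod_grad(np.array([2., 3., 4.])), [12., 8., 6.])
assert np.allclose(prod_grad(np.array([2., 0., 4.])), [0., 8., 0.])
assert np.allclose(prod_grad(np.array([0., 0., 4.])), [0., 0., 0.])
```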
......@@ -2188,7 +2198,8 @@ class MulWithoutZeros(scalar.BinaryScalarOp):
def c_code_cache_version(self):
return (1,)
mul_without_zeros = MulWithoutZeros(scalar.upcast_out, name='mul_without_zeros')
mul_without_zeros = MulWithoutZeros(scalar.upcast_out,
name='mul_without_zeros')
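The semantics suggested by the name can be sketched in plain Python (an illustration of the intended behavior, not the scalar op's C implementation): a zero operand acts as the identity, so the product of the non-zero factors survives.

```python
def mul_without_zeros(x, y):
    # Multiply, but treat a zero operand as the identity so the
    # product of the non-zero factors is preserved.
    if x == 0:
        return y
    if y == 0:
        return x
    return x * y

assert mul_without_zeros(3.0, 4.0) == 12.0
assert mul_without_zeros(0.0, 4.0) == 4.0
assert mul_without_zeros(0.0, 0.0) == 0.0
```

This is what lets the Prod gradient recover the product of the other, non-zero elements when exactly one zero is present.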
class ProdWithoutZeros(CAReduceDtype):
......