Commit 1b387826, authored by Taesup (TS) Kim, committed by Francesco Visin

Check RST files in '/theano/doc/' to follow numpy's docstring

Parent 63990436
@@ -219,27 +219,11 @@ GitHub, to let them know that you have submitted a fix.
 
 Tips for Quality Contributions
 ==============================
 
-Coding Style Auto Check
------------------------
+Coding Style
+------------
 
-In Theano, we use the same coding style as the `Pylearn
-<http://deeplearning.net/software/pylearn/v2_planning/API_coding_style.html>`_
-project, except that we don't use the numpy docstring standard.
-The principal thing to know is that we follow the
-`PEP 8 <http://www.python.org/dev/peps/pep-0008/>`_ coding style.
-
-We use git hooks provided in the project `pygithooks
-<https://github.com/lumberlabs/pygithooks>`_ to validate that commits
-respect pep8. This happens when each user commits, not when we
-push/merge to the Theano repository. Github doesn't allow us to have
-code executed when we push to the repository. So we ask all
-contributors to use those hooks.
-
-For historic reasons, we currently don't have all files respecting pep8.
-We decided to fix everything incrementally, so not all files respect it
-now. We therefore strongly suggest that you use the "increment" pygithooks
-config option to have a good workflow. See the pygithooks main page
-for how to set it up for Theano and how to enable this option.
+Please always respect the :ref:`Documentation` guidelines
+or your contribution will not be accepted.
 
 Setting up your Editor for PEP8
@@ -259,49 +243,49 @@ Detection of warnings and errors is done by the `pep8`_ script
 errors). Syntax highlighting and general integration into Vim is done by
 the `Syntastic`_ plugin for Vim.
 
-To install flake8, simply run::
+To set up Vim:
 
-    pip install flake8
+#. Install flake8 (if not already installed) with::
 
-You can use ``easy_install`` instead of ``pip``, and ``pep8`` instead of
-``flake8`` if you prefer. The important thing is that the ``flake8`` or
-``pep8`` executable ends up in your ``$PATH``.
+       pip install flake8
 
-To install Syntastic, according to its documentation, the easiest way is
-to install `pathogen.vim`_ first.
+   .. note:: You can use ``easy_install`` instead of ``pip``, and ``pep8``
+      instead of ``flake8`` if you prefer. The important thing is that the
+      ``flake8`` or ``pep8`` executable ends up in your ``$PATH``.
 
-Here's a relevant extract of pathogen.vim's installation instructions:
+#. Install Vundle with::
 
-    Install to ``~/.vim/autoload/pathogen.vim``. Or copy and paste::
+       git clone https://github.com/VundleVim/Vundle.vim.git ~/.vim/bundle/Vundle.vim
 
-        mkdir -p ~/.vim/autoload ~/.vim/bundle; \
-        curl -so ~/.vim/autoload/pathogen.vim \
-            https://raw.github.com/tpope/vim-pathogen/HEAD/autoload/pathogen.vim
+#. Edit ``~/.vimrc`` and add the lines:
 
-    If you don't have ``curl``, use ``wget -O`` instead.
+   .. code-block:: vim
 
-    By the way, if you're using Windows, change all occurrences of ``~/.vim``
-    to ``~\vimfiles``.
+       set nocompatible             " be iMproved, required
+       filetype off                 " required
 
-Add this to your vimrc::
+       " set the runtime path to include Vundle and initialize
+       set rtp+=~/.vim/bundle/Vundle.vim
+       call vundle#begin()
 
-    call pathogen#infect()
+       Plugin 'gmarik/Vundle.vim'   " let Vundle manage Vundle (required!)
+       Plugin 'scrooloose/syntastic'
+       Plugin 'jimf/vim-pep8-text-width'
+       call vundle#end()            " required
 
-Now any plugins you wish to install can be extracted to a subdirectory
-under ``~/.vim/bundle``, and they will be added to the ``'runtimepath'``.
+       " Syntastic settings
+       " You can run checkers explicitly by calling :SyntasticCheck <checker>
+       let g:syntastic_python_checkers = ['flake8']  " use one of the following checkers:
+                                                     " flake8, pyflakes, pylint,
+                                                     " python (native checker)
+       let g:syntastic_enable_highlighting = 1    " highlight errors and warnings
+       let g:syntastic_style_error_symbol = ">>"  " error symbol
+       let g:syntastic_warning_symbol = ">>"      " warning symbol
+       let g:syntastic_check_on_open = 1
+       let g:syntastic_auto_jump = 0              " do not jump to errors when detected
 
-Now, we can install Syntastic. From the installation instructions:
+#. Open a new vim and run ``:PluginInstall`` to automatically install the
+   plugins. When the installation is done, close the installation "window"
+   with ``:q``. From now on Vim will check for PEP8 errors and highlight
+   them whenever a file is saved.
 
-.. code-block:: bash
+A few useful commands
+"""""""""""""""""""""
 
-    cd ~/.vim/bundle
-    git clone https://github.com/scrooloose/syntastic.git
-
-Then reload vim, run ``:Helptags``, and check out ``:help syntastic.txt``.
-
-From now on, when you save a Python file, a syntax check will be
-run, and results will be displayed using Vim's `quickfix`_ mechanism
-(more precisely, a location-list).
+A few useful commands are:
 
 * Open the list of errors: ``:lopen``, which can be abbreviated to ``:lop``
   (denoted ``:lop[en]``).
......
@@ -876,11 +876,24 @@ Modify and execute the example to return two outputs: x + y
 and x - y.
 
-Documentation
--------------
+.. _Documentation:
 
-See :ref:`metadocumentation`, for some information on how to generate
-the documentation.
+Documentation and Coding Style
+------------------------------
+
+* The code should be compatible with Python 2.6 and above, as well as
+  Python 3.3 and above (using 2to3 for conversion to Python 3.x).
+* All the code should respect the `PEP8 Code Style Guide
+  <http://www.python.org/dev/peps/pep-0008>`_.
+* The docstrings of all the classes and functions should respect the
+  `PEP257 <https://www.python.org/dev/peps/pep-0257/>`_ rules and follow the
+  `Numpy docstring standard
+  <https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt>`_.
+* To cross-reference other objects (e.g. other classes or methods) in the
+  docstrings, use the `cross-referencing objects
+  <http://www.sphinx-doc.org/en/stable/domains.html#cross-referencing-python-objects>`_
+  syntax. ``:py`` can be omitted; see e.g. this `stackoverflow answer
+  <http://stackoverflow.com/a/7754189>`_.
+* See :ref:`metadocumentation` for some information on how to generate the
+  documentation.
 
 Here is an example of how to add a docstring to a class.
@@ -889,19 +902,29 @@ Here is an example how to add docstring to a class.
     import theano
 
     class DoubleOp(theano.Op):
-        """ Double each element of a tensor.
+        """
+        Double each element of a tensor.
 
-        :param x: input tensor.
+        Parameters
+        ----------
+        x : tensor
+            Input tensor
 
-        :return: a tensor of the same shape and dtype as the input with all
-            values doubled.
+        Returns
+        -------
+        tensor
+            a tensor of the same shape and dtype as the input with all
+            values doubled.
 
-        :note:
-            this is a test note
+        Notes
+        -----
+        this is a test note
 
-        :seealso:
-            You can use the elemwise op to replace this example.
-            Just execute `x * 2` with x being a Theano variable.
+        See Also
+        --------
+        :class:`~theano.tensor.elemwise.Elemwise` : You can use this to replace
+        this example. Just execute `x * 2` with x being a Theano variable.
 
         .. versionadded:: 0.6
         """
......
@@ -476,8 +476,10 @@ Here is an example showing how to use ``verify_grad`` on an Op instance:
 
 .. testcode::
 
     def test_flatten_outdimNone():
-        # Testing gradient w.r.t. all inputs of an op (in this example the op
-        # being used is Flatten(), which takes a single input).
+        """
+        Testing gradient w.r.t. all inputs of an op (in this example the op
+        being used is Flatten(), which takes a single input).
+        """
         a_val = numpy.asarray([[0,1,2],[3,4,5]], dtype='float64')
         rng = numpy.random.RandomState(42)
         tensor.verify_grad(tensor.Flatten(), [a_val], rng=rng)
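``verify_grad`` checks a symbolic gradient against finite differences. A rough pure-NumPy sketch of that idea (an illustration of the principle, not Theano's actual implementation):

```python
import numpy


def finite_diff_grad(f, x, eps=1e-6):
    """Approximate df/dx elementwise by central differences."""
    g = numpy.zeros_like(x)
    it = numpy.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        xp = x.copy()
        xp[idx] += eps
        xm = x.copy()
        xm[idx] -= eps
        g[idx] = (f(xp) - f(xm)) / (2 * eps)
        it.iternext()
    return g


# check the gradient of sum(x**2): the analytic gradient is 2*x
x = numpy.asarray([[0., 1., 2.], [3., 4., 5.]])
approx = finite_diff_grad(lambda a: (a ** 2).sum(), x)
exact = 2 * x
```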
......
===================================================================
:mod:`tensor.elemwise` -- Tensor Elemwise
===================================================================

.. testsetup::

   from theano.tensor.elemwise import *

.. module:: tensor.elemwise
   :platform: Unix, Windows
   :synopsis: Tensor Elemwise
.. moduleauthor:: LISA

.. automodule:: theano.tensor.elemwise
   :members:
@@ -23,6 +23,7 @@ They are grouped into the following sections:
     shared_randomstreams
     signal/index
     utils
+    elemwise
     extra_ops
     io
     opt
......
@@ -5,19 +5,29 @@ import theano
 
 class DoubleOp(theano.Op):
-    """ Double each element of a tensor.
+    """
+    Double each element of a tensor.
 
-    :param x: input tensor.
+    Parameters
+    ----------
+    x : tensor
+        Input tensor
 
-    :return: a tensor of the same shape and dtype as the input with all
-        values doubled.
+    Returns
+    -------
+    tensor
+        a tensor of the same shape and dtype as the input with all
+        values doubled.
 
-    :note:
-        this is a test note
+    Notes
+    -----
+    this is a test note
 
-    :seealso:
-        You can use the elemwise op to replace this example.
-        Just execute `x * 2` with x being a Theano variable.
+    See Also
+    --------
+    :class:`~theano.tensor.elemwise.Elemwise` : You can use this to replace
+    this example. Just execute `x * 2` with x being a Theano variable.
 
     .. versionadded:: 0.6
     """
......
@@ -117,8 +117,8 @@ class DimShuffle(Op):
     DimShuffle((False, False), [0, 'x', 1]) -> AxB to Ax1xB
     DimShuffle((False, False), [1, 'x', 0]) -> AxB to Bx1xA
 
-    The reordering of the dimensions can be done in numpy with the
-    transpose function.
+    The reordering of the dimensions can be done with the numpy.transpose
+    function.
     Adding, subtracting dimensions can be done with reshape.
     """
@@ -300,13 +300,14 @@ class DimShuffle(Op):
         # get the copy / view of the input depending on whether we're doing
         # things inplace or not.
         if self.inplace:
-            get_base = [
-                '{ PyArrayObject * %(basename)s = %(input)s', 'Py_INCREF((PyObject*)%(basename)s)']
+            get_base = ['{ PyArrayObject * %(basename)s = %(input)s',
+                        'Py_INCREF((PyObject*)%(basename)s)']
         else:
-            get_base = [('{ PyArrayObject * %(basename)s = '
-                         '(PyArrayObject*)PyArray_FromAny((PyObject*)%(input)s,'
-                         ' NULL, 0, 0, NPY_ARRAY_ALIGNED|NPY_ARRAY_ENSURECOPY,'
-                         ' NULL)')]
+            get_base = [
+                ('{ PyArrayObject * %(basename)s = '
+                 '(PyArrayObject*)PyArray_FromAny((PyObject*)%(input)s,'
+                 ' NULL, 0, 0, NPY_ARRAY_ALIGNED|NPY_ARRAY_ENSURECOPY,'
+                 ' NULL)')]
 
         shape_statements = ['npy_intp dimensions[%i]' % nd_out]
         for i, o in enumerate(self.new_order):
@@ -343,11 +344,13 @@ class DimShuffle(Op):
             )
         for i in xrange(nd_out - 2, -1, -1):
             strides_statements.append(
-                "if (strides[%(i)s] == 0) strides[%(i)s] = strides[%(i)s+1] * dimensions[%(i)s+1]" % dict(i=str(i)))
+                "if (strides[%(i)s] == 0) strides[%(i)s] = strides[%(i)s+1] * "
+                "dimensions[%(i)s+1]" % dict(i=str(i)))
 
         #
-        # PyObject* PyArray_New(PyTypeObject* subtype, int nd, npy_intp* dims, int type_num,
-        #                       npy_intp* strides, void* data, int itemsize, int flags, PyObject* obj)
+        # PyObject* PyArray_New(PyTypeObject* subtype, int nd, npy_intp* dims,
+        #                       int type_num, npy_intp* strides, void* data,
+        #                       int itemsize, int flags, PyObject* obj)
         #
         close_bracket = [
             # create a new array,
@@ -781,7 +784,8 @@ class Elemwise(OpenMPOp):
                 # the gradient contains a constant, translate it as
                 # an equivalent TensorType of size 1 and proper number of
                 # dimensions
-                res = theano.tensor.constant(numpy.asarray(r.data), dtype=r.type.dtype)
+                res = theano.tensor.constant(numpy.asarray(r.data),
+                                             dtype=r.type.dtype)
                 return DimShuffle((), ['x'] * nd, inplace=False)(res)
             new_r = Elemwise(node.op, {})(
                 *[transform(ipt) for ipt in node.inputs])
@@ -1127,15 +1131,20 @@ class Elemwise(OpenMPOp):
                                   idtypes + list(real_odtypes))])
 
         preloops = {}
-        for i, (loop_order, dtype) in enumerate(zip(loop_orders, dtypes)):
+        for i, (loop_order, dtype) in enumerate(zip(loop_orders,
+                                                    dtypes)):
             for j, index in enumerate(loop_order):
                 if index != 'x':
                     preloops.setdefault(j, "")
-                    preloops[j] += ("%%(lv%(i)s)s_iter = (%(dtype)s*)(PyArray_DATA(%%(lv%(i)s)s));\n" % locals()) % sub
+                    preloops[j] += ("%%(lv%(i)s)s_iter = (%(dtype)s*)"
+                                    "(PyArray_DATA(%%(lv%(i)s)s));\n"
+                                    % locals()) % sub
                     break
             else:  # all broadcastable
                 preloops.setdefault(0, "")
-                preloops[0] += ("%%(lv%(i)s)s_iter = (%(dtype)s*)(PyArray_DATA(%%(lv%(i)s)s));\n" % locals()) % sub
+                preloops[0] += ("%%(lv%(i)s)s_iter = (%(dtype)s*)"
+                                "(PyArray_DATA(%%(lv%(i)s)s));\n"
+                                % locals()) % sub
 
         init_array = preloops.get(0, " ")
         loop = """
@@ -1202,7 +1211,8 @@ class Elemwise(OpenMPOp):
         dtype_%(x)s& %(x)s_i = ((dtype_%(x)s*) PyArray_DATA(%(x)s))[0];
         """ % locals()
         if self.openmp:
-            contig += """#pragma omp parallel for if(n>=%d)""" % (config.openmp_elemwise_minsize)
+            contig += """#pragma omp parallel for if(n>=%d)
+            """ % (config.openmp_elemwise_minsize)
         contig += """
                 for(int i=0; i<n; i++){
                     %(index)s
@@ -1259,7 +1269,8 @@ class Elemwise(OpenMPOp):
                             for output in node.outputs])
         version.append(self.scalar_op.c_code_cache_version_apply(scalar_node))
         for i in node.inputs + node.outputs:
-            version.append(get_scalar_type(dtype=i.type.dtype).c_code_cache_version())
+            version.append(
+                get_scalar_type(dtype=i.type.dtype).c_code_cache_version())
         version.append(('openmp', self.openmp))
         if all(version):
             return tuple(version)
@@ -1304,11 +1315,10 @@ class CAReduce(Op):
       CAReduce(mul) -> product
       CAReduce(maximum) -> max
       CAReduce(minimum) -> min
       CAReduce(or_) -> any # not lazy
       CAReduce(and_) -> all # not lazy
-      CAReduce(xor) -> a bit at 1 tell that there was an odd number of bit at
-                       that position that where 1.
-                       0 it was an even number ...
+      CAReduce(xor) -> a bit set to 1 tells that there was an odd number of
+                       bits set to 1 at that position; 0 means an even number.
 
     In order to (eventually) optimize memory usage patterns,
     L{CAReduce} makes zero guarantees on the order in which it
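The mapping above mirrors NumPy's ufunc reductions; for example (plain NumPy, not CAReduce itself):

```python
import numpy

x = numpy.array([1, 2, 3, 4])
s = numpy.add.reduce(x)        # CAReduce(add)     -> sum,     here 10
p = numpy.multiply.reduce(x)   # CAReduce(mul)     -> product, here 24
mx = numpy.maximum.reduce(x)   # CAReduce(maximum) -> max,     here 4
mn = numpy.minimum.reduce(x)   # CAReduce(minimum) -> min,     here 1

b = numpy.array([True, False, True])
any_b = numpy.logical_or.reduce(b)   # CAReduce(or_)  -> any, here True
all_b = numpy.logical_and.reduce(b)  # CAReduce(and_) -> all, here False
```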
@@ -1507,7 +1517,8 @@ class CAReduce(Op):
         if hasattr(self, 'acc_dtype') and self.acc_dtype is not None:
             if self.acc_dtype == 'float16':
-                raise theano.gof.utils.MethodNotDefined("no c_code for float16")
+                raise theano.gof.utils.MethodNotDefined("no c_code for "
+                                                        "float16")
             acc_type = TensorType(
                 broadcastable=node.outputs[0].broadcastable,
                 dtype=self.acc_dtype)
@@ -1684,7 +1695,8 @@ for(int i=0;i<PyArray_NDIM(%(iname)s);i++){
                             for output in node.outputs])
         version.append(self.scalar_op.c_code_cache_version_apply(scalar_node))
         for i in node.inputs + node.outputs:
-            version.append(get_scalar_type(dtype=i.type.dtype).c_code_cache_version())
+            version.append(
+                get_scalar_type(dtype=i.type.dtype).c_code_cache_version())
         if all(version):
             return tuple(version)
         else:
@@ -1695,7 +1707,7 @@ class All(CAReduce):
     """ Applies `bitwise and` to all the values of a tensor along the
     specified axis(es).
 
     Equivalent to CAReduce(scalar.and_, axis=axis).
     """
@@ -1727,7 +1739,7 @@ class Any(CAReduce):
     """ Applies `bitwise or` to all the values of a tensor along the
     specified axis(es).
 
     Equivalent to CAReduce(scalar.or_, axis=axis).
     """
@@ -2052,13 +2064,11 @@ class Prod(CAReduceDtype):
     "incoming gradient", ie. the gradient of the cost relative to the
     output/product).
 
-    -----
-
     With zeros, things get more complicated. For a given group, we have 3
     cases:
+
     * No zeros in the group. Use previous trick.
     * If only one zero is present, then the gradient for that element is
       non-zero, but is zero for all others.
     * If more than one zero is present, then all the derivatives are zero.
 
     For the last two cases (with 1 or more zeros), we can't use the
@@ -2188,7 +2198,8 @@ class MulWithoutZeros(scalar.BinaryScalarOp):
     def c_code_cache_version(self):
         return (1,)
 
-mul_without_zeros = MulWithoutZeros(scalar.upcast_out, name='mul_without_zeros')
+mul_without_zeros = MulWithoutZeros(scalar.upcast_out,
+                                    name='mul_without_zeros')
 
 class ProdWithoutZeros(CAReduceDtype):
......
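The three zero-count cases described for ``Prod``'s gradient can be sketched in plain NumPy, assuming a single flat group (an illustration of the math, not Theano's implementation):

```python
import numpy


def prod_grad(x):
    """Gradient of numpy.prod(x) w.r.t. each element of x."""
    zeros = (x == 0)
    nz = zeros.sum()
    if nz == 0:
        # no zeros: d prod / d x_i = prod(x) / x_i
        return numpy.prod(x) / x
    if nz == 1:
        # one zero: the gradient is the product of the non-zero elements,
        # placed at the zero's position; zero everywhere else
        g = numpy.zeros_like(x, dtype='float64')
        g[zeros] = numpy.prod(x[~zeros])
        return g
    # two or more zeros: every derivative is zero
    return numpy.zeros_like(x, dtype='float64')
```

For example, ``prod_grad(numpy.array([1., 0., 3.]))`` is non-zero only at the zero's position, where it equals the product of the other elements.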