Commit 48c63a85 authored by Frédéric Bastien

Merge pull request #2069 from abergeron/doc

Doc
@@ -67,16 +67,17 @@ installation and configuration, see :ref:`installing Theano <install>`.
 Status
 ======
-.. image:: https://secure.travis-ci.org/Theano/Theano.png?branch=master
-   :target: http://travis-ci.org/Theano/Theano/builds
-.. image:: https://pypip.in/v/Theano/badge.png
-   :target: https://crate.io/packages/Theano/
-   :alt: Latest PyPI version
-.. image:: https://pypip.in/d/Theano/badge.png
-   :target: https://crate.io/packages/Theano/
-   :alt: Number of PyPI downloads
+.. raw:: html
+
+   <a href="http://travis-ci.org/Theano/Theano/builds"><img src="https://secure.travis-ci.org/Theano/Theano.png?branch=master" /></a>&nbsp;
+
+.. raw:: html
+
+   <a href="https://crate.io/packages/Theano/"><img src="https://pypip.in/v/Theano/badge.png" alt="Latest PyPI version" /></a>&nbsp;
+
+.. raw:: html
+
+   <a href="https://crate.io/packages/Theano/"><img src="https://pypip.in/d/Theano/badge.png" alt="Number of PyPI downloads" /></a>&nbsp;
 
 .. _available on PyPI: http://pypi.python.org/pypi/Theano
 .. _Related Projects: https://github.com/Theano/Theano/wiki/Related-projects
...

 .. ../../../../theano/sandbox/linalg/ops.py
 .. ../../../../theano/sandbox/linalg
 
-.. _libdoc_linalg:
+.. _libdoc_sandbox_linalg:
 
 ===================================================================
 :mod:`sandbox.linalg` -- Linear Algebra Ops
...

@@ -32,18 +32,20 @@ TODO: Give examples on how to use these things! They are pretty complicated.
 Most of the more efficient GPU implementations listed below can be used
 as an automatic replacement for nnet.conv2d by enabling specific graph
 optimizations.
 
-- :func:`conv2d_fft <theano.sandbox.cuda.fftconv.conv2d_fft>`
-  This is a GPU-only version of nnet.conv2d that uses an FFT transform
-  to perform the work. conv2d_fft should not be called directly as it
-  does not provide a gradient. Instead, use nnet.conv2d and allow
-  Theano's graph optimizer to replace it by the FFT version by setting
-  ``THEANO_FLAGS=optimizer_including=conv_fft_valid:conv_fft_full``
-  in your environement. This is not enabled by default because it
-  has some restrictions on input and uses a lot more memory. Also note
-  that it requires CUDA >= 5.0, scikits.cuda >= 0.5.0 and PyCUDA to run.
-  To deactivate the FFT optimization on a specific nnet.conv2d
-  while the optimization flags are active, you can set its ``version``
-  parameter to ``'no_fft'``. To enable it for just one Theano function:
+- :func:`conv2d_fft <theano.sandbox.cuda.fftconv.conv2d_fft>` This
+  is a GPU-only version of nnet.conv2d that uses an FFT transform
+  to perform the work. conv2d_fft should not be used directly as
+  it does not provide a gradient. Instead, use nnet.conv2d and
+  allow Theano's graph optimizer to replace it by the FFT version
+  by setting
+  'THEANO_FLAGS=optimizer_including=conv_fft_valid:conv_fft_full'
+  in your environement. This is not enabled by default because it
+  has some restrictions on input and uses a lot more memory. Also
+  note that it requires CUDA >= 5.0, scikits.cuda >= 0.5.0 and
+  PyCUDA to run. To deactivate the FFT optimization on a specific
+  nnet.conv2d while the optimization flags are active, you can set
+  its ``version`` parameter to ``'no_fft'``. To enable it for just
+  one Theano function:
 
   .. code-block:: python
...

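A minimal sketch of the per-function approach mentioned in the hunk above, assuming the ``conv_fft_valid``/``conv_fft_full`` optimizer tags it describes and a CUDA-enabled Theano (``device=gpu``):

.. code-block:: python

    import theano
    import theano.tensor as T
    from theano.tensor.nnet import conv2d

    images = T.tensor4('images')    # (batch, channel, row, column)
    filters = T.tensor4('filters')  # (n filters, channel, row, column)
    out = conv2d(images, filters)

    # Include the FFT optimizations only for this compiled function,
    # instead of setting THEANO_FLAGS=optimizer_including=... globally.
    mode = theano.compile.get_default_mode().including('conv_fft_valid',
                                                       'conv_fft_full')
    f = theano.function([images, filters], out, mode=mode)
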
 .. ../../../../theano/sandbox/slinalg.py
 
-.. _libdoc_linalg:
+.. _libdoc_slinalg:
 
 ===================================================================
 :mod:`tensor.slinalg` -- Linear Algebra Ops Using Scipy
...

@@ -7,8 +7,8 @@ from theano import tensor
 from theano.compat.six import StringIO
 from theano.sandbox.cuda.type import CudaNdarrayType
 from theano.sandbox.cuda import GpuOp
-from theano.sandbox.cuda import as_cuda_ndarray_variable
-from theano.sandbox.cuda.basic_ops import gpu_contiguous
+from theano.sandbox.cuda.basic_ops import (as_cuda_ndarray_variable,
+                                           gpu_contiguous)
 
 
 class GpuDot22(GpuOp):
...

 from theano import Op, Apply
 from theano.compat.six import StringIO
-from theano.sandbox.cuda import GpuOp, as_cuda_ndarray_variable
+from theano.sandbox.cuda import GpuOp
+from theano.sandbox.cuda.basic_ops import as_cuda_ndarray_variable
 from theano.sandbox.cuda.kernel_codegen import (nvcc_kernel,
                                                 inline_softmax,
...

@@ -1143,11 +1143,12 @@ class GetItem2Lists(gof.op.Op):
 get_item_2lists = GetItem2Lists()
 """Select elements of sparse matrix, returning them in a vector.
 
 :param x: Sparse matrix.
 
-:param index: List of two lists, first list indicating the row
-    of each element and second list indicating its column.
+:param index: List of two lists, first list indicating the row of
+    each element and second list indicating its column.
 
 :return: The corresponding elements in `x`.
 
 """

@@ -1737,13 +1738,14 @@ class Diag(gof.op.Op):
 diag = Diag()
 """Extract the diagonal of a square sparse matrix as a dense vector.
 
 :param x: A square sparse matrix in csc format.
 
 :return: A dense vector representing the diagonal elements.
 
-:note: The grad implemented is regular, i.e. not structured, since
-    the output is a dense vector.
+.. note::
+
+    The grad implemented is regular, i.e. not structured, since the
+    output is a dense vector.
 
 """
...

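For illustration, a small usage sketch of the ``diag`` helper documented above, assuming it is exposed as ``theano.sparse.diag`` (the public ``theano.sparse`` namespace re-exporting ``theano/sparse/basic.py``):

.. code-block:: python

    import numpy as np
    import scipy.sparse as sp
    import theano
    from theano import sparse

    x = sparse.csc_matrix(name='x', dtype='float64')  # symbolic square sparse matrix
    d = sparse.diag(x)                                # dense vector of diagonal elements
    f = theano.function([x], d)

    a = sp.csc_matrix(np.array([[1., 0.], [0., 3.]]))
    print(f(a))  # expected: [ 1.  3.]
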
@@ -863,18 +863,21 @@ class FillDiagonalOffset(gof.Op):
         return [wr_a, wr_val,wr_offset]
 
-fill_diagonal_offset = FillDiagonalOffset()
-""" Returns a copy of an array with all
-elements of the main diagonal set to a specified scalar value.
-
-:param a: Rectangular array of two dimensions.
-:param val: Scalar value to fill the diagonal whose type must be
-    compatible with that of array 'a' (i.e. 'val' cannot be viewed
-    as an upcast of 'a').
-:params offset : Scalar value Offset of the diagonal from the main
-    diagonal. Can be positive or negative integer.
-:return: An array identical to 'a' except that its offset diagonal
-    is filled with scalar 'val'. The output is unwrapped.
-"""
+fill_diagonal_offset_ = FillDiagonalOffset()
+
+
+def fill_diagonal_offset(a, val, offset):
+    """
+    Returns a copy of an array with all
+    elements of the main diagonal set to a specified scalar value.
+
+    :param a: Rectangular array of two dimensions.
+    :param val: Scalar value to fill the diagonal whose type must be
+        compatible with that of array 'a' (i.e. 'val' cannot be viewed
+        as an upcast of 'a').
+    :param offset: Scalar value Offset of the diagonal from the main
+        diagonal. Can be positive or negative integer.
+    :return: An array identical to 'a' except that its offset diagonal
+        is filled with scalar 'val'. The output is unwrapped.
+    """
+    return fill_diagonal_offset_(a, val, offset)

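A short usage sketch of the new ``fill_diagonal_offset`` wrapper, assuming it is importable from ``theano.tensor.extra_ops``; the exact scalar dtypes accepted for ``val`` and ``offset`` are assumptions here:

.. code-block:: python

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor.extra_ops import fill_diagonal_offset

    a = T.dmatrix('a')
    val = T.dscalar('val')        # same dtype as 'a', so it is not an upcast
    offset = T.iscalar('offset')  # positive: above the main diagonal, negative: below
    b = fill_diagonal_offset(a, val, offset)
    f = theano.function([a, val, offset], b)

    out = f(np.zeros((3, 4)), 5., 1)
    # out has 5. on the first superdiagonal: positions (0, 1), (1, 2), (2, 3)
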
@@ -496,20 +496,35 @@ def qr(a, mode="full"):
     Factor the matrix a as qr, where q
     is orthonormal and r is upper-triangular.
 
-    Parameters :
-    ------------
-    a : array_like, shape (M, N)
-        Matrix to be factored.
-    mode : {'reduced', 'complete', 'r', 'raw', 'full', 'economic'}, optional
-        If K = min(M, N), then
-        'reduced' : returns q, r with dimensions (M, K), (K, N) (default)
-        'complete' : returns q, r with dimensions (M, M), (M, N)
-        'r' : returns r only with dimensions (K, N)
-        'raw' : returns h, tau with dimensions (N, M), (K,)
-        'full' : alias of 'reduced', deprecated
-        'economic' : returns h from 'raw', deprecated. The options 'reduced',
+    :type a:
+        array_like, shape (M, N)
+    :param a:
+        Matrix to be factored.
+
+    :type mode:
+        one of 'reduced', 'complete', 'r', 'raw', 'full' and
+        'economic', optional
+    :keyword mode:
+        If K = min(M, N), then
+
+        'reduced'
+          returns q, r with dimensions (M, K), (K, N)
+
+        'complete'
+          returns q, r with dimensions (M, M), (M, N)
+
+        'r'
+          returns r only with dimensions (K, N)
+
+        'raw'
+          returns h, tau with dimensions (N, M), (K,)
+
+        'full'
+          alias of 'reduced', deprecated (default)
+
+        'economic'
+          returns h from 'raw', deprecated. The options 'reduced',
         'complete', and 'raw' are new in numpy 1.8, see the notes for more
         information. The default is 'reduced' and to maintain backward
         compatibility with earlier versions of numpy both it and the old
@@ -518,21 +533,25 @@ def qr(a, mode="full"):
     deprecated. The modes 'full' and 'economic' may be passed using only
     the first letter for backwards compatibility, but all others
     must be spelled out.
 
     Default mode is 'full' which is also default for numpy 1.6.1.
-    Note: Default mode was left to full as full and reduced are both doing
-    the same thing in the new numpy version but only full works on the old
-    previous numpy version.
 
-    Returns :
-    ----------
-    q : matrix of float or complex, optional
-        A matrix with orthonormal columns. When mode = 'complete'
-        the result is an orthogonal/unitary matrix depending on whether
-        or not a is real/complex. The determinant may be either +/- 1 in that case.
-    r : matrix of float or complex, optional
-        The upper-triangular matrix.
+    :note: Default mode was left to full as full and reduced are
+        both doing the same thing in the new numpy version but only
+        full works on the old previous numpy version.
+
+    :rtype q:
+        matrix of float or complex, optional
+    :return q:
+        A matrix with orthonormal columns. When mode = 'complete' the
+        result is an orthogonal/unitary matrix depending on whether or
+        not a is real/complex. The determinant may be either +/- 1 in
+        that case.
+
+    :rtype r:
+        matrix of float or complex, optional
+    :return r:
+        The upper-triangular matrix.
 
     """
     x = [[2, 1], [3, 4]]
     if isinstance(numpy.linalg.qr(x,mode), tuple):
...

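As a usage illustration of the ``qr`` wrapper whose docstring is rewritten above, a sketch assuming the ``theano.tensor.nlinalg`` location used by later Theano releases:

.. code-block:: python

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor.nlinalg import qr

    A = T.dmatrix('A')
    q, r = qr(A)                     # default mode 'full' returns both factors
    f = theano.function([A], [q, r])

    a = np.random.rand(4, 3)
    q_val, r_val = f(a)
    assert np.allclose(np.dot(q_val, r_val), a)  # q.r reconstructs a
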
@@ -549,8 +568,6 @@ class SVD(Op):
     def __init__(self, full_matrices=True, compute_uv=True):
         """
-        inputs :
-        --------
         full_matrices : bool, optional
             If True (default), u and v have the shapes (M, M) and (N, N),
             respectively.
...

@@ -582,21 +599,18 @@ def svd(a, full_matrices=1, compute_uv=1):
     """
     This function performs the SVD on CPU.
 
-    Parameters :
-    ------------
-    full_matrices : bool, optional
+    :type full_matrices: bool, optional
+    :param full_matrices:
         If True (default), u and v have the shapes (M, M) and (N, N),
         respectively.
         Otherwise, the shapes are (M, K) and (K, N), respectively,
         where K = min(M, N).
-    compute_uv : bool, optional
+
+    :type compute_uv: bool, optional
+    :param compute_uv:
         Whether or not to compute u and v in addition to s.
         True by default.
 
-    Returns :
-    -------
-    U, V and D matrices.
+    :returns: U, V and D matrices.
 
     """
     return SVD(full_matrices, compute_uv)(a)
...

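Similarly, a sketch of calling the ``svd`` wrapper documented above, assuming the ``theano.tensor.nlinalg`` location of later releases and that the three outputs follow the ``numpy.linalg.svd`` ordering (the docstring's "U, V and D" phrasing leaves this ambiguous):

.. code-block:: python

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor.nlinalg import svd

    A = T.dmatrix('A')
    # Output order assumed to match numpy.linalg.svd: u, s, v.
    u, s, v = svd(A, full_matrices=1, compute_uv=1)
    f = theano.function([A], [u, s, v])

    u_val, s_val, v_val = f(np.random.rand(4, 3))
    # With full_matrices=1: u_val is (4, 4), s_val is (3,), v_val is (3, 3).
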
@@ -533,31 +533,33 @@ class Conv3D(theano.Op):
         return strutil.render_string(codeSource,locals())
 
-conv3D = Conv3D()
-"""
-3D "convolution" of multiple filters on a minibatch
-(does not flip the kernel, moves kernel with a user specified stride)
-
-:param V: Visible unit, input.
-    dimensions: (batch, row, column, time, in channel)
-:param W: Weights, filter.
-    dimensions: (out channel, row, column, time ,in channel)
-:param b: bias, shape == (W.shape[0],)
-:param d: strides when moving the filter over the input(dx, dy, dt)
-
-:note: The order of dimensions does not correspond to the one in `conv2d`.
-    This is for optimization.
-
-:note: The GPU implementation is very slow. You should use
-    :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>` for a GPU
-    graph instead.
-
-:see: Someone made a script that shows how to swap the axes between
-    both 3d convolution implementations in Theano. See the last
-    `attachment <https://groups.google.com/d/msg/theano-users/1S9_bZgHxVw/0cQR9a4riFUJ>`_.
-"""
+_conv3D = Conv3D()
+
+
+def conv3D(V, W, b, d):
+    """
+    3D "convolution" of multiple filters on a minibatch
+    (does not flip the kernel, moves kernel with a user specified stride)
+
+    :param V: Visible unit, input.
+        dimensions: (batch, row, column, time, in channel)
+    :param W: Weights, filter.
+        dimensions: (out channel, row, column, time ,in channel)
+    :param b: bias, shape == (W.shape[0],)
+    :param d: strides when moving the filter over the input(dx, dy, dt)
+
+    :note: The order of dimensions does not correspond to the one in `conv2d`.
+        This is for optimization.
+
+    :note: The GPU implementation is very slow. You should use
+        :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>` for a
+        GPU graph instead.
+
+    :see: Someone made a script that shows how to swap the axes
+        between both 3d convolution implementations in Theano. See
+        the last `attachment
+        <https://groups.google.com/d/msg/theano-users/1S9_bZgHxVw/0cQR9a4riFUJ>`_.
+    """
+    return _conv3D(V, W, b, d)
 
 
 def computeH(V,W,b,d):
     assert len(W.shape) == 5
...

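Finally, a sketch of calling the new ``conv3D`` wrapper; both the ``theano.tensor.nnet.Conv3D`` import path and the use of an int32 vector for the strides ``d`` are assumptions of this example:

.. code-block:: python

    import numpy as np
    import theano
    import theano.tensor as T
    from theano.tensor.nnet.Conv3D import conv3D

    # Dimension order documented above:
    #   V: (batch, row, column, time, in channel)
    #   W: (out channel, row, column, time, in channel)
    V = T.TensorType('float64', (False,) * 5)('V')
    W = T.TensorType('float64', (False,) * 5)('W')
    b = T.dvector('b')
    d = T.ivector('d')  # strides (dr, dc, dt)

    H = conv3D(V, W, b, d)
    f = theano.function([V, W, b, d], H)

    out = f(np.random.rand(1, 8, 8, 8, 3),  # one 8x8x8 input with 3 channels
            np.random.rand(2, 3, 3, 3, 3),  # two 3x3x3 filters
            np.zeros(2),                    # one bias per output channel
            np.array([1, 1, 1], dtype='int32'))
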