testgroup / pytensor · Commits · 48c63a85

Commit 48c63a85, authored Sep 02, 2014 by Frédéric Bastien

Merge pull request #2069 from abergeron/doc

Doc

Parents: cfc493d1 75fa53f8
Showing 12 changed files with 111 additions and 86 deletions.
doc/extending/op.txt                  +0   -0
doc/index.txt                         +9   -8
doc/library/sandbox/linalg.txt        +1   -1
doc/library/tensor/nnet/conv.txt      +13  -11
doc/library/tensor/slinalg.txt        +1   -1
doc/tutorial/extending_theano.txt     +0   -0
theano/sandbox/cuda/blas.py           +2   -2
theano/sandbox/cuda/nnet.py           +2   -1
theano/sparse/basic.py                +10  -8
theano/tensor/extra_ops.py            +8   -5
theano/tensor/nlinalg.py              +47  -33
theano/tensor/nnet/Conv3D.py          +18  -16
doc/extending/op.txt
(diff collapsed)
doc/index.txt
@@ -67,16 +67,17 @@ installation and configuration, see :ref:`installing Theano <install>`.

 Status
 ======

-.. image:: https://secure.travis-ci.org/Theano/Theano.png?branch=master
-    :target: http://travis-ci.org/Theano/Theano/builds
+.. raw:: html
+
+    <a href="http://travis-ci.org/Theano/Theano/builds"><img src="https://secure.travis-ci.org/Theano/Theano.png?branch=master" /></a>

-.. image:: https://pypip.in/v/Theano/badge.png
-    :target: https://crate.io/packages/Theano/
-    :alt: Latest PyPI version
+.. raw:: html
+
+    <a href="https://crate.io/packages/Theano/"><img src="https://pypip.in/v/Theano/badge.png" alt="Latest PyPI version" /></a>

-.. image:: https://pypip.in/d/Theano/badge.png
-    :target: https://crate.io/packages/Theano/
-    :alt: Number of PyPI downloads
+.. raw:: html
+
+    <a href="https://crate.io/packages/Theano/"><img src="https://pypip.in/d/Theano/badge.png" alt="Number of PyPI downloads" /></a>

 .. _available on PyPI: http://pypi.python.org/pypi/Theano
 .. _Related Projects: https://github.com/Theano/Theano/wiki/Related-projects
doc/library/sandbox/linalg.txt

 .. ../../../../theano/sandbox/linalg/ops.py
 .. ../../../../theano/sandbox/linalg

-.. _libdoc_linalg:
+.. _libdoc_sandbox_linalg:

 ===================================================================
 :mod:`sandbox.linalg` -- Linear Algebra Ops
doc/library/tensor/nnet/conv.txt
@@ -32,18 +32,20 @@ TODO: Give examples on how to use these things! They are pretty complicated.

 Most of the more efficient GPU implementations listed below can be used
 as an automatic replacement for nnet.conv2d by enabling specific graph
 optimizations.

-- :func:`conv2d_fft <theano.sandbox.cuda.fftconv.conv2d_fft>`
-  This is a GPU-only version of nnet.conv2d that uses an FFT transform
-  to perform the work. conv2d_fft should not be called directly as it
-  does not provide a gradient. Instead, use nnet.conv2d and allow
-  Theano's graph optimizer to replace it by the FFT version by setting
-  ``THEANO_FLAGS=optimizer_including=conv_fft_valid:conv_fft_full``
-  in your environement. This is not enabled by default because it
-  has some restrictions on input and uses a lot more memory. Also note
-  that it requires CUDA >= 5.0, scikits.cuda >= 0.5.0 and PyCUDA to run.
-  To deactivate the FFT optimization on a specific nnet.conv2d
-  while the optimization flags are active, you can set its ``version``
-  parameter to ``'no_fft'``. To enable it for just one Theano function:
+- :func:`conv2d_fft <theano.sandbox.cuda.fftconv.conv2d_fft>` This
+  is a GPU-only version of nnet.conv2d that uses an FFT transform
+  to perform the work. conv2d_fft should not be used directly as
+  it does not provide a gradient. Instead, use nnet.conv2d and
+  allow Theano's graph optimizer to replace it by the FFT version
+  by setting
+  'THEANO_FLAGS=optimizer_including=conv_fft_valid:conv_fft_full'
+  in your environement. This is not enabled by default because it
+  has some restrictions on input and uses a lot more memory. Also
+  note that it requires CUDA >= 5.0, scikits.cuda >= 0.5.0 and
+  PyCUDA to run. To deactivate the FFT optimization on a specific
+  nnet.conv2d while the optimization flags are active, you can set
+  its ``version`` parameter to ``'no_fft'``. To enable it for just
+  one Theano function:

 .. code-block:: python
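As a sketch of what the flag mentioned in this hunk looks like in practice, the optimizer can be enabled for a single run from the shell (my_script.py is a hypothetical user script, not part of Theano):

```shell
# Enable the FFT-based replacement of nnet.conv2d for this run only.
# my_script.py is a placeholder for your own Theano program.
THEANO_FLAGS=optimizer_including=conv_fft_valid:conv_fft_full python my_script.py
```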
doc/library/tensor/slinalg.txt

 .. ../../../../theano/sandbox/slinalg.py

-.. _libdoc_linalg:
+.. _libdoc_slinalg:

 ===================================================================
 :mod:`tensor.slinalg` -- Linear Algebra Ops Using Scipy
doc/tutorial/extending_theano.txt
(diff collapsed)
theano/sandbox/cuda/blas.py
@@ -7,8 +7,8 @@ from theano import tensor

 from theano.compat.six import StringIO
 from theano.sandbox.cuda.type import CudaNdarrayType
 from theano.sandbox.cuda import GpuOp
-from theano.sandbox.cuda import as_cuda_ndarray_variable
-from theano.sandbox.cuda.basic_ops import gpu_contiguous
+from theano.sandbox.cuda.basic_ops import (as_cuda_ndarray_variable,
+                                           gpu_contiguous)

 class GpuDot22(GpuOp):
theano/sandbox/cuda/nnet.py

 from theano import Op, Apply
 from theano.compat.six import StringIO
-from theano.sandbox.cuda import GpuOp, as_cuda_ndarray_variable
+from theano.sandbox.cuda import GpuOp
+from theano.sandbox.cuda.basic_ops import as_cuda_ndarray_variable
 from theano.sandbox.cuda.kernel_codegen import (nvcc_kernel,
                                                 inline_softmax,
theano/sparse/basic.py
@@ -1143,11 +1143,12 @@ class GetItem2Lists(gof.op.Op):

 get_item_2lists = GetItem2Lists()
 """Select elements of sparse matrix, returning them in a vector.

 :param x: Sparse matrix.

-:param index: List of two lists, first list indicating the row
-    of each element and second list indicating its column.
-:return: The corresponding elements in `x`.
+:param index: List of two lists, first list indicating the row of
+    each element and second list indicating its column.
+
+:return: The corresponding elements in `x`.

 """

...

@@ -1737,13 +1738,14 @@ class Diag(gof.op.Op):

 diag = Diag()
 """Extract the diagonal of a square sparse matrix as a dense vector.

 :param x: A square sparse matrix in csc format.

 :return: A dense vector representing the diagonal elements.

-:note: The grad implemented is regular, i.e. not structured, since
-    the output is a dense vector.
+.. note::
+
+    The grad implemented is regular, i.e. not structured, since the
+    output is a dense vector.

 """
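To make the two docstrings in this hunk concrete, here is a hedged dense NumPy analogue of what get_item_2lists and diag return (the actual ops operate on Theano sparse variables; the arrays and names below are illustrative only):

```python
import numpy as np

# Dense analogue of get_item_2lists: pick elements by paired
# row/column index lists, returning them as a vector.
x = np.array([[1, 0, 2],
              [0, 3, 0],
              [4, 0, 5]])
rows = [0, 2, 1]
cols = [2, 0, 1]
picked = x[rows, cols]   # elements at (0, 2), (2, 0), (1, 1)

# Dense analogue of diag: the main diagonal as a vector.
d = np.diag(x)
```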
theano/tensor/extra_ops.py
@@ -863,18 +863,21 @@ class FillDiagonalOffset(gof.Op):

         return [wr_a, wr_val, wr_offset]

-fill_diagonal_offset = FillDiagonalOffset()
-""" Returns a copy of an array with all
-elements of the main diagonal set to a specified scalar value.
-
-:param a: Rectangular array of two dimensions.
-:param val: Scalar value to fill the diagonal whose type must be
-    compatible with that of array 'a' (i.e. 'val' cannot be viewed
-    as an upcast of 'a').
-:params offset : Scalar value Offset of the diagonal from the main
-    diagonal. Can be positive or negative integer.
-:return: An array identical to 'a' except that its offset diagonal
-    is filled with scalar 'val'. The output is unwrapped.
-"""
+fill_diagonal_offset_ = FillDiagonalOffset()
+
+def fill_diagonal_offset(a, val, offset):
+    """
+    Returns a copy of an array with all
+    elements of the main diagonal set to a specified scalar value.
+
+    :param a: Rectangular array of two dimensions.
+
+    :param val: Scalar value to fill the diagonal whose type must be
+        compatible with that of array 'a' (i.e. 'val' cannot be viewed
+        as an upcast of 'a').
+
+    :param offset: Scalar value Offset of the diagonal from the main
+        diagonal. Can be positive or negative integer.
+
+    :return: An array identical to 'a' except that its offset diagonal
+        is filled with scalar 'val'. The output is unwrapped.
+    """
+    return fill_diagonal_offset_(a, val, offset)
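As an illustration of the semantics documented in this hunk, here is a hedged NumPy sketch (fill_offset_diag is a hypothetical helper, not part of Theano or NumPy; NumPy's own fill_diagonal has no offset parameter, so the offset is emulated with index arithmetic):

```python
import numpy as np

def fill_offset_diag(a, val, offset):
    """Return a copy of 2-D array `a` with the diagonal at `offset`
    (positive = above the main diagonal, negative = below) set to `val`."""
    out = a.copy()
    rows, cols = out.shape
    if offset >= 0:
        # Diagonal starts at column `offset` of row 0.
        n = min(rows, cols - offset)
        out[np.arange(n), np.arange(n) + offset] = val
    else:
        # Diagonal starts at row `-offset` of column 0.
        n = min(rows + offset, cols)
        out[np.arange(n) - offset, np.arange(n)] = val
    return out

b = fill_offset_diag(np.zeros((3, 4), dtype=int), 7, 1)   # fills (0,1), (1,2), (2,3)
c = fill_offset_diag(np.zeros((3, 3), dtype=int), 5, -1)  # fills (1,0), (2,1)
```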
theano/tensor/nlinalg.py
@@ -496,20 +496,35 @@ def qr(a, mode="full"):

     Factor the matrix a as qr, where q
     is orthonormal and r is upper-triangular.

-    Parameters :
-    ------------
-    a : array_like, shape (M, N)
-        Matrix to be factored.
-    mode : {'reduced', 'complete', 'r', 'raw', 'full', 'economic'}, optional
+    :type a: array_like, shape (M, N)
+    :param a: Matrix to be factored.
+
+    :type mode: one of 'reduced', 'complete', 'r', 'raw', 'full' and
+        'economic', optional
+    :keyword mode:
         If K = min(M, N), then
-        'reduced'  : returns q, r with dimensions (M, K), (K, N) (default)
-        'complete' : returns q, r with dimensions (M, M), (M, N)
-        'r'        : returns r only with dimensions (K, N)
-        'raw'      : returns h, tau with dimensions (N, M), (K,)
-        'full'     : alias of 'reduced', deprecated
-        'economic' : returns h from 'raw', deprecated. The options 'reduced',
+
+        'reduced'
+          returns q, r with dimensions (M, K), (K, N)
+        'complete'
+          returns q, r with dimensions (M, M), (M, N)
+        'r'
+          returns r only with dimensions (K, N)
+        'raw'
+          returns h, tau with dimensions (N, M), (K,)
+        'full'
+          alias of 'reduced', deprecated (default)
+        'economic'
+          returns h from 'raw', deprecated. The options 'reduced',
         'complete', and 'raw' are new in numpy 1.8, see the notes for more
         information. The default is 'reduced' and to maintain backward
         compatibility with earlier versions of numpy both it and the old

...

@@ -518,21 +533,25 @@ def qr(a, mode="full"):

     deprecated. The modes 'full' and 'economic' may be passed using only
     the first letter for backwards compatibility, but all others
     must be spelled out.
     Default mode is 'full' which is also default for numpy 1.6.1.

-    Note: Default mode was left to full as full and reduced are both doing
-    the same thing in the new numpy version but only full works on the old
-    previous numpy version.
+    :note: Default mode was left to full as full and reduced are
+        both doing the same thing in the new numpy version but only
+        full works on the old previous numpy version.

-    Returns :
-    ---------
-    q : matrix of float or complex, optional
-        A matrix with orthonormal columns. When mode = 'complete'
-        the result is an orthogonal/unitary matrix depending on whether
-        or not a is real/complex. The determinant may be either +/- 1 in that case.
-    r : matrix of float or complex, optional
-        The upper-triangular matrix.
+    :rtype q: matrix of float or complex, optional
+    :return q:
+        A matrix with orthonormal columns. When mode = 'complete' the
+        result is an orthogonal/unitary matrix depending on whether or
+        not a is real/complex. The determinant may be either +/- 1 in
+        that case.
+
+    :rtype r: matrix of float or complex, optional
+    :return r: The upper-triangular matrix.
     """
     x = [[2, 1], [3, 4]]
     if isinstance(numpy.linalg.qr(x, mode), tuple):
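The mode shapes listed in the docstring above can be checked directly against NumPy (a hedged sketch assuming NumPy >= 1.8, where the 'reduced', 'complete', and 'r' modes exist):

```python
import numpy as np

a = np.arange(12.0).reshape(4, 3)   # M=4, N=3, so K = min(M, N) = 3

q, r = np.linalg.qr(a, mode='reduced')            # q: (M, K), r: (K, N)
q_full, r_full = np.linalg.qr(a, mode='complete')  # q: (M, M), r: (M, N)
r_only = np.linalg.qr(a, mode='r')                 # r alone: (K, N)

# q has orthonormal columns and q.dot(r) reconstructs a.
assert np.allclose(q.T.dot(q), np.eye(3))
assert np.allclose(q.dot(r), a)
```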
...
@@ -549,8 +568,6 @@ class SVD(Op):

     def __init__(self, full_matrices=True, compute_uv=True):
         """
-        inputs :
-        --------
         full_matrices : bool, optional
             If True (default), u and v have the shapes (M, M) and (N, N),
             respectively.

...

@@ -582,21 +599,18 @@ def svd(a, full_matrices=1, compute_uv=1):

     """
     This function performs the SVD on CPU.

-    Parameters :
-    ------------
-    full_matrices : bool, optional
+    :type full_matrices: bool, optional
+    :param full_matrices:
         If True (default), u and v have the shapes (M, M) and (N, N),
         respectively.
         Otherwise, the shapes are (M, K) and (K, N), respectively,
         where K = min(M, N).
-    compute_uv : bool, optional
+
+    :type compute_uv: bool, optional
+    :param compute_uv:
         Whether or not to compute u and v in addition to s.
         True by default.

-    Returns :
-    -------
-    U, V and D matrices.
+    :returns: U, V and D matrices.
     """
     return SVD(full_matrices, compute_uv)(a)
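The full_matrices / compute_uv shapes described in the docstring above match numpy.linalg.svd's behaviour; a hedged quick check (NumPy only, no Theano required):

```python
import numpy as np

a = np.random.RandomState(0).rand(4, 3)   # M=4, N=3, K = min(M, N) = 3

# full_matrices=True (default): u (M, M), s (K,), v (N, N)
u, s, v = np.linalg.svd(a, full_matrices=True)

# full_matrices=False: u (M, K), v (K, N)
u2, s2, v2 = np.linalg.svd(a, full_matrices=False)

# compute_uv=False: only the singular values are returned
s_only = np.linalg.svd(a, compute_uv=False)

# The reduced factors reconstruct a (scaling u2's columns by s2).
assert np.allclose((u2 * s2).dot(v2), a)
```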
theano/tensor/nnet/Conv3D.py
@@ -533,31 +533,33 @@ class Conv3D(theano.Op):

         return strutil.render_string(codeSource, locals())

-conv3D = Conv3D()
-"""
-3D "convolution" of multiple filters on a minibatch
-(does not flip the kernel, moves kernel with a user specified stride)
-
-:param V: Visible unit, input.
-    dimensions: (batch, row, column, time, in channel)
-:param W: Weights, filter.
-    dimensions: (out channel, row, column, time ,in channel)
-:param b: bias, shape == (W.shape[0],)
-:param d: strides when moving the filter over the input(dx, dy, dt)
-
-:note: The order of dimensions does not correspond to the one in `conv2d`.
-    This is for optimization.
-:note: The GPU implementation is very slow. You should use
-    :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>` for a GPU
-    graph instead.
-:see: Someone made a script that shows how to swap the axes between
-    both 3d convolution implementations in Theano. See the last
-    `attachment <https://groups.google.com/d/msg/theano-users/1S9_bZgHxVw/0cQR9a4riFUJ>`_.
-"""
+_conv3D = Conv3D()
+
+def conv3D(V, W, b, d):
+    """
+    3D "convolution" of multiple filters on a minibatch
+    (does not flip the kernel, moves kernel with a user specified stride)
+
+    :param V: Visible unit, input.
+        dimensions: (batch, row, column, time, in channel)
+    :param W: Weights, filter.
+        dimensions: (out channel, row, column, time ,in channel)
+    :param b: bias, shape == (W.shape[0],)
+    :param d: strides when moving the filter over the input(dx, dy, dt)
+
+    :note: The order of dimensions does not correspond to the one in `conv2d`.
+        This is for optimization.
+
+    :note: The GPU implementation is very slow. You should use
+        :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>` for a
+        GPU graph instead.
+
+    :see: Someone made a script that shows how to swap the axes
+        between both 3d convolution implementations in Theano. See
+        the last `attachment
+        <https://groups.google.com/d/msg/theano-users/1S9_bZgHxVw/0cQR9a4riFUJ>`_.
+    """
+    return _conv3D(V, W, b, d)

 def computeH(V, W, b, d):
     assert len(W.shape) == 5