testgroup / pytensor · Commits

Commit 48c63a85, authored Sep 02, 2014 by Frédéric Bastien

Merge pull request #2069 from abergeron/doc

Doc

Parents: cfc493d1, 75fa53f8
Showing 12 changed files with 126 additions and 101 deletions (+126, -101)
doc/extending/op.txt                +0   -0
doc/index.txt                       +9   -8
doc/library/sandbox/linalg.txt      +1   -1
doc/library/tensor/nnet/conv.txt    +13  -11
doc/library/tensor/slinalg.txt      +1   -1
doc/tutorial/extending_theano.txt   +0   -0
theano/sandbox/cuda/blas.py         +2   -2
theano/sandbox/cuda/nnet.py         +2   -1
theano/sparse/basic.py              +10  -8
theano/tensor/extra_ops.py          +15  -12
theano/tensor/nlinalg.py            +48  -34
theano/tensor/nnet/Conv3D.py        +25  -23
doc/extending/op.txt (view file @ 48c63a85)

Diff collapsed.
doc/index.txt (view file @ 48c63a85)
...
...
@@ -67,16 +67,17 @@ installation and configuration, see :ref:`installing Theano <install>`.

 Status
 ======

-.. image:: https://secure.travis-ci.org/Theano/Theano.png?branch=master
-    :target: http://travis-ci.org/Theano/Theano/builds
+.. raw:: html

-.. image:: https://pypip.in/v/Theano/badge.png
-    :target: https://crate.io/packages/Theano/
-    :alt: Latest PyPI version
+    <a href="http://travis-ci.org/Theano/Theano/builds"><img src="https://secure.travis-ci.org/Theano/Theano.png?branch=master" /></a>

-.. image:: https://pypip.in/d/Theano/badge.png
-    :target: https://crate.io/packages/Theano/
-    :alt: Number of PyPI downloads
+.. raw:: html
+
+    <a href="https://crate.io/packages/Theano/"><img src="https://pypip.in/v/Theano/badge.png" alt="Latest PyPI version" /></a>
+
+.. raw:: html
+
+    <a href="https://crate.io/packages/Theano/"><img src="https://pypip.in/d/Theano/badge.png" alt="Number of PyPI downloads" /></a>

 .. _available on PyPI: http://pypi.python.org/pypi/Theano
 .. _Related Projects: https://github.com/Theano/Theano/wiki/Related-projects
...
...
doc/library/sandbox/linalg.txt (view file @ 48c63a85)

 .. ../../../../theano/sandbox/linalg/ops.py
 .. ../../../../theano/sandbox/linalg

-.. _libdoc_linalg:
+.. _libdoc_sandbox_linalg:

 ===================================================================
 :mod:`sandbox.linalg` -- Linear Algebra Ops
...
...
doc/library/tensor/nnet/conv.txt (view file @ 48c63a85)
...
...
@@ -32,18 +32,20 @@ TODO: Give examples on how to use these things! They are pretty complicated.

 Most of the more efficient GPU implementations listed below can be used
 as an automatic replacement for nnet.conv2d by enabling specific graph
 optimizations.

-- :func:`conv2d_fft <theano.sandbox.cuda.fftconv.conv2d_fft>`
-  This is a GPU-only version of nnet.conv2d that uses an FFT transform
-  to perform the work. conv2d_fft should not be called directly as it
-  does not provide a gradient. Instead, use nnet.conv2d and allow
-  Theano's graph optimizer to replace it by the FFT version by setting
-  ``THEANO_FLAGS=optimizer_including=conv_fft_valid:conv_fft_full``
+- :func:`conv2d_fft <theano.sandbox.cuda.fftconv.conv2d_fft>` This
+  is a GPU-only version of nnet.conv2d that uses an FFT transform
+  to perform the work. conv2d_fft should not be used directly as
+  it does not provide a gradient. Instead, use nnet.conv2d and
+  allow Theano's graph optimizer to replace it by the FFT version
+  by setting
+  'THEANO_FLAGS=optimizer_including=conv_fft_valid:conv_fft_full'
   in your environement. This is not enabled by default because it
-  has some restrictions on input and uses a lot more memory. Also note
-  that it requires CUDA >= 5.0, scikits.cuda >= 0.5.0 and PyCUDA to run.
-  To deactivate the FFT optimization on a specific nnet.conv2d
-  while the optimization flags are active, you can set its ``version``
-  parameter to ``'no_fft'``. To enable it for just one Theano function:
+  has some restrictions on input and uses a lot more memory. Also
+  note that it requires CUDA >= 5.0, scikits.cuda >= 0.5.0 and
+  PyCUDA to run. To deactivate the FFT optimization on a specific
+  nnet.conv2d while the optimization flags are active, you can set
+  its ``version`` parameter to ``'no_fft'``. To enable it for just
+  one Theano function:
.. code-block:: python
...
...
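The FFT-based replacement described above rests on the convolution theorem: convolving in the spatial domain equals pointwise multiplication in the frequency domain. A minimal NumPy sketch of that identity for a 2-D "full" convolution (function names here are illustrative, not Theano's API):

```python
import numpy as np

def conv2d_full_fft(img, kern):
    # 'full' 2-D convolution via the FFT: zero-pad both arrays to the
    # output size, multiply their spectra pointwise, transform back.
    sh = (img.shape[0] + kern.shape[0] - 1,
          img.shape[1] + kern.shape[1] - 1)
    return np.fft.irfft2(np.fft.rfft2(img, sh) * np.fft.rfft2(kern, sh), sh)

def conv2d_full_direct(img, kern):
    # Naive reference implementation: out[i+k, j+l] += img[i, j] * kern[k, l]
    out = np.zeros((img.shape[0] + kern.shape[0] - 1,
                    img.shape[1] + kern.shape[1] - 1))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i:i + kern.shape[0], j:j + kern.shape[1]] += img[i, j] * kern
    return out
```

Both functions return the same array up to floating-point error, which is why the optimizer can swap the direct op for the FFT one transparently.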
doc/library/tensor/slinalg.txt (view file @ 48c63a85)

 .. ../../../../theano/sandbox/slinalg.py

-.. _libdoc_linalg:
+.. _libdoc_slinalg:

 ===================================================================
 :mod:`tensor.slinalg` -- Linear Algebra Ops Using Scipy
...
...
doc/tutorial/extending_theano.txt (view file @ 48c63a85)

Diff collapsed.
theano/sandbox/cuda/blas.py (view file @ 48c63a85)
...
...
@@ -7,8 +7,8 @@ from theano import tensor
 from theano.compat.six import StringIO
 from theano.sandbox.cuda.type import CudaNdarrayType
 from theano.sandbox.cuda import GpuOp
-from theano.sandbox.cuda import as_cuda_ndarray_variable
-from theano.sandbox.cuda.basic_ops import gpu_contiguous
+from theano.sandbox.cuda.basic_ops import (as_cuda_ndarray_variable,
+                                           gpu_contiguous)

 class GpuDot22(GpuOp):
...
...
theano/sandbox/cuda/nnet.py (view file @ 48c63a85)
 from theano import Op, Apply
 from theano.compat.six import StringIO
-from theano.sandbox.cuda import GpuOp, as_cuda_ndarray_variable
+from theano.sandbox.cuda import GpuOp
+from theano.sandbox.cuda.basic_ops import as_cuda_ndarray_variable
 from theano.sandbox.cuda.kernel_codegen import (nvcc_kernel,
                                                 inline_softmax,
...
...
theano/sparse/basic.py (view file @ 48c63a85)
...
...
@@ -1143,11 +1143,12 @@ class GetItem2Lists(gof.op.Op):

 get_item_2lists = GetItem2Lists()
 """Select elements of sparse matrix, returning them in a vector.

 :param x: Sparse matrix.
-:param index: List of two lists, first list indicating the row
-    of each element and second list indicating its column.
+:param index: List of two lists, first list indicating the row of
+    each element and second list indicating its column.

 :return: The corresponding elements in `x`.
 """
...
...
@@ -1737,13 +1738,14 @@ class Diag(gof.op.Op):

 diag = Diag()
 """Extract the diagonal of a square sparse matrix as a dense vector.

 :param x: A square sparse matrix in csc format.

 :return: A dense vector representing the diagonal elements.

-:note: The grad implemented is regular, i.e. not structured, since
-    the output is a dense vector.
+.. note::
+
+    The grad implemented is regular, i.e. not structured, since the
+    output is a dense vector.
...
...
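The semantics documented in these two docstrings can be illustrated with SciPy's csc_matrix as a stand-in for Theano's sparse type (this sketch mimics the behavior; it does not call the Theano ops):

```python
import numpy as np
from scipy import sparse

x = sparse.csc_matrix(np.array([[1., 0., 2.],
                                [0., 3., 0.],
                                [4., 0., 5.]]))

# get_item_2lists semantics: the index is a list of two lists, rows
# first and columns second; the result is the matching elements of x
# gathered into a vector.
rows, cols = [0, 2, 1], [2, 0, 1]
picked = np.array([x[i, j] for i, j in zip(rows, cols)])

# diag semantics: the main diagonal of the square csc matrix,
# returned as a dense vector.
d = x.diagonal()
```

Here `picked` gathers x[0, 2], x[2, 0], x[1, 1] and `d` is the dense main diagonal, matching the `:return:` descriptions above.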
theano/tensor/extra_ops.py (view file @ 48c63a85)
...
...
@@ -863,18 +863,21 @@ class FillDiagonalOffset(gof.Op):
         return [wr_a, wr_val, wr_offset]

-fill_diagonal_offset = FillDiagonalOffset()
-""" Returns a copy of an array with all
-elements of the main diagonal set to a specified scalar value.
+fill_diagonal_offset_ = FillDiagonalOffset()

-:param a: Rectangular array of two dimensions.
-:param val: Scalar value to fill the diagonal whose type must be
-    compatible with that of array 'a' (i.e. 'val' cannot be viewed
-    as an upcast of 'a').
-:params offset : Scalar value Offset of the diagonal from the main
-    diagonal. Can be positive or negative integer.
-:return: An array identical to 'a' except that its offset diagonal
-    is filled with scalar 'val'. The output is unwrapped.
-"""
+def fill_diagonal_offset(a, val, offset):
+    """
+    Returns a copy of an array with all
+    elements of the main diagonal set to a specified scalar value.
+
+    :param a: Rectangular array of two dimensions.
+    :param val: Scalar value to fill the diagonal whose type must be
+        compatible with that of array 'a' (i.e. 'val' cannot be viewed
+        as an upcast of 'a').
+    :param offset: Scalar value Offset of the diagonal from the main
+        diagonal. Can be positive or negative integer.
+    :return: An array identical to 'a' except that its offset diagonal
+        is filled with scalar 'val'. The output is unwrapped.
+    """
+    return fill_diagonal_offset_(a, val, offset)
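The offset-diagonal fill that the new wrapper documents can be sketched in plain NumPy; the function name and implementation below are illustrative stand-ins, not Theano code:

```python
import numpy as np

def fill_diagonal_offset_np(a, val, offset):
    # Return a copy of 2-D array `a` with the diagonal `offset` steps
    # from the main diagonal set to the scalar `val`. Positive offsets
    # move above the main diagonal, negative offsets below.
    b = a.copy()
    rows, cols = b.shape
    if offset >= 0:
        n = min(rows, cols - offset)
        b[np.arange(n), np.arange(n) + offset] = val
    else:
        n = min(rows + offset, cols)
        b[np.arange(n) - offset, np.arange(n)] = val
    return b
```

For a 3x4 array with offset=1 this fills positions (0,1), (1,2), (2,3), which matches the ":return:" description above (the input is untouched; a filled copy is returned).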
theano/tensor/nlinalg.py (view file @ 48c63a85)
...
...
@@ -496,20 +496,35 @@ def qr(a, mode="full"):
     Factor the matrix a as qr, where q
     is orthonormal and r is upper-triangular.

-    Parameters :
-    ------------
-    a : array_like, shape (M, N)
+    :type a: array_like, shape (M, N)
+    :param a:
         Matrix to be factored.

-    mode : {'reduced', 'complete', 'r', 'raw', 'full', 'economic'}, optional
+    :type mode: one of 'reduced', 'complete', 'r', 'raw', 'full' and
+        'economic', optional
+    :keyword mode:
         If K = min(M, N), then

-        'reduced' : returns q, r with dimensions (M, K), (K, N) (default)
-        'complete' : returns q, r with dimensions (M, M), (M, N)
-        'r' : returns r only with dimensions (K, N)
-        'raw' : returns h, tau with dimensions (N, M), (K,)
-        'full' : alias of 'reduced', deprecated
-        'economic' : returns h from 'raw', deprecated. The options 'reduced',
+        'reduced'
+            returns q, r with dimensions (M, K), (K, N)
+        'complete'
+            returns q, r with dimensions (M, M), (M, N)
+        'r'
+            returns r only with dimensions (K, N)
+        'raw'
+            returns h, tau with dimensions (N, M), (K,)
+        'full'
+            alias of 'reduced', deprecated (default)
+        'economic'
+            returns h from 'raw', deprecated. The options 'reduced',
         'complete', and 'raw' are new in numpy 1.8, see the notes for more
         information. The default is 'reduced' and to maintain backward
         compatibility with earlier versions of numpy both it and the old
...
...
@@ -518,21 +533,25 @@ def qr(a, mode="full"):
     deprecated. The modes 'full' and 'economic' may be passed using only
     the first letter for backwards compatibility, but all others
     must be spelled out.

     Default mode is 'full' which is also default for numpy 1.6.1.

-    Note: Default mode was left to full as full and reduced are both doing
-    the same thing in the new numpy version but only full works on the old
-    previous numpy version.
+    :note: Default mode was left to full as full and reduced are
+        both doing the same thing in the new numpy version but only
+        full works on the old previous numpy version.

-    Returns :
-    ---------
-    q : matrix of float or complex, optional
-        A matrix with orthonormal columns. When mode = 'complete'
-        the result is an orthogonal/unitary matrix depending on whether
-        or not a is real/complex. The determinant may be either +/- 1 in that case.
+    :rtype q: matrix of float or complex, optional
+    :return q:
+        A matrix with orthonormal columns. When mode = 'complete' the
+        result is an orthogonal/unitary matrix depending on whether or
+        not a is real/complex. The determinant may be either +/- 1 in
+        that case.

-    r : matrix of float or complex, optional
+    :rtype r: matrix of float or complex, optional
+    :return r:
         The upper-triangular matrix.
     """
     x = [[2, 1], [3, 4]]
     if isinstance(numpy.linalg.qr(x, mode), tuple):
...
...
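The mode/shape table the rewritten docstring lists can be checked directly against NumPy, which Theano's qr delegates to; a small sketch of the shapes for a tall 4x3 matrix:

```python
import numpy as np

a = np.arange(12.0).reshape(4, 3)          # M=4, N=3, so K = min(M, N) = 3

q, r = np.linalg.qr(a, mode='reduced')     # q: (M, K), r: (K, N)
qc, rc = np.linalg.qr(a, mode='complete')  # qc: (M, M), rc: (M, N)
r_only = np.linalg.qr(a, mode='r')         # r alone: (K, N)
```

In every mode that returns both factors, q @ r reconstructs `a`; 'complete' simply extends q with an orthonormal basis for the left null space.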
@@ -549,8 +568,6 @@ class SVD(Op):
     def __init__(self, full_matrices=True, compute_uv=True):
         """
-        inputs :
-        --------
         full_matrices : bool, optional
             If True (default), u and v have the shapes (M, M) and (N, N),
             respectively.
...
...
@@ -582,21 +599,18 @@ def svd(a, full_matrices=1, compute_uv=1):
     """
     This function performs the SVD on CPU.

-    Parameters :
-    ------------
-    full_matrices : bool, optional
+    :type full_matrices: bool, optional
+    :param full_matrices:
         If True (default), u and v have the shapes (M, M) and (N, N),
         respectively.
         Otherwise, the shapes are (M, K) and (K, N), respectively,
         where K = min(M, N).

-    compute_uv : bool, optional
+    :type compute_uv: bool, optional
+    :param compute_uv:
         Whether or not to compute u and v in addition to s.
         True by default.

-    Returns :
-    -------
-    U, V and D matrices.
+    :returns: U, V and D matrices.
     """
     return SVD(full_matrices, compute_uv)(a)
...
...
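The full_matrices shapes the docstring describes can likewise be sketched with NumPy's CPU SVD (note that NumPy returns U, the singular values, and V transposed; the Theano docstring's "U, V and D" naming is its own convention):

```python
import numpy as np

a = np.random.RandomState(42).rand(4, 3)   # M=4, N=3, K = min(M, N) = 3

# full_matrices=True: left factor (M, M), right factor (N, N)
u, s, vt = np.linalg.svd(a, full_matrices=True)

# full_matrices=False: left factor (M, K), right factor (K, N)
uk, sk, vtk = np.linalg.svd(a, full_matrices=False)
```

With full_matrices=False the reduced factors still reconstruct `a` exactly as uk @ diag(sk) @ vtk, which is why the smaller shapes are usually preferred.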
theano/tensor/nnet/Conv3D.py (view file @ 48c63a85)
...
...
@@ -533,31 +533,33 @@ class Conv3D(theano.Op):
         return strutil.render_string(codeSource, locals())

-conv3D = Conv3D()
-"""
-3D "convolution" of multiple filters on a minibatch
-(does not flip the kernel, moves kernel with a user specified stride)
-
-:param V: Visible unit, input.
-    dimensions: (batch, row, column, time, in channel)
-:param W: Weights, filter.
-    dimensions: (out channel, row, column, time ,in channel)
-:param b: bias, shape == (W.shape[0],)
-:param d: strides when moving the filter over the input(dx, dy, dt)
-
-:note: The order of dimensions does not correspond to the one in `conv2d`.
-    This is for optimization.
-:note: The GPU implementation is very slow. You should use
-    :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>` for a GPU
-    graph instead.
-:see: Someone made a script that shows how to swap the axes between
-    both 3d convolution implementations in Theano. See the last
-    `attachment <https://groups.google.com/d/msg/theano-users/1S9_bZgHxVw/0cQR9a4riFUJ>`_.
-"""
+_conv3D = Conv3D()
+
+def conv3D(V, W, b, d):
+    """
+    3D "convolution" of multiple filters on a minibatch
+    (does not flip the kernel, moves kernel with a user specified stride)
+
+    :param V: Visible unit, input.
+        dimensions: (batch, row, column, time, in channel)
+    :param W: Weights, filter.
+        dimensions: (out channel, row, column, time ,in channel)
+    :param b: bias, shape == (W.shape[0],)
+    :param d: strides when moving the filter over the input(dx, dy, dt)
+
+    :note: The order of dimensions does not correspond to the one in `conv2d`.
+        This is for optimization.
+    :note: The GPU implementation is very slow. You should use
+        :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>` for a
+        GPU graph instead.
+    :see: Someone made a script that shows how to swap the axes
+        between both 3d convolution implementations in Theano. See
+        the last `attachment
+        <https://groups.google.com/d/msg/theano-users/1S9_bZgHxVw/0cQR9a4riFUJ>`_.
+    """
+    return _conv3D(V, W, b, d)

 def computeH(V, W, b, d):
     assert len(W.shape) == 5
...
...
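The unusual dimension ordering documented for conv3D implies a simple valid-mode output shape; a small helper sketching that arithmetic (the helper itself is illustrative, assuming no kernel flipping and the strides d = (dx, dy, dt) as the docstring states):

```python
def conv3d_output_shape(v_shape, w_shape, d):
    # v_shape: (batch, row, column, time, in channel), as conv3D's V
    # w_shape: (out channel, row, column, time, in channel), as conv3D's W
    # d: strides (dx, dy, dt) when sliding the filter over the input
    batch, r, c, t, in_ch = v_shape
    out_ch, wr, wc, wt, _ = w_shape
    dx, dy, dt = d
    # valid-mode sliding: one output position per full filter placement
    return (batch,
            (r - wr) // dx + 1,
            (c - wc) // dy + 1,
            (t - wt) // dt + 1,
            out_ch)
```

For example, a (2, 8, 9, 10, 3) input with a (6, 3, 3, 3, 3) filter and strides (1, 2, 3) yields a (2, 6, 4, 3, 6) output, keeping batch first and out channel last, unlike conv2d's layout.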