testgroup / pytensor
Commits · 3f8c79ed

Commit 3f8c79ed authored Sep 13, 2011 by David Warde-Farley

Merge remote-tracking branch 'nouiz/doc_update'

Parents: 982d2659, f0e99c61
Showing 5 changed files with 111 additions and 73 deletions (+111 −73)
- doc/index.txt (+4 −1)
- doc/internal/dev_start_guide.txt (+17 −2)
- doc/library/sparse/index.txt (+2 −5)
- doc/library/tensor/basic.txt (+3 −3)
- theano/sparse/sandbox/sp.py (+85 −62)
doc/index.txt
...
@@ -100,11 +100,13 @@ Community
 (theano-users, Aug 2, 2010)
-* Register to `theano-announce`_ if you want to be kept informed on important change on theano(low volume).
 * Register and post to `theano-users`_ if you want to talk to all Theano users.
 * Register and post to `theano-dev`_ if you want to talk to the developers.
+* Register to `theano-announce`_ if you want to be kept informed on important change on theano(low volume).
+* Register to `theano-github`_ if you want to receive an email for all change to the github repository.
 * Register to `theano-buildbot`_ if you want to receive our daily buildbot email.
...
@@ -136,6 +138,7 @@ Community
 .. _theano-dev: http://groups.google.com/group/theano-dev
 .. _theano-users: http://groups.google.com/group/theano-users
 .. _theano-announce: http://groups.google.com/group/theano-announce
+.. _theano-github: http://groups.google.com/group/theano-github
 .. _theano-buildbot: http://groups.google.com/group/theano-buildbot
 .. _tickets: http://pylearn.org/theano/trac/query?status=accepted&status=assigned&status=new&status=reopened&group=milestone&max=200&col=id&col=summary&col=status&col=owner&col=type&col=priority&col=component&col=time&report=9&order=priority
...
doc/internal/dev_start_guide.txt
...
@@ -37,6 +37,12 @@ except that we don't use the numpy docstring standard.
 We do not plan to change all existing code to follow this coding
 style, but as we modify the code, we update it accordingly.
 
+Mailing list
+------------
+
+See the theano main page for the theano-dev, theano-buildbot and
+theano-github mailing list. They are usefull to Theano contributor's.
+
 Typical development workflow
 ----------------------------
...
@@ -145,7 +151,7 @@ circumvent circular dependencies might make it so you have to import files in
 a certain order, which is best handled by the package's own ``__init__.py``.
 
 More instructions
-=================
+-----------------
 
 Once you have completed these steps, you should run the tests like this:
...
@@ -181,7 +187,7 @@ Keep in mind that this branch should be "read-only": if you want to patch
 Theano, do it in another branch like described above.
 
 Nightly test
-============
+------------
 
 Each night we execute all the unit tests automatically. The result is sent by
 email to the `theano-buildbot`_ mailing list.
...
@@ -190,3 +196,12 @@ email to the `theano-buildbot`_ mailing list.
 For more detail, see :ref:`see <metadocumentation_nightly_build>`.
+
+To run all the test with the same configuration as the buildbot, run this script:
+
+.. code-block:: bash
+
+    theano/misc/do_nightly_build
+
+This function accept argument that it forward to nosetests. So you can
+run only some tests or enable pdb by giving the equivalent nosetests
+parameters.
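The argument forwarding the added doc describes can be sketched in plain Python. This is a hypothetical illustration, not the real script (which lives at theano/misc/do_nightly_build): any extra command-line arguments are simply appended to the nosetests invocation.

```python
def build_nosetests_command(extra_args):
    """Sketch of the forwarding behaviour: whatever arguments the wrapper
    receives are passed straight through to nosetests, so callers can
    select specific tests or add flags such as --pdb."""
    return ["nosetests", "theano"] + list(extra_args)
```

In the real script the resulting command would then be executed; here only the argument handling is shown.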
doc/library/sparse/index.txt
...
@@ -55,17 +55,14 @@ Some documentation for sparse has been written
 ===================================================================
-:mod:`sparse.sandbox` -- Sparse Op
+:mod:`sparse` -- Sparse Op
 ===================================================================
 
-.. module:: sparse.sandbox
+.. module:: sparse
    :platform: Unix, Windows
    :synopsis: Sparse Op
 .. moduleauthor:: LISA
 
+API
+===
+
 .. automodule:: theano.sparse.basic
    :members:
doc/library/tensor/basic.txt
...
@@ -924,15 +924,15 @@ The bitwise operators possess this interface:
 .. function:: bitwise_and(a, b)
 
-   Alias for and_. bitwise_and is the numpy name.
+   Alias for `and_`. bitwise_and is the numpy name.
 
 .. function:: bitwise_or(a, b)
 
-   Alias for or_. bitwise_or is the numpy name.
+   Alias for `or_`. bitwise_or is the numpy name.
 
 .. function:: bitwise_xor(a, b)
 
-   Alias for xor_. bitwise_xor is the numpy name.
+   Alias for `xor_`. bitwise_xor is the numpy name.
 
 .. function:: bitwise_not(a, b)
...
theano/sparse/sandbox/sp.py
...
@@ -14,7 +14,6 @@ import theano
 import theano.sparse
 from theano import sparse, gof, Op, tensor
 from theano.gof.python25 import all, any
-from theano.printing import Print
 
 def register_specialize(lopt, *tags, **kwargs):
     theano.compile.optdb['specialize'].register((kwargs and kwargs.pop('name')) or lopt.__name__,
                                                 lopt, 'fast_run', *tags)
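The naming idiom in the registration helper above can be exercised in plain Python, with no Theano needed. `fake_opt` is a hypothetical stand-in for a local optimizer: an explicit `name` keyword wins, otherwise the function's `__name__` is used.

```python
def pick_registration_name(lopt, **kwargs):
    # Same expression as in register_specialize: a truthy kwargs dict
    # yields the popped 'name'; an empty dict falls through to __name__.
    return (kwargs and kwargs.pop('name')) or lopt.__name__

def fake_opt():
    pass
```

For example, `pick_registration_name(fake_opt, name='custom')` returns `'custom'`, while `pick_registration_name(fake_opt)` returns `'fake_opt'`.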
...
@@ -282,6 +281,7 @@ def clean(x):
 class ConvolutionIndices(Op):
     """Build indices for a sparse CSC matrix that could implement A (convolve) B.
 
     This generates a sparse matrix M, which generates a stack of image patches
     when computing the dot product of M with image patch. Convolution is then
     simply the dot product of (img x M) and the kernels.
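The "patches via a sparse matrix" idea in that docstring can be illustrated with a toy 1D example in pure Python (sizes and values here are made up): a 0/1 selection matrix M maps a flattened image to a stack of patches, after which convolution is just a dot of each patch with the kernel.

```python
def matvec(M, v):
    # dense matrix-vector product, standing in for the sparse dot
    return [sum(m * x for m, x in zip(row, v)) for row in M]

img = [1, 2, 3, 4]                    # length-4 "image" in raster order
M = [[1, 0, 0, 0], [0, 1, 0, 0],      # patch 0 selects pixels (0, 1)
     [0, 1, 0, 0], [0, 0, 1, 0],      # patch 1 selects pixels (1, 2)
     [0, 0, 1, 0], [0, 0, 0, 1]]      # patch 2 selects pixels (2, 3)
patches = matvec(M, img)              # stacked patches: [1, 2, 2, 3, 3, 4]
kernel = [10, 1]
outputs = [patches[2 * i] * kernel[0] + patches[2 * i + 1] * kernel[1]
           for i in range(3)]         # dot of each patch with the kernel
```

In the real Op, M is stored in CSC form and the patch stack is produced by a structured dot rather than this dense loop.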
...
@@ -300,24 +300,25 @@ class ConvolutionIndices(Op):
     def evaluate(inshp, kshp, (dx, dy)=(1, 1), nkern=1, mode='valid', ws=True):
         """Build a sparse matrix which can be used for performing...
         * convolution: in this case, the dot product of this matrix with the input
           images will generate a stack of images patches. Convolution is then a
           tensordot operation of the filters and the patch stack.
         * sparse local connections: in this case, the sparse matrix allows us to operate
           the weight matrix as if it were fully-connected. The structured-dot with the
           input image gives the output for the following layer.
 
-        @param ker_shape: shape of kernel to apply (smaller than image)
-        @param img_shape: shape of input images
-        @param mode: 'valid' generates output only when kernel and image overlap
-                     'full' full convolution obtained by zero-padding the input
-        @param ws: True if weight sharing, false otherwise
-        @param (dx,dy): offset parameter. In the case of no weight sharing, gives the
-                        pixel offset between two receptive fields. With weight sharing gives the
-                        offset between the top-left pixels of the generated patches
-        @rtype: tuple(indices, indptr, logical_shape, sp_type, out_img_shp)
-        @returns: the structure of a sparse matrix, and the logical dimensions of the image
-                  which will be the result of filtering.
+        :param ker_shape: shape of kernel to apply (smaller than image)
+        :param img_shape: shape of input images
+        :param mode: 'valid' generates output only when kernel and image overlap
+                     fully. Convolution obtained by zero-padding the input
+        :param ws: True if weight sharing, false otherwise
+        :param (dx,dy): offset parameter. In the case of no weight sharing,
+                        gives the pixel offset between two receptive fields.
+                        With weight sharing gives the offset between the
+                        top-left pixels of the generated patches
+        :rtype: tuple(indices, indptr, logical_shape, sp_type, out_img_shp)
+        :returns: the structure of a sparse matrix, and the logical dimensions
+                  of the image which will be the result of filtering.
         """
         N = numpy
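The output-size arithmetic implied by the `mode` parameter above can be sketched in pure Python for unit steps (dx, dy) = (1, 1). The function name is illustrative, not Theano API: 'valid' keeps only positions where kernel and image overlap fully, 'full' zero-pads the input so every partial overlap contributes.

```python
def conv_output_shape(inshp, kshp, mode='valid'):
    # per-dimension output size for a correlation/convolution with step 1
    if mode == 'valid':
        return tuple(i - k + 1 for i, k in zip(inshp, kshp))
    elif mode == 'full':
        return tuple(i + k - 1 for i, k in zip(inshp, kshp))
    raise ValueError("mode must be 'valid' or 'full'")
```

For a 5x5 image and 3x3 kernel this gives 3x3 in 'valid' mode and 7x7 in 'full' mode.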
...
@@ -475,36 +476,46 @@ convolution_indices = ConvolutionIndices()
 def applySparseFilter(kerns, kshp, nkern, images, imgshp, step=(1, 1), bias=None, mode='valid'):
     """
+    === Input / Output conventions ===
+
     "images" is assumed to be a matrix of shape batch_size x img_size, where the second
     dimension represents each image in raster order
 
     Output feature map will have shape:
+
+    .. code-block:: python
+
        batch_size x number of kernels * output_size
 
-    IMPORTANT: note that this means that each feature map is contiguous in memory.
-    The memory layout will therefore be:
-    [ <feature_map_0> <feature_map_1> ... <feature_map_n>],
-    where <feature_map> represents a "feature map" in raster order
+    .. note::
+
+        IMPORTANT: note that this means that each feature map is contiguous in memory.
+        The memory layout will therefore be:
+        [ <feature_map_0> <feature_map_1> ... <feature_map_n>],
+        where <feature_map> represents a "feature map" in raster order
 
     Note that the concept of feature map doesn't really apply to sparse filters without
     weight sharing. Basically, nkern=1 will generate one output img/feature map,
     nkern=2 a second feature map, etc.
 
     kerns is a 1D tensor, and assume to be of shape:
+
+    .. code-block:: python
+
        nkern * N.prod(outshp) x N.prod(kshp)
 
     Each filter is applied seperately to consecutive output pixels.
 
-    @param kerns: nkern*outsize*ksize vector containing kernels
-    @param kshp: tuple containing actual dimensions of kernel (not symbolic)
-    @param nkern: number of kernels to apply at each pixel in the input image.
-                  nkern=1 will apply a single unique filter for each input pixel.
-    @param images: bsize x imgsize matrix containing images on which to apply filters
-    @param imgshp: tuple containing actual image dimensions (not symbolic)
-    @param step: determines number of pixels between adjacent receptive fields
-                 (tuple containing dx,dy values)
-    @param mode: 'full', 'valid' see CSM.evaluate function for details
-    @output out1: symbolic result
-    @output out2: logical shape of the output img (nkern,height,width)
-                  (after dot product, not of the sparse matrix!)
+    :param kerns: nkern*outsize*ksize vector containing kernels
+    :param kshp: tuple containing actual dimensions of kernel (not symbolic)
+    :param nkern: number of kernels to apply at each pixel in the input image.
+                  nkern=1 will apply a single unique filter for each input pixel.
+    :param images: bsize x imgsize matrix containing images on which to apply filters
+    :param imgshp: tuple containing actual image dimensions (not symbolic)
+    :param step: determines number of pixels between adjacent receptive fields
+                 (tuple containing dx,dy values)
+    :param mode: 'full', 'valid' see CSM.evaluate function for details
+    :return: out1, symbolic result
+    :return: out2, logical shape of the output img (nkern,height,width)
+             (after dot product, not of the sparse matrix!)
     """
     # inshp contains either 2 entries (height,width) or 3 (nfeatures,h,w)
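The memory-layout note in that docstring can be demonstrated with a pure-Python sketch (the sizes below are made up for illustration): with the flattened output, each feature map occupies one contiguous slice of every output row.

```python
nkern, outsize = 3, 4
# three toy feature maps in raster order; values encode (kernel, pixel)
feature_maps = [[100 * k + p for p in range(outsize)] for k in range(nkern)]
# flattened row layout: [ <feature_map_0> <feature_map_1> <feature_map_2> ]
row = [v for fm in feature_maps for v in fm]
# feature map k is recoverable as a contiguous slice of the row
fm1 = row[1 * outsize:2 * outsize]
```

This mirrors how a batchsize x nkern*outsize matrix stores its maps back to back.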
...
@@ -529,44 +540,56 @@ def applySparseFilter(kerns, kshp, nkern, images, imgshp, step=(1,1), bias=None,
 def convolve(kerns, kshp, nkern, images, imgshp, step=(1, 1), bias=None, \
              mode='valid', flatten=True):
     """Convolution implementation by sparse matrix multiplication.
 
-    @note: For best speed, put the matrix which you expect to be smaller as the 'kernel'
-           argument
+    :note: For best speed, put the matrix which you expect to be
+           smaller as the 'kernel' argument
+
+    === Input / Output conventions ===
 
     "images" is assumed to be a matrix of shape batch_size x img_size, where the second
     dimension represents each image in raster order
 
     If flatten is "False", the output feature map will have shape:
+
+    .. code-block:: python
+
        batch_size x number of kernels x output_size
 
     If flatten is "True", the output feature map will have shape:
+
+    .. code-block:: python
+
        batch_size x number of kernels * output_size
 
-    IMPORTANT: note that this means that each feature map (image generate by each
-    kernel) is contiguous in memory. The memory layout will therefore be:
-    [ <feature_map_0> <feature_map_1> ... <feature_map_n>],
-    where <feature_map> represents a "feature map" in raster order
+    .. note::
+
+        IMPORTANT: note that this means that each feature map (image
+        generate by each kernel) is contiguous in memory. The memory
+        layout will therefore be: [ <feature_map_0> <feature_map_1>
+        ... <feature_map_n>], where <feature_map> represents a
+        "feature map" in raster order
 
     kerns is a 2D tensor of shape nkern x N.prod(kshp)
 
-    @param kerns: 2D tensor containing kernels which are applied at every pixel
-    @param kshp: tuple containing actual dimensions of kernel (not symbolic)
-    @param nkern: number of kernels/filters to apply.
-                  nkern=1 will apply one common filter to all input pixels
-    @param images: tensor containing images on which to apply convolution
-    @param imgshp: tuple containing image dimensions
-    @param step: determines number of pixels between adjacent receptive fields
-                 (tuple containing dx,dy values)
-    @param mode: 'full', 'valid' see CSM.evaluate function for details
-    @param sumdims: dimensions over which to sum for the tensordot operation. By default
-                    ((2,),(1,)) assumes kerns is a nkern x kernsize matrix and images is a batchsize x
-                    imgsize matrix containing flattened images in raster order
-    @param flatten: flatten the last 2 dimensions of the output. By default, instead of
-                    generating a batchsize x outsize x nkern tensor, will flatten to
-                    batchsize x outsize*nkern
-    @output out1: symbolic result
-    @output out2: logical shape of the output img (nkern,heigt,width)
-    @TODO: test for 1D and think of how to do n-d convolutions
+    :param kerns: 2D tensor containing kernels which are applied at every pixel
+    :param kshp: tuple containing actual dimensions of kernel (not symbolic)
+    :param nkern: number of kernels/filters to apply.
+                  nkern=1 will apply one common filter to all input pixels
+    :param images: tensor containing images on which to apply convolution
+    :param imgshp: tuple containing image dimensions
+    :param step: determines number of pixels between adjacent receptive fields
+                 (tuple containing dx,dy values)
+    :param mode: 'full', 'valid' see CSM.evaluate function for details
+    :param sumdims: dimensions over which to sum for the tensordot operation.
+                    By default ((2,),(1,)) assumes kerns is a nkern x kernsize
+                    matrix and images is a batchsize x imgsize matrix
+                    containing flattened images in raster order
+    :param flatten: flatten the last 2 dimensions of the output. By default,
+                    instead of generating a batchsize x outsize x nkern tensor,
+                    will flatten to batchsize x outsize*nkern
+    :return: out1, symbolic result
+    :return: out2, logical shape of the output img (nkern,heigt,width)
+    :TODO: test for 1D and think of how to do n-d convolutions
     """
     N = numpy
 
     # start by computing output dimensions, size, etc
...
@@ -619,13 +642,13 @@ def max_pool(images, imgshp, maxpoolshp):
     Max pooling downsamples by taking the max value in a given area, here defined by
     maxpoolshp. Outputs a 2D tensor of shape batch_size x output_size.
 
-    @param images: 2D tensor containing images on which to apply convolution.
-                   Assumed to be of shape batch_size x img_size
-    @param imgshp: tuple containing image dimensions
-    @param maxpoolshp: tuple containing shape of area to max pool over
-    @output out1: symbolic result (2D tensor)
-    @output out2: logical shape of the output
+    :param images: 2D tensor containing images on which to apply convolution.
+                   Assumed to be of shape batch_size x img_size
+    :param imgshp: tuple containing image dimensions
+    :param maxpoolshp: tuple containing shape of area to max pool over
+    :return: out1, symbolic result (2D tensor)
+    :return: out2, logical shape of the output
     """
     N = numpy
     poolsize = N.int64(N.prod(maxpoolshp))
...
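The pooling behaviour that docstring describes can be sketched as a pure-Python reference on a raster-order list. This is an illustrative stand-in, not the symbolic Op: it assumes non-overlapping windows whose shape divides the image shape evenly.

```python
def max_pool_ref(img, imgshp, maxpoolshp):
    """Max value over each maxpoolshp window of a flattened imgshp image."""
    (h, w), (ph, pw) = imgshp, maxpoolshp
    out = []
    for i in range(0, h, ph):          # walk windows top to bottom,
        for j in range(0, w, pw):      # left to right, in raster order
            out.append(max(img[(i + di) * w + (j + dj)]
                           for di in range(ph) for dj in range(pw)))
    return out
```

For a 4x4 image pooled with a 2x2 window this yields the four per-quadrant maxima, matching the batch_size x output_size convention above.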