Commit 18592e73
Authored Feb 01, 2012 by lamblin

Merge pull request #407 from nouiz/doc2

Doc2

Parents: 01ab8831, 19615299
Showing 3 changed files with 52 additions and 40 deletions
NEWS.txt                        +4  -3
doc/library/sparse/index.txt    +6  -3
theano/sparse/basic.py          +42 -34
NEWS.txt
@@ -120,12 +120,13 @@ New features:
     * Added a_tensor.transpose(axes). axes is optional. (James)
     * theano.tensor.transpose(a_tensor, kwargs). We were ignoring kwargs; now they are used as the axes. (James)
     * a_CudaNdarray_object[*] = int now works. (Frederic)
+    * tensor_variable.size (as in numpy) computes the product of the shape elements. (Olivier)
+    * sparse_variable.size (as in scipy) computes the number of stored values. (Olivier)
+    * sparse_variable[N, N] now works. (Li Yao, Frederic)
+    * sparse_variable[M:N, O:P] now works. (Li Yao, Frederic)

 New optimizations:
     * AdvancedSubtensor1 reuses preallocated memory if available (scan, c|py_nogc linker). (Frederic)
-    * tensor_variable.size (as numpy) computes the product of the shape elements. (Olivier)
-    * sparse_variable.size (as scipy) computes the number of stored values. (Olivier)
     * dot22, dot22scalar work with complex. (Frederic)
     * Generate Gemv/Gemm more often. (James)
     * Remove scan when all computations can be moved outside the loop. (Razvan)
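The two .size entries above follow numpy and scipy semantics. A minimal sketch of those semantics, using numpy and scipy directly, since those are the libraries whose behavior the changelog says the new attributes mirror:

# Sketch of the .size semantics described in the changelog above.
import numpy as np
import scipy.sparse

d = np.zeros((3, 4))
print(d.size)       # 12: the product of the shape elements

s = scipy.sparse.csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))
print(s.size)       # 2: the number of stored values, not 4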
doc/library/sparse/index.txt
@@ -53,9 +53,12 @@ grad?
 - When the operation has the form dot(csr_matrix, dense), the gradient of
   this operation can be performed inplace by UsmmCscDense. This leads to
   significant speed-ups.
-Subtensor selection (aka. square-bracket notation, aka indexing) is not
-implemented, but the CSR and CSC datastructures support effecient
-implementations.
+- Subtensor
+  - sparse_variable[N, N] returns a tensor scalar
+  - sparse_variable[M:N, O:P] returns a sparse matrix
+  - [M, N:O] and [M:N, O] are not supported, as we do not support
+    sparse vectors, and returning a sparse matrix would break the
+    numpy interface. Use [M:M+1, N:O] and [M:N, O:O+1] instead.

 There are no GPU implementations for sparse matrices implemented in Theano.
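The rules just documented translate into square-bracket usage like the sketch below. It assumes a Theano build containing this commit and that theano.sparse's csr_matrix helper creates a symbolic sparse variable, as in later Theano releases; treat it as illustrative rather than canonical:

# Illustrative sketch of the sparse indexing rules documented above
# (assumes theano.sparse is available, i.e. scipy is installed).
import theano
import theano.sparse as sparse

x = sparse.csr_matrix('x')   # symbolic sparse matrix (assumed helper)

s = x[1, 2]                  # two scalars: yields a tensor scalar
m = x[0:2, 1:3]              # two slices: yields a sparse matrix

# Mixed forms such as x[0, 1:3] are rejected: there is no sparse
# vector type, so keep the result 2-d with a length-one slice:
row = x[0:1, 1:3]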
theano/sparse/basic.py
@@ -644,52 +644,56 @@ class SparseFromDense(gof.op.Op):

csr_from_dense = SparseFromDense('csr')
csc_from_dense = SparseFromDense('csc')


# Indexing
class GetItem2d(gof.op.Op):
    """Implement a subtensor of a sparse variable that returns a
    sparse matrix.

    If you want to take only one element of a sparse matrix, see the
    class GetItemScalar, which returns a tensor scalar.

    :note: Subtensor selection always returns a matrix, so indexing
        with [a:b, c:d] is forced. If one index is a scalar, e.g.
        x[a:b, c] or x[a, b:c], an error is raised. Use
        x[a:b, c:c+1] or x[a:a+1, b:c] instead.

        The above indexing methods are not supported because the
        returned value would be a sparse matrix rather than a sparse
        vector, which would deviate from the numpy indexing rules.
        This decision was made largely to keep numpy and theano
        consistent. It is subject to change when sparse vectors are
        supported.
    """

    def __eq__(self, other):
        return (type(self) == type(other))

    def __hash__(self):
        return hash(type(self))

    # Fred: Too complicated for now. If you need it, look at
    # Subtensor.infer_shape.
    # def infer_shape(self, node, i0_shapes):
    #     return i0_shapes

    def make_node(self, x, index):
        x = as_sparse_variable(x)
        assert len(index) in [1, 2]
        input_op = [x]
        for ind in index:
            if isinstance(ind, slice):
                # in case the slice bounds are theano variables
                start = ind.start
                stop = ind.stop
                # in case the slice bounds are python ints
                if isinstance(start, int):
                    start = theano.tensor.constant(start)
                if isinstance(stop, int):
                    stop = theano.tensor.constant(stop)
            # in case of indexing using a python int
            # elif isinstance(ind, int):
            #     start = theano.tensor.constant(ind)
@@ -697,47 +701,50 @@ class GetItem2d(gof.op.Op):

            # elif ind.ndim == 0:
            #     start = ind
            #     stop = ind + 1
            else:
                raise NotImplementedError('Theano has no sparse vector. '
                                          'Use X[a:b, c:d], X[a:b, c:c+1] '
                                          'or X[a:b] instead.')
            input_op += [start, stop]
        if len(index) == 1:
            i = theano.gof.Constant(theano.gof.generic, None)
            input_op += [i, i]
        return gof.Apply(self, input_op, [x.type()])

    def perform(self, node, (x, start1, stop1, start2, stop2), (out, )):
        assert _is_sparse(x)
        out[0] = x[start1:stop1, start2:stop2]

    def __str__(self):
        return self.__class__.__name__

get_item_2d = GetItem2d()


class GetItemScalar(gof.op.Op):
    """Implement a subtensor of a sparse variable that takes two
    scalars as indices and returns a scalar.

    :see: GetItem2d to return more than one element.
    """

    def __eq__(self, other):
        return (type(self) == type(other))

    def __hash__(self):
        return hash(type(self))

    def infer_shape(self, node, i0_shapes):
        return [()]

    def make_node(self, x, index):
        x = as_sparse_variable(x)
        assert len(index) == 2
        input_op = [x]
        for ind in index:
            if isinstance(ind, slice):
@@ -747,7 +754,7 @@ class GetItemScalar(gof.op.Op):

            elif isinstance(ind, int):
                ind = theano.tensor.constant(ind)
                input_op += [ind]
            # in case of indexing using a theano variable
            elif ind.ndim == 0:
                input_op += [ind]
@@ -755,18 +762,19 @@ class GetItemScalar(gof.op.Op):

            else:
                raise NotImplementedError()
        return gof.Apply(self, input_op,
                         [tensor.scalar(dtype=x.dtype)])

    def perform(self, node, (x, ind1, ind2), (out, )):
        assert _is_sparse(x)
        out[0] = x[ind1, ind2]

    def __str__(self):
        return self.__class__.__name__

get_item_scalar = GetItemScalar()


# Linear Algebra
class Transpose(gof.op.Op):
    format_map = {'csr': 'csc',
                  'csc': 'csr'}
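To see how the two new ops fit together end to end, here is a hedged usage sketch that compiles functions around the new square-bracket support. The csr_matrix symbolic constructor and the test matrix are assumptions for illustration, not part of the commit:

# Usage sketch for the new indexing ops (illustrative only; assumes
# a Theano checkout containing this commit, plus numpy and scipy).
import numpy as np
import scipy.sparse
import theano
import theano.sparse as sparse

x = sparse.csr_matrix('x')                    # symbolic CSR matrix (assumed helper)

f_block = theano.function([x], x[0:2, 0:2])   # GetItem2d under the hood
f_elem = theano.function([x], x[1, 1])        # GetItemScalar under the hood

a = scipy.sparse.csr_matrix(np.eye(4))
print(f_block(a).todense())                   # top-left 2x2 identity block
print(f_elem(a))                              # 1.0

Note that perform in both ops simply defers to scipy's own indexing on the underlying sparse matrix, so the runtime semantics are exactly scipy's; the ops only add the symbolic graph plumbing.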