testgroup / pytensor · Commits

Commit b53ab5f5, authored Jun 17, 2014 by abergeron

Merge pull request #1918 from nouiz/doc

Doc typed list and better sparse doc.

Parents: 799e97dd 62f54aa9
Showing 5 changed files with 438 additions and 528 deletions.
doc/library/index.txt          +1   -0
doc/library/sparse/index.txt   +22  -24
doc/library/typed_list.txt     +40  -0
theano/sparse/basic.py         +314 -488
theano/typed_list/basic.py     +61  -16
doc/library/index.txt
...
...
@@ -22,6 +22,7 @@ Types and Ops that you can use to build and compile expression graphs.
     gof/index
     scan
     sandbox/index
+    typed_list

 There are also some top-level imports that you might find more convenient:
...
...
doc/library/sparse/index.txt
...
...
@@ -119,16 +119,18 @@ List of Implemented Operations
 ==============================

 - Moving from and to sparse
-    - :class:`DenseFromSparse <theano.sparse.basic.DenseFromSparse>` and ``dense_from_sparse``.
+    - :func:`dense_from_sparse <theano.sparse.basic.dense_from_sparse>`.
       Both grads are implemented. Structured by default.
-    - :class:`SparseFromDense <theano.sparse.basic.SparseFromDense>` and ``csr_from_dense``, ``csc_from_dense``.
+    - :func:`csr_from_dense <theano.sparse.basic.csr_from_dense>`,
+      :func:`csc_from_dense <theano.sparse.basic.csc_from_dense>`.
       The grad implemented is structured.
-    - Theano SparseVariable object have a method ``toarray()`` that is the same as ``dense_from_sparse``.
+    - Theano SparseVariable object have a method ``toarray()`` that is the same as
+      :func:`dense_from_sparse <theano.sparse.basic.dense_from_sparse>`.
 - Construction of Sparses and their Properties
     - :class:`CSM <theano.sparse.basic.CSM>` and ``CSC``, ``CSR`` to construct a matrix.
       The grad implemented is regular.
-    - :class:`CSMProperties <theano.sparse.basic.CSMProperties>` and ``csm_properties(x)``
+    - :func:`csm_properties <theano.sparse.basic.csm_properties>`.
       to get the properties of a sparse matrix.
       The grad implemented is regular.
     - csm_indices(x), csm_indptr(x), csm_data(x) and csm_shape(x) or x.shape.
...
...
@@ -136,22 +138,22 @@ List of Implemented Operations
       The grad implemented is regular.
     - :func:`sp_zeros_like <theano.sparse.basic.sp_zeros_like>`.
       The grad implemented is regular.
-    - :class:`SquareDiagonal <theano.sparse.basic.SquareDiagonal>` and ``square_diagonal``.
+    - :func:`square_diagonal <theano.sparse.basic.square_diagonal>`.
       The grad implemented is regular.
-    - :class:`ConstructSparseFromList <theano.sparse.basic.ConstructSparseFromList>` and ``construct_sparse_from_list``.
+    - :func:`construct_sparse_from_list <theano.sparse.basic.construct_sparse_from_list>`.
       The grad implemented is regular.
 - Cast
-    - :class:`Cast <theano.sparse.basic.Cast>` with ``bcast``, ``wcast``, ``icast``, ``lcast``,
+    - :func:`cast <theano.sparse.basic.cast>` with ``bcast``, ``wcast``, ``icast``, ``lcast``,
       ``fcast``, ``dcast``, ``ccast``, and ``zcast``.
       The grad implemented is regular.
 - Transpose
-    - :class:`Transpose <theano.sparse.basic.Transpose>` and ``transpose``.
+    - :func:`transpose <theano.sparse.basic.transpose>`.
       The grad implemented is regular.
 - Basic Arithmetic
-    - :class:`Neg <theano.sparse.basic.Neg>`.
+    - :func:`neg <theano.sparse.basic.neg>`.
       The grad implemented is regular.
     - :func:`eq <theano.sparse.basic.eq>`.
     - :func:`neq <theano.sparse.basic.neq>`.
...
...
@@ -201,15 +203,13 @@ List of Implemented Operations
     - ``sqrt``
 - Dot Product
-    - :class:`Dot <theano.sparse.basic.Dot>` and
-      :func:`dot <theano.sparse.basic.dot>`.
+    - :func:`dot <theano.sparse.basic.dot>`.
       - One of the inputs must be sparse, the other sparse or dense.
       - The grad implemented is regular.
       - No C code for perform and no C code for grad.
       - Returns a dense for perform and a dense for grad.
-    - :class:`StructuredDot <theano.sparse.basic.StructuredDot>`
-      and :func:`structured_dot <theano.sparse.basic.structured_dot>`.
+    - :func:`structured_dot <theano.sparse.basic.structured_dot>`.
       - The first input is sparse, the second can be sparse or dense.
       - The grad implemented is structured.
...
...
@@ -218,8 +218,7 @@ List of Implemented Operations
        dense one if one of the inputs is dense.
       - Returns a sparse grad for sparse inputs and dense grad for
        dense inputs.
-    - :class:`TrueDot <theano.sparse.basic.TrueDot>` and
-      :func:`true_dot <theano.sparse.basic.true_dot>`.
+    - :func:`true_dot <theano.sparse.basic.true_dot>`.
       - The first input is sparse, the second can be sparse or dense.
       - The grad implemented is regular.
...
...
@@ -229,19 +228,18 @@ List of Implemented Operations
        default a dense for dense inputs. The parameter
        ``grad_preserves_dense`` can be set to False to return a
        sparse grad for dense inputs.
-    - :class:`SamplingDot <theano.sparse.basic.SamplingDot>` and
-      ``sampling_dot``.
+    - :func:`sampling_dot <theano.sparse.basic.sampling_dot>`.
       - Both inputs must be dense.
       - The grad implemented is structured for `p`.
       - Sample of the dot and sample of the gradient.
       - C code for perform but not for grad.
       - Returns sparse for perform and grad.
-    - :class:`Usmm <theano.sparse.basic.Usmm>` and ``usmm``.
+    - :func:`usmm <theano.sparse.basic.usmm>`.
       - You *shouldn't* insert this op yourself!
       - There is an optimization that transform a
-        :class:`Dot <theano.sparse.basic.Dot>` to ``Usmm`` when possible.
+        :func:`dot <theano.sparse.basic.dot>` to ``Usmm`` when possible.
       - This op is the equivalent of gemm for sparse dot.
       - There is no grad implemented for this op.
...
...
@@ -256,13 +254,13 @@ List of Implemented Operations
      - Sparse variables don't support [M, N:O] and [M:N, O] as we don't
        support sparse vectors and returning a sparse matrix would break
        the numpy interface. Use [M:M+1, N:O] and [M:N, O:O+1] instead.
-    - :class:`Diag <theano.sparse.basic.Diag>` and ``diag``.
+    - :func:`diag <theano.sparse.basic.diag>`.
       The grad implemented is regular.
 - Concatenation
-    - :class:`HStack <theano.sparse.basic.HStack>` and ``hstack``.
+    - :func:`hstack <theano.sparse.basic.hstack>`.
       The grad implemented is regular.
-    - :class:`VStack <theano.sparse.basic.VStack>` and ``vstack``.
+    - :func:`vstack <theano.sparse.basic.vstack>`.
       The grad implemented is regular.
- Probability
...
...
@@ -276,8 +274,8 @@ List of Implemented Operations
 - Internal Representation
   `They all have a regular grad implemented.`
-    - :class:`EnsureSortedIndices <theano.sparse.basic.EnsureSortedIndices>` and ``ensure_sorted_indices``
-    - :class:`Remove0 <theano.sparse.basic.Remove0>` and ``remove0``
+    - :func:`ensure_sorted_indices <theano.sparse.basic.ensure_sorted_indices>`.
+    - :func:`remove0 <theano.sparse.basic.remove0>`.
     - :func:`clean <theano.sparse.basic.clean>` to resort indices and remove zeros
 - To help testing
...
...
doc/library/typed_list.txt
0 → 100644
.. _libdoc_typed_list:

===============================
:mod:`typed_list` -- Typed List
===============================

.. note::

    This is not in the released version 0.6.0, but will be in the next release (0.7 or 0.6.1).

This is a type that represents a list in Theano. All elements must have
the same Theano type. Here is an example::

    import theano.typed_list

    tl = theano.typed_list.TypedListType(theano.tensor.fvector)()
    v = theano.tensor.fvector()
    o = theano.typed_list.append(tl, v)
    f = theano.function([tl, v], o)
    print f([[1, 2, 3], [4, 5]], [2])
    #[array([ 1.,  2.,  3.], dtype=float32), array([ 4.,  5.], dtype=float32), array([ 2.], dtype=float32)]

A second example with Scan. Scan doesn't yet have direct support of
TypedList, so you can only use it as non_sequences (not in sequences or
as outputs)::

    import theano.typed_list

    a = theano.typed_list.TypedListType(theano.tensor.fvector)()
    l = theano.typed_list.length(a)
    s, _ = theano.scan(fn=lambda i, tl: tl[i].sum(),
                       non_sequences=[a],
                       sequences=[theano.tensor.arange(l, dtype='int64')])
    f = theano.function([a], s)
    f([[1, 2, 3], [4, 5]])
    #array([ 6.,  9.], dtype=float32)

.. automodule:: theano.typed_list.basic
    :members:
theano/sparse/basic.py
...
...
@@ -406,6 +406,7 @@ class SparseConstant(gof.Constant, _sparse_py_operators):
 SparseType.Variable = SparseVariable
 SparseType.Constant = SparseConstant

 # for more dtypes, call SparseType(format, dtype)
 def matrix(format, name=None, dtype=None):
     if dtype is None:
...
...
@@ -446,23 +447,7 @@ discrete_dtypes = int_dtypes + uint_dtypes

 # CONSTRUCTION
 class CSMProperties(gof.Op):
-    """Extract all of .data, .indices, .indptr and .shape.
-
-    For specific field, `csm_data`, `csm_indices`, `csm_indptr`
-    and `csm_shape` are provided. Also, `kmap` could be
-    set through to constructor to specified the parts
-    of the parameter `data` the op should return. Fancy indexing
-    with numpy.ndarray should be used for this purpose.
-
-    :param csm: Sparse matrix in CSR or CSC format.
-
-    :return: (data, indices, indptr, shape), the properties
-             of `csm`.
-
-    :note: The grad implemented is regular, i.e. not structured.
-           `infer_shape` method is not available for this op.
-    """
+    # See doc in instance of this Op or function after this class definition.

     # NOTE
     # We won't implement infer_shape for this op now. This will
     # ask that we implement an GetNNZ op, and this op will keep
...
@@ -537,11 +522,18 @@ class CSMProperties(gof.Op):

 # don't make this a function or it breaks some optimizations below
 csm_properties = CSMProperties()
-"""An CSMProperties object instance. It return the fields data,
-indices, indptr and shape of the sparse varible. Together they specify
-completly the the sparse variable when we know its format. Example::
+"""
+Extract all of .data, .indices, .indptr and .shape field.
+
+For specific field, `csm_data`, `csm_indices`, `csm_indptr`
+and `csm_shape` are provided.

     the_data, the_indices, the_indptr, the_shape = csm_properties(a_sparse_var)

+:param csm: Sparse matrix in CSR or CSC format.
+
+:return: (data, indices, indptr, shape), the properties of `csm`.
+
+:note: The grad implemented is regular, i.e. not structured.
+       `infer_shape` method is not available for this op.
 """
...
...
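For readers unfamiliar with the four fields this docstring describes, here is a small illustration with scipy.sparse (a reasonable stand-in, since Theano's sparse variables wrap scipy's CSR/CSC layout; the attribute names below are scipy's, not Theano's):

```python
import numpy as np
from scipy import sparse

# A small dense matrix and its CSR form.
dense = np.array([[1., 0., 2.],
                  [0., 0., 3.],
                  [4., 5., 0.]])
csr = sparse.csr_matrix(dense)

# The four fields that csm_properties(x) returns for a CSR matrix:
data, indices, indptr, shape = csr.data, csr.indices, csr.indptr, csr.shape
print(list(data))     # nonzero values, stored row by row
print(list(indices))  # column index of each stored value
print(list(indptr))   # row i's values live in data[indptr[i]:indptr[i+1]]
print(shape)
```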
@@ -574,35 +566,7 @@ def csm_shape(csm):

 class CSM(gof.Op):
-    """Construct a CSC or CSR matrix from the internal
-    representation.
-
-    The format for the sparse array can be specified
-    through the constructor. Also, `kmap` could be
-    set through to constructor to specified the parts
-    of the parameter `data` the op should use to construct
-    the sparse matrix. Fancy indexing with numpy.ndarray
-    should be used for this purpose.
-
-    :param data: One dimensional tensor representing
-                 the data of the sparse to construct.
-    :param indices: One dimensional tensor of integers
-                    representing the indices of the sparse
-                    matrix to construct.
-    :param indptr: One dimensional tensor of integers
-                   representing the indice pointer for
-                   the sparse matrix to construct.
-    :param shape: One dimensional tensor of integers
-                  representing the shape of the sparse
-                  matrix to construct.
-
-    :return: A sparse matrix having the properties
-             specified by the inputs.
-
-    :note: The grad method returns a dense vector, so it provides
-           a regular grad.
-    """
+    # See doc in instance of this Op or function after this class definition.

     kmap = None
     """Indexing to speficied what part of the data parameter
     should be use to construct the sparse matrix."""
...
...
@@ -725,7 +689,50 @@ class CSM(gof.Op):

 CSC = CSM('csc')
+"""Construct a CSC matrix from the internal
+representation.
+
+:param data: One dimensional tensor representing
+             the data of the sparse to construct.
+:param indices: One dimensional tensor of integers
+                representing the indices of the sparse
+                matrix to construct.
+:param indptr: One dimensional tensor of integers
+               representing the indice pointer for
+               the sparse matrix to construct.
+:param shape: One dimensional tensor of integers
+              representing the shape of the sparse
+              matrix to construct.
+
+:return: A sparse matrix having the properties
+         specified by the inputs.
+
+:note: The grad method returns a dense vector, so it provides
+       a regular grad.
+"""

 CSR = CSM('csr')
+"""Construct a CSR matrix from the internal
+representation.
+
+:param data: One dimensional tensor representing
+             the data of the sparse to construct.
+:param indices: One dimensional tensor of integers
+                representing the indices of the sparse
+                matrix to construct.
+:param indptr: One dimensional tensor of integers
+               representing the indice pointer for
+               the sparse matrix to construct.
+:param shape: One dimensional tensor of integers
+              representing the shape of the sparse
+              matrix to construct.
+
+:return: A sparse matrix having the properties
+         specified by the inputs.
+
+:note: The grad method returns a dense vector, so it provides
+       a regular grad.
+"""


 class CSMGrad(gof.op.Op):
...
...
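The construction direction these docstrings describe can be sketched with scipy, which accepts exactly this (data, indices, indptr) triple plus a shape (a hedged analogue of `CSR(data, indices, indptr, shape)`, not Theano's own API):

```python
import numpy as np
from scipy import sparse

# Internal representation of a 3x3 matrix with five nonzeros.
data = np.array([1., 2., 3., 4., 5.])
indices = np.array([0, 2, 2, 0, 1])  # column of each value (CSR)
indptr = np.array([0, 2, 3, 5])      # row boundaries into data/indices
shape = (3, 3)

# scipy analogue of CSR(data, indices, indptr, shape):
m = sparse.csr_matrix((data, indices, indptr), shape=shape)
print(m.toarray())
```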
@@ -803,16 +810,7 @@ csm_grad = CSMGrad

 class Cast(gof.op.Op):
-    """Cast sparse variable to the desired dtype.
-
-    :param x: Sparse matrix.
-
-    :return: Same as `x` but having `out_type` as dtype.
-
-    :note: The grad implemented is regular, i.e. not
-           structured.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __init__(self, out_type):
         self.out_type = out_type
...
@@ -857,6 +855,17 @@ zcast = Cast('complex128')

 def cast(variable, dtype):
+    """Cast sparse variable to the desired dtype.
+
+    :param variable: Sparse matrix.
+    :param dtype: the dtype wanted.
+
+    :return: Same as `x` but having `dtype` as dtype.
+
+    :note: The grad implemented is regular, i.e. not
+           structured.
+    """
     return Cast(dtype)(variable)

 #
...
...
@@ -865,19 +874,7 @@ def cast(variable, dtype):

 class DenseFromSparse(gof.op.Op):
-    """Convert a sparse matrix to a dense one.
-
-    :param x: A sparse matrix.
-
-    :return: A dense matrix, the same as `x`.
-
-    :note: The grad implementation can be controlled
-           through the constructor via the `structured`
-           parameter. `True` will provide a structured
-           grad while `False` will provide a regular
-           grad. By default, the grad is structured.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __init__(self, structured=True):
         self.sparse_grad = structured
...
@@ -933,25 +930,21 @@ class DenseFromSparse(gof.op.Op):
         return [shapes[0]]

 dense_from_sparse = DenseFromSparse()
+"""Convert a sparse matrix to a dense one.
+
+:param x: A sparse matrix.
+
+:return: A dense matrix, the same as `x`.
+
+:note: The grad implementation can be controlled
+       through the constructor via the `structured`
+       parameter. `True` will provide a structured
+       grad while `False` will provide a regular
+       grad. By default, the grad is structured.
+"""


 class SparseFromDense(gof.op.Op):
-    """Convert a dense matrix to a sparse matrix.
-
-    To convert in CSR format, use `csr_from_dense`
-    and to convert in CSC format, use `csc_from_dense`.
-
-    :param x: A dense matrix.
-
-    :return: The same as `x` in a sparse matrix
-             format.
-
-    :note: The grad implementation is regular, i.e.
-           not structured.
-
-    :note: The output sparse format can also be controlled
-           via the `format` parameter in the constructor.
-    """

     def __init__(self, format):
         self.format = format
...
...
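The two conversion directions documented here have direct scipy equivalents, which make the behavior easy to check (scipy calls, not Theano's symbolic ops):

```python
import numpy as np
from scipy import sparse

dense = np.array([[0., 7.],
                  [8., 0.]])

# scipy analogues of csr_from_dense / csc_from_dense:
csr = sparse.csr_matrix(dense)
csc = sparse.csc_matrix(dense)
print(csr.format, csc.format)

# ...and of dense_from_sparse / the toarray() method mentioned above:
back = csr.toarray()
print(back)
```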
@@ -997,38 +990,31 @@ class SparseFromDense(gof.op.Op):
         return [shapes[0]]

 csr_from_dense = SparseFromDense('csr')
+"""Convert a dense matrix to a sparse csr matrix.
+
+:param x: A dense matrix.
+
+:return: The same as `x` in a sparse matrix format.
+
+:note: The grad implementation is regular, i.e.
+       not structured.
+"""

 csc_from_dense = SparseFromDense('csc')
+"""Convert a dense matrix to a sparse csc matrix.
+
+:param x: A dense matrix.
+
+:return: The same as `x` in a sparse matrix format.
+
+:note: The grad implementation is regular, i.e.
+       not structured.
+"""


 # Indexing
 class GetItem2d(gof.op.Op):
-    """Implement a subtensor of sparse variable and that return a
-    sparse matrix.
-
-    If you want to take only one element of a sparse matrix see
-    `GetItemScalar` that return a tensor scalar.
-
-    .. note::
-
-        Subtensor selection always returns a matrix, so indexing
-        with [a:b, c:d] is forced. If one index is a scalar. For
-        instance, x[a:b, c] and x[a, b:c], generate an error. Use
-        instead x[a:b, c:c+1] and x[a:a+1, b:c].
-
-        The above indexing methods are not supported because the return value
-        would be a sparse matrix rather than a sparse vector, which is a
-        deviation from numpy indexing rule. This decision is made largely
-        for keeping the consistency between numpy and theano. Subjected
-        to modification when sparse vector is supported.
-
-    :param x: Sparse matrix.
-    :param index: Tuple of slice object.
-
-    :return: The slice corresponding in `x`.
-
-    :note: The grad is not implemented for this op.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
@@ -1110,23 +1096,36 @@ class GetItem2d(gof.op.Op):
         return self.__class__.__name__

 get_item_2d = GetItem2d()
+"""Implement a subtensor of sparse variable and that return a
+sparse matrix.
+
+If you want to take only one element of a sparse matrix see
+`GetItemScalar` that return a tensor scalar.
+
+.. note::
+
+    Subtensor selection always returns a matrix, so indexing
+    with [a:b, c:d] is forced. If one index is a scalar. For
+    instance, x[a:b, c] and x[a, b:c], generate an error. Use
+    instead x[a:b, c:c+1] and x[a:a+1, b:c].
+
+    The above indexing methods are not supported because the return value
+    would be a sparse matrix rather than a sparse vector, which is a
+    deviation from numpy indexing rule. This decision is made largely
+    to preserve consistency between numpy and theano. This may be revised
+    when sparse vectors are supported.
+
+:param x: Sparse matrix.
+:param index: Tuple of slice object.
+
+:return: The slice corresponding in `x`.
+
+:note: The grad is not implemented for this op.
+"""


 class GetItemScalar(gof.op.Op):
-    """Implement a subtensor of a sparse variable that take
-    two scalar as index and return a scalar.
-
-    If you want to take a slice of a sparse matrix see
-    `GetItem2d` that return a sparse matrix.
-
-    :param x: Sparse matrix.
-    :param index: Tuple of scalar..
-
-    :return: The item corresponding in `x`.
-
-    :note: The grad is not implemented for this op.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
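The slice-only indexing rule in this docstring can be illustrated with scipy (hedged: scipy itself also accepts the mixed scalar/slice forms that Theano's GetItem2d rejects; only the recommended slice/slice style is shown):

```python
import numpy as np
from scipy import sparse

x = sparse.csr_matrix(np.arange(9.).reshape(3, 3))

# Slice/slice indexing keeps the result a matrix...
sub = x[0:2, 1:3]
print(sub.shape)

# ...so a single element is taken as a 1x1 slice, x[0:1, 1:2],
# rather than x[0, 1], when a matrix-shaped result is wanted.
one = x[0:1, 1:2]
print(one.toarray())
```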
@@ -1169,22 +1168,24 @@ class GetItemScalar(gof.op.Op):
         return self.__class__.__name__

 get_item_scalar = GetItemScalar()
+"""Implement a subtensor of a sparse variable that take
+two scalars as index and return a scalar.
+
+If you want to take a slice of a sparse matrix see
+`GetItem2d` that returns a sparse matrix.
+
+:param x: Sparse matrix.
+:param index: Tuple of scalars.
+
+:return: The item corresponding in `x`.
+
+:note: The grad is not implemented for this op.
+"""


 # Linear Algebra
 class Transpose(gof.op.Op):
-    """Return the transpose of the sparse matrix.
-
-    :param x: Sparse matrix.
-
-    :return: `x` transposed.
-
-    :note: The returned matrix will not be in the
-           same format. `csc` matrix will be changed
-           in `csr` matrix and `csr` matrix in `csc`
-           matrix.
-    :note: The grad is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.

     view_map = {0: [0]}

     format_map = {'csr': 'csc',
...
...
@@ -1219,18 +1220,22 @@ class Transpose(gof.op.Op):
     def infer_shape(self, node, shapes):
         return [shapes[0][::-1]]

 transpose = Transpose()
+"""Return the transpose of the sparse matrix.
+
+:param x: Sparse matrix.
+
+:return: `x` transposed.
+
+:note: The returned matrix will not be in the
+       same format. `csc` matrix will be changed
+       in `csr` matrix and `csr` matrix in `csc`
+       matrix.
+:note: The grad is regular, i.e. not structured.
+"""


 class Neg(gof.op.Op):
-    """Return the negation of the sparse matrix.
-
-    :param x: Sparse matrix.
-
-    :return: -`x`.
-
-    :note: The grad is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
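The format-swap note above matches what scipy's sparse transpose does (scipy shown as an analogue; the compressed axis flips, so csr becomes csc):

```python
import numpy as np
from scipy import sparse

m = sparse.csr_matrix(np.array([[1., 0.],
                                [2., 3.]]))
t = m.transpose()

# Transposing swaps the compressed axis, so the format flips csr <-> csc.
print(m.format, t.format)
print(t.toarray())
```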
@@ -1256,6 +1261,14 @@ class Neg(gof.op.Op):
     def infer_shape(self, node, shapes):
         return [shapes[0]]

 neg = Neg()
+"""Return the negation of the sparse matrix.
+
+:param x: Sparse matrix.
+
+:return: -`x`.
+
+:note: The grad is regular, i.e. not structured.
+"""


 class ColScaleCSC(gof.op.Op):
...
...
@@ -1399,26 +1412,7 @@ def row_scale(x, s):

 class SpSum(gof.op.Op):
-    """Calculate the sum of a sparse matrix along a specify
-    axis.
-
-    It operates a reduction along the axis specified. When
-    `axis` is `None`, it is apply along all axis.
-
-    :param x: Sparse matrix.
-    :param axis: Axis along the sum is apply. Integers or `None`.
-    :param sparse_grad: `True` to have a structured grad. Boolean.
-
-    :return: The sum of `x` in a dense format.
-
-    :note: The grad implementation is controlled with the `sparse_grad`
-           parameter. `True` will provide a structured grad and `False`
-           will provide a regular grad. For both choice, the grad
-           return a sparse matrix having the same format as `x`.
-    :note: This op does not return a sparse matrix, but a dense tensor
-           matrix.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __init__(self, axis=None, sparse_grad=True):
         super(SpSum, self).__init__()
         self.axis = axis
...
...
@@ -1504,21 +1498,31 @@ class SpSum(gof.op.Op):

 def sp_sum(x, axis=None, sparse_grad=False):
+    """Calculate the sum of a sparse matrix along a specify
+    axis.
+
+    It operates a reduction along the axis specified. When
+    `axis` is `None`, it is apply along all axes.
+
+    :param x: Sparse matrix.
+    :param axis: Axis along which the sum is applied. Integers or `None`.
+    :param sparse_grad: `True` to have a structured grad. Boolean.
+
+    :return: The sum of `x` in a dense format.
+
+    :note: The grad implementation is controlled with the `sparse_grad`
+           parameter. `True` will provide a structured grad and `False`
+           will provide a regular grad. For both choices, the grad
+           returns a sparse matrix having the same format as `x`.
+    :note: This op does not return a sparse matrix, but a dense tensor
+           matrix.
+    """
     return SpSum(axis, sparse_grad)(x)


 class Diag(gof.op.Op):
-    """Extract the diagonal of a square sparse matrix as a dense
-    vector.
-
-    :param x: A square sparse matrix in csc format.
-
-    :return: A dense vector representing the diagonal elements.
-
-    :note: The grad implemented is regular, i.e. not structured, since
-           the output is a dense vector.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
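The reduction behavior the sp_sum docstring describes can be sketched numerically with scipy (an analogue only; note how the result is dense, matching the docstring's last note):

```python
import numpy as np
from scipy import sparse

x = sparse.csr_matrix(np.array([[1., 0., 2.],
                                [0., 3., 0.]]))

# axis=None sums every element; axis=0 reduces down the columns.
# Either way the result is dense, not sparse.
total = x.sum()
cols = np.asarray(x.sum(axis=0)).ravel()
print(total)
print(cols)
```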
@@ -1546,19 +1550,20 @@ class Diag(gof.op.Op):
     def __str__(self):
         return self.__class__.__name__

 diag = Diag()
+"""Extract the diagonal of a square sparse matrix as a dense vector.
+
+:param x: A square sparse matrix in csc format.
+
+:return: A dense vector representing the diagonal elements.
+
+:note: The grad implemented is regular, i.e. not structured, since
+       the output is a dense vector.
+"""


 class SquareDiagonal(gof.op.Op):
-    """Return a square sparse (csc) matrix whose diagonal
-    is given by the dense vector argument.
-
-    :param x: Dense vector for the diagonal.
-
-    :return: A sparse matrix having `x` as diagonal.
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __eq__(self, other):
         return type(self) == type(other)
...
...
@@ -1593,23 +1598,19 @@ class SquareDiagonal(gof.op.Op):
     def __str__(self):
         return self.__class__.__name__

 square_diagonal = SquareDiagonal()
+"""Return a square sparse (csc) matrix whose diagonal
+is given by the dense vector argument.
+
+:param x: Dense vector for the diagonal.
+
+:return: A sparse matrix having `x` as diagonal.
+
+:note: The grad implemented is regular, i.e. not structured.
+"""


 class EnsureSortedIndices(gof.op.Op):
-    """Resort indices of a sparse matrix.
-
-    CSR column indices are not necessarily sorted. Likewise
-    for CSC row indices. Use `ensure_sorted_indices` when sorted
-    indices are required (e.g. when passing data to other
-    libraries).
-
-    :param x: A sparse matrix.
-
-    :return: The same as `x` with indices sorted.
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __init__(self, inplace):
         self.inplace = inplace
         if self.inplace:
...
@@ -1644,6 +1645,19 @@ class EnsureSortedIndices(gof.op.Op):
         else:
             return self.__class__.__name__ + "{no_inplace}"

 ensure_sorted_indices = EnsureSortedIndices(inplace=False)
+"""Resort indices of a sparse matrix.
+
+CSR column indices are not necessarily sorted. Likewise
+for CSC row indices. Use `ensure_sorted_indices` when sorted
+indices are required (e.g. when passing data to other
+libraries).
+
+:param x: A sparse matrix.
+
+:return: The same as `x` with indices sorted.
+
+:note: The grad implemented is regular, i.e. not structured.
+"""


 def clean(x):
...
...
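What "unsorted indices" means here is easy to demonstrate with scipy's `sort_indices`, a reasonable analogue of ensure_sorted_indices (scipy call, not Theano's op):

```python
import numpy as np
from scipy import sparse

# Build a CSR matrix whose column indices inside row 0 are out of order.
data = np.array([2., 1.])
indices = np.array([2, 0])        # unsorted column indices
indptr = np.array([0, 2])
m = sparse.csr_matrix((data, indices, indptr), shape=(1, 3))

m.sort_indices()                  # sorts indices and permutes data to match
print(list(m.indices), list(m.data))
```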
@@ -1666,16 +1680,8 @@ def clean(x):

 class AddSS(gof.op.Op):
-    """Add tw sparse matrix.
-
-    :param x: A sparse matrix.
-    :param y: A sparse matrix
-
-    :return: `x`+`y`
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    #add(sparse, sparse).
+    #see the doc of add() for more detail.

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
@@ -1715,19 +1721,7 @@ add_s_s = AddSS()

 class AddSSData(gof.op.Op):
-    """Add two sparse matrices assuming they have the same sparsity
-    pattern.
-
-    :param x: Sparse matrix.
-    :param y: Sparse matrix.
-
-    :return: The sum of the two sparse matrix element wise.
-
-    :note: `x` and `y` are assumed to have the same
-           sparsity pattern.
-
-    :note: The grad implemented is structured.
-    """
+    # See doc in instance of this Op or function after this class definition.

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
@@ -1766,18 +1760,24 @@ class AddSSData(gof.op.Op):
     def __str__(self):
         return self.__class__.__name__

 add_s_s_data = AddSSData()
+"""Add two sparse matrices assuming they have the same sparsity
+pattern.
+
+:param x: Sparse matrix.
+:param y: Sparse matrix.
+
+:return: The sum of the two sparse matrices element wise.
+
+:note: `x` and `y` are assumed to have the same
+       sparsity pattern.
+
+:note: The grad implemented is structured.
+"""


 class AddSD(gof.op.Op):
-    """Add a sparse and a dense matrix.
-
-    :param x: A sparse matrix.
-    :param y: A dense matrix
-
-    :return: `x`+`y`
-
-    :note: The grad implemented is structured on `x`.
-    """
+    #add(sparse, sparse).
+    #see the doc of add() for more detail.

     def __init__(self, *args, **kwargs):
         gof.Op.__init__(self, *args, **kwargs)
...
...
@@ -1823,20 +1823,6 @@ add_s_d = AddSD()

 class StructuredAddSV(gof.op.Op):
-    """Structured addition of a sparse matrix and a dense vector.
-
-    The elements of the vector are are only added to the corresponding
-    non-zero elements. Therefore, this operation outputs another sparse
-    matrix.
-
-    :param x: Sparse matrix.
-    :param y: Tensor type vector.
-
-    :return: A sparse matrix containing the addition of the vector to
-             the data of the sparse matrix.
-
-    :note: The grad implemented is structured since the op is structured.
-    """

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
@@ -1873,6 +1859,19 @@ class StructuredAddSV(gof.op.Op):
     def __str__(self):
         return self.__class__.__name__

 structured_add_s_v = StructuredAddSV()
+"""Structured addition of a sparse matrix and a dense vector.
+
+The elements of the vector are are only added to the corresponding
+non-zero elements. Therefore, this operation outputs another sparse
+matrix.
+
+:param x: Sparse matrix.
+:param y: Tensor type vector.
+
+:return: A sparse matrix containing the addition of the vector to
+         the data of the sparse matrix.
+
+:note: The grad implemented is structured since the op is structured.
+"""


 def add(x, y):
...
...
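The "added only to the non-zero elements" behavior of structured_add_s_v can be sketched directly on a CSR matrix's stored data (a hand-rolled scipy sketch of the semantics, not Theano's implementation):

```python
import numpy as np
from scipy import sparse

x = sparse.csr_matrix(np.array([[1., 0.],
                                [0., 2.]]))
y = np.array([10., 20.])

# Structured addition: y is added only where x already stores a value.
out = x.copy()
out.data = out.data + y[out.indices]   # indices holds each value's column
print(out.toarray())
```

The zeros of `x` stay zero, so the result keeps the same sparsity pattern.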
@@ -1934,17 +1933,8 @@ def sub(x, y):

 class MulSS(gof.op.Op):
-    """Elementwise multiply a sparse and a sparse.
-
-    :param x: A sparse matrix.
-    :param y: A sparse matrix.
-
-    :return: `x` * `y`
-
-    :note: At least one of `x` and `y` must be a sparse matrix.
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # mul(sparse, sparse)
+    # See the doc of mul() for more detail

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
@@ -1986,16 +1976,8 @@ mul_s_s = MulSS()

 class MulSD(gof.op.Op):
-    """Elementwise multiply a sparse and a dense matrix.
-
-    :param x: A sparse matrix.
-    :param y: A dense matrix.
-
-    :return: `x` * `y`
-
-    :note: The grad is regular, i.e. not structured..
-    """
+    # mul(sparse, dense)
+    # See the doc of mul() for more detail

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
@@ -2089,17 +2071,6 @@ mul_s_d = MulSD()

 class MulSV(gof.op.Op):
-    """Multiplication of sparse matrix by a broadcasted dense vector
-    element wise.
-
-    :param x: Sparse matrix to multiply.
-    :param y: Tensor broadcastable vector.
-
-    :Return: The product x * y element wise.
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """

     def __eq__(self, other):
         return (type(self) == type(other))
...
...
@@ -2147,6 +2118,15 @@ class MulSV(gof.op.Op):
     def __str__(self):
         return self.__class__.__name__

 mul_s_v = MulSV()
+"""Multiplication of sparse matrix by a broadcasted dense vector element wise.
+
+:param x: Sparse matrix to multiply.
+:param y: Tensor broadcastable vector.
+
+:Return: The product x * y element wise.
+
+:note: The grad implemented is regular, i.e. not structured.
+"""


 def mul(x, y):
...
...
@@ -2323,13 +2303,6 @@ def __ComparisonSwitch(SS, SD, DS):

 class EqualSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: x==y
-    """

     def comparison(self, x, y):
         return x == y
...
...
@@ -2338,13 +2311,6 @@ equal_s_s = EqualSS()

 class EqualSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: x==y
-    """

     def comparison(self, x, y):
         return x == y
...
...
@@ -2352,13 +2318,6 @@ equal_s_d = EqualSD()

 class NotEqualSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: x!=y
-    """

     def comparison(self, x, y):
         return x != y
...
...
@@ -2366,13 +2325,6 @@ not_equal_s_s = NotEqualSS()

 class NotEqualSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: x!=y
-    """

     def comparison(self, x, y):
         return x != y
...
...
@@ -2380,13 +2332,6 @@ not_equal_s_d = NotEqualSD()

 class LessThanSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: x<y
-    """

     def comparison(self, x, y):
         return x < y
...
...
@@ -2394,13 +2339,6 @@ less_than_s_s = LessThanSS()

 class LessThanSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: x<y
-    """

     def comparison(self, x, y):
         return x < y
...
...
@@ -2408,13 +2346,6 @@ less_than_s_d = LessThanSD()

 class GreaterThanSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: x>y
-    """

     def comparison(self, x, y):
         return x > y
...
...
@@ -2422,13 +2353,6 @@ greater_than_s_s = GreaterThanSS()

 class GreaterThanSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: x>y
-    """

     def comparison(self, x, y):
         return x > y
...
...
@@ -2436,13 +2360,6 @@ greater_than_s_d = GreaterThanSD()

 class LessEqualSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: x<=y
-    """

     def comparison(self, x, y):
         return x <= y
...
...
@@ -2450,13 +2367,6 @@ less_equal_s_s = LessEqualSS()

 class LessEqualSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: x<=y
-    """

     def comparison(self, x, y):
         return x <= y
...
...
@@ -2464,13 +2374,6 @@ less_equal_s_d = LessEqualSD()
class
GreaterEqualSS
(
__ComparisonOpSS
):
"""
:param x:first compared sparse matrix
:param y:second compared sparse matrix
:return: x>=y
"""
def
comparison
(
self
,
x
,
y
):
return
x
>=
y
...
...
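The comparison Ops above all reduce to the same elementwise semantics that SciPy implements for sparse matrices. As a sanity check of the documented behavior (`scipy.sparse` is used here as a stand-in for the Theano Ops; this is not part of the diff):

```python
import numpy as np
import scipy.sparse as sp

x = sp.csr_matrix(np.array([[0, 1], [2, 0]]))
y = sp.csr_matrix(np.array([[0, 1], [3, 0]]))

# Elementwise "not equal": only positions where the operands differ
# become explicitly stored True entries, so the result stays sparse.
neq = (x != y).toarray()
assert neq.tolist() == [[False, False], [True, False]]

# Elementwise "less than" between two sparse operands.
lt = (x < y).toarray()
assert lt.tolist() == [[False, False], [True, False]]
```

Note how `!=` and `<` keep the result sparse, which is why the `__ComparisonSwitch` instances below can dispatch on the operand formats without densifying.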
@@ -2478,18 +2381,13 @@ greater_equal_s_s = GreaterEqualSS()
 class GreaterEqualSD(__ComparisonOpSD):
-    """
-    :param x:sparse matrix
-    :param y:dense matrix
-    :return: x>=y
-    """
     def comparison(self, x, y):
         return x >= y
 greater_equal_s_d = GreaterEqualSD()
+eq = __ComparisonSwitch(equal_s_s, equal_s_d, equal_s_d)
 """
 :param x: A matrix variable.
 :param y: A matrix variable.
...
@@ -2498,9 +2396,9 @@ greater_equal_s_d = GreaterEqualSD()
 :note: At least one of `x` and `y` must be a sparse matrix.
 """
-eq = __ComparisonSwitch(equal_s_s, equal_s_d, equal_s_d)
+neq = __ComparisonSwitch(not_equal_s_s, not_equal_s_d, not_equal_s_d)
 """
 :param x: A matrix variable.
 :param y: A matrix variable.
...
@@ -2509,9 +2407,9 @@ eq = __ComparisonSwitch(equal_s_s, equal_s_d, equal_s_d)
 :note: At least one of `x` and `y` must be a sparse matrix.
 """
-neq = __ComparisonSwitch(not_equal_s_s, not_equal_s_d, not_equal_s_d)
+lt = __ComparisonSwitch(less_than_s_s, less_than_s_d, greater_than_s_d)
 """
 :param x: A matrix variable.
 :param y: A matrix variable.
...
@@ -2520,9 +2418,9 @@ neq = __ComparisonSwitch(not_equal_s_s, not_equal_s_d, not_equal_s_d)
 :note: At least one of `x` and `y` must be a sparse matrix.
 """
-lt = __ComparisonSwitch(less_than_s_s, less_than_s_d, greater_than_s_d)
+gt = __ComparisonSwitch(greater_than_s_s, greater_than_s_d, less_than_s_d)
 """
 :param x: A matrix variable.
 :param y: A matrix variable.
...
@@ -2532,8 +2430,7 @@ lt = __ComparisonSwitch(less_than_s_s, less_than_s_d, greater_than_s_d)
 :note: At least one of `x` and `y` must be a sparse matrix.
 """
-gt = __ComparisonSwitch(greater_than_s_s, greater_than_s_d, less_than_s_d)
+le = __ComparisonSwitch(less_equal_s_s, less_equal_s_d, greater_equal_s_d)
 """
 :param x: A matrix variable.
 :param y: A matrix variable.
...
@@ -2542,8 +2439,9 @@ gt = __ComparisonSwitch(greater_than_s_s, greater_than_s_d, less_than_s_d)
 :note: At least one of `x` and `y` must be a sparse matrix.
 """
-le = __ComparisonSwitch(less_equal_s_s, less_equal_s_d, greater_equal_s_d)
+ge = __ComparisonSwitch(greater_equal_s_s, greater_equal_s_d, less_equal_s_d)
 """
 :param x: A matrix variable.
 :param y: A matrix variable.
...
@@ -2553,24 +2451,9 @@ le = __ComparisonSwitch(less_equal_s_s, less_equal_s_d, greater_equal_s_d)
 :note: At least one of `x` and `y` must be a sparse matrix.
 """
-ge = __ComparisonSwitch(greater_equal_s_s, greater_equal_s_d, less_equal_s_d)
 class HStack(gof.op.Op):
-    """Stack sparse matrices horizontally (column wise).
-    :param blocks: Sequence of sparse array of compatible shape.
-    :param format: String representing the output format. Default
-        is csc.
-    :param dtype: Output dtype. Must be specified.
-    :return: The concatenation of the sparse arrays column wise.
-    :note: The number of line of the sparse matrix must agree.
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
     def __init__(self, format=None, dtype=None):
         if format is None:
             self.format = 'csc'
...
@@ -2667,19 +2550,7 @@ def hstack(blocks, format=None, dtype=None):
 class VStack(HStack):
-    """Stack sparse matrices vertically (row wise).
-    :param blocks: Sequence of sparse array of compatible shape.
-    :param format: String representing the output format. Default
-        is csc.
-    :param dtype: Output dtype. Must be specified.
-    :return: The concatenation of the sparse arrays row wise.
-    :note: The number of column of the sparse matrix must agree.
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
     def perform(self, node, block, (out, )):
         for b in block:
             assert _is_sparse(b)
...
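The HStack/VStack docstrings this diff relocates describe plain column-wise and row-wise concatenation with the usual shape constraints. A sketch of those rules using `scipy.sparse` as a stand-in for the Theano Ops (not part of the diff):

```python
import numpy as np
import scipy.sparse as sp

a = sp.csc_matrix(np.array([[1, 0], [0, 2]]))
b = sp.csc_matrix(np.array([[0, 3], [4, 0]]))

# Column-wise stacking: row counts must agree, column counts add up.
h = sp.hstack([a, b], format='csc')
assert h.shape == (2, 4)
assert h.toarray().tolist() == [[1, 0, 0, 3], [0, 2, 4, 0]]

# Row-wise stacking: column counts must agree, row counts add up.
v = sp.vstack([a, b], format='csr')
assert v.shape == (4, 2)
```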
@@ -2743,15 +2614,7 @@ def vstack(blocks, format=None, dtype=None):
 class Remove0(gof.Op):
-    """Remove explicit zeros from a sparse matrix.
-    :param x: Sparse matrix.
-    :return: Exactly `x` but with a data attribute
-        exempt of zeros.
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or a function after the class definition.
     def __init__(self, inplace=False, *args, **kwargs):
         gof.Op.__init__(self, *args, **kwargs)
         self.inplace = inplace
...
@@ -2789,6 +2652,14 @@ class Remove0(gof.Op):
     def infer_shape(self, node, i0_shapes):
         return i0_shapes
 remove0 = Remove0()
+"""Remove explicit zeros from a sparse matrix.
+:param x: Sparse matrix.
+:return: Exactly `x` but with a data attribute
+    exempt of zeros.
+:note: The grad implemented is regular, i.e. not structured.
+"""
 # Structured monoid
...
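`remove0` only cleans the stored data; the values the matrix represents do not change. SciPy's `eliminate_zeros` has the same behavior and can serve as a reference sketch (assumption: scipy is available; this is not Theano code):

```python
import numpy as np
import scipy.sparse as sp

m = sp.csr_matrix(np.array([[1, 0], [0, 2]]))
m.data[0] = 0           # introduce an explicit zero into the data array
assert m.nnz == 2       # still two stored entries, one of them zero

m.eliminate_zeros()     # same idea as remove0: drop explicit zeros
assert m.nnz == 1
assert m.toarray().tolist() == [[0, 0], [0, 2]]
```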
@@ -3008,28 +2879,6 @@ def sqrt(x):
 class TrueDot(gof.op.Op):
-    """Calculate the true dot operation between two matrices.
-    `TrueDot` is different of `StructuredDot` for sparse matrix
-    since the grad of `TrueDot` is regular, i.e. not structured.
-    The parameter `grad_preserves_dense`, controlled by the
-    constructor, is a boolean flags to controls whether gradients
-    with respect to inputs are converted to dense matrices when the
-    corresponding input y is dense (not in a L{SparseVariable} wrapper).
-    This is generally a good idea when L{Dot} is in the middle of a
-    larger graph, because the types of gy will match that of y. This
-    conversion might be inefficient if the gradients are graph outputs
-    though, hence this mask.
-    :param x: Sparse matrix for the left operand.
-    :param y: Sparse or dense matrix for the right operand.
-    :return: The dot product `x` . `y` in a sparse matrix.
-    :note:
-    - The grad implemented is regular, i.e. not structured.
-    """
     # TODO
     # Simplify code by splitting into DotSS and DotSD.
...
@@ -3133,14 +2982,15 @@ def true_dot(x, y, grad_preserves_dense=True):
     one or all operands are sparse. Supported formats are CSC and CSR.
     The output of the operation is sparse.
-    :param x: Sparse matrix or 2d tensor variable.
+    :param x: Sparse matrix.
+    :param y: Sparse matrix or 2d tensor variable.
     :param grad_preserves_dense: if True (default), makes the grad of
         dense inputs dense. Otherwise the grad is always sparse.
     :return: The dot product `x`.`y` in a sparse format.
     :note: one of ``x`` or ``y`` must be sparse.
     :note:
     - The grad implemented is regular, i.e. not structured.
     """
     # TODO
     # Maybe the triple-transposition formulation
...
@@ -3168,21 +3018,7 @@ def true_dot(x, y, grad_preserves_dense=True):
 # Dot
 class StructuredDot(gof.Op):
-    """Structured Dot is like dot, except that only the
-    gradient wrt non-zero elements of the sparse matrix
-    `a` are calculated and propagated.
-    The output is presumed to be a dense matrix, and is represented by a
-    TensorType instance.
-    :param a: A sparse matrix.
-    :param b: A sparse or dense matrix.
-    :return: The dot product of `a` and `b` as a dense matrix.
-    :note: The grad implemented is structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
     def __eq__(self, other):
         return (type(self) == type(other))
...
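As the relocated docstring says, StructuredDot's output is dense; only the gradient is "structured" (it flows solely through positions that are non-zero in the sparse operand). The forward value is an ordinary matrix product, sketched here with `scipy.sparse` as a stand-in:

```python
import numpy as np
import scipy.sparse as sp

a = sp.csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))  # sparse left operand
b = np.array([[3.0, 4.0], [5.0, 6.0]])                 # dense right operand

# Dense result, as the StructuredDot docstring describes.
out = a.dot(b)
assert out.tolist() == [[3.0, 4.0], [10.0, 12.0]]
```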
@@ -3597,33 +3433,7 @@ def structured_dot_grad(sparse_A, dense_B, ga):
 class SamplingDot(gof.op.Op):
-    """Operand for calculating the dot product dot(`x`, `y`.T) = `z` when you
-    only want to calculate a subset of `z`.
-    It is equivalent to `p` o (`x` . `y`.T) where o is the element-wise
-    product, `x` and `y` operands of the dot product and `p` is a matrix that
-    contains 1 when the corresponding element of `z` should be calculated
-    and 0 when it shouldn't. Note that SamplingDot has a different interface
-    than `dot` because SamplingDot requires `x` to be a `m`x`k` matrix while
-    `y` is a `n`x`k` matrix instead of the usual `k`x`n` matrix.
-    .. note::
-        It will work if the pattern is not binary value, but if the
-        pattern doesn't have a high sparsity proportion it will be slower
-        then a more optimized dot followed by a normal elemwise
-        multiplication.
-    :param x: Tensor matrix.
-    :param y: Tensor matrix.
-    :param p: Sparse matrix in csr format.
-    :return: A dense matrix containing the dot product of `x` by `y`.T only
-        where `p` is 1.
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
     def __eq__(self, other):
         return type(self) == type(other)
...
@@ -3671,25 +3481,36 @@ class SamplingDot(gof.op.Op):
     def __str__(self):
         return self.__class__.__name__
 sampling_dot = SamplingDot()
+"""Operand for calculating the dot product dot(`x`, `y`.T) = `z` when you
+only want to calculate a subset of `z`.
+It is equivalent to `p` o (`x` . `y`.T) where o is the element-wise
+product, `x` and `y` operands of the dot product and `p` is a matrix that
+contains 1 when the corresponding element of `z` should be calculated
+and 0 when it shouldn't. Note that SamplingDot has a different interface
+than `dot` because SamplingDot requires `x` to be a `m`x`k` matrix while
+`y` is a `n`x`k` matrix instead of the usual `k`x`n` matrix.
+.. note::
+    It will work if the pattern is not binary value, but if the
+    pattern doesn't have a high sparsity proportion it will be slower
+    then a more optimized dot followed by a normal elemwise
+    multiplication.
+:param x: Tensor matrix.
+:param y: Tensor matrix.
+:param p: Sparse matrix in csr format.
+:return: A dense matrix containing the dot product of `x` by `y`.T only
+    where `p` is 1.
+:note: The grad implemented is regular, i.e. not structured.
+"""
 class Dot(gof.op.Op):
-    """Operation for efficiently calculating the dot product when
-    one or all operands is sparse. Supported format are CSC and CSR.
-    The output of the operation is dense.
-    :param x: sparse or dense matrix variable.
-    :param y: sparse or dense matrix variable.
-    :return: The dot product `x`.`y` in a dense format.
-    :note: The grad implemented is regular, i.e. not structured.
-    :note: At least one of `x` or `y` must be a sparse matrix.
-    :note: When the operation has the form dot(csr_matrix, dense)
-        the gradient of this operation can be performed inplace
-        by UsmmCscDense. This leads to significant speed-ups.
-    """
+    # See doc in instance of this Op or function after this class definition.
     def __eq__(self, other):
         return type(self) == type(other)
...
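The sampling_dot docstring added above defines z = `p` o (`x` . `y`.T) with `x` of shape m x k and `y` of shape n x k. A dense NumPy sketch of that definition (the real Op exploits the sparsity of `p` to skip the masked entries; this code is illustrative only):

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [3.0, 4.0]])     # m x k
y = np.array([[5.0, 6.0],
              [7.0, 8.0]])     # n x k -- note: not the usual k x n
p = np.array([[1.0, 0.0],
              [0.0, 1.0]])     # pattern: 1 where z should be computed

# Reference semantics: elementwise product of the pattern with the
# full dot product x . y.T.
z = p * (x @ y.T)
assert z.tolist() == [[17.0, 0.0], [0.0, 53.0]]
```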
@@ -3787,13 +3608,17 @@ def dot(x, y):
     one or all operands is sparse. Supported format are CSC and CSR.
     The output of the operation is dense.
-    :param x: Matrix variable.
-    :param y: Matrix variable.
+    :param x: sparse or dense matrix variable.
+    :param y: sparse or dense matrix variable.
     :return: The dot product `x`.`y` in a dense format.
     :note: The grad implemented is regular, i.e. not structured.
     :note: At least one of `x` or `y` must be a sparse matrix.
+    :note: When the operation has the form dot(csr_matrix, dense)
+        the gradient of this operation can be performed inplace
+        by UsmmCscDense. This leads to significant speed-ups.
     """
     if hasattr(x, 'getnnz'):
...
@@ -3811,19 +3636,7 @@ def dot(x, y):
 class Usmm(gof.op.Op):
-    """Performs the expression is `alpha` * `x` `y` + `z`.
-    :param x: Matrix variable.
-    :param y: Matrix variable.
-    :param z: Dense matrix.
-    :param alpha: A tensor scalar.
-    :return: The dense matrix resulting from `alpha` * `x` `y` + `z`.
-    :note: The grad is not implemented for this op.
-    :note: At least one of `x` or `y` must be a sparse matrix.
-    """
+    # See doc in instance of this Op or function after this class definition.
     # We don't implement the infer_shape as it is
     # inserted by optimization only.
...
@@ -3883,13 +3696,22 @@ class Usmm(gof.op.Op):
         out[0] = rval
 usmm = Usmm()
+"""Performs the expression is `alpha` * `x` `y` + `z`.
+:param x: Matrix variable.
+:param y: Matrix variable.
+:param z: Dense matrix.
+:param alpha: A tensor scalar.
+:return: The dense matrix resulting from `alpha` * `x` `y` + `z`.
+:note: The grad is not implemented for this op.
+:note: At least one of `x` or `y` must be a sparse matrix.
+"""
 class ConstructSparseFromList(gof.Op):
-    """Constructs a sparse matrix out of a list of 2-D matrix rows
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
     def __hash__(self):
         return hash((type(self)))
...
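The usmm docstring added above is just the fused expression `alpha` * `x` `y` + `z`. Its reference semantics in dense NumPy (the Op itself handles a sparse `x` or `y`; this is only a sketch of the arithmetic):

```python
import numpy as np

alpha = 0.5
x = np.array([[1.0, 0.0], [0.0, 2.0]])   # in Theano, typically sparse
y = np.array([[1.0, 2.0], [3.0, 4.0]])
z = np.array([[1.0, 1.0], [1.0, 1.0]])

# Fused scaled matrix product plus addition.
out = alpha * (x @ y) + z
assert out.tolist() == [[1.5, 2.0], [4.0, 5.0]]
```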
@@ -3979,3 +3801,7 @@ class ConstructSparseFromList(gof.Op):
         return [gx, gy] + [DisconnectedType()()] * len(idx_list)
 construct_sparse_from_list = ConstructSparseFromList()
+"""Constructs a sparse matrix out of a list of 2-D matrix rows
+:note: The grad implemented is regular, i.e. not structured.
+"""
theano/typed_list/basic.py
...
@@ -50,9 +50,7 @@ TypedListType.Variable = TypedListVariable
 class GetItem(Op):
-    """
-    get specified slice of a typed list
-    """
+    # See doc in instance of this Op or function after this class definition.
     def __eq__(self, other):
         return type(self) == type(other)
...
@@ -100,13 +98,16 @@ class GetItem(Op):
         return (1,)
 getitem = GetItem()
+"""
+Get specified slice of a typed list.
+:param x: type type list.
+:param index: the index of the value to return from `x`.
+"""
 class Append(Op):
-    """
-    #append an element at the end of another list
-    """
+    # See doc in instance of this Op after the class definition.
     def __init__(self, inplace=False):
         self.inplace = inplace
         if self.inplace:
...
@@ -159,13 +160,16 @@ class Append(Op):
         return (1,)
 append = Append()
+"""
+Append an element at the end of another list.
+:param x: the base typed list.
+:param y: the element to append to `x`.
+"""
 class Extend(Op):
-    """
-    append all element of a list at the end of another list
-    """
+    # See doc in instance of this Op after the class definition.
     def __init__(self, inplace=False):
         self.inplace = inplace
         if self.inplace:
...
@@ -222,10 +226,16 @@ class Extend(Op):
         return (1,)
 extend = Extend()
+"""
+Append all element of a list at the end of another list.
+:param x: The typed list to extend.
+:param toAppend: The typed list that will be added at the end of `x`.
+"""
 class Insert(Op):
+    # See doc in instance of this Op after the class definition.
     def __init__(self, inplace=False):
         self.inplace = inplace
         if self.inplace:
...
@@ -283,10 +293,17 @@ class Insert(Op):
         return (1,)
 insert = Insert()
+"""
+Insert an element at an index in a typed list.
+:param x: the typed list to modified.
+:param index: the index where to put the new element in `x`.
+:param toInsert: The new element to insert.
+"""
 class Remove(Op):
+    # See doc in instance of this Op after the class definition.
     def __init__(self, inplace=False):
         self.inplace = inplace
         if self.inplace:
...
@@ -324,10 +341,21 @@ class Remove(Op):
         return self.__class__.__name__
 remove = Remove()
+"""Remove an element from a typed list.
+:param x: the typed list to be changed.
+:param toRemove: an element to be removed from the typed list.
+    We only remove the first instance.
+:note: Python implementation of remove doesn't work when we want to
+    remove an ndarray from a list. This implementation works in that
+    case.
+"""
 class Reverse(Op):
+    # See doc in instance of this Op after the class definition.
     def __init__(self, inplace=False):
         self.inplace = inplace
        if self.inplace:
...
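The note in the added remove docstring exists because Python's `list.remove` compares candidates with `==`, and for ndarrays that comparison is elementwise, so taking its truth value raises. A sketch of the failure and of an equality test that does work (plain Python/NumPy, not the Op's actual implementation):

```python
import numpy as np

target = np.array([1, 2])
lst = [np.array([3, 4]), target]

try:
    lst.remove(target)       # == on ndarrays is elementwise, so the
except ValueError:           # implicit bool() raises ValueError
    pass

# Workaround: compare with array_equal and delete the first match,
# which is what "we only remove the first instance" refers to.
for i, elem in enumerate(lst):
    if np.array_equal(elem, target):
        del lst[i]
        break
assert len(lst) == 1
assert lst[0].tolist() == [3, 4]
```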
@@ -380,10 +408,15 @@ class Reverse(Op):
         return (1,)
 reverse = Reverse()
+"""
+Reverse the order of a typed list.
+:param x: the typed list to be reversed.
+"""
 class Index(Op):
+    # See doc in instance of this Op after the class definition.
     def __eq__(self, other):
         return type(self) == type(other)
...
@@ -413,7 +446,7 @@ index_ = Index()
 class Count(Op):
+    # See doc in instance of this Op after the class definition.
     def __eq__(self, other):
         return type(self) == type(other)
...
@@ -441,6 +474,18 @@ class Count(Op):
         return self.__class__.__name__
 count = Count()
+"""
+Count the number of time an element is in the typed list.
+:param x: The typed list to look into.
+:param elem: The element we want to count in list.
+    The element are compared with equals.
+:note: Python implementation of count doesn't work when we want to
+    count an ndarray from a list. This implementation works in that
+    case.
+"""
 class Length(Op):
...
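Same caveat as for remove: `list.count` also relies on `==`, which is ambiguous for ndarray elements. A sketch of counting with an explicit elementwise equality check instead (illustrative only, not the Op's implementation):

```python
import numpy as np

elem = np.array([1, 2])
lst = [elem, np.array([1, 2]), np.array([3, 4])]

# lst.count(elem) would raise ValueError once it reaches an ndarray
# that is not identical to elem, so count with array_equal instead:
n = sum(1 for item in lst if np.array_equal(item, elem))
assert n == 2
```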