Commit b53ab5f5 authored by abergeron

Merge pull request #1918 from nouiz/doc

Doc typed list and better sparse doc.
@@ -22,6 +22,7 @@ Types and Ops that you can use to build and compile expression graphs.
    gof/index
    scan
    sandbox/index
    typed_list

There are also some top-level imports that you might find more convenient:
...
@@ -119,16 +119,18 @@ List of Implemented Operations
==============================

- Moving from and to sparse
    - :func:`dense_from_sparse <theano.sparse.basic.dense_from_sparse>`.
      Both grads are implemented. Structured by default.
    - :func:`csr_from_dense <theano.sparse.basic.csr_from_dense>`,
      :func:`csc_from_dense <theano.sparse.basic.csc_from_dense>`.
      The grad implemented is structured.
    - Theano SparseVariable objects have a method ``toarray()`` that is the same as
      :func:`dense_from_sparse <theano.sparse.basic.dense_from_sparse>`.

- Construction of Sparses and their Properties
    - :class:`CSM <theano.sparse.basic.CSM>` and ``CSC``, ``CSR`` to construct a matrix.
      The grad implemented is regular.
    - :func:`csm_properties <theano.sparse.basic.csm_properties>`
      to get the properties of a sparse matrix.
      The grad implemented is regular.
    - csm_indices(x), csm_indptr(x), csm_data(x) and csm_shape(x) or x.shape.
@@ -136,22 +138,22 @@ List of Implemented Operations
      The grad implemented is regular.
    - :func:`sp_zeros_like <theano.sparse.basic.sp_zeros_like>`.
      The grad implemented is regular.
    - :func:`square_diagonal <theano.sparse.basic.square_diagonal>`.
      The grad implemented is regular.
    - :func:`construct_sparse_from_list <theano.sparse.basic.construct_sparse_from_list>`.
      The grad implemented is regular.

- Cast
    - :func:`cast <theano.sparse.basic.cast>` with ``bcast``, ``wcast``, ``icast``, ``lcast``,
      ``fcast``, ``dcast``, ``ccast``, and ``zcast``.
      The grad implemented is regular.

- Transpose
    - :func:`transpose <theano.sparse.basic.transpose>`.
      The grad implemented is regular.

- Basic Arithmetic
    - :func:`neg <theano.sparse.basic.neg>`.
      The grad implemented is regular.
    - :func:`eq <theano.sparse.basic.eq>`.
    - :func:`neq <theano.sparse.basic.neq>`.
@@ -201,15 +203,13 @@ List of Implemented Operations
    - ``sqrt``

- Dot Product
    - :func:`dot <theano.sparse.basic.dot>`.

      - One of the inputs must be sparse, the other sparse or dense.
      - The grad implemented is regular.
      - No C code for perform and no C code for grad.
      - Returns a dense for perform and a dense for grad.

    - :func:`structured_dot <theano.sparse.basic.structured_dot>`.

      - The first input is sparse, the second can be sparse or dense.
      - The grad implemented is structured.
@@ -218,8 +218,7 @@ List of Implemented Operations
        dense one if one of the inputs is dense.
      - Returns a sparse grad for sparse inputs and dense grad for
        dense inputs.

    - :func:`true_dot <theano.sparse.basic.true_dot>`.

      - The first input is sparse, the second can be sparse or dense.
      - The grad implemented is regular.
@@ -229,19 +228,18 @@ List of Implemented Operations
        default a dense for dense inputs. The parameter
        ``grad_preserves_dense`` can be set to False to return a
        sparse grad for dense inputs.

    - :func:`sampling_dot <theano.sparse.basic.sampling_dot>`.

      - Both inputs must be dense.
      - The grad implemented is structured for `p`.
      - Sample of the dot and sample of the gradient.
      - C code for perform but not for grad.
      - Returns sparse for perform and grad.

    - :func:`usmm <theano.sparse.basic.usmm>`.

      - You *shouldn't* insert this op yourself!
      - There is an optimization that transforms a
        :func:`dot <theano.sparse.basic.dot>` to ``Usmm`` when possible.
      - This op is the equivalent of gemm for sparse dot.
      - There is no grad implemented for this op.
@@ -256,13 +254,13 @@ List of Implemented Operations
    - Sparse variables don't support [M, N:O] and [M:N, O] as we don't
      support sparse vectors and returning a sparse matrix would break
      the numpy interface. Use [M:M+1, N:O] and [M:N, O:O+1] instead.
    - :func:`diag <theano.sparse.basic.diag>`.
      The grad implemented is regular.

- Concatenation
    - :func:`hstack <theano.sparse.basic.hstack>`.
      The grad implemented is regular.
    - :func:`vstack <theano.sparse.basic.vstack>`.
      The grad implemented is regular.

- Probability
@@ -276,8 +274,8 @@ List of Implemented Operations
- Internal Representation
    `They all have a regular grad implemented.`

    - :func:`ensure_sorted_indices <theano.sparse.basic.ensure_sorted_indices>`.
    - :func:`remove0 <theano.sparse.basic.remove0>`.
    - :func:`clean <theano.sparse.basic.clean>` to resort indices and remove zeros

- To help testing
...
.. _libdoc_typed_list:

===============================
:mod:`typed_list` -- Typed List
===============================

.. note::

    This is not in the released version 0.6.0, but will be in the next
    release (0.7 or 0.6.1).

This is a type that represents a list in Theano. All elements must have
the same Theano type. Here is an example::

    import theano.typed_list

    tl = theano.typed_list.TypedListType(theano.tensor.fvector)()
    v = theano.tensor.fvector()
    o = theano.typed_list.append(tl, v)
    f = theano.function([tl, v], o)
    print f([[1, 2, 3], [4, 5]], [2])
    #[array([ 1., 2., 3.], dtype=float32), array([ 4., 5.], dtype=float32), array([ 2.], dtype=float32)]

A second example with Scan. Scan doesn't yet have direct support for
TypedList, so you can only use it as a non_sequence (not in sequences or
as an output)::

    import theano.typed_list

    a = theano.typed_list.TypedListType(theano.tensor.fvector)()
    l = theano.typed_list.length(a)
    s, _ = theano.scan(fn=lambda i, tl: tl[i].sum(),
                       non_sequences=[a],
                       sequences=[theano.tensor.arange(l, dtype='int64')])
    f = theano.function([a], s)
    f([[1, 2, 3], [4, 5]])
    #array([ 6., 9.], dtype=float32)

.. automodule:: theano.typed_list.basic
    :members:
@@ -406,6 +406,7 @@ class SparseConstant(gof.Constant, _sparse_py_operators):
SparseType.Variable = SparseVariable
SparseType.Constant = SparseConstant

# for more dtypes, call SparseType(format, dtype)
def matrix(format, name=None, dtype=None):
    if dtype is None:
@@ -446,23 +447,7 @@ discrete_dtypes = int_dtypes + uint_dtypes

# CONSTRUCTION
class CSMProperties(gof.Op):
    # See doc in instance of this Op or function after this class definition.
    # NOTE
    # We won't implement infer_shape for this op now. This will
    # ask that we implement a GetNNZ op, and this op will keep
@@ -537,11 +522,18 @@ class CSMProperties(gof.Op):
# don't make this a function or it breaks some optimizations below
csm_properties = CSMProperties()
"""
Extract all of the .data, .indices, .indptr and .shape fields.

For specific fields, `csm_data`, `csm_indices`, `csm_indptr`
and `csm_shape` are provided.

:param csm: Sparse matrix in CSR or CSC format.

:return: (data, indices, indptr, shape), the properties of `csm`.

:note: The grad implemented is regular, i.e. not structured.
    `infer_shape` method is not available for this op.
"""
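The four fields returned here are the standard compressed-sparse representation. As a minimal pure-Python sketch (independent of Theano; `dense_to_csr` is a hypothetical helper, not part of the library), here is what (data, indices, indptr, shape) encode for a CSR matrix:

```python
def dense_to_csr(rows):
    """Convert a dense list-of-lists matrix to CSR fields
    (data, indices, indptr, shape)."""
    data, indices, indptr = [], [], [0]
    for row in rows:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)      # non-zero values, row by row
                indices.append(j)   # their column indices
        indptr.append(len(data))    # one pointer past each row's data
    return data, indices, indptr, (len(rows), len(rows[0]))

# The matrix [[1, 0, 2], [0, 0, 3]] stores only its 3 non-zeros.
print(dense_to_csr([[1, 0, 2], [0, 0, 3]]))
# → ([1, 2, 3], [0, 2, 2], [0, 2, 3], (2, 3))
```

Row i's data lives in the half-open slice `data[indptr[i]:indptr[i+1]]`, which is why `indptr` has one more entry than there are rows.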
@@ -574,35 +566,7 @@ def csm_shape(csm):
class CSM(gof.Op):
    # See doc in instance of this Op or function after this class definition.
    kmap = None
    """Indexing to specify what part of the data parameter
    should be used to construct the sparse matrix."""
@@ -725,7 +689,50 @@ class CSM(gof.Op):
CSC = CSM('csc')
"""Construct a CSC matrix from the internal
representation.

:param data: One dimensional tensor representing
    the data of the sparse matrix to construct.
:param indices: One dimensional tensor of integers
    representing the indices of the sparse
    matrix to construct.
:param indptr: One dimensional tensor of integers
    representing the index pointer for
    the sparse matrix to construct.
:param shape: One dimensional tensor of integers
    representing the shape of the sparse
    matrix to construct.

:return: A sparse matrix having the properties
    specified by the inputs.

:note: The grad method returns a dense vector, so it provides
    a regular grad.
"""
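To make the column-major CSC layout concrete, a pure-Python sketch (not Theano code; `dense_to_csc` is a hypothetical helper) of the fields it stores: values in column order, their row indices, and one pointer per column:

```python
def dense_to_csc(rows):
    """Convert a dense list-of-lists matrix to CSC fields
    (data, indices, indptr, shape) -- the column-major analogue of CSR."""
    n_rows, n_cols = len(rows), len(rows[0])
    data, indices, indptr = [], [], [0]
    for j in range(n_cols):
        for i in range(n_rows):
            if rows[i][j] != 0:
                data.append(rows[i][j])  # non-zeros, column by column
                indices.append(i)        # their row indices
        indptr.append(len(data))         # one pointer past each column's data
    return data, indices, indptr, (n_rows, n_cols)
```

For the same matrix `[[1, 0, 2], [0, 0, 3]]` this yields `([1, 2, 3], [0, 0, 1], [0, 1, 1, 3], (2, 3))`: column 1 is empty, so two consecutive `indptr` entries are equal.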
CSR = CSM('csr')
"""Construct a CSR matrix from the internal
representation.

:param data: One dimensional tensor representing
    the data of the sparse matrix to construct.
:param indices: One dimensional tensor of integers
    representing the indices of the sparse
    matrix to construct.
:param indptr: One dimensional tensor of integers
    representing the index pointer for
    the sparse matrix to construct.
:param shape: One dimensional tensor of integers
    representing the shape of the sparse
    matrix to construct.

:return: A sparse matrix having the properties
    specified by the inputs.

:note: The grad method returns a dense vector, so it provides
    a regular grad.
"""
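The inverse direction shows what these constructors rebuild from their four inputs; a pure-Python sketch for the CSR case (hypothetical `csr_to_dense` helper; CSC is analogous with rows and columns swapped):

```python
def csr_to_dense(data, indices, indptr, shape):
    """Rebuild a dense list-of-lists matrix from CSR fields."""
    n_rows, n_cols = shape
    out = [[0] * n_cols for _ in range(n_rows)]
    for i in range(n_rows):
        # Row i's non-zeros occupy data[indptr[i]:indptr[i + 1]].
        for k in range(indptr[i], indptr[i + 1]):
            out[i][indices[k]] = data[k]
    return out

print(csr_to_dense([1, 2, 3], [0, 2, 2], [0, 2, 3], (2, 3)))
# → [[1, 0, 2], [0, 0, 3]]
```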
class CSMGrad(gof.op.Op):
@@ -803,16 +810,7 @@ csm_grad = CSMGrad
class Cast(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.

    def __init__(self, out_type):
        self.out_type = out_type
@@ -857,6 +855,17 @@ zcast = Cast('complex128')
def cast(variable, dtype):
    """Cast sparse variable to the desired dtype.

    :param variable: Sparse matrix.
    :param dtype: The dtype wanted.

    :return: Same as `variable` but having `dtype` as dtype.

    :note: The grad implemented is regular, i.e. not
        structured.
    """
    return Cast(dtype)(variable)

#
@@ -865,19 +874,7 @@ def cast(variable, dtype):
class DenseFromSparse(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.

    def __init__(self, structured=True):
        self.sparse_grad = structured
@@ -933,25 +930,21 @@ class DenseFromSparse(gof.op.Op):
        return [shapes[0]]
dense_from_sparse = DenseFromSparse()
"""Convert a sparse matrix to a dense one.

:param x: A sparse matrix.

:return: A dense matrix, the same as `x`.

:note: The grad implementation can be controlled
    through the constructor via the `structured`
    parameter. `True` will provide a structured
    grad while `False` will provide a regular
    grad. By default, the grad is structured.
"""


class SparseFromDense(gof.op.Op):

    def __init__(self, format):
        self.format = format
@@ -997,38 +990,31 @@ class SparseFromDense(gof.op.Op):
        return [shapes[0]]
csr_from_dense = SparseFromDense('csr')
"""Convert a dense matrix to a sparse csr matrix.

:param x: A dense matrix.

:return: The same as `x` in a sparse matrix format.

:note: The grad implementation is regular, i.e.
    not structured.
"""

csc_from_dense = SparseFromDense('csc')
"""Convert a dense matrix to a sparse csc matrix.

:param x: A dense matrix.

:return: The same as `x` in a sparse matrix format.

:note: The grad implementation is regular, i.e.
    not structured.
"""


# Indexing
class GetItem2d(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.
    def __eq__(self, other):
        return (type(self) == type(other))
@@ -1110,23 +1096,36 @@ class GetItem2d(gof.op.Op):
        return self.__class__.__name__
get_item_2d = GetItem2d()
"""Implement a subtensor of a sparse variable that returns a
sparse matrix.

If you want to take only one element of a sparse matrix see
`GetItemScalar` that returns a tensor scalar.

.. note::

    Subtensor selection always returns a matrix, so indexing
    with [a:b, c:d] is forced. If one index is a scalar, for
    instance x[a:b, c] or x[a, b:c], an error is raised. Use
    instead x[a:b, c:c+1] and x[a:a+1, b:c].

    The above indexing methods are not supported because the return value
    would be a sparse matrix rather than a sparse vector, which is a
    deviation from numpy indexing rule. This decision is made largely
    to preserve consistency between numpy and theano. This may be revised
    when sparse vectors are supported.

:param x: Sparse matrix.
:param index: Tuple of slice objects.

:return: The slice corresponding in `x`.

:note: The grad is not implemented for this op.
"""


class GetItemScalar(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.
    def __eq__(self, other):
        return (type(self) == type(other))
@@ -1169,22 +1168,24 @@ class GetItemScalar(gof.op.Op):
        return self.__class__.__name__
get_item_scalar = GetItemScalar()
"""Implement a subtensor of a sparse variable that takes
two scalars as index and returns a scalar.

If you want to take a slice of a sparse matrix see
`GetItem2d` that returns a sparse matrix.

:param x: Sparse matrix.
:param index: Tuple of scalars.

:return: The item corresponding in `x`.

:note: The grad is not implemented for this op.
"""


# Linear Algebra
class Transpose(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.
    view_map = {0: [0]}
    format_map = {'csr': 'csc',
@@ -1219,18 +1220,22 @@ class Transpose(gof.op.Op):
    def infer_shape(self, node, shapes):
        return [shapes[0][::-1]]
transpose = Transpose()
"""Return the transpose of the sparse matrix.

:param x: Sparse matrix.

:return: `x` transposed.

:note: The returned matrix will not be in the
    same format. A `csc` matrix will be changed
    into a `csr` matrix and a `csr` matrix into
    a `csc` matrix.
:note: The grad is regular, i.e. not structured.
"""


class Neg(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.
    def __eq__(self, other):
        return (type(self) == type(other))
@@ -1256,6 +1261,14 @@ class Neg(gof.op.Op):
    def infer_shape(self, node, shapes):
        return [shapes[0]]
neg = Neg()
"""Return the negation of the sparse matrix.

:param x: Sparse matrix.

:return: -`x`.

:note: The grad is regular, i.e. not structured.
"""
class ColScaleCSC(gof.op.Op):
@@ -1399,26 +1412,7 @@ def row_scale(x, s):
class SpSum(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.
    def __init__(self, axis=None, sparse_grad=True):
        super(SpSum, self).__init__()
        self.axis = axis
@@ -1504,21 +1498,31 @@ class SpSum(gof.op.Op):
def sp_sum(x, axis=None, sparse_grad=False):
    """Calculate the sum of a sparse matrix along the specified
    axis.

    It operates a reduction along the specified axis. When
    `axis` is `None`, it is applied along all axes.

    :param x: Sparse matrix.
    :param axis: Axis along which the sum is applied. Integer or `None`.
    :param sparse_grad: `True` to have a structured grad. Boolean.

    :return: The sum of `x` in a dense format.

    :note: The grad implementation is controlled with the `sparse_grad`
        parameter. `True` will provide a structured grad and `False`
        will provide a regular grad. For both choices, the grad
        returns a sparse matrix having the same format as `x`.
    :note: This op does not return a sparse matrix, but a dense tensor
        matrix.
    """
    return SpSum(axis, sparse_grad)(x)
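What the reduction computes for each `axis` value can be sketched in pure Python over CSR fields (a hypothetical `csr_sum` helper, not the Theano op itself):

```python
def csr_sum(data, indices, indptr, shape, axis=None):
    """Sum a CSR matrix along an axis (None, 0, or 1)."""
    n_rows, n_cols = shape
    if axis is None:
        return sum(data)                 # grand total over all non-zeros
    if axis == 0:                        # column sums
        out = [0] * n_cols
        for v, j in zip(data, indices):
            out[j] += v
        return out
    # axis == 1: row sums, using the row slices defined by indptr
    return [sum(data[indptr[i]:indptr[i + 1]]) for i in range(n_rows)]
```

For the fields of `[[1, 0, 2], [0, 0, 3]]`, `axis=None` gives `6`, `axis=0` gives `[1, 0, 5]`, and `axis=1` gives `[3, 3]`.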
class Diag(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.
    def __eq__(self, other):
        return (type(self) == type(other))
@@ -1546,19 +1550,20 @@ class Diag(gof.op.Op):
    def __str__(self):
        return self.__class__.__name__
diag = Diag()
"""Extract the diagonal of a square sparse matrix as a dense vector.

:param x: A square sparse matrix in csc format.

:return: A dense vector representing the diagonal elements.

:note: The grad implemented is regular, i.e. not structured, since
    the output is a dense vector.
"""


class SquareDiagonal(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.
    def __eq__(self, other):
        return type(self) == type(other)
@@ -1593,23 +1598,19 @@ class SquareDiagonal(gof.op.Op):
    def __str__(self):
        return self.__class__.__name__
square_diagonal = SquareDiagonal()
"""Return a square sparse (csc) matrix whose diagonal
is given by the dense vector argument.

:param x: Dense vector for the diagonal.

:return: A sparse matrix having `x` as diagonal.

:note: The grad implemented is regular, i.e. not structured.
"""


class EnsureSortedIndices(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.
    def __init__(self, inplace):
        self.inplace = inplace
        if self.inplace:
@@ -1644,6 +1645,19 @@ class EnsureSortedIndices(gof.op.Op):
        else:
            return self.__class__.__name__ + "{no_inplace}"
ensure_sorted_indices = EnsureSortedIndices(inplace=False)
"""Resort indices of a sparse matrix.

CSR column indices are not necessarily sorted. Likewise
for CSC row indices. Use `ensure_sorted_indices` when sorted
indices are required (e.g. when passing data to other
libraries).

:param x: A sparse matrix.

:return: The same as `x` with indices sorted.

:note: The grad implemented is regular, i.e. not structured.
"""
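A pure-Python sketch of the per-row sort this performs on CSR fields (hypothetical `ensure_sorted_csr` helper; the real op works on scipy sparse matrices): the column indices inside each row are sorted and the matching data values are permuted along with them, so the matrix itself is unchanged.

```python
def ensure_sorted_csr(data, indices, indptr):
    """Sort column indices (and matching data) within each CSR row."""
    data, indices = list(data), list(indices)
    for i in range(len(indptr) - 1):
        lo, hi = indptr[i], indptr[i + 1]
        # Permutation of positions lo..hi that puts indices in order.
        order = sorted(range(lo, hi), key=lambda k: indices[k])
        indices[lo:hi] = [indices[k] for k in order]
        data[lo:hi] = [data[k] for k in order]
    return data, indices

# One row stored as columns [2, 0] with values [9, 7]:
print(ensure_sorted_csr([9, 7], [2, 0], [0, 2]))
# → ([7, 9], [0, 2])
```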
def clean(x):
@@ -1666,16 +1680,8 @@ def clean(x):
class AddSS(gof.op.Op):
    # add(sparse, sparse).
    # See the doc of add() for more detail.

    def __eq__(self, other):
        return (type(self) == type(other))
@@ -1715,19 +1721,7 @@ add_s_s = AddSS()
class AddSSData(gof.op.Op):
    # See doc in instance of this Op or function after this class definition.

    def __eq__(self, other):
        return (type(self) == type(other))
@@ -1766,18 +1760,24 @@ class AddSSData(gof.op.Op):
    def __str__(self):
        return self.__class__.__name__
add_s_s_data = AddSSData()
"""Add two sparse matrices assuming they have the same sparsity
pattern.

:param x: Sparse matrix.
:param y: Sparse matrix.

:return: The sum of the two sparse matrices element wise.

:note: `x` and `y` are assumed to have the same
    sparsity pattern.
:note: The grad implemented is structured.
"""


class AddSD(gof.op.Op):
    # add(sparse, dense).
    # See the doc of add() for more detail.
    def __init__(self, *args, **kwargs):
        gof.Op.__init__(self, *args, **kwargs)
@@ -1823,20 +1823,6 @@ add_s_d = AddSD()
class StructuredAddSV(gof.op.Op):
-    """Structured addition of a sparse matrix and a dense vector.
-    The elements of the vector are only added to the corresponding
-    non-zero elements. Therefore, this operation outputs another sparse
-    matrix.
-
-    :param x: Sparse matrix.
-    :param y: Tensor type vector.
-
-    :return: A sparse matrix containing the addition of the vector to
-        the data of the sparse matrix.
-
-    :note: The grad implemented is structured since the op is structured.
-    """
    def __eq__(self, other):
        return (type(self) == type(other))
@@ -1873,6 +1859,19 @@ class StructuredAddSV(gof.op.Op):
    def __str__(self):
        return self.__class__.__name__
structured_add_s_v = StructuredAddSV()
+"""Structured addition of a sparse matrix and a dense vector.
+The elements of the vector are only added to the corresponding
+non-zero elements. Therefore, this operation outputs another sparse
+matrix.
+
+:param x: Sparse matrix.
+:param y: Tensor type vector.
+
+:return: A sparse matrix containing the addition of the vector to
+    the data of the sparse matrix.
+
+:note: The grad implemented is structured since the op is structured.
+"""
def add(x, y):
@@ -1934,17 +1933,8 @@ def sub(x, y)
class MulSS(gof.op.Op):
-    """Elementwise multiply a sparse and a sparse.
-
-    :param x: A sparse matrix.
-    :param y: A sparse matrix.
-
-    :return: `x` * `y`
-
-    :note: At least one of `x` and `y` must be a sparse matrix.
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # mul(sparse, sparse)
+    # See the doc of mul() for more detail
    def __eq__(self, other):
        return (type(self) == type(other))
@@ -1986,16 +1976,8 @@ mul_s_s = MulSS()
class MulSD(gof.op.Op):
-    """Elementwise multiply a sparse and a dense matrix.
-
-    :param x: A sparse matrix.
-    :param y: A dense matrix.
-
-    :return: `x` * `y`
-
-    :note: The grad is regular, i.e. not structured.
-    """
+    # mul(sparse, dense)
+    # See the doc of mul() for more detail
    def __eq__(self, other):
        return (type(self) == type(other))
@@ -2089,17 +2071,6 @@ mul_s_d = MulSD()
class MulSV(gof.op.Op):
-    """Multiplication of sparse matrix by a broadcasted dense vector
-    element wise.
-
-    :param x: Sparse matrix to multiply.
-    :param y: Tensor broadcastable vector.
-
-    :return: The product `x` * `y` element wise.
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """
    def __eq__(self, other):
        return (type(self) == type(other))
@@ -2147,6 +2118,15 @@ class MulSV(gof.op.Op):
    def __str__(self):
        return self.__class__.__name__
mul_s_v = MulSV()
+"""Multiplication of sparse matrix by a broadcasted dense vector element wise.
+
+:param x: Sparse matrix to multiply.
+:param y: Tensor broadcastable vector.
+
+:return: The product `x` * `y` element wise.
+
+:note: The grad implemented is regular, i.e. not structured.
+"""
def mul(x, y):
@@ -2323,13 +2303,6 @@ def __ComparisonSwitch(SS, SD, DS)
class EqualSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: `x` == `y`
-    """
    def comparison(self, x, y):
        return x == y
@@ -2338,13 +2311,6 @@ equal_s_s = EqualSS()
class EqualSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: `x` == `y`
-    """
    def comparison(self, x, y):
        return x == y

@@ -2352,13 +2318,6 @@ equal_s_d = EqualSD()
class NotEqualSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: `x` != `y`
-    """
    def comparison(self, x, y):
        return x != y

@@ -2366,13 +2325,6 @@ not_equal_s_s = NotEqualSS()
class NotEqualSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: `x` != `y`
-    """
    def comparison(self, x, y):
        return x != y

@@ -2380,13 +2332,6 @@ not_equal_s_d = NotEqualSD()
class LessThanSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: `x` < `y`
-    """
    def comparison(self, x, y):
        return x < y

@@ -2394,13 +2339,6 @@ less_than_s_s = LessThanSS()
class LessThanSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: `x` < `y`
-    """
    def comparison(self, x, y):
        return x < y

@@ -2408,13 +2346,6 @@ less_than_s_d = LessThanSD()
class GreaterThanSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: `x` > `y`
-    """
    def comparison(self, x, y):
        return x > y

@@ -2422,13 +2353,6 @@ greater_than_s_s = GreaterThanSS()
class GreaterThanSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: `x` > `y`
-    """
    def comparison(self, x, y):
        return x > y

@@ -2436,13 +2360,6 @@ greater_than_s_d = GreaterThanSD()
class LessEqualSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: `x` <= `y`
-    """
    def comparison(self, x, y):
        return x <= y

@@ -2450,13 +2367,6 @@ less_equal_s_s = LessEqualSS()
class LessEqualSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: `x` <= `y`
-    """
    def comparison(self, x, y):
        return x <= y

@@ -2464,13 +2374,6 @@ less_equal_s_d = LessEqualSD()
class GreaterEqualSS(__ComparisonOpSS):
-    """
-    :param x: first compared sparse matrix
-    :param y: second compared sparse matrix
-
-    :return: `x` >= `y`
-    """
    def comparison(self, x, y):
        return x >= y

@@ -2478,18 +2381,13 @@ greater_equal_s_s = GreaterEqualSS()
class GreaterEqualSD(__ComparisonOpSD):
-    """
-    :param x: sparse matrix
-    :param y: dense matrix
-
-    :return: `x` >= `y`
-    """
    def comparison(self, x, y):
        return x >= y

greater_equal_s_d = GreaterEqualSD()
+eq = __ComparisonSwitch(equal_s_s, equal_s_d, equal_s_d)
"""
:param x: A matrix variable.
:param y: A matrix variable.
@@ -2498,9 +2396,9 @@ greater_equal_s_d = GreaterEqualSD()
:note: At least one of `x` and `y` must be a sparse matrix.
"""
-eq = __ComparisonSwitch(equal_s_s, equal_s_d, equal_s_d)

+neq = __ComparisonSwitch(not_equal_s_s, not_equal_s_d, not_equal_s_d)
"""
:param x: A matrix variable.
:param y: A matrix variable.
@@ -2509,9 +2407,9 @@ eq = __ComparisonSwitch(equal_s_s, equal_s_d, equal_s_d)
:note: At least one of `x` and `y` must be a sparse matrix.
"""
-neq = __ComparisonSwitch(not_equal_s_s, not_equal_s_d, not_equal_s_d)

+lt = __ComparisonSwitch(less_than_s_s, less_than_s_d, greater_than_s_d)
"""
:param x: A matrix variable.
:param y: A matrix variable.
@@ -2520,9 +2418,9 @@ neq = __ComparisonSwitch(not_equal_s_s, not_equal_s_d, not_equal_s_d)
:note: At least one of `x` and `y` must be a sparse matrix.
"""
-lt = __ComparisonSwitch(less_than_s_s, less_than_s_d, greater_than_s_d)

+gt = __ComparisonSwitch(greater_than_s_s, greater_than_s_d, less_than_s_d)
"""
:param x: A matrix variable.
:param y: A matrix variable.
@@ -2532,8 +2430,7 @@ lt = __ComparisonSwitch(less_than_s_s, less_than_s_d, greater_than_s_d)
:note: At least one of `x` and `y` must be a sparse matrix.
"""
-gt = __ComparisonSwitch(greater_than_s_s, greater_than_s_d, less_than_s_d)

+le = __ComparisonSwitch(less_equal_s_s, less_equal_s_d, greater_equal_s_d)
"""
:param x: A matrix variable.
:param y: A matrix variable.
@@ -2542,8 +2439,9 @@ gt = __ComparisonSwitch(greater_than_s_s, greater_than_s_d, less_than_s_d)
:note: At least one of `x` and `y` must be a sparse matrix.
"""
-le = __ComparisonSwitch(less_equal_s_s, less_equal_s_d, greater_equal_s_d)

+ge = __ComparisonSwitch(greater_equal_s_s, greater_equal_s_d,
+                        less_equal_s_d)
"""
:param x: A matrix variable.
:param y: A matrix variable.
@@ -2553,24 +2451,9 @@ le = __ComparisonSwitch(less_equal_s_s, less_equal_s_d, greater_equal_s_d)
:note: At least one of `x` and `y` must be a sparse matrix.
"""
-ge = __ComparisonSwitch(greater_equal_s_s, greater_equal_s_d,
-                        less_equal_s_d)
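Note how `__ComparisonSwitch` handles the dense-sparse case for the ordering operators: it reuses the mirrored sparse-dense op with the operands swapped, since `x < y` is the same predicate as `y > x`. A `scipy.sparse` sketch of the element-wise comparison semantics (scipy, not Theano):

```python
import numpy as np
import scipy.sparse as sp

x = sp.csr_matrix(np.array([[1., 0.],
                            [0., 3.]]))
y = sp.csr_matrix(np.array([[2., 0.],
                            [4., 3.]]))

res = (x < y).toarray()       # element-wise comparison, sparse result
mirrored = (y > x).toarray()  # the swapped form the switch falls back on

print(res)
```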
class HStack(gof.op.Op):
-    """Stack sparse matrices horizontally (column wise).
-
-    :param blocks: Sequence of sparse array of compatible shape.
-    :param format: String representing the output format. Default
-        is csc.
-    :param dtype: Output dtype. Must be specified.
-
-    :return: The concatenation of the sparse arrays column wise.
-
-    :note: The number of line of the sparse matrix must agree.
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
    def __init__(self, format=None, dtype=None):
        if format is None:
            self.format = 'csc'
@@ -2667,19 +2550,7 @@ def hstack(blocks, format=None, dtype=None)
class VStack(HStack):
-    """Stack sparse matrices vertically (row wise).
-
-    :param blocks: Sequence of sparse array of compatible shape.
-    :param format: String representing the output format. Default
-        is csc.
-    :param dtype: Output dtype. Must be specified.
-
-    :return: The concatenation of the sparse arrays row wise.
-
-    :note: The number of column of the sparse matrix must agree.
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
    def perform(self, node, block, (out, )):
        for b in block:
            assert _is_sparse(b)
@@ -2743,15 +2614,7 @@ def vstack(blocks, format=None, dtype=None)
class Remove0(gof.Op):
-    """Remove explicit zeros from a sparse matrix.
-
-    :param x: Sparse matrix.
-
-    :return: Exactly `x` but with a data attribute
-        exempt of zeros.
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or a function after the class definition.
    def __init__(self, inplace=False, *args, **kwargs):
        gof.Op.__init__(self, *args, **kwargs)
        self.inplace = inplace
@@ -2789,6 +2652,14 @@ class Remove0(gof.Op):
    def infer_shape(self, node, i0_shapes):
        return i0_shapes
remove0 = Remove0()
+"""Remove explicit zeros from a sparse matrix.
+
+:param x: Sparse matrix.
+
+:return: Exactly `x` but with a data attribute
+    exempt of zeros.
+
+:note: The grad implemented is regular, i.e. not structured.
+"""
# Structured monoid
@@ -3008,28 +2879,6 @@ def sqrt(x)
class TrueDot(gof.op.Op):
-    """Calculate the true dot operation between two matrices.
-
-    `TrueDot` is different of `StructuredDot` for sparse matrix
-    since the grad of `TrueDot` is regular, i.e. not structured.
-
-    The parameter `grad_preserves_dense`, controlled by the
-    constructor, is a boolean flag that controls whether gradients
-    with respect to inputs are converted to dense matrices when the
-    corresponding input `y` is dense (not in a L{SparseVariable} wrapper).
-    This is generally a good idea when L{Dot} is in the middle of a
-    larger graph, because the types of `gy` will match that of `y`. This
-    conversion might be inefficient if the gradients are graph outputs
-    though, hence this mask.
-
-    :param x: Sparse matrix for the left operand.
-    :param y: Sparse or dense matrix for the right operand.
-
-    :return: The dot product `x` . `y` in a sparse matrix.
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """
    # TODO
    # Simplify code by splitting into DotSS and DotSD.
@@ -3133,14 +2982,15 @@ def true_dot(x, y, grad_preserves_dense=True)
    one or all operands are sparse. Supported formats are CSC and CSR.
    The output of the operation is sparse.

-    :param x: Sparse matrix or 2d tensor variable.
+    :param x: Sparse matrix.
    :param y: Sparse matrix or 2d tensor variable.
    :param grad_preserves_dense: if True (default), makes the grad of
        dense inputs dense. Otherwise the grad is always sparse.

    :return: The dot product `x`.`y` in a sparse format.

-    :note: one of ``x`` or ``y`` must be sparse.
+    :note:
+    - The grad implemented is regular, i.e. not structured.
    """
    # TODO
    # Maybe the triple-transposition formulation
@@ -3168,21 +3018,7 @@ def true_dot(x, y, grad_preserves_dense=True)
# Dot
class StructuredDot(gof.Op):
-    """Structured Dot is like dot, except that only the
-    gradient wrt non-zero elements of the sparse matrix
-    `a` are calculated and propagated.
-
-    The output is presumed to be a dense matrix, and is represented by a
-    TensorType instance.
-
-    :param a: A sparse matrix.
-    :param b: A sparse or dense matrix.
-
-    :return: The dot product of `a` and `b` as a dense matrix.
-
-    :note: The grad implemented is structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
    def __eq__(self, other):
        return (type(self) == type(other))
@@ -3597,33 +3433,7 @@ def structured_dot_grad(sparse_A, dense_B, ga)
class SamplingDot(gof.op.Op):
-    """Operand for calculating the dot product dot(`x`, `y`.T) = `z` when you
-    only want to calculate a subset of `z`.
-
-    It is equivalent to `p` o (`x` . `y`.T) where o is the element-wise
-    product, `x` and `y` operands of the dot product and `p` is a matrix that
-    contains 1 when the corresponding element of `z` should be calculated
-    and 0 when it shouldn't. Note that SamplingDot has a different interface
-    than `dot` because SamplingDot requires `x` to be a `m`x`k` matrix while
-    `y` is a `n`x`k` matrix instead of the usual `k`x`n` matrix.
-
-    .. note::
-        It will work if the pattern is not binary value, but if the
-        pattern doesn't have a high sparsity proportion it will be slower
-        than a more optimized dot followed by a normal elemwise
-        multiplication.
-
-    :param x: Tensor matrix.
-    :param y: Tensor matrix.
-    :param p: Sparse matrix in csr format.
-
-    :return: A dense matrix containing the dot product of `x` by `y`.T only
-        where `p` is 1.
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
    def __eq__(self, other):
        return type(self) == type(other)
@@ -3671,25 +3481,36 @@ class SamplingDot(gof.op.Op):
    def __str__(self):
        return self.__class__.__name__
sampling_dot = SamplingDot()
"""Operand for calculating the dot product dot(`x`, `y`.T) = `z` when you
only want to calculate a subset of `z`.
It is equivalent to `p` o (`x` . `y`.T) where o is the element-wise
product, `x` and `y` operands of the dot product and `p` is a matrix that
contains 1 when the corresponding element of `z` should be calculated
and 0 when it shouldn't. Note that SamplingDot has a different interface
than `dot` because SamplingDot requires `x` to be a `m`x`k` matrix while
`y` is a `n`x`k` matrix instead of the usual `k`x`n` matrix.
class Dot(gof.op.Op): .. note::
"""Operation for efficiently calculating the dot product when
one or all operands is sparse. Supported format are CSC and CSR.
The output of the operation is dense.
:param x: sparse or dense matrix variable. It will work if the pattern is not binary value, but if the
:param y: sparse or dense matrix variable. pattern doesn't have a high sparsity proportion it will be slower
then a more optimized dot followed by a normal elemwise
multiplication.
:return: The dot product `x`.`y` in a dense format. :param x: Tensor matrix.
:param y: Tensor matrix.
:param p: Sparse matrix in csr format.
:note: The grad implemented is regular, i.e. not structured. :return: A dense matrix containing the dot product of `x` by `y`.T only
:note: At least one of `x` or `y` must be a sparse matrix. where `p` is 1.
:note: When the operation has the form dot(csr_matrix, dense)
the gradient of this operation can be performed inplace
by UsmmCscDense. This leads to significant speed-ups.
"""
:note: The grad implemented is regular, i.e. not structured.
"""
class Dot(gof.op.Op):
# See doc in instance of this Op or function after this class definition.
def __eq__(self, other): def __eq__(self, other):
return type(self) == type(other) return type(self) == type(other)
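The `sampling_dot` docstring above describes `p` o (`x` . `y`.T). With `scipy.sparse` the masked product can be sketched directly, forward semantics only (scipy, not Theano, and without the efficiency benefit of skipping the unmasked entries):

```python
import numpy as np
import scipy.sparse as sp

x = np.array([[1., 2.],
              [3., 4.]])  # m x k
y = np.array([[5., 6.],
              [7., 8.]])  # n x k -- note: not the usual k x n
p = sp.csr_matrix(np.array([[1., 0.],
                            [0., 1.]]))  # mask: where to evaluate the dot

# p o (x . y.T): only entries where p is non-zero are kept.
out = p.multiply(x.dot(y.T))

print(out.toarray())
```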
@@ -3787,13 +3608,17 @@ def dot(x, y)
    one or all operands is sparse. Supported formats are CSC and CSR.
    The output of the operation is dense.

-    :param x: Matrix variable.
-    :param y: Matrix variable.
+    :param x: sparse or dense matrix variable.
+    :param y: sparse or dense matrix variable.

    :return: The dot product `x`.`y` in a dense format.

    :note: The grad implemented is regular, i.e. not structured.
    :note: At least one of `x` or `y` must be a sparse matrix.
+    :note: When the operation has the form dot(csr_matrix, dense)
+        the gradient of this operation can be performed inplace
+        by UsmmCscDense. This leads to significant speed-ups.
    """
    if hasattr(x, 'getnnz'):
@@ -3811,19 +3636,7 @@ def dot(x, y)
class Usmm(gof.op.Op):
-    """Performs the expression `alpha` * `x` `y` + `z`.
-
-    :param x: Matrix variable.
-    :param y: Matrix variable.
-    :param z: Dense matrix.
-    :param alpha: A tensor scalar.
-
-    :return: The dense matrix resulting from `alpha` * `x` `y` + `z`.
-
-    :note: The grad is not implemented for this op.
-    :note: At least one of `x` or `y` must be a sparse matrix.
-    """
+    # See doc in instance of this Op or function after this class definition.
    # We don't implement the infer_shape as it is
    # inserted by optimization only.
@@ -3883,13 +3696,22 @@ class Usmm(gof.op.Op):
    out[0] = rval
usmm = Usmm()
+"""Performs the expression `alpha` * `x` `y` + `z`.
+
+:param x: Matrix variable.
+:param y: Matrix variable.
+:param z: Dense matrix.
+:param alpha: A tensor scalar.
+
+:return: The dense matrix resulting from `alpha` * `x` `y` + `z`.
+
+:note: The grad is not implemented for this op.
+:note: At least one of `x` or `y` must be a sparse matrix.
+"""
class ConstructSparseFromList(gof.Op):
-    """Constructs a sparse matrix out of a list of 2-D matrix rows.
-
-    :note: The grad implemented is regular, i.e. not structured.
-    """
+    # See doc in instance of this Op or function after this class definition.
    def __hash__(self):
        return hash((type(self)))
@@ -3979,3 +3801,7 @@ class ConstructSparseFromList(gof.Op):
        return [gx, gy] + [DisconnectedType()()] * len(idx_list)
construct_sparse_from_list = ConstructSparseFromList()
+"""Constructs a sparse matrix out of a list of 2-D matrix rows.
+
+:note: The grad implemented is regular, i.e. not structured.
+"""
@@ -50,9 +50,7 @@ TypedListType.Variable = TypedListVariable
class GetItem(Op):
-    """
-    get specified slice of a typed list
-    """
+    # See doc in instance of this Op or function after this class definition.
    def __eq__(self, other):
        return type(self) == type(other)
@@ -100,13 +98,16 @@ class GetItem(Op):
        return (1,)
getitem = GetItem()
"""
Get specified slice of a typed list.
:param x: type type list.
:param index: the index of the value to return from `x`.
"""
class Append(Op):
"""
#append an element at the end of another list
"""
class Append(Op):
# See doc in instance of this Op after the class definition.
def __init__(self, inplace=False): def __init__(self, inplace=False):
self.inplace = inplace self.inplace = inplace
if self.inplace: if self.inplace:
...@@ -159,13 +160,16 @@ class Append(Op): ...@@ -159,13 +160,16 @@ class Append(Op):
return (1,) return (1,)
append = Append() append = Append()
"""
Append an element at the end of another list.
:param x: the base typed list.
:param y: the element to append to `x`.
"""
class Extend(Op): class Extend(Op):
""" # See doc in instance of this Op after the class definition.
append all element of a list at the end of another list
"""
def __init__(self, inplace=False): def __init__(self, inplace=False):
self.inplace = inplace self.inplace = inplace
if self.inplace: if self.inplace:
...@@ -222,10 +226,16 @@ class Extend(Op): ...@@ -222,10 +226,16 @@ class Extend(Op):
return (1,) return (1,)
extend = Extend() extend = Extend()
"""
Append all element of a list at the end of another list.
:param x: The typed list to extend.
:param toAppend: The typed list that will be added at the end of `x`.
"""
class Insert(Op):
class Insert(Op):
# See doc in instance of this Op after the class definition.
def __init__(self, inplace=False): def __init__(self, inplace=False):
self.inplace = inplace self.inplace = inplace
if self.inplace: if self.inplace:
...@@ -283,10 +293,17 @@ class Insert(Op): ...@@ -283,10 +293,17 @@ class Insert(Op):
return (1,) return (1,)
insert = Insert() insert = Insert()
"""
Insert an element at an index in a typed list.
:param x: the typed list to modified.
:param index: the index where to put the new element in `x`.
:param toInsert: The new element to insert.
"""
class Remove(Op):
class Remove(Op):
# See doc in instance of this Op after the class definition.
def __init__(self, inplace=False): def __init__(self, inplace=False):
self.inplace = inplace self.inplace = inplace
if self.inplace: if self.inplace:
...@@ -324,10 +341,21 @@ class Remove(Op): ...@@ -324,10 +341,21 @@ class Remove(Op):
return self.__class__.__name__ return self.__class__.__name__
remove = Remove() remove = Remove()
"""Remove an element from a typed list.
:param x: the typed list to be changed.
:param toRemove: an element to be removed from the typed list.
We only remove the first instance.
class Reverse(Op): :note: Python implementation of remove doesn't work when we want to
remove an ndarray from a list. This implementation works in that
case.
"""
class Reverse(Op):
+    # See doc in instance of this Op after the class definition.
    def __init__(self, inplace=False):
        self.inplace = inplace
        if self.inplace:
@@ -380,10 +408,15 @@ class Reverse(Op):
        return (1,)
reverse = Reverse()
"""
Reverse the order of a typed list.
:param x: the typed list to be reversed.
"""
class Index(Op):
class Index(Op):
# See doc in instance of this Op after the class definition.
def __eq__(self, other): def __eq__(self, other):
return type(self) == type(other) return type(self) == type(other)
...@@ -413,7 +446,7 @@ index_ = Index() ...@@ -413,7 +446,7 @@ index_ = Index()
class Count(Op): class Count(Op):
# See doc in instance of this Op after the class definition.
def __eq__(self, other): def __eq__(self, other):
return type(self) == type(other) return type(self) == type(other)
...@@ -441,6 +474,18 @@ class Count(Op): ...@@ -441,6 +474,18 @@ class Count(Op):
return self.__class__.__name__ return self.__class__.__name__
count = Count() count = Count()
"""
Count the number of time an element is in the typed list.
:param x: The typed list to look into.
:param elem: The element we want to count in list.
The element are compared with equals.
:note: Python implementation of count doesn't work when we want to
count an ndarray from a list. This implementation works in that
case.
"""
class Length(Op):
...