Commit 5399a9e2 authored by Frederic

Doc about new sparse __getitem__.

parent 26b9590f
@@ -120,7 +120,8 @@ New features:
 * Added a_tensor.transpose(axes) axes is optional (James)
 * theano.tensor.transpose(a_tensor, kwargs) We were ignoring kwargs; now it is used as the axes.
 * a_CudaNdarray_object[*] = int, now works (Frederic)
+* sparse_variable[N, N] now works (Li Yao, Frederic)
+* sparse_variable[M:N, O:P] now works (Li Yao, Frederic)
 New optimizations:
 * AdvancedSubtensor1 reuses preallocated memory if available (scan, c|py_nogc linker) (Frederic)
...
@@ -53,9 +53,12 @@ grad?
 - When the operation has the form dot(csr_matrix, dense) the gradient of
   this operation can be performed inplace by UsmmCscDense. This leads to
   significant speed-ups.
-- Subtensor selection (aka. square-bracket notation, aka indexing) is not
-  implemented, but the CSR and CSC data structures support efficient
-  implementations.
+- Subtensor
+  - sparse_variable[N, N] returns a tensor scalar
+  - sparse_variable[M:N, O:P] returns a sparse matrix
+  - [M, N:O] and [M:N, O] are not supported, as we don't support sparse
+    vectors and returning a sparse matrix would break the numpy interface.
+    Use [M:M+1, N:O] and [M:N, O:O+1] instead.
 There are no GPU implementations for sparse matrices implemented in Theano.
...
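The indexing rules introduced above can be sketched with scipy.sparse, whose CSR indexing behaves analogously to the new sparse __getitem__ (in Theano itself one would index a symbolic sparse_variable instead; the matrix values here are illustrative assumptions):

```python
import numpy as np
import scipy.sparse as sp

# A small CSR matrix to illustrate the indexing rules described above.
m = sp.csr_matrix(np.arange(12.0).reshape(3, 4))

# [N, N]: indexing a single element yields a scalar.
elem = m[1, 2]  # the element at row 1, column 2

# [M:N, O:P]: slicing both axes keeps the result a sparse matrix.
block = m[0:2, 1:3]
assert sp.issparse(block) and block.shape == (2, 2)

# The documented workaround for row/column extraction: use a length-1
# slice so the result stays a (1 x N) or (N x 1) sparse matrix rather
# than an unsupported sparse vector.
row = m[1:2, 0:4]
assert sp.issparse(row) and row.shape == (1, 4)
```

The length-1 slice workaround exists because a mixed index like [M, N:O] would have to return a sparse vector, which neither the sparse data structures nor the numpy-compatible interface support.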