testgroup / pytensor / Commits / 7ebae191
Commit 7ebae191, authored Aug 20, 2012 by goodfeli

Merge pull request #875 from nouiz/doc2

Better doc of extra_ops.

Parents: 18128f23, c6b32709
Showing 5 changed files with 154 additions and 97 deletions.
doc/library/sparse/index.txt       +6   -3
doc/sandbox/index2.txt             +1   -1
doc/tutorial/extending_theano.txt  +31  -1
theano/misc/doubleop.py            +54  -0
theano/tensor/extra_ops.py         +62  -92
doc/library/sparse/index.txt

@@ -160,7 +160,8 @@ List of Implemented Operations

     The grad implemented is structured.

 - Monoid (Element-wise operation with only one sparse input).
   `They all have a structured grad.`

     - ``structured_sigmoid``
     - ``structured_exp``
     - ``structured_log``
...
@@ -217,14 +218,16 @@ List of Implemented Operations

     The grad implemented is regular.

 - Probability
   `There is no grad implemented for these operations.`

     - :class:`Poisson <theano.sparse.basic.Poisson>` and ``poisson``
     - :class:`Binomial <theano.sparse.basic.Binomial>` and ``csc_fbinomial``, ``csc_dbinomial``
       ``csr_fbinomial``, ``csr_dbinomial``
     - :class:`Multinomial <theano.sparse.basic.Multinomial>` and ``multinomial``

 - Internal Representation
   `They all have a regular grad implemented.`

     - :class:`EnsureSortedIndices <theano.sparse.basic.EnsureSortedIndices>` and ``ensure_sorted_indices``
     - :class:`Remove0 <theano.sparse.basic.Remove0>` and ``remove0``
     - :func:`clean <theano.sparse.basic.clean>` to resort indices and remove zeros
...
doc/sandbox/index2.txt

@@ -8,7 +8,7 @@ Advanced Topics (under construction)

 .. toctree::
     :maxdepth: 2

-    fgraph
+    fg
     compilation
     ccodegen
     function
...
doc/tutorial/extending_theano.txt

@@ -333,4 +333,34 @@ Documentation

 -------------

 See :ref:`metadocumentation`, for some information on how to generate
-and do documentation.
+the documentation.
+
+Here is an example of how to add a docstring to a class.
+
+.. code-block:: python
+
+    import theano
+
+    class DoubleOp(theano.Op):
+        """ Double each element of a tensor.
+
+        :param x: input tensor.
+
+        :return: a tensor of the same shape and dtype as the input with all
+            values doubled.
+
+        :note:
+            this is a test note
+
+        :seealso:
+            You can use the elemwise op to replace this example.
+            Just execute `x * 2` with x being a Theano variable.
+
+        .. versionadded:: 0.6
+        """
+
+This is how it will show up for files that we auto-list in the library
+documentation:
+
+.. automodule:: theano.misc.doubleop
+    :members:
theano/misc/doubleop.py (new file, mode 100644)

# This is the example in the Theano/doc/tutorial/extending_theano.txt
import theano


class DoubleOp(theano.Op):
    """ Double each element of a tensor.

    :param x: input tensor.

    :return: a tensor of the same shape and dtype as the input with all
        values doubled.

    :note:
        this is a test note

    :seealso:
        You can use the elemwise op to replace this example.
        Just execute `x * 2` with x being a Theano variable.

    .. versionadded:: 0.6
    """
    def __eq__(self, other):
        return type(self) == type(other)

    def __hash__(self):
        return hash(type(self))

    def __str__(self):
        return self.__class__.__name__

    def make_node(self, x):
        x = theano.tensor.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        x = inputs[0]
        z = output_storage[0]
        z[0] = x * 2

    def infer_shape(self, node, i0_shapes):
        return i0_shapes

    def grad(self, inputs, output_grads):
        return [output_grads[0] * 2]

    def R_op(self, inputs, eval_points):
        # R_op can receive None as eval_points.
        # That means there is no differentiable path through that input.
        # If this implies that you cannot compute some outputs,
        # return None for those.
        if eval_points[0] is None:
            return eval_points
        return self.grad(inputs, eval_points)
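The heart of the op is `perform`, which writes `x * 2` into the storage cell Theano provides. A minimal NumPy-only sketch of that storage contract (the `double_perform` helper is hypothetical, standing in for the bound method so no Theano install is needed):

```python
import numpy as np

def double_perform(inputs, output_storage):
    # Mirrors DoubleOp.perform: read the input from the inputs list and
    # write the doubled array into the first output storage cell.
    x = inputs[0]
    z = output_storage[0]
    z[0] = x * 2

storage = [[None]]
double_perform([np.array([1, 2, 3])], storage)
print(storage[0][0].tolist())  # [2, 4, 6]
```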
theano/tensor/extra_ops.py

@@ -8,20 +8,7 @@ from theano.sandbox.linalg.ops import diag

 class DiffOp(theano.Op):
-    """Calculate the n-th order discrete difference along given axis.
-
-    The first order difference is given by out[n] = a[n+1] - a[n]
-    along the given axis, higher order differences are calculated by
-    using diff recursively. Wraping of numpy.diff.
-
-    Parameter:
-    x -- Input vector.
-
-    Keywords arguments:
-    n -- The number of times values are differenced, default is 1.
-    """
+    # See function diff for docstring
     def __init__(self, n=1, axis=-1):
         self.n = n
         self.axis = axis
...
@@ -78,40 +65,24 @@ class DiffOp(theano.Op):

 def diff(x, n=1, axis=-1):
     """Calculate the n-th order discrete difference along given axis.

-    The first order difference is given by out[n] = a[n+1] - a[n]
+    The first order difference is given by out[i] = a[i + 1] - a[i]
     along the given axis, higher order differences are calculated by
     using diff recursively. Wraping of numpy.diff.

-    Parameter:
-    x -- Input vector.
+    :param x: Input tensor variable.

-    Keywords arguments:
-    n -- The number of times values are differenced, default is 1.
+    :param n: The number of times values are differenced, default is 1.
+
+    :param axis: The axis along which the difference is taken,
+        default is the last axis.
+
+    .. versionadded:: 0.6
     """
     return DiffOp(n=n, axis=axis)(x)
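Since the docstring describes `diff` as a wrapping of `numpy.diff`, the documented semantics (out[i] = a[i + 1] - a[i], applied recursively for higher orders) can be checked directly against NumPy:

```python
import numpy as np

a = np.array([1, 2, 4, 7, 0])
d1 = np.diff(a)       # first order: out[i] = a[i + 1] - a[i]
d2 = np.diff(a, n=2)  # second order: diff applied recursively
print(d1.tolist())  # [1, 2, 3, -7]
print(d2.tolist())  # [1, 1, -10]
```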
 class BinCountOp(theano.Op):
-    """Count number of occurrences of each value in array of non-negative ints.
-
-    The number of bins (of size 1) is one larger than the largest
-    value in x. If minlength is specified, there will be at least
-    this number of bins in the output array (though it will be longer
-    if necessary, depending on the contents of x). Each bin gives the
-    number of occurrences of its index value in x. If weights is
-    specified the input array is weighted by it, i.e. if a value n
-    is found at position i, out[n] += weight[i] instead of out[n] += 1.
-    Wraping of numpy.bincount
-
-    Parameter:
-    x -- 1 dimension, nonnegative ints
-
-    Keywords arguments:
-    weights -- Weights, array of the same shape as x.
-    minlength -- A minimum number of bins for the output array.
-    """
+    # See function bincount for docstring
     compatible_type = ('int8', 'int16', 'int32', 'int64',
                        'uint8', 'uint16', 'uint32', 'uint64')
...
@@ -202,13 +173,14 @@ def bincount(x, weights=None, minlength=None):

     is found at position i, out[n] += weight[i] instead of out[n] += 1.
     Wraping of numpy.bincount

-    Parameter:
-    x -- 1 dimension, nonnegative ints
+    :param x: 1 dimension, nonnegative ints

-    Keywords arguments:
-    weights -- Weights, array of the same shape as x.
-    minlength -- A minimum number of bins for the output array.
+    :param weights: array of the same shape as x with corresponding weights.
+        Optional.
+    :param minlength: A minimum number of bins for the output array.
+        Optional.
+
+    .. versionadded:: 0.6
     """
     return BinCountOp(minlength=minlength)(x, weights)
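The `weights` and `minlength` keywords documented above behave like their `numpy.bincount` counterparts, which the op wraps; for instance:

```python
import numpy as np

x = np.array([0, 1, 1, 3])
counts = np.bincount(x)               # one bin per value from 0 to max(x)
weighted = np.bincount(x, weights=[0.5, 0.25, 0.25, 1.0])  # out[n] += weight[i]
padded = np.bincount(x, minlength=6)  # at least 6 bins in the output
print(counts.tolist())    # [1, 2, 0, 1]
print(weighted.tolist())  # [0.5, 0.5, 0.0, 1.0]
print(padded.tolist())    # [1, 2, 0, 1, 0, 0]
```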
@@ -224,6 +196,8 @@ def squeeze(x):

     :param x: Input data, tensor variable.
     :return: `x` without its broadcastable dimensions.
+
+    .. versionadded:: 0.6
     """
     view = x.dimshuffle([i for i in range(x.ndim)
                          if not x.broadcastable[i]])
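The `dimshuffle` expression above keeps only the non-broadcastable axes; the NumPy analogue of that behavior is `numpy.squeeze`, which drops length-1 axes:

```python
import numpy as np

x = np.zeros((1, 3, 1, 2))
y = np.squeeze(x)  # drops the two length-1 axes, like dropping broadcastable dims
print(y.shape)  # (3, 2)
```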
@@ -231,21 +205,7 @@ def squeeze(x):

 class RepeatOp(theano.Op):
-    """Repeat elements of an array.
-
-    It returns an array which has the same shape as `x`, except
-    along the given axis. The axis is used to speficy along which
-    axis to repeat values. By default, use the flattened input
-    array, and return a flat output array.
-
-    The number of repetitions for each element is `repeat`.
-    `repeats` is broadcasted to fit the length of the given `axis`.
-
-    :param x: Input data, tensor variable.
-    :param repeats: int, scalar or tensor variable.
-    :param axis: int, optional.
-    """
+    # See the repeat function for docstring
     def __init__(self, axis=None):
         self.axis = axis
...
@@ -360,26 +320,14 @@ def repeat(x, repeats, axis=None):

     :param repeats: int, scalar or tensor variable.
     :param axis: int, optional.
+
+    .. versionadded:: 0.6
     """
     return RepeatOp(axis=axis)(x, repeats)
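As with the other wrappers here, `repeat(x, repeats, axis)` follows `numpy.repeat`: a flattened result by default, with per-element repeat counts broadcast along the chosen axis:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
flat = np.repeat(x, 2)               # default: flatten, repeat each element
rows = np.repeat(x, [1, 2], axis=0)  # repeat row 0 once, row 1 twice
print(flat.tolist())  # [1, 1, 2, 2, 3, 3, 4, 4]
print(rows.tolist())  # [[1, 2], [3, 4], [3, 4]]
```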
 class Bartlett(gof.Op):
-    """
-    An instance of this class returns the Bartlett spectral window in the
-    time-domain. The Bartlett window is very similar to a triangular window,
-    except that the end points are at zero. It is often used in signal
-    processing for tapering a signal, without generating too much ripple in
-    the frequency domain.
-
-    input : (integer scalar) Number of points in the output window. If zero or
-    less, an empty vector is returned.
-
-    output : (vector of doubles) The triangular window, with the maximum value
-    normalized to one (the value one appears only if the number of samples is
-    odd), with the first and last samples equal to zero.
-    """
+    # See function bartlett for docstring
     def __eq__(self, other):
         return type(self) == type(other)
...
@@ -414,33 +362,34 @@ class Bartlett(gof.Op):

     def grad(self, inputs, output_grads):
         return [None for i in inputs]

-bartlett = Bartlett()
+bartlett_ = Bartlett()
+
+
+#I create a function only to have the doc show well.
+def bartlett(M):
+    """An instance of this class returns the Bartlett spectral window in the
+    time-domain. The Bartlett window is very similar to a triangular window,
+    except that the end points are at zero. It is often used in signal
+    processing for tapering a signal, without generating too much ripple in
+    the frequency domain.
+
+    :param M: (integer scalar) Number of points in the output
+        window. If zero or less, an empty vector is returned.
+
+    :return: (vector of doubles) The triangular window, with the
+        maximum value normalized to one (the value one appears only if
+        the number of samples is odd), with the first and last samples
+        equal to zero.
+
+    .. versionadded:: 0.6
+    """
+    return bartlett_(M)


 class FillDiagonal(gof.Op):
-    """
-    An instance of this class returns a copy of an array with all elements of
-    the main diagonal set to a specified scalar value.
-
-    inputs:
-
-    a : Rectangular array of at least two dimensions.
-    val : Scalar value to fill the diagonal whose type must be compatible with
-    that of array 'a' (i.e. 'val' cannot be viewed as an upcast of 'a').
-
-    output:
-
-    An array identical to 'a' except that its main diagonal is filled with
-    scalar 'val'. (For an array 'a' with a.ndim >= 2, the main diagonal is the
-    list of locations a[i, i, ..., i] (i.e. with indices all identical).)
-
-    Support rectangular matrix and tensor with more then 2 dimensions
-    if the later have all dimensions are equals.
-    """
+    # See function fill_diagonal for docstring
     def __eq__(self, other):
         return type(self) == type(other)
...
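The properties promised by the new `bartlett` docstring (zeros at the end points, a peak of one only for an odd number of samples, an empty vector for M of zero or less) hold for `numpy.bartlett`, which the op computes:

```python
import numpy as np

w = np.bartlett(5)  # odd M: triangular window peaking at exactly 1.0
print(w.tolist())   # [0.0, 0.5, 1.0, 0.5, 0.0]
print(np.bartlett(0).size)  # 0: empty vector when M <= 0
```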
@@ -499,6 +448,27 @@ class FillDiagonal(gof.Op):

         wr_a = fill_diagonal(grad, 0)  # valid for any number of dimensions
         wr_val = diag(grad).sum()  # diag is only valid for matrices
         return [wr_a, wr_val]

-fill_diagonal = FillDiagonal()
+fill_diagonal_ = FillDiagonal()
+
+
+#I create a function only to have the doc show well.
+def fill_diagonal(a, val):
+    """ Returns a copy of an array with all
+    elements of the main diagonal set to a specified scalar value.
+
+    :param a: Rectangular array of at least two dimensions.
+    :param val: Scalar value to fill the diagonal whose type must be
+        compatible with that of array 'a' (i.e. 'val' cannot be viewed
+        as an upcast of 'a').
+
+    :return: An array identical to 'a' except that its main diagonal
+        is filled with scalar 'val'. (For an array 'a' with a.ndim >=
+        2, the main diagonal is the list of locations a[i, i, ..., i]
+        (i.e. with indices all identical).)
+
+    Supports rectangular matrices and tensors with more than 2 dimensions
+    if all dimensions are equal.
+
+    .. versionadded:: 0.6
+    """
+    return fill_diagonal_(a, val)
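The Theano `fill_diagonal` above returns a copy; its NumPy namesake `numpy.fill_diagonal` fills in place instead, but the diagonal it targets (a[i, i, ..., i]) is the same:

```python
import numpy as np

a = np.zeros((3, 3), dtype=int)
np.fill_diagonal(a, 5)  # note: modifies `a` in place, unlike the Theano wrapper
print(a.tolist())  # [[5, 0, 0], [0, 5, 0], [0, 0, 5]]
```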