testgroup / pytensor · Commits

Commit c172b4c4
Authored Aug 05, 2015 by Iban Harlouchet
Parent: 48de5a3b

    numpydoc for theano/tensor/extra_ops.py

Showing 1 changed file with 132 additions and 60 deletions.

theano/tensor/extra_ops.py (+132, -60)
@@ -14,8 +14,9 @@ tensor = basic
 class CpuContiguous(theano.Op):
     """
     Check to see if the input is c-contiguous,
-    if it is, do nothing, else return a contiguous array
+    if it is, do nothing, else return a contiguous array.
     """
     __props__ = ()
     view_map = {0: [0]}
@@ -171,12 +172,16 @@ def cumsum(x, axis=None):
     Wraping of numpy.cumsum.

-    :param x: Input tensor variable.
-    :param axis: The axis along which the cumulative sum is computed.
+    Parameters
+    ----------
+    x
+        Input tensor variable.
+    axis
+        The axis along which the cumulative sum is computed.
         The default (None) is to compute the cumsum over the flattened array.

     .. versionadded:: 0.7
     """
     return CumsumOp(axis=axis)(x)
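Since the docstring describes cumsum as a wrapper around numpy.cumsum, the axis semantics it documents can be illustrated directly with NumPy (the theano op itself returns a symbolic variable; this is only a sketch of the underlying behavior, assuming NumPy is available):

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])

# axis=None (the default): cumulative sum over the flattened array
flat = np.cumsum(x)
# axis=0: cumulative sum down each column
cols = np.cumsum(x, axis=0)
```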
@@ -291,18 +296,24 @@ def cumprod(x, axis=None):
     Wraping of numpy.cumprod.

-    :param x: Input tensor variable.
-    :param axis: The axis along which the cumulative product is computed.
+    Parameters
+    ----------
+    x
+        Input tensor variable.
+    axis
+        The axis along which the cumulative product is computed.
         The default (None) is to compute the cumprod over the flattened array.

     .. versionadded:: 0.7
     """
     return CumprodOp(axis=axis)(x)


 class DiffOp(theano.Op):
     # See function diff for docstring
     __props__ = ("n", "axis")

     def __init__(self, n=1, axis=-1):
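As with cumsum, the wrapped numpy.cumprod shows the documented behavior concretely (a NumPy sketch, not the symbolic theano op):

```python
import numpy as np

x = np.array([1, 2, 3, 4])
p = np.cumprod(x)                         # running product over the flat array
m = np.cumprod([[1, 2], [3, 4]], axis=1)  # per-row running product
```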
@@ -354,23 +365,29 @@ def diff(x, n=1, axis=-1):
     along the given axis, higher order differences are calculated by
     using diff recursively. Wraping of numpy.diff.

-    :param x: Input tensor variable.
-    :param n: The number of times values are differenced, default is 1.
-    :param axis: The axis along which the difference is taken,
-        default is the last axis.
+    Parameters
+    ----------
+    x
+        Input tensor variable.
+    n
+        The number of times values are differenced, default is 1.
+    axis
+        The axis along which the difference is taken,
+        default is the last axis.

     .. versionadded:: 0.6
     """
     return DiffOp(n=n, axis=axis)(x)
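The recursive higher-order differencing the docstring mentions can be checked against numpy.diff, which this function wraps (NumPy sketch only):

```python
import numpy as np

x = np.array([1, 2, 4, 7, 0])
d1 = np.diff(x)        # first-order differences along the last axis
d2 = np.diff(x, n=2)   # n=2: diff applied recursively, i.e. diff of d1
```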
 class BinCountOp(theano.Op):
     """
-    DEPRECATED: use bincount() instead.
+    .. note:: Deprecated
+        Use bincount() instead.

-    See function bincount for docstring.
+    See function bincount for docstring
     """
     compatible_type = ('int8', 'int16', 'int32', 'int64',
                        'uint8', 'uint16', 'uint32', 'uint64')
@@ -473,17 +490,19 @@ def bincount(x, weights=None, minlength=None, assert_nonneg=False):
     specified the input array is weighted by it, i.e. if a value n
     is found at position i, out[n] += weight[i] instead of out[n] += 1.

-    :param x: 1 dimension, nonnegative ints
-    :param weights: array of the same shape as x with corresponding weights.
-        Optional.
-    :param minlength: A minimum number of bins for the output array.
-        Optional.
-    :param assert_nonneg: A flag that inserts an assert_op to check if
-        every input x is nonnegative.
-        Optional.
+    Parameters
+    ----------
+    x : 1 dimension, nonnegative ints
+    weights : array of the same shape as x with corresponding weights.
+        Optional.
+    minlength : A minimum number of bins for the output array.
+        Optional.
+    assert_nonneg : A flag that inserts an assert_op to check if
+        every input x is nonnegative.
+        Optional.

     .. versionadded:: 0.6
     """
     compatible_type = ('int8', 'int16', 'int32', 'int64',
                        'uint8', 'uint16', 'uint32')
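The `out[n] += weight[i]` rule and the `minlength` parameter described above behave exactly as in numpy.bincount, which can serve as a concrete reference (NumPy sketch):

```python
import numpy as np

x = np.array([0, 1, 1, 3, 2, 1])

counts = np.bincount(x)               # out[n] += 1 for every n found in x
w = np.array([0.5, 1.0, 1.0, 2.0, 0.25, 1.0])
weighted = np.bincount(x, weights=w)  # out[n] += weight[i] instead of += 1
padded = np.bincount(x, minlength=7)  # guarantees at least 7 bins
```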
@@ -527,11 +546,17 @@ def squeeze(x):
     broadcastable dimensions removed. This is
     always `x` itself or a view into `x`.

-    :param x: Input data, tensor variable.
-    :return: `x` without its broadcastable dimensions.
+    Parameters
+    ----------
+    x
+        Input data, tensor variable.
+
+    Returns
+    -------
+    `x` without its broadcastable dimensions.

     .. versionadded:: 0.6
     """
     view = x.dimshuffle([i for i in range(x.ndim)
                          if not x.broadcastable[i]])
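In NumPy terms, the broadcastable dimensions that this function drops via dimshuffle correspond to size-1 axes; numpy.squeeze gives a rough picture of the result (a sketch by analogy, not the theano op, which keys on the broadcastable pattern rather than the runtime shape):

```python
import numpy as np

x = np.zeros((3, 1, 4))
y = np.squeeze(x)   # the size-1 axis plays the role of a broadcastable dimension
```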
@@ -542,17 +567,24 @@ def compress(condition, x, axis=None):
     """Return selected slices of an array along given axis.

     It returns the input tensor, but with selected slices along a given axis
-    retained. If no axis is provided, the tensor is flattened
+    retained. If no axis is provided, the tensor is flattened.
     Corresponds to numpy.compress

-    :param x: Input data, tensor variable
-    :param condition: 1 dimensional array of non-zero and zero values
-        corresponding to indices of slices along a selected axis
-
-    :return: `x` with selected slices
+    Parameters
+    ----------
+    x
+        Input data, tensor variable.
+    condition
+        1 dimensional array of non-zero and zero values
+        corresponding to indices of slices along a selected axis.
+
+    Returns
+    -------
+    `x` with selected slices

     .. versionadded:: 0.7
     """
     indices = theano.tensor.basic.flatnonzero(condition)
     return x.take(indices, axis=axis)
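The flatnonzero-plus-take implementation above mirrors numpy.compress, so NumPy can illustrate both the axis and the flattened cases (sketch only):

```python
import numpy as np

x = np.array([[1, 2], [3, 4], [5, 6]])

rows = np.compress([1, 0, 1], x, axis=0)  # keep slices 0 and 2 along axis 0
flat = np.compress([0, 1, 1, 0], x)       # no axis: works on the flattened tensor
```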
@@ -560,6 +592,7 @@ def compress(condition, x, axis=None):

 class RepeatOp(theano.Op):
     # See the repeat function for docstring
     __props__ = ("axis",)

     def __init__(self, axis=None):
@@ -678,14 +711,19 @@ def repeat(x, repeats, axis=None):
     The number of repetitions for each element is `repeat`.
     `repeats` is broadcasted to fit the length of the given `axis`.

-    :param x: Input data, tensor variable.
-    :param repeats: int, scalar or tensor variable.
-    :param axis: int, optional.
+    Parameters
+    ----------
+    x
+        Input data, tensor variable.
+    repeats : int, scalar or tensor variable
+    axis : int, optional

-    :see: :func:`tensor.tile <tensor.tile>`
+    See Also
+    --------
+    :func:`tensor.tile <tensor.tile>`

     .. versionadded:: 0.6
     """
     repeats = tensor.as_tensor_variable(repeats)
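The broadcasting of `repeats` to the length of `axis` follows numpy.repeat; a NumPy sketch of the two documented cases:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])

scalar = np.repeat(x, 2)                 # scalar repeats: result is flattened
per_row = np.repeat(x, [1, 3], axis=0)   # per-slice repeats along the given axis
```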
@@ -769,13 +807,18 @@ def bartlett(M):
     processing for tapering a signal, without generating too much ripple in
     the frequency domain.

-    :param M: (integer scalar) Number of points in the output
-        window. If zero or less, an empty vector is returned.
-    :return: (vector of doubles) The triangular window, with the
-        maximum value normalized to one (the value one appears only if
-        the number of samples is odd), with the first and last samples
-        equal to zero.
+    Parameters
+    ----------
+    M : integer scalar
+        Number of points in the output window. If zero or less,
+        an empty vector is returned.
+
+    Returns
+    -------
+    vector of doubles
+        The triangular window, with the maximum value normalized to one
+        (the value one appears only if the number of samples is odd), with
+        the first and last samples equal to zero.

     .. versionadded:: 0.6
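The normalization properties stated in the docstring (peak of one only for an odd number of samples, zero endpoints) hold for numpy.bartlett, which this op mirrors (NumPy sketch):

```python
import numpy as np

w = np.bartlett(5)  # 5-point triangular window: odd length, so the peak value is 1
```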
@@ -823,8 +866,10 @@ class FillDiagonal(gof.Op):
     def grad(self, inp, cost_grad):
         """
-        Note: The gradient is currently implemented for matrices
-        only.
+        Notes
+        -----
+        The gradient is currently implemented for matrices only.
         """
         a, val = inp
         grad = cost_grad[0]
@@ -846,12 +891,18 @@ def fill_diagonal(a, val):
     """ Returns a copy of an array with all
     elements of the main diagonal set to a specified scalar value.

-    :param a: Rectangular array of at least two dimensions.
-    :param val: Scalar value to fill the diagonal whose type must be
-        compatible with that of array 'a' (i.e. 'val' cannot be viewed
-        as an upcast of 'a').
-    :return: An array identical to 'a' except that its main diagonal
-        is filled with scalar 'val'. (For an array 'a' with a.ndim >=
-        2, the main diagonal is the list of locations a[i, i, ..., i]
-        (i.e. with indices all identical).)
+    Parameters
+    ----------
+    a
+        Rectangular array of at least two dimensions.
+    val
+        Scalar value to fill the diagonal whose type must be
+        compatible with that of array 'a' (i.e. 'val' cannot be viewed
+        as an upcast of 'a').
+
+    Returns
+    -------
+    An array identical to 'a' except that its main diagonal
+    is filled with scalar 'val'. (For an array 'a' with a.ndim >=
+    2, the main diagonal is the list of locations a[i, i, ..., i]
+    (i.e. with indices all identical).)

@@ -860,6 +911,7 @@ def fill_diagonal(a, val):
     if the later have all dimensions are equals.

     .. versionadded:: 0.6
     """
     return fill_diagonal_(a, val)
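The main-diagonal semantics can be seen with the NumPy function of the same name; note one difference worth remembering when comparing: numpy.fill_diagonal modifies its argument in place, whereas the theano op documented above returns a copy (NumPy sketch):

```python
import numpy as np

a = np.zeros((3, 3), dtype=int)
np.fill_diagonal(a, 5)  # NB: NumPy fills in place; the theano op returns a copy
```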
@@ -902,13 +954,16 @@ class FillDiagonalOffset(gof.Op):
         height, width = a.shape
         """
-        Note: The fill_diagonal only support rectangular matrix. The output
+        Notes
+        -----
+        The fill_diagonal only support rectangular matrix. The output
         of tall matrix is "wrapped", which is an option in numpy 1.9.0
         but was regarded as a bug in numpy 1.6.2. Here I implement the
         fill_diagonal_offset with unwrapped output, so fill_diagonal_offset
         supports tall matrix.(This make a little difference between the output
         of fill_diagonal and fill_diagonal_offset only in the case of tall
         matrix)
         """
         if offset >= 0:
             start = offset
@@ -925,8 +980,9 @@ class FillDiagonalOffset(gof.Op):
     def grad(self, inp, cost_grad):
         """
-        Note: The gradient is currently implemented for matrices
-        only.
+        Notes
+        -----
+        The gradient is currently implemented for matrices only.
         """
         a, val, offset = inp
         grad = cost_grad[0]
@@ -972,14 +1028,23 @@ def fill_diagonal_offset(a, val, offset):
     Returns a copy of an array with all
     elements of the main diagonal set to a specified scalar value.

-    :param a: Rectangular array of two dimensions.
-    :param val: Scalar value to fill the diagonal whose type must be
-        compatible with that of array 'a' (i.e. 'val' cannot be viewed
-        as an upcast of 'a').
-    :param offset: Scalar value Offset of the diagonal from the main
-        diagonal. Can be positive or negative integer.
-    :return: An array identical to 'a' except that its offset diagonal
-        is filled with scalar 'val'. The output is unwrapped.
+    Parameters
+    ----------
+    a
+        Rectangular array of two dimensions.
+    val
+        Scalar value to fill the diagonal whose type must be
+        compatible with that of array 'a' (i.e. 'val' cannot be viewed
+        as an upcast of 'a').
+    offset
+        Scalar value Offset of the diagonal from the main
+        diagonal. Can be positive or negative integer.
+
+    Returns
+    -------
+    An array identical to 'a' except that its offset diagonal
+    is filled with scalar 'val'. The output is unwrapped.
     """
     return fill_diagonal_offset_(a, val, offset)
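NumPy has no direct offset variant of fill_diagonal, but the "unwrapped" behavior the docstring describes can be sketched with fancy indexing. The helper name `fill_diagonal_offset_np` below is hypothetical, introduced only for this illustration:

```python
import numpy as np

def fill_diagonal_offset_np(a, val, offset):
    # Hypothetical NumPy sketch of the unwrapped offset-diagonal fill:
    # returns a copy, never wraps around for tall matrices.
    out = a.copy()
    h, w = out.shape
    if offset >= 0:
        n = min(h, w - offset)          # diagonal starts at column `offset`
        rows = np.arange(n)
        cols = rows + offset
    else:
        n = min(h + offset, w)          # diagonal starts at row `-offset`
        cols = np.arange(n)
        rows = cols - offset
    out[rows, cols] = val
    return out

b = fill_diagonal_offset_np(np.zeros((3, 4), dtype=int), 7, 1)
c = fill_diagonal_offset_np(np.zeros((4, 3), dtype=int), 7, -1)
```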
@@ -988,13 +1053,19 @@ def to_one_hot(y, nb_class, dtype=None):
     """Return a matrix where each row correspond to the one hot
     encoding of each element in y.

-    :param y: A vector of integer value between 0 and nb_class - 1.
-    :param nb_class: The number of class in y.
-    :param dtype: The dtype of the returned matrix. Default floatX.
+    Parameters
+    ----------
+    y
+        A vector of integer value between 0 and nb_class - 1.
+    nb_class : int
+        The number of class in y.
+    dtype : data-type
+        The dtype of the returned matrix. Default floatX.

-    :return: A matrix of shape (y.shape[0], nb_class), where each
-        row ``i`` is the one hot encoding of the corresponding ``y[i]``
-        value.
+    Returns
+    -------
+    A matrix of shape (y.shape[0], nb_class), where each row ``i`` is
+    the one hot encoding of the corresponding ``y[i]`` value.
     """
     ret = theano.tensor.zeros((y.shape[0], nb_class),
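The zeros-then-set construction the function starts with translates directly to NumPy; the helper name `to_one_hot_np` is hypothetical, a sketch of the documented shape and row semantics only:

```python
import numpy as np

def to_one_hot_np(y, nb_class, dtype="float64"):
    # Hypothetical NumPy sketch: row i is the one hot encoding of y[i].
    ret = np.zeros((y.shape[0], nb_class), dtype=dtype)
    ret[np.arange(y.shape[0]), y] = 1
    return ret

m = to_one_hot_np(np.array([0, 2, 1]), 3)
```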
@@ -1006,11 +1077,10 @@ def to_one_hot(y, nb_class, dtype=None):
 class Unique(theano.Op):
     """
-    Wraps numpy.unique.
-
-    This op is not implemented on the GPU.
+    Wraps numpy.unique. This op is not implemented on the GPU.

     Examples
-    ========
+    --------
     >>> import numpy as np
     >>> x = theano.tensor.vector()

@@ -1022,7 +1092,9 @@ class Unique(theano.Op):
     >>> g = theano.function([y], Unique(True, True, False)(y))
     >>> g([[1, 1, 1.0], (2, 3, 3.0)])
     [array([ 1.,  2.,  3.]), array([0, 3, 4]), array([0, 0, 0, 1, 2, 2])]
     """
     __props__ = ("return_index", "return_inverse", "return_counts")

     def __init__(self, return_index=False, return_inverse=False,
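The doctest output in the Unique docstring can be reproduced with numpy.unique itself, since the op wraps it (NumPy sketch; note that since NumPy 2.0 `return_inverse` for a 2-D input returns indices in the input's shape, hence the `ravel()` when comparing):

```python
import numpy as np

x = np.array([[1, 1, 1.0], [2, 3, 3.0]])
vals, index, inverse = np.unique(x, return_index=True, return_inverse=True)
```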