testgroup / pytensor · Commits

Commit 63fd27b5
Authored by Brandon T. Willard on Oct 18, 2020
Committed by Brandon T. Willard on Oct 18, 2020

Update and add missing docstrings in theano.gof.op

Parent: 1d82da43

Showing 1 changed file with 114 additions and 69 deletions:

theano/gof/op.py  (+114, -69)
@@ -486,50 +486,45 @@ class CLinkerOp(CLinkerObject):
 class PureOp(object):
-    """
-    An :term:`Op` is a type of operation.
-
-    `Op` is an abstract class that documents the interface for theano's data
-    transformations. It has many subclasses, such as
-    `sparse dot <http://pylearn.org/epydoc/theano.sparse.Dot-class.html>`__,
-    and `Shape <http://pylearn.org/epydoc/theano.tensor.Shape-class.html>`__.
-    These subclasses are meant to be instantiated.
-
-    An instance has several responsabilities:
-    - making `Apply` instances, which mean "apply this type of operation to some
-      particular inputs" (via `make_node`),
-    - performing the calculation of outputs from given inputs
-      (via the `perform`),
-    - [optionally] building gradient-calculating graphs (via `grad`).
-
-    To see how `Op`, `Type`, `Variable`, and `Apply` fit together see the page
-    on :doc:`graph`.
-
-    For more specifications on how these methods should behave: see the
-    `Op Contract` in the sphinx docs (advanced tutorial on Op-making).
+    """A class that models and constructs operations in a graph.
+
+    A `PureOp` instance has several responsibilities:
+
+    - construct `Apply` nodes via the `PureOp.make_node` method,
+    - perform the numeric calculation of the modeled operation via
+      the `PureOp.perform` method,
+    - and (optionally) build the gradient-calculating sub-graphs via the
+      `PureOp.grad` method.
+
+    To see how `PureOp`, `Type`, `Variable`, and `Apply` fit together see the
+    page on :doc:`graph`.
+
+    For more details regarding how these methods should behave: see the `Op
+    Contract` in the sphinx docs (advanced tutorial on `Op`-making).
     """

     default_output = None
     """
-    Configuration variable for `__call__`.
+    An `int` that specifies which output `PureOp.__call__` should return. If
+    `None`, then all outputs are returned.

-    A subclass should not change this class variable, but instead over-ride it
-    with a subclass variable or an instance variable.
+    A subclass should not change this class variable, but instead override it
+    with a subclass variable or an instance variable.
     """

-    #############
-    # make_node #
-    #############
     def make_node(self, *inputs):
-        """
-        Required: return an Apply instance representing the
-        application of this Op to the provided inputs.
+        """Construct an `Apply` node that represents the application of this operation to the given inputs.
+
+        This must be implemented by sub-classes.
+
+        Returns
+        -------
+        node: Apply
+            The constructed `Apply` node.
+
         """
         raise utils.MethodNotDefined("make_node", type(self), self.__class__.__name__)
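The responsibilities listed in the new docstring (construct `Apply` nodes via `make_node`, do the numeric work in `perform`) can be sketched with plain-Python stand-ins. The `Variable`, `Apply`, and `DoubleOp` classes below are simplified mocks for illustration, not theano's actual implementations:

```python
# Simplified stand-ins for theano's graph classes, for illustration only.
class Variable:
    def __init__(self, owner=None):
        self.owner = owner


class Apply:
    def __init__(self, op, inputs, outputs):
        self.op = op
        self.inputs = inputs
        self.outputs = outputs
        for out in outputs:
            out.owner = self  # link each output back to its Apply node


class DoubleOp:
    """Models ``y = 2 * x`` following the PureOp interface sketch."""

    def make_node(self, *inputs):
        # Responsibility 1: build an Apply node linking the inputs to
        # freshly created output Variables (symbolic, no computation yet).
        return Apply(self, list(inputs), [Variable()])

    def perform(self, node, inputs, output_storage):
        # Responsibility 2: do the numeric work, writing the result into
        # the mutable single-element output list.
        output_storage[0][0] = 2 * inputs[0]


op = DoubleOp()
node = op.make_node(Variable())
storage = [[None]]
op.perform(node, [21], storage)
print(storage[0][0])  # 42
```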
@@ -560,8 +555,9 @@ class PureOp(object):
         raise AttributeError("%s has no test value %s" % (v, detailed_err_msg))

     def __call__(self, *inputs, **kwargs):
-        """
-        Optional: return some or all output[s] of `make_node`.
+        """Construct an `Apply` node using `self.make_node` and return its outputs.
+
+        This method is just a wrapper around `PureOp.make_node`.

         It is called by code such as:
@@ -579,8 +575,8 @@ class PureOp(object):
         Parameters
         ----------
-        inputs
-            The Op's inputs, forwarded to the call to `make_node()`.
+        inputs : tuple of Variable
+            The `PureOp`'s inputs.
         kwargs
             Additional keyword arguments to be forwarded to
             `make_node()` *except* for optional argument `return_list` (which
@@ -687,15 +683,54 @@ class PureOp(object):
     # just to self.add_tag_trace
     add_tag_trace = staticmethod(utils.add_tag_trace)

-    #########################
-    # Python implementation #
-    #########################
+    def grad(self, inputs, output_grads):
+        """Construct a graph for the gradient with respect to each input variable.
+
+        Each returned `Variable` represents the gradient with respect to that
+        input computed based on the symbolic gradients with respect to each
+        output. If the output is not differentiable with respect to an input,
+        then this method should return an instance of type `NullType` for that
+        input.
+
+        Parameters
+        ----------
+        inputs : list of Variable
+            The input variables.
+        output_grads : list of Variable
+            The gradients of the output variables.
+
+        Returns
+        -------
+        grads : list of Variable
+            The gradients with respect to each `Variable` in `inputs`.
+
+        """
+        raise NotImplementedError()

     def L_op(self, inputs, outputs, output_grads):
+        r"""Construct a graph for the L-operator.
+
+        This method is primarily used by `tensor.Lop` and dispatches to
+        `PureOp.grad` by default.
+
+        The *L-operator* computes a *row* vector times the Jacobian. The
+        mathematical relationship is
+        :math:`v \frac{\partial f(x)}{\partial x}`. The *L-operator* is also
+        supported for generic tensors (not only for vectors).
+
+        Parameters
+        ----------
+        inputs : list of Variable
+        outputs : list of Variable
+        output_grads : list of Variable
+
+        """
         return self.grad(inputs, output_grads)

     def R_op(self, inputs, eval_points):
-        """
+        """Construct a graph for the R-operator.
+
         This method is primarily used by tensor.Rop

         Suppose the op outputs
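As a concrete instance of the row-vector-times-Jacobian relationship :math:`v \frac{\partial f(x)}{\partial x}` described for the L-operator, here is a small hand-computed sketch in plain Python, independent of theano's symbolic machinery (the function `f` and helper `lop` below are illustrative choices, not part of the library):

```python
# For f(x0, x1) = (x0 * x1, x0 + x1), the Jacobian at (x0, x1) is
#   J = [[x1, x0],
#        [ 1,  1]]
# (rows index outputs, columns index inputs).


def lop(x, v):
    """Row vector v times the Jacobian of f at x, expanded by hand."""
    x0, x1 = x
    v0, v1 = v
    # (v @ J)[j] = sum_i v[i] * J[i][j]
    return [v0 * x1 + v1 * 1, v0 * x0 + v1 * 1]


print(lop([3.0, 4.0], [1.0, 2.0]))  # [6.0, 5.0]
```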
@@ -718,12 +753,7 @@ class PureOp(object):
                                eval_points=eval_points)

         """
-        raise NotImplementedError(
-            "%s of class %s does not "
-            "implement R_op. If this is a theano op, write to the "
-            "theano-dev mailing list for assistance. If it is your "
-            "own op, implement the R_op method." % (self, self.__class__.__name__)
-        )
+        raise NotImplementedError()

     def perform(self, node, inputs, output_storage, params=None):
         """
@@ -732,24 +762,31 @@ class PureOp(object):
         Parameters
         ----------
-        node : Apply instance
-            Contains the symbolic inputs and outputs.
-        inputs : list
-            Sequence of inputs (immutable).
-        output_storage : list
-            List of mutable 1-element lists (do not change the length of
-            these lists)
+        node : Apply
+            The symbolic `Apply` node that represents this computation.
+        inputs : Sequence
+            Immutable sequence of non-symbolic/numeric inputs. These
+            are the values of each `Variable` in `node.inputs`.
+        output_storage : list of list
+            List of mutable single-element lists (do not change the length of
+            these lists). Each sub-list corresponds to the value of each
+            `Variable` in `node.outputs`. The primary purpose of this method
+            is to set the values of these sub-lists.
+        params : tuple
+            A tuple containing the values of each entry in `__props__`.

         Notes
         -----
         The `output_storage` list might contain data. If an element of
-        output_storage is not None, it has to be of the right type,
-        for instance, for a TensorVariable, it has to be a Numpy ndarray,
-        with the right number of dimensions, and the correct dtype.
-        Its shape and stride pattern, can be arbitrary. It not is
-        guaranteed that it was produced by a previous call to impl. It
-        could be allocated by another Op impl is free to reuse it as it
-        sees fit, or to discard it and allocate new memory.
+        output_storage is not `None`, it has to be of the right type, for
+        instance, for a `TensorVariable`, it has to be a NumPy `ndarray`
+        with the right number of dimensions and the correct dtype.
+        Its shape and stride pattern can be arbitrary. It is not
+        guaranteed that such pre-set values were produced by a previous call to
+        this `PureOp.perform`; they could've been allocated by another
+        `PureOp`'s `perform` method. A `PureOp` is free to reuse
+        `output_storage` as it sees fit, or to discard it and allocate new
+        memory.

         Raises
         ------
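The `output_storage` contract described in this hunk (mutable single-element lists whose contents may or may not be pre-set, and which `perform` is free to overwrite) can be illustrated with a plain-Python sketch. The `perform_add` function below is a hypothetical stand-in, not a theano method:

```python
def perform_add(node, inputs, output_storage):
    # Hypothetical perform: write inputs[0] + inputs[1] into the first
    # (and only) output slot. `node` is unused in this sketch.
    out = output_storage[0]
    # A pre-set value could be reused if it has the right type and shape;
    # here we simply overwrite it, which the contract also permits.
    out[0] = inputs[0] + inputs[1]


storage = [[None]]  # one output, nothing pre-allocated
perform_add(None, [2, 3], storage)
print(storage[0][0])  # 5

storage = [[999]]  # slot holds a stale value: simply replaced
perform_add(None, [10, 20], storage)
print(storage[0][0])  # 30
```

Note that the caller keeps its reference to the outer lists; `perform` mutates them in place rather than returning values, which is why their length must not change.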
@@ -766,12 +803,22 @@ class PureOp(object):
         )

     def do_constant_folding(self, node):
-        """
-        This allows each op to determine if it wants to be constant
-        folded when all its inputs are constant. This allows it to
-        choose where it puts its memory/speed trade-off. Also, it
-        could make things faster as constants can't be used for inplace
-        operations (see *IncSubtensor).
+        """Determine whether or not constant folding should be performed for the given node.
+
+        This allows each `PureOp` to determine if it wants to be constant
+        folded when all its inputs are constant. This allows it to choose where
+        it puts its memory/speed trade-off. Also, it could make things faster
+        as constants can't be used for in-place operations (see
+        `*IncSubtensor`).
+
+        Parameters
+        ----------
+        node : Apply
+            The node for which the constant folding determination is made.
+
+        Returns
+        -------
+        res : bool
+
         """
         return True
@@ -994,16 +1041,14 @@ class Op(utils.object2, PureOp, CLinkerOp):
 def get_test_value(v):
-    """
-    Extract test value from `v`. Raises AttributeError if there is none.
+    """Get the test value for `v`.

     If input `v` is not already a variable, it is turned into one by calling
-    `as_tensor_variable(v)`, so that this function can be applied e.g.
-    on numpy arrays or Python lists and scalars, considering them as constants.
+    `as_tensor_variable(v)`.

-    For a Constant, the test value is v.value.
-    For a Shared variable, it is the internal value.
-    For another Variable, it is the content of v.tag.test_value.
+    Raises
+    ------
+    AttributeError if no test value is set.

     """
     if not isinstance(v, graph.Variable):