testgroup / pytensor · Commits · f22d3165
Commit f22d3165, authored Jun 06, 2021 by Brandon T. Willard; committed by Brandon T. Willard, Jun 07, 2021.
Fix Sphinx documentation syntax errors, broken links, etc.
Parent: b0a07a40

Showing 50 changed files with 337 additions and 303 deletions (+337 −303).
aesara/graph/basic.py (+0 −0)
aesara/graph/fg.py (+18 −17)
aesara/graph/op.py (+70 −58)
aesara/graph/opt.py (+63 −58)
aesara/graph/type.py (+3 −3)
aesara/link/basic.py (+1 −1)
aesara/link/c/interface.py (+31 −16)
aesara/link/jax/dispatch.py (+1 −1)
aesara/scalar/math.py (+2 −2)
aesara/scan/__init__.py (+2 −2)
aesara/sparse/basic.py (+18 −14)
aesara/sparse/opt.py (+14 −12)
aesara/tensor/basic.py (+6 −6)
aesara/tensor/basic_opt.py (+8 −8)
aesara/tensor/extra_ops.py (+9 −7)
aesara/tensor/math_opt.py (+3 −0)
aesara/tensor/subtensor.py (+5 −5)
doc/core_development_guide.rst (+0 −3)
doc/dev_start_guide.rst (+0 −1)
doc/extending/ctype.rst (+1 −1)
doc/extending/extending_aesara.rst (+0 −0)
doc/extending/extending_aesara_c.rst (+2 −2)
doc/extending/index.rst (+0 −1)
doc/extending/inplace.rst (+4 −4)
doc/extending/op.rst (+3 −3)
doc/extending/optimization.rst (+6 −6)
doc/extending/pipeline.rst (+1 −1)
doc/extending/tips.rst (+1 −1)
doc/extending/unittest.rst (+1 −1)
doc/glossary.rst (+1 −1)
doc/index.rst (+1 −3)
doc/introduction.rst (+0 −2)
doc/library/config.rst (+8 −8)
doc/library/graph/op.rst (+2 −2)
doc/library/graph/params_type.rst (+3 −3)
doc/library/index.rst (+0 −1)
doc/library/scan.rst (+1 −0)
doc/library/sparse/sandbox.rst (+1 −3)
doc/library/tensor/basic.rst (+19 −6)
doc/library/tensor/index.rst (+2 −1)
doc/library/tensor/nnet/batchnorm.rst (+8 −8)
doc/library/tensor/nnet/index.rst (+2 −2)
doc/library/tensor/random/basic.rst (+5 −5)
doc/library/tensor/random/index.rst (+2 −5)
doc/library/tensor/signal/conv.rst (+1 −1)
doc/library/tests.rst (+0 −8)
doc/sandbox/sandbox.rst (+1 −3)
doc/tutorial/examples.rst (+4 −4)
tests/tensor/test_basic_opt.py (+2 −2)
tests/tensor/test_type.py (+1 −1)
aesara/graph/basic.py (diff collapsed)
aesara/graph/fg.py

@@ -321,7 +321,7 @@ class FunctionGraph(MetaObject):
         This will also import the `variable`'s `Apply` node.
-        Parameters:
+        Parameters
         ----------
         variable : aesara.graph.basic.Variable
             The variable to be imported.

@@ -361,7 +361,7 @@ class FunctionGraph(MetaObject):
     ) -> None:
         """Recursively import everything between an `Apply` node and the `FunctionGraph`'s outputs.
-        Parameters:
+        Parameters
         ----------
         apply_node : aesara.graph.basic.Apply
             The node to be imported.

@@ -492,7 +492,7 @@ class FunctionGraph(MetaObject):
         This is the main interface to manipulate the subgraph in `FunctionGraph`.
         For every node that uses `var` as input, makes it use `new_var` instead.
-        Parameters:
+        Parameters
         ----------
         var : aesara.graph.basic.Variable
             The variable to be replaced.

@@ -772,20 +772,21 @@ class FunctionGraph(MetaObject):
     def clone_get_equiv(
         self, check_integrity: bool = True, attach_feature: bool = True
     ) -> Union["FunctionGraph", Dict[Variable, Variable]]:
-        """Clone the graph and get a dict that maps old nodes to new ones
-        Parameters:
-        check_integrity: bool
-            Whether to check integrity. Default is True.
-        attach_feature: bool
-            Whether to attach feature of origin graph to cloned graph.
-            Default is True.
-        Returns:
-        e: FunctionGraph
-            Cloned fgraph. Every node in cloned graph is cloned.
-        equiv: dict
-            A dict that map old node to new node.
+        """Clone the graph and return a ``dict`` that maps old nodes to new nodes.
+        Parameters
+        ----------
+        check_integrity
+            Whether to check integrity.
+        attach_feature
+            Whether to attach feature of origin graph to cloned graph.
+        Returns
+        -------
+        e
+            Cloned fgraph. Every node in cloned graph is cloned.
+        equiv
+            A ``dict`` that maps old nodes to the new nodes.
         """
         equiv = clone_get_equiv(self.inputs, self.outputs)
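The recurring fix in these hunks replaces the malformed `Parameters:` header with the underlined `Parameters` / `----------` form that numpydoc-style tools expect. A minimal sketch of the corrected style (plain Python, no aesara dependency assumed; the empty body is illustrative):

```python
import inspect

def clone_get_equiv(check_integrity: bool = True, attach_feature: bool = True):
    """Clone the graph and return a dict that maps old nodes to new nodes.

    Parameters
    ----------
    check_integrity
        Whether to check integrity.
    attach_feature
        Whether to attach features of the original graph to the cloned graph.
    """
    return {}

# numpydoc recognizes a section only when the header word is followed by a
# line of dashes at least as long as the word itself.
doc = inspect.getdoc(clone_get_equiv)
lines = doc.splitlines()
idx = lines.index("Parameters")
assert lines[idx + 1] == "-" * len("Parameters")
```

Without the dashed underline, Sphinx treats `Parameters:` as ordinary prose and the whole section renders as a run-on paragraph.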
aesara/graph/op.py

@@ -56,7 +56,7 @@ ThunkType = Callable[[PerformMethodType, StorageMapType, ComputeMapType, Apply],
 def compute_test_value(node: Apply):
-    """Computes the test value of a node.
+    r"""Computes the test value of a node.
     Parameters
     ----------

@@ -66,7 +66,7 @@ def compute_test_value(node: Apply):
     Returns
     -------
     None
-        The `tag.test_value`s are updated in each `Variable` in `node.outputs`.
+        The `tag.test_value`\s are updated in each `Variable` in `node.outputs`.
     """
     # Gather the test values for each input of the node

@@ -140,13 +140,11 @@ class Op(MetaObject):
     A `Op` instance has several responsibilities:
-    - construct `Apply` nodes via `Op.make_node` method,
-    - perform the numeric calculation of the modeled operation via
-      the `Op.perform` method,
-    - and (optionally) build the gradient-calculating sub-graphs via the
-      `Op.grad` method.
+    * construct `Apply` nodes via :meth:`Op.make_node` method,
+    * perform the numeric calculation of the modeled operation via the
+      :meth:`Op.perform` method,
+    * and (optionally) build the gradient-calculating sub-graphs via the
+      :meth:`Op.grad` method.
     To see how `Op`, `Type`, `Variable`, and `Apply` fit together see the
     page on :doc:`graph`.

@@ -173,8 +171,12 @@ class Op(MetaObject):
     Examples
     ========
+    .. code-block:: python
         view_map = {0: [1]}  # first output is a view of second input
         view_map = {1: [0]}  # second output is a view of first input
     """
     destroy_map: Dict[int, List[int]] = {}

@@ -184,6 +186,9 @@ class Op(MetaObject):
     Examples
     ========
+    .. code-block:: python
         destroy_map = {0: [1]}  # first output operates in-place on second input
         destroy_map = {1: [0]}  # second output operates in-place on first input

@@ -223,17 +228,17 @@ class Op(MetaObject):
         return Apply(self, inputs, [o() for o in self.otypes])
     def __call__(self, *inputs: Any, **kwargs) -> Union[Variable, List[Variable]]:
-        """Construct an `Apply` node using `self.make_node` and return its outputs.
+        r"""Construct an `Apply` node using `self.make_node` and return its outputs.
         This method is just a wrapper around `Op.make_node`.
         It is called by code such as:
-        .. python::
-            x = tensor.matrix()
-            y = tensor.exp(x)
+        .. code-block:: python
+            x = aesara.tensor.matrix()
+            y = aesara.tensor.exp(x)
         `tensor.exp` is an Op instance, so `tensor.exp(x)` calls
         `tensor.exp.__call__` (i.e. this method) and returns its single output

@@ -250,19 +255,19 @@ class Op(MetaObject):
             The `Op`'s inputs.
         kwargs
             Additional keyword arguments to be forwarded to
-            `make_node()` *except* for optional argument `return_list` (which
-            defaults to `False`). If `return_list` is `True`, then the returned
-            value is always a `list`. Otherwise it is either a single `Variable`
-            when the output of `make_node()` contains a single element, or this
+            :meth:`Op.make_node` *except* for optional argument ``return_list`` (which
+            defaults to ``False``). If ``return_list`` is ``True``, then the returned
+            value is always a ``list``. Otherwise it is either a single `Variable`
+            when the output of :meth:`Op.make_node` contains a single element, or this
             output (unchanged) when it contains multiple elements.
         Returns
         -------
         outputs : list of Variable or Variable
-            Either a list of output `Variable`s, or a single `Variable`.
+            Either a list of output `Variable`\s, or a single `Variable`.
             This is determined by the number of outputs produced by the
-            `Op`, the value of the keyword `return_list`, and the value of
-            the `Op.default_output` property.
+            `Op`, the value of the keyword ``return_list``, and the value of
+            the :attr:`Op.default_output` property.
         """
         return_list = kwargs.pop("return_list", False)

@@ -346,28 +351,24 @@ class Op(MetaObject):
     def R_op(
         self, inputs: List[Variable], eval_points: Union[Variable, List[Variable]]
     ) -> List[Variable]:
-        """Construct a graph for the R-operator.
-        This method is primarily used by `Rop`
-        Suppose the op outputs
-        [ f_1(inputs), ..., f_n(inputs) ]
+        r"""Construct a graph for the R-operator.
+        This method is primarily used by `Rop`.
+        Suppose the `Op` outputs ``[ f_1(inputs), ..., f_n(inputs) ]``.
         Parameters
         ----------
-        inputs : a Variable or list of Variables
+        inputs
+            The `Op` inputs.
         eval_points
-            A Variable or list of Variables with the same length as inputs.
-            Each element of eval_points specifies the value of the corresponding
-            input at the point where the R op is to be evaluated.
+            A `Variable` or list of `Variable`\s with the same length as inputs.
+            Each element of `eval_points` specifies the value of the corresponding
+            input at the point where the R-operator is to be evaluated.
         Returns
         -------
         list of n elements
-            rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points)
+            ``rval[i]`` should be ``Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points)``.
         """
         raise NotImplementedError()

@@ -682,14 +683,20 @@ def get_test_value(v: Variable) -> Any:
 def missing_test_message(msg: Text) -> None:
-    """
-    Displays msg, a message saying that some test_value is missing,
-    in the appropriate form based on config.compute_test_value:
-        off: The interactive debugger is off, so we do nothing.
-        ignore: The interactive debugger is set to ignore missing inputs,
-                so do nothing.
-        warn: Display msg as a warning.
+    """Display a message saying that some test_value is missing.
+    This uses the appropriate form based on ``config.compute_test_value``:
+    off:
+        The interactive debugger is off, so we do nothing.
+    ignore:
+        The interactive debugger is set to ignore missing inputs, so do
+        nothing.
+    warn:
+        Display `msg` as a warning.
     Raises
     ------

@@ -707,28 +714,33 @@ def missing_test_message(msg: Text) -> None:
 def get_test_values(*args: Variable) -> Union[Any, List[Any]]:
-    """Get test values for multiple `Variable`s.
+    r"""Get test values for multiple `Variable`\s.
     Intended use:
+    .. code-block:: python
         for val_1, ..., val_n in get_debug_values(var_1, ..., var_n):
             if some condition on val_1, ..., val_n is not met:
                 missing_test_message("condition was not met")
-    Given a list of variables, get_debug_values does one of three things:
+    Given a list of variables, `get_debug_values` does one of three things:
     1. If the interactive debugger is off, returns an empty list
     2. If the interactive debugger is on, and all variables have
        debug values, returns a list containing a single element.
        This single element is either:
        a) if there is only one variable, the element is its value
       b) otherwise, a tuple containing debug values of all the variables.
-    3. If the interactive debugger is on, and some variable does
-       not have a debug value, issue a missing_test_message about
-       the variable, and, if still in control of execution, return
-       an empty list.
+    3. If the interactive debugger is on, and some variable does
+       not have a debug value, issue a `missing_test_message` about
+       the variable, and, if still in control of execution, return
+       an empty list.
     """

@@ -754,10 +766,10 @@ def get_test_values(*args: Variable) -> Union[Any, List[Any]]:
 ops_with_inner_function: Dict[Op, Text] = {}
-"""
-Registry of Ops that have an inner compiled Aesara function.
-The keys are Op classes (not instances), and values are the name of the
+r"""
+Registry of `Op`\s that have an inner compiled Aesara function.
+The keys are `Op` classes (not instances), and values are the name of the
 attribute that contains the function. For instance, if the function is
 self.fn, the value will be 'fn'.
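Several of the hunks above do nothing but prepend `r` to a docstring. That matters because the fixed docstrings contain `\s`, the reST escape that suppresses a space before a plural "s" after inline markup; the raw prefix guarantees the backslash reaches Sphinx untouched instead of being treated as a (deprecated) string escape. A small self-contained illustration, with a hypothetical function name:

```python
def get_test_values_doc():
    r"""Get test values for multiple `Variable`\s."""

# With the r"" prefix the backslash survives verbatim in __doc__,
# so Sphinx can render "`Variable`\s" as "Variables".
assert "\\s" in get_test_values_doc.__doc__
assert get_test_values_doc.__doc__.endswith("`Variable`\\s.")
```

In a non-raw docstring, `\s` is an invalid escape: current CPython still keeps the backslash but emits a warning, and a future version may reject it, which is why the commit converts these docstrings wholesale.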
aesara/graph/opt.py

@@ -75,9 +75,10 @@ class GlobalOptimizer(abc.ABC):
     def optimize(self, fgraph, *args, **kwargs):
         """
-        This is meant as a shortcut to:
+        This is meant as a shortcut for the following::
             opt.add_requirements(fgraph)
             opt.apply(fgraph)
         """
         self.add_requirements(fgraph)

@@ -93,13 +94,13 @@ class GlobalOptimizer(abc.ABC):
         return self.optimize(fgraph)
     def add_requirements(self, fgraph):
         """
-        Add features to the fgraph that are required to apply the optimization.
-        For example:
+        Add features to `fgraph` that are required to apply the optimization.
+        For example::
             fgraph.attach_feature(History())
             fgraph.attach_feature(MyFeature())
-            etc.
+            # etc.
         """

@@ -1478,8 +1479,9 @@ class OpSub(LocalOptimizer):
     Examples
     --------
     OpSub(add, sub) ==>
         add(div(x, y), add(y, x)) -> sub(div(x, y), sub(y, x))
     """

@@ -1554,20 +1556,20 @@ class PatternSub(LocalOptimizer):
     Replaces all occurrences of the input pattern by the output pattern:
     input_pattern ::= (op, <sub_pattern1>, <sub_pattern2>, ...)
     input_pattern ::= dict(pattern = <input_pattern>,
                            constraint = <constraint>)
     sub_pattern ::= input_pattern
     sub_pattern ::= string
     sub_pattern ::= a Constant instance
     sub_pattern ::= int
     sub_pattern ::= float
     constraint ::= lambda fgraph, expr: additional matching condition
     output_pattern ::= (op, <output_pattern1>, <output_pattern2>, ...)
     output_pattern ::= string
     output_pattern ::= int
     output_pattern ::= float
     Each string in the input pattern is a variable that will be set to
     whatever expression is found in its place. If the same string is

@@ -1619,13 +1621,15 @@ class PatternSub(LocalOptimizer):
     Examples
     --------
     PatternSub((add, 'x', 'y'), (add, 'y', 'x'))
     PatternSub((multiply, 'x', 'x'), (square, 'x'))
     PatternSub((subtract, (add, 'x', 'y'), 'y'), 'x')
     PatternSub((power, 'x', Constant(double, 2.0)), (square, 'x'))
     PatternSub((boggle, {'pattern': 'x',
                          'constraint': lambda expr: expr.type == scrabble}),
                (scrabble, 'x'))
     """
     def __init__(

@@ -1868,18 +1872,17 @@ class NavigatorOptimizer(GlobalOptimizer):
         - 'auto': let the local_opt set this parameter via its 'reentrant'
           attribute.
         failure_callback
-            A function that takes (exception, navigator, [(old, new),
-            (old,new),...]) and we call it if there's an exception.
-            If the trouble is from local_opt.transform(), the new variables
-            will be 'None'.
-            If the trouble is from validation (the new types don't match for
-            example) then the new variables will be the ones created by
-            transform().
-            If this parameter is None, then exceptions are not caught here
-            (raised normally).
+            A function with the signature ``(exception, navigator, [(old, new),
+            (old,new),...])`` that is called when there's an exception.
+            If the exception is raised in ``local_opt.transform``, the ``new``
+            variables will be ``None``.
+            If the exception is raised during validation (e.g. the new types don't
+            match) then the new variables will be the ones created by ``self.transform``.
+            If this parameter is ``None``, then exceptions are not caught here and
+            are raised normally.
         """

@@ -3078,33 +3081,35 @@ def inherit_stack_trace(from_var):
 def check_stack_trace(f_or_fgraph, ops_to_check="last", bug_print="raise"):
-    """
+    r"""
     This function checks if the outputs of specific ops of a compiled graph
     have a stack.
     Parameters
     ----------
-    f_or_fgraph: aesara.compile.function.types.Function or
-          aesara.graph.fg.FunctionGraph
+    f_or_fgraph : Function or FunctionGraph
        The compiled function or the function graph to be analysed.
-    ops_to_check: it can be of four different types:
-        - classes or instances inheriting from aesara.graph.op.Op
-        - tuple/list of classes or instances inheriting from aesara.graph.op.Op
-        - string
-        - function returning a boolean and taking as input an instance of
-          aesara.graph.op.Op.
-        - if ops_to_check is a string, it should be either 'last' or 'all'.
-          'last' will check only the last op of the graph while 'all' will
-          check all the ops of the graph.
-        - if ops_to_check is an op or a tuple/list of ops, the function will
-          check that all the outputs of their occurrences in the graph have a
-          stack trace.
-        - if ops_to_check is a function, it should take as input a
-          aesara.graph.op.Op and return a boolean indicating if the input op should
-          be checked or not.
-    bug_print: string belonging to {'raise', 'warn', 'ignore'}
-        You can specify the behaviour of the function when the specified
-        ops_to_check are not in the graph of f_or_fgraph: it can either raise
-        an exception, write a warning or simply ignore it.
+    ops_to_check
+        This value can be of four different types:
+        - classes or instances inheriting from `Op`
+        - tuple/list of classes or instances inheriting from `Op`
+        - string
+        - function returning a boolean and taking as input an instance of `Op`
+        - if `ops_to_check` is a string, it should be either ``'last'`` or ``'all'``.
+          ``'last'`` will check only the last `Op` of the graph while ``'all'`` will
+          check all the `Op`\s of the graph.
+        - if `ops_to_check` is an `Op` or a tuple/list of `Op`\s, the function will
+          check that all the outputs of their occurrences in the graph have a
+          stack trace.
+        - if `ops_to_check` is a function, it should take as input a
+          `Op` and return a boolean indicating if the input `Op` should
+          be checked or not.
+    bug_print
+        This value is a string belonging to ``{'raise', 'warn', 'ignore'}``.
+        You can specify the behaviour of the function when the specified
+        `ops_to_check` are not in the graph of `f_or_fgraph`: it can either raise
+        an exception, write a warning or simply ignore it.
     Returns
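The clarified `optimize` docstring above describes a simple template-method pattern: `optimize` is a shortcut that runs `add_requirements` and then `apply`. A stripped-down sketch of that pattern (hypothetical classes using a plain `dict` as a stand-in for a function graph; this is not the aesara API):

```python
class GlobalOptimizer:
    """Base class: subclasses implement apply(); optimize() is the shortcut."""

    def add_requirements(self, fgraph):
        # Record which features this optimizer attached to the graph.
        fgraph.setdefault("features", []).append(type(self).__name__)

    def apply(self, fgraph):
        raise NotImplementedError

    def optimize(self, fgraph, *args, **kwargs):
        # Shortcut for: add_requirements(fgraph) followed by apply(fgraph).
        self.add_requirements(fgraph)
        return self.apply(fgraph, *args, **kwargs)

class CountNodes(GlobalOptimizer):
    def apply(self, fgraph):
        return len(fgraph.get("nodes", []))

fg = {"nodes": [1, 2, 3]}
assert CountNodes().optimize(fg) == 3
assert fg["features"] == ["CountNodes"]
```

The point of the wording fix in the commit is the double colon (`following::`), which tells Sphinx to render the two calls as a literal block instead of inlining them into the sentence.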
aesara/graph/type.py

@@ -97,13 +97,13 @@ class Type(MetaObject):
     def filter_variable(
         self, other: Union[Variable, D], allow_convert: bool = True
     ) -> Variable:
-        """Convert a symbolic variable into this `Type`, if compatible.
-        For the moment, the only `Type`s compatible with one another are
+        r"""Convert a symbolic variable into this `Type`, if compatible.
+        For the moment, the only `Type`\s compatible with one another are
         `TensorType` and `GpuArrayType`, provided they have the same number of
         dimensions, same broadcasting pattern, and same dtype.
-        If `Type`s are not compatible, a ``TypeError`` should be raised.
+        If `Type`\s are not compatible, a ``TypeError`` should be raised.
         """
         if not isinstance(other, Variable):
aesara/link/basic.py

@@ -655,7 +655,7 @@ class JITLinker(PerformLinker):
     def create_jitable_thunk(
         self, compute_map, order, input_storage, output_storage, storage_map
     ):
-        """Create a thunk for each output of the `Linker`s `FunctionGraph`.
+        r"""Create a thunk for each output of the `Linker`\s `FunctionGraph`.
         This is differs from the other thunk-making function in that it only
         produces thunks for the `FunctionGraph` output nodes.
aesara/link/c/interface.py
浏览文件 @
f22d3165
...
@@ -20,6 +20,8 @@ class CLinkerObject:
...
@@ -20,6 +20,8 @@ class CLinkerObject:
Examples
Examples
--------
--------
.. code-block:: python
def c_headers(self, **kwargs):
def c_headers(self, **kwargs):
return ['<iostream>', '<math.h>', '/full/path/to/header.h']
return ['<iostream>', '<math.h>', '/full/path/to/header.h']
...
@@ -39,6 +41,8 @@ class CLinkerObject:
...
@@ -39,6 +41,8 @@ class CLinkerObject:
Examples
Examples
--------
--------
.. code-block:: python
def c_header_dirs(self, **kwargs):
def c_header_dirs(self, **kwargs):
return ['/usr/local/include', '/opt/weirdpath/src/include']
return ['/usr/local/include', '/opt/weirdpath/src/include']
...
@@ -58,6 +62,8 @@ class CLinkerObject:
...
@@ -58,6 +62,8 @@ class CLinkerObject:
Examples
Examples
--------
--------
.. code-block:: python
def c_libraries(self, **kwargs):
def c_libraries(self, **kwargs):
return ['gsl', 'gslcblas', 'm', 'fftw3', 'g2c'].
return ['gsl', 'gslcblas', 'm', 'fftw3', 'g2c'].
...
@@ -76,6 +82,8 @@ class CLinkerObject:
...
@@ -76,6 +82,8 @@ class CLinkerObject:
Examples
Examples
--------
--------
.. code-block:: python
def c_lib_dirs(self, **kwargs):
def c_lib_dirs(self, **kwargs):
return ['/usr/local/lib', '/opt/weirdpath/build/libs'].
return ['/usr/local/lib', '/opt/weirdpath/build/libs'].
...
@@ -107,6 +115,8 @@ class CLinkerObject:
...
@@ -107,6 +115,8 @@ class CLinkerObject:
Examples
Examples
--------
--------
.. code-block:: python
def c_compile_args(self, **kwargs):
def c_compile_args(self, **kwargs):
return ['-ffast-math']
return ['-ffast-math']
...
@@ -173,8 +183,8 @@ class CLinkerOp(CLinkerObject):
...
@@ -173,8 +183,8 @@ class CLinkerOp(CLinkerObject):
Parameters
Parameters
----------
----------
node : Apply instance
node : Apply instance
The node for which we are compiling the current
c_
code.
The node for which we are compiling the current
C
code.
The same Op
may be used in more than one node.
The same ``Op``
may be used in more than one node.
name : str
name : str
A name that is automatically assigned and guaranteed to be
A name that is automatically assigned and guaranteed to be
unique.
unique.
...
@@ -183,13 +193,13 @@ class CLinkerOp(CLinkerObject):
...
@@ -183,13 +193,13 @@ class CLinkerOp(CLinkerObject):
string is the name of a C variable pointing to that input.
string is the name of a C variable pointing to that input.
The type of the variable depends on the declared type of
The type of the variable depends on the declared type of
the input. There is a corresponding python variable that
the input. There is a corresponding python variable that
can be accessed by prepending
"py_"
to the name in the
can be accessed by prepending
``"py_"``
to the name in the
list.
list.
outputs : list of strings
outputs : list of strings
Each string is the name of a C variable where the Op should
Each string is the name of a C variable where the Op should
store its output. The type depends on the declared type of
store its output. The type depends on the declared type of
the output. There is a corresponding
p
ython variable that
the output. There is a corresponding
P
ython variable that
can be accessed by prepending
"py_"
to the name in the
can be accessed by prepending
``"py_"``
to the name in the
list. In some cases the outputs will be preallocated and
list. In some cases the outputs will be preallocated and
the value of the variable may be pre-filled. The value for
the value of the variable may be pre-filled. The value for
an unallocated output is type-dependent.
an unallocated output is type-dependent.
...
@@ -246,13 +256,13 @@ class CLinkerOp(CLinkerObject):
...
@@ -246,13 +256,13 @@ class CLinkerOp(CLinkerObject):
string is the name of a C variable pointing to that input.
string is the name of a C variable pointing to that input.
The type of the variable depends on the declared type of
The type of the variable depends on the declared type of
the input. There is a corresponding python variable that
the input. There is a corresponding python variable that
can be accessed by prepending
"py_"
to the name in the
can be accessed by prepending
``"py_"``
to the name in the
list.
list.
outputs : list of str
outputs : list of str
Each string is the name of a C variable corresponding to
Each string is the name of a C variable corresponding to
one of the outputs of the Op. The type depends on the
one of the outputs of the Op. The type depends on the
declared type of the output. There is a corresponding
declared type of the output. There is a corresponding
python variable that can be accessed by prepending
"py_"
to
python variable that can be accessed by prepending
``"py_"``
to
the name in the list.
the name in the list.
sub : dict of str
sub : dict of str
extra symbols defined in `CLinker` sub symbols (such as 'fail').
extra symbols defined in `CLinker` sub symbols (such as 'fail').
...
@@ -287,7 +297,8 @@ class CLinkerOp(CLinkerObject):
...
@@ -287,7 +297,8 @@ class CLinkerOp(CLinkerObject):
Parameters
Parameters
----------
----------
node : an Apply instance in the graph being compiled
node
An `Apply` instance in the graph being compiled
name : str
name : str
A string or number that serves to uniquely identify this node.
A string or number that serves to uniquely identify this node.
Symbol names defined by this support code should include the name,
Symbol names defined by this support code should include the name,
...
@@ -366,12 +377,13 @@ class CLinkerType(CLinkerObject):
...
@@ -366,12 +377,13 @@ class CLinkerType(CLinkerObject):
Parameters
Parameters
----------
----------
name: str
name
: str
The name of the ``PyObject *`` pointer that will
The name of the ``PyObject *`` pointer that will
the value for this Type
the value for this Type
sub: dict string -> string
sub
a dictionary of special codes. Most importantly
A dictionary of special codes. Most importantly
sub['fail']. See CLinker for more info on `sub` and ``fail``.
``sub['fail']``. See `CLinker` for more info on ``sub`` and
``fail``.
Notes
Notes
-----
-----
...
@@ -388,6 +400,7 @@ class CLinkerType(CLinkerObject):
...
@@ -388,6 +400,7 @@ class CLinkerType(CLinkerObject):
Examples
Examples
--------
--------
.. code-block: python
.. code-block: python
def c_declare(self, name, sub, check_input=True):
def c_declare(self, name, sub, check_input=True):
...
@@ -410,6 +423,7 @@ class CLinkerType(CLinkerObject):
...
@@ -410,6 +423,7 @@ class CLinkerType(CLinkerObject):
Examples
--------
.. code-block: python

    def c_init(self, name, sub):
...
@@ -421,15 +435,15 @@ class CLinkerType(CLinkerObject):
def c_extract(
    self, name: Text, sub: Dict[Text, Text], check_input: bool = True, **kwargs
) -> Text:
r"""Return C code to extract a ``PyObject *`` instance.
The code returned from this function must be templated using
``%(name)s``, representing the name that the caller wants to
call this `Variable`. The Python object ``self.data`` is in a
variable called ``"py_%(name)s"`` and this code must set the
variables declared by :meth:`CLinkerType.c_declare` to something
representative of ``py_%(name)``\s. If the data is improper, set an
appropriate exception and insert ``"%(fail)s"``.
TODO: Point out that template filling (via sub) is now performed
by this function. --jpt
...
@@ -446,6 +460,7 @@ class CLinkerType(CLinkerObject):
Examples
--------
.. code-block: python

    def c_extract(self, name, sub, check_input=True, **kwargs):
...
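The ``%(name)s`` templating convention described in the `c_extract` docs above can be sketched with plain Python ``%``-formatting. The C snippet and the ``fail`` code here are purely illustrative, not taken from the actual `CLinker` output:

```python
# Hypothetical illustration of ``%(name)s`` template filling: the C code
# is written with %-style placeholders, and `sub` supplies special codes
# such as sub['fail'] (see `CLinker`).
template = """
py_%(name)s = PyList_GET_ITEM(storage, 0);
if (py_%(name)s == NULL) { %(fail)s }
"""

def fill_template(name, sub):
    # Merge the variable name with the extra `sub` symbols and substitute.
    return template % {"name": name, "fail": sub["fail"]}

code = fill_template("x", {"fail": "goto __label_1;"})
```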
aesara/link/jax/dispatch.py
...
@@ -89,7 +89,7 @@ incsubtensor_ops = (IncSubtensor, AdvancedIncSubtensor1)
@singledispatch
def jax_typify(data, dtype=None, **kwargs):
    r"""Convert instances of Aesara `Type`\s to JAX types."""
    if dtype is None:
        return data
    else:
...
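`jax_typify` above is built on `functools.singledispatch`: a generic base case plus per-type overloads registered afterwards. A minimal sketch of that pattern, with purely illustrative types and conversions (the real overloads live in the dispatch module):

```python
from functools import singledispatch

@singledispatch
def typify(data, dtype=None, **kwargs):
    # Base case: return the data unchanged, or cast it when a dtype
    # (a callable here, for illustration) is given.
    if dtype is None:
        return data
    return dtype(data)

@typify.register(list)
def typify_list(data, dtype=None, **kwargs):
    # An illustrative overload: recursively convert lists to tuples.
    return tuple(typify(d, dtype=dtype) for d in data)
```

Calling `typify` then dispatches on the runtime type of its first argument.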
aesara/scalar/math.py
r"""
`Op`\s that have their python implementations taken from SciPy.

As SciPy is not always available, we treat them separately.
"""
...
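Many hunks in this commit just add an ``r`` prefix so that the trailing ``\s`` plural markers (e.g. ```Op`\s```) reach Sphinx verbatim. A small demonstration of the underlying string rule, using an illustrative snippet rather than an actual docstring from the code base:

```python
# In a normal string literal the backslash must be doubled to survive
# (and a bare "\s" is an invalid escape that Python 3 warns about);
# a raw string keeps the backslash as written, which is why these
# docstrings gain an ``r`` prefix.
plain = "`Op`\\s taken from SciPy"  # doubled backslash required
raw = r"`Op`\s taken from SciPy"    # raw string: backslash kept as-is
```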
aesara/scan/__init__.py
...
@@ -25,9 +25,9 @@ of using ``scan`` over `for` loops in python (among others) are:
* it allows the number of iterations to be part of the symbolic graph
* it allows computing gradients through the for loop
* there exist a bunch of optimizations that help re-write your loop
  such that less memory is used and that it runs faster
* it ensures that data is not copied from host to gpu and gpu to
  host at each step

The Scan Op should typically be used by calling any of the following
functions: ``scan()``, ``map()``, ``reduce()``, ``foldl()``,
...
aesara/sparse/basic.py
...
@@ -4048,15 +4048,15 @@ class SamplingDot(Op):
sampling_dot = SamplingDot()

"""
Operand for calculating the dot product ``dot(x, y.T) = z`` when you
only want to calculate a subset of `z`.
It is equivalent to ``p o (x . y.T)`` where ``o`` is the element-wise
product, `x` and `y` operands of the dot product and `p` is a matrix that
contains 1 when the corresponding element of `z` should be calculated
and 0 when it shouldn't. Note that SamplingDot has a different interface
than `dot` because SamplingDot requires `x` to be a ``m x k`` matrix while
`y` is a ``n x k`` matrix instead of the usual ``k x n`` matrix.
Notes
-----
...
@@ -4079,7 +4079,7 @@ p
Returns
-------
sparse matrix
    A dense matrix containing the dot product of `x` by ``y.T`` only
    where `p` is 1.
"""
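The sampled dot product described above can be sketched with plain NumPy. This is a dense stand-in for illustration only; the actual Op takes `p` as a sparse matrix and never materializes the unselected entries of `z`:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))  # m x k
y = rng.normal(size=(5, 4))  # n x k

# p contains 1 where the corresponding entry of z = dot(x, y.T) should
# be computed, and 0 elsewhere.
p = np.array([[1, 0, 0, 0, 1],
              [0, 1, 0, 0, 0],
              [0, 0, 0, 1, 0]])

# p o (x . y.T): the element-wise product masks the full dot product.
z = p * (x @ y.T)
```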
...
@@ -4333,22 +4333,26 @@ class ConstructSparseFromList(Op):
def make_node(self, x, values, ilist):
"""
This creates a sparse matrix with the same shape as `x`. Its
values are the rows of `values` moved. It operates similar to
the following pseudo-code:

.. code-block:: python

    output = csc_matrix.zeros_like(x, dtype=values.dtype)
    for in_idx, out_idx in enumerate(ilist):
        output[out_idx] = values[in_idx]

Parameters
----------
x
    A dense matrix that specifies the output shape.
values
    A dense matrix with the values to use for output.
ilist
    A dense vector with the same length as the number of rows of values.
    It specifies where in the output to put the corresponding rows.

"""
x_ = aet.as_tensor_variable(x)
...
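The pseudo-code in the `make_node` docstring above can be run directly as a dense NumPy sketch. The helper name is hypothetical and the output is a dense array rather than the sparse matrix the real Op produces:

```python
import numpy as np

def scatter_rows_like(x, values, ilist):
    # Dense stand-in for the pseudo-code above: start from zeros shaped
    # like `x` and place each row of `values` at the row index in `ilist`.
    output = np.zeros_like(x, dtype=values.dtype)
    for in_idx, out_idx in enumerate(ilist):
        output[out_idx] = values[in_idx]
    return output

x = np.zeros((4, 3))
values = np.arange(6.0).reshape(2, 3)
out = scatter_rows_like(x, values, [2, 0])  # row 0 -> 2, row 1 -> 0
```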
aesara/sparse/opt.py
...
@@ -1804,16 +1804,18 @@ register_specialize(local_structured_add_s_v, "cxx_only")
class SamplingDotCSR(_NoPythonCOp):
r"""
Operand optimized for calculating the dot product :math:`x y^\top = z`
when you only want to calculate a subset of :math:`z`.

It is equivalent to :math:`p \circ (x \cdot y^\top)` where :math:`\circ` is
the element-wise product, :math:`x` and :math:`y` operands of the dot
product, and :math:`p` is a matrix that contains 1 when the corresponding
element of :math:`z` should be calculated and 0 when it shouldn't. Note
that `SamplingDot` has a different interface than ``dot`` because
`SamplingDot` requires :math:`x` to be a :math:`m \times k` matrix while
:math:`y` is a :math:`n \times k` matrix instead of the usual :math:`k
\times n` matrix.
Parameters
----------
...
@@ -1832,8 +1834,8 @@ class SamplingDotCSR(_NoPythonCOp):
Returns
-------
A dense matrix containing the dot product of :math:`x` by :math:`y^\top`
only where :math:`p` is 1.
Notes
-----
...
aesara/tensor/basic.py
r"""`Op` classes for working with ``numpy.ndarrays`` symbolically.

This module primarily defines `Op`\s for the creation, conversion, and
manipulation of tensors.

"""
...
@@ -2203,8 +2203,8 @@ def patternbroadcast(x, broadcastable):
class Join(COp):
r"""
Concatenate multiple `TensorVariable`\s along some axis.
The axis must be given as first argument. All tensors must have the same
shape along all dimensions other than this axis.
...
@@ -2533,8 +2533,8 @@ pprint.assign(Join, printing.FunctionPrinter("join"))
def join(axis, *tensors_list):
r"""
Convenience function to concatenate `TensorType`\s along the given axis.
This function will not add the op in the graph when it is not useful.
For example, in the case that the list of tensors to be concatenated
...
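The `join` semantics documented above (all tensors agree on every dimension except the join axis) mirror NumPy's concatenation, so a quick NumPy sketch of the same rule:

```python
import numpy as np

# NumPy analogue of `join(axis, *tensors)`: shapes must match on every
# axis other than the one being joined.
a = np.ones((2, 3))
b = np.zeros((2, 1))
joined = np.concatenate([a, b], axis=1)  # axis-0 lengths agree (2 == 2)
```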
aesara/tensor/basic_opt.py
...
@@ -4382,15 +4382,15 @@ register_specialize(topo_constant_folding, "fast_compile", final_opt=True)
def local_elemwise_fusion_op(op_class, max_input_fct=lambda node: 32, maker=None):
r"""Create a recursive function that fuses `Elemwise` `Op`\s.
The basic idea is that we loop through an `Elemwise` node's inputs, find
other `Elemwise` nodes, determine the scalars input types for all of the
`Elemwise` `Op`\s, construct a new scalar `Op` using the scalar input types
and each `Elemwise`'s scalar `Op`, and use the composite scalar `Op` in a
new "fused" `Elemwise`.
It's parameterized in order to work for `Elemwise` and `GpuElemwise` `Op`\s.
Parameters
----------
...
@@ -4401,14 +4401,14 @@ def local_elemwise_fusion_op(op_class, max_input_fct=lambda node: 32, maker=None
can take (useful for `GpuElemwise`). The GPU kernel currently has a
limit of 256 bytes for the size of all parameters passed to it. As
currently we pass a lot of information only by parameter, we must limit how
many `Op`\s we fuse together to avoid busting that 256 limit.
On the CPU we limit to 32 input variables since that is the maximum
NumPy supports.
maker: callable
    A function with the signature ``(node, *args)`` that constructs an
    `op_class` instance (e.g. ``op_class(*args)``).
"""
if maker is None:
...
@@ -4417,9 +4417,9 @@ def local_elemwise_fusion_op(op_class, max_input_fct=lambda node: 32, maker=None
return op_class(scalar_op)
def local_fuse(fgraph, node):
r"""Fuse `Elemwise` `Op`\s in a node.
As part of specialization, we fuse two consecutive `Elemwise` `Op`\s of the
same shape.

For mixed dtype, we let the `Composite` `Op` do the cast. It lets the C
...
aesara/tensor/extra_ops.py
...
@@ -254,8 +254,9 @@ def searchsorted(x, v, side="left", sorter=None):
Notes
-----
* Binary search is used to find the required insertion points.
* This Op is working **only on CPU** currently.
Examples
--------
...
@@ -778,7 +779,7 @@ def repeat(x, repeats, axis=None):
axis to repeat values. By default, use the flattened input
array, and return a flat output array.
The number of repetitions for each element is `repeats`.
`repeats` is broadcasted to fit the length of the given `axis`.
Parameters
...
@@ -1305,9 +1306,10 @@ def unique(
Returns the sorted unique elements of an array. There are three optional
outputs in addition to the unique elements:

* the indices of the input array that give the unique values
* the indices of the unique array that reconstruct the input array
* the number of times each unique value comes up in the input array

"""
return Unique(return_index, return_inverse, return_counts, axis)(ar)
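The three optional outputs listed in the docstring above mirror NumPy's own `numpy.unique`, so their meaning can be checked directly in NumPy:

```python
import numpy as np

ar = np.array([1, 2, 2, 3])
values, index, inverse, counts = np.unique(
    ar, return_index=True, return_inverse=True, return_counts=True
)
# `index` gives the unique values from the input: ar[index] == values;
# `inverse` reconstructs the input: values[inverse] == ar;
# `counts` gives each value's multiplicity.
```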
...
@@ -1473,7 +1475,7 @@ def broadcast_shape(*arrays, **kwargs):
Parameters
----------
*arrays: TensorVariable
    The tensor variables, or their shapes (as tuples),
    for which the broadcast shape is computed.
arrays_are_shapes: bool (Optional)
...
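The shape-tuple form of `broadcast_shape` described above corresponds to NumPy's shape-only broadcasting helper (available as `numpy.broadcast_shapes` since NumPy 1.20):

```python
import numpy as np

# Broadcasting aligns shapes from the right; size-1 axes stretch to
# match, and missing leading axes are treated as size 1.
shape = np.broadcast_shapes((8, 1, 6), (7, 1))
```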
aesara/tensor/math_opt.py
...
@@ -3278,6 +3278,9 @@ def parse_mul_tree(root):
Examples
--------
.. code-block:: python

    x * y -> [False, [[False, x], [False, y]]]
    -(x * y) -> [True, [[False, x], [False, y]]]
    -x * y -> [False, [[True, x], [False, y]]]
...
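The ``[neg, sub]`` tree representation shown in those examples can be reproduced on a toy expression type. This is a hypothetical sketch over tuples, not Aesara's actual `parse_mul_tree` (which walks `Apply` nodes):

```python
def parse_mul_tree(expr):
    """`expr` is a toy AST: ("neg", e), ("mul", a, b), or a leaf string.

    Returns [neg, sub]: `neg` flags an odd number of negations, `sub` is
    a leaf or a two-element list of child pairs for a multiplication.
    """
    if isinstance(expr, tuple) and expr[0] == "neg":
        neg, sub = parse_mul_tree(expr[1])
        return [not neg, sub]  # a negation toggles the flag
    if isinstance(expr, tuple) and expr[0] == "mul":
        return [False, [parse_mul_tree(expr[1]), parse_mul_tree(expr[2])]]
    return [False, expr]  # plain leaf
```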
aesara/tensor/subtensor.py
...
@@ -112,9 +112,9 @@ def indices_from_subtensor(
def as_index_constant(a):
r"""Convert Python literals to Aesara constants--when possible--in Subtensor arguments.

This will leave `Variable`\s untouched.

"""
if a is None:
    return a
...
@@ -351,10 +351,10 @@ def is_basic_idx(idx):
def basic_shape(shape, indices):
r"""Computes the shape resulting from basic NumPy indexing.

Basic indices are either ``slice``\s or ``None``\s. ``Ellipsis`` are not
supported here; convert them to ``slice``\s first.
Parameters
----------
...
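The rule described above can be sketched at the NumPy level. This is an illustrative stand-in, assuming `indices` contains only `slice` objects and `None` (newaxis), exactly the "basic" cases the docstring names:

```python
def basic_shape(shape, indices):
    # Each slice consumes one input dimension and contributes its
    # resulting length; each None inserts a new length-1 axis.
    out, dim = [], 0
    for idx in indices:
        if idx is None:
            out.append(1)
        elif isinstance(idx, slice):
            out.append(len(range(*idx.indices(shape[dim]))))
            dim += 1
        else:
            raise ValueError("only slices and None are handled here")
    out.extend(shape[dim:])  # untouched trailing dimensions pass through
    return tuple(out)
```

For example, `basic_shape((5, 4), (slice(1, 3), None))` matches `np.empty((5, 4))[1:3, None].shape`.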
doc/core_development_guide.rst
...
@@ -31,9 +31,6 @@ some of them might be outdated though:
* :ref:`sandbox_elemwise` -- Description of element wise operations.
* :ref:`sandbox_randnb` -- Description of how Aesara deals with random
  numbers.
...
doc/dev_start_guide.rst
...
@@ -79,7 +79,6 @@ make sure there are no broader problems.
To run the test suite with the default options, see
:ref:`test_aesara`.
Setting up your Editor for PEP8
-------------------------------
...
doc/extending/ctype.rst
...
@@ -565,7 +565,7 @@ default, it will recompile the c code for each process.
Shape and Shape_i
=================

We have 2 generic `Op`\s, `Shape` and `Shape_i`, that return the shape of any
Aesara `Variable` that has a shape attribute (`Shape_i` returns only one of
the elements of the shape).
...
doc/extending/extending_aesara.rst
(diff collapsed)
doc/extending/extending_aesara_c.rst
...
@@ -435,7 +435,7 @@ wrong but DebugMode will not detect this.
TODO: jpt: I don't understand the following sentence.
`Op`\s and `Type`\s should usually be considered immutable -- you should
definitely not make a change that would have an impact on ``__eq__``,
``__hash__``, or the mathematical value that would be computed by ``perform``
or ``c_code``.
...
@@ -969,7 +969,7 @@ In addition to these macros, the ``init_code_struct``, ``code``, and
happy.
* ``PARAMS`` : Name of the params variable for this node. (only
  for `Op`\s which have params, which is discussed elsewhere)
Finally the tags ``code`` and ``code_cleanup`` have macros to
pass the inputs and output names. These are named ``INPUT_{i}`` and
...
doc/extending/index.rst
...
@@ -45,7 +45,6 @@ with Aesara itself.
ctype
cop
using_params
optimization
tips
unittest
...
doc/extending/inplace.rst
...
@@ -5,11 +5,11 @@
Views and inplace operations
============================
Aesara allows the definition of ``Op``\s which return a :term:`view` on one
of their inputs or operate :term:`inplace` on one or several
inputs. This allows more efficient operations on NumPy's ``ndarray``
data type than would be possible otherwise.
However, in order to work correctly, these ``Op``\s need to
implement an additional interface.
Aesara recognizes views and inplace operations specially. It ensures
...
@@ -206,7 +206,7 @@ input(s)'s memory). From there, go to the previous section.

Inplace optimization and DebugMode
==================================
It is recommended that during the graph construction, all ``Op``\s are not inplace.
Then an optimization replaces them with inplace ones. Currently ``DebugMode`` checks
all optimizations that were tried even if they got rejected. One reason an inplace
optimization can get rejected is when there is another ``Op`` that is already being applied
...
@@ -218,6 +218,6 @@ checking a rejected inplace optimization, since it will lead to wrong results.
In order to be able to use ``DebugMode`` in more situations, your inplace
optimization can pre-check whether it will get rejected by using the
``aesara.graph.destroyhandler.fast_inplace_check()`` function, that will tell
which ``Op``\s can be performed inplace. You may then skip the optimization if it is
incompatible with this check. Note however that this check does not cover all
cases where an optimization may be rejected (it will not detect cycles).
doc/extending/op.rst
...
@@ -49,9 +49,9 @@ define the following methods.
.. function:: make_node(*inputs)
This method is responsible for creating output :class:`Variable`\s of a
suitable symbolic `Type` to serve as the outputs of this :class:`Op`'s
application. The :class:`Variable`\s found in ``*inputs`` must be operated on
using Aesara's symbolic language to compute the symbolic output
Variables. This method should put these outputs into an Apply
instance, and return the Apply instance.
...
doc/extending/optimization.rst
...
@@ -91,11 +91,11 @@ A local optimization is an object which defines the following methods:
.. method:: transform(fgraph, node)
This method takes a :class:`FunctionGraph` and an :ref:`Apply` node and
returns either ``False`` to signify that no changes are to be done or a
list of :class:`Variable`\s which matches the length of the node's ``outputs``
list. When the :class:`LocalOptimizer` is applied by a :class:`NavigatorOptimizer`, the outputs
of the node passed as argument to the :class:`LocalOptimizer` will be replaced by
the list returned.
...
@@ -423,8 +423,8 @@ optdb is a SequenceDB, so, at the top level, Aesara applies a sequence
of global optimizations to the computation graphs.
:class:`OptimizationQuery`
--------------------------

An OptimizationQuery is built by the following call:
...
doc/extending/pipeline.rst
...
@@ -75,7 +75,7 @@ produce a ``thunk``, which is a function with no arguments that
returns nothing. Along with the thunk, one list of input containers (a
`aesara.link.basic.Container` is a sort of object that wraps another and does
type casting) and one list of output containers are produced,
corresponding to the input and output :class:`Variable`\s as well as the updates
defined for the inputs when applicable. To perform the computations,
the inputs must be placed in the input containers, the thunk must be
called, and the outputs must be retrieved from the output containers
...
doc/extending/tips.rst
...
@@ -38,7 +38,7 @@ Use Aesara's high order Ops when applicable
Aesara provides some generic Op classes which allow you to generate a
lot of Ops at a lesser effort. For instance, Elemwise can be used to
make :term:`elemwise` operations easily whereas DimShuffle can be
used to make transpose-like transformations. These higher order Ops
are mostly Tensor-related, as this is Aesara's specialty.
...
doc/extending/unittest.rst
...
@@ -102,7 +102,7 @@ Example:
Creating an Op Unit Test
========================
A few tools have been developed to help automate the development of
unit tests for Aesara Ops.
...
doc/glossary.rst
...
@@ -39,7 +39,7 @@ Glossary
See also: :class:`graph.basic.Constant`
Elemwise
An element-wise operation ``f`` on two tensor variables ``M`` and ``N``
is one such that:
...
doc/index.rst
...
@@ -55,12 +55,11 @@ Roughly in order of what you'll want to check out:
* :ref:`extending` -- Learn to add a Type, Op, or graph optimization.
* :ref:`dev_start_guide` -- How to contribute code to Aesara.
* :ref:`internal` -- How to maintain Aesara and more...
* :ref:`acknowledgement` -- What we took from other projects.
* `Related Projects`_ -- link to other projects that implement new functionalities on top of Aesara
.. _aesara-community:
Community
=========
...
@@ -88,7 +87,6 @@ Community
links
internal/index
acknowledgement
.. _Theano: https://github.com/Theano/Theano
...
doc/introduction.rst
...
@@ -40,8 +40,6 @@ support rapid development of efficient machine learning algorithms. Theano was
named after the `Greek mathematician`_, who may have been Pythagoras' wife.
Aesara is an alleged daughter of Pythagoras and Theano.
Sneak peek
==========
...
doc/library/config.rst
...
@@ -204,7 +204,7 @@ import ``aesara`` and print the config variable, as in:
collection allows Aesara to reuse buffers for intermediate results between
function calls. This speeds up Aesara by spending less time reallocating
space during function evaluation and can provide significant speed-ups for
functions with many fast :class:`Op`\s, but it also increases Aesara's memory
usage.
.. note:: If :attr:`config.gpuarray__preallocate` is the default value
.. note:: If :attr:`config.gpuarray__preallocate` is the default value
...
@@ -226,7 +226,7 @@ import ``aesara`` and print the config variable, as in:
...
@@ -226,7 +226,7 @@ import ``aesara`` and print the config variable, as in:
Default: ``False``
Default: ``False``
Allow garbage collection inside of
``Scan`` ``Op``
s.
Allow garbage collection inside of
:class:`Scan` :class:`Op`\
s.
If :attr:`config.allow_gc` is ``True``, but :attr:`config.scan__allow_gc` is
If :attr:`config.allow_gc` is ``True``, but :attr:`config.scan__allow_gc` is
``False``, then Aesara will perform garbage collection during the inner
``False``, then Aesara will perform garbage collection during the inner
...
@@ -272,7 +272,7 @@ import ``aesara`` and print the config variable, as in:
...
@@ -272,7 +272,7 @@ import ``aesara`` and print the config variable, as in:
Default: ``False``
Default: ``False``
Enable or disable parallel computation on the CPU with OpenMP.
Enable or disable parallel computation on the CPU with OpenMP.
It is the default value used by
``Op``
s that support OpenMP.
It is the default value used by
:class:`Op`\
s that support OpenMP.
It is best to specify this setting in ``.aesararc`` or in the environment
It is best to specify this setting in ``.aesararc`` or in the environment
variable ``AESARA_FLAGS``.
variable ``AESARA_FLAGS``.
...
@@ -281,7 +281,7 @@ import ``aesara`` and print the config variable, as in:
...
@@ -281,7 +281,7 @@ import ``aesara`` and print the config variable, as in:
Positive int value, default: 200000.
Positive int value, default: 200000.
This specifies the minimum size of a vector for which OpenMP will be used by
This specifies the minimum size of a vector for which OpenMP will be used by
``Elemwise`` ``Op``
s, when OpenMP is enabled.
:class:`Elemwise` :class:`Op`\
s, when OpenMP is enabled.
.. attribute:: cast_policy
.. attribute:: cast_policy
...
@@ -382,7 +382,7 @@ import ``aesara`` and print the config variable, as in:
...
@@ -382,7 +382,7 @@ import ``aesara`` and print the config variable, as in:
Positive int value, default: 20.
Positive int value, default: 20.
The number of
``Op``
s to print in the profiler output.
The number of
:class:`Op`\
s to print in the profiler output.
.. attribute:: config.profiling__min_memory_size
.. attribute:: config.profiling__min_memory_size
...
@@ -664,7 +664,7 @@ import ``aesara`` and print the config variable, as in:
...
@@ -664,7 +664,7 @@ import ``aesara`` and print the config variable, as in:
.. attribute:: config.conv__assert_shape
.. attribute:: config.conv__assert_shape
If ``True``, ``AbstractConv*``
``Op``
s will verify that user-provided shapes
If ``True``, ``AbstractConv*``
:class:`Op`\
s will verify that user-provided shapes
match the run-time shapes. This is a debugging option, and may slow down
match the run-time shapes. This is a debugging option, and may slow down
compilation.
compilation.
...
@@ -823,7 +823,7 @@ import ``aesara`` and print the config variable, as in:
...
@@ -823,7 +823,7 @@ import ``aesara`` and print the config variable, as in:
.. attribute:: compile
.. attribute:: compile
This section contains attributes which influence the compilation of
This section contains attributes which influence the compilation of
C code for
``Op``
s. Due to historical reasons many attributes outside
C code for
:class:`Op`\
s. Due to historical reasons many attributes outside
of this section also have an influence over compilation, most
of this section also have an influence over compilation, most
notably ``cxx``.
notably ``cxx``.
...
@@ -972,7 +972,7 @@ import ``aesara`` and print the config variable, as in:
...
@@ -972,7 +972,7 @@ import ``aesara`` and print the config variable, as in:
If ``True``, will print a warning when compiling one or more ``Op`` with C
If ``True``, will print a warning when compiling one or more ``Op`` with C
code that can't be cached because there is no ``c_code_cache_version()``
code that can't be cached because there is no ``c_code_cache_version()``
function associated to at least one of those
``Op``
s.
function associated to at least one of those
:class:`Op`\
s.
.. attribute:: config.cmodule__remove_gxx_opt
.. attribute:: config.cmodule__remove_gxx_opt
...
...
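The recurring fix in this file is mechanical: ``Op``\s becomes :class:`Op`\s, where the backslash escape lets the plural "s" sit flush against the role without breaking Sphinx's inline markup. As a sketch only (this helper is hypothetical, not part of Aesara or this commit), such leftovers can be found with a short regex:

```python
import re

# Hypothetical lint helper: find ":class:`X`s" plurals that are missing
# the backslash escape this commit adds (":class:`X`\s").
ROLE_PLURAL = re.compile(r"(:class:`[^`]+`)(?!\\)s\b")

def missing_escapes(text):
    """Return the offending substrings, e.g. ':class:`Op`s'."""
    return [m.group(0) for m in ROLE_PLURAL.finditer(text)]
```

For example, `missing_escapes("many fast :class:`Op`s")` flags the unescaped plural, while the escaped `:class:`Op`\s` form passes cleanly.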
doc/library/graph/op.rst

 .. _libdoc_graph_op:

-================================================
+==============================================================
 :mod:`graph` -- Objects and functions for computational graphs
-================================================
+==============================================================

 .. automodule:: aesara.graph.op
    :platform: Unix, Windows
...
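The underline edits above address a common Sphinx complaint: a section underline shorter than its title produces a "Title underline too short" warning. A minimal sketch of that check (illustrative only, not how Sphinx actually implements it):

```python
# Characters reStructuredText accepts as section adornments.
ADORNMENTS = set("=-`:'\"~^_*+#<>.")

def short_underlines(lines):
    """Return (line_number, title) pairs whose underline is too short."""
    problems = []
    for i, line in enumerate(lines[1:], start=1):
        # An adornment line is a non-empty run of a single adornment char.
        if line and len(set(line)) == 1 and line[0] in ADORNMENTS:
            title = lines[i - 1].rstrip()
            if title and len(line) < len(title):
                problems.append((i + 1, title))  # 1-based line number
    return problems
```

Running it on the old heading above (62-character title over a 48-character underline) would flag the line; the lengthened underline passes.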
doc/library/graph/params_type.rst

 .. _libdoc_graph_params_type:

-============================================================
-:mod:`aesara.graph.params_type` -- Wrapper class for op params
-============================================================
+=======================================================================
+:mod:`aesara.graph.params_type` -- Wrapper class for :class:`Op` params
+=======================================================================

 ---------
 Reference
...
doc/library/index.rst

@@ -27,7 +27,6 @@ Types and Ops that you can use to build and compile expression graphs.
   sparse/sandbox
   tensor/index
   typed_list
-  tests

 There are also some top-level imports that you might find more convenient:
...
doc/library/scan.rst

@@ -682,4 +682,5 @@ reference
 .. autofunction:: aesara.foldl
 .. autofunction:: aesara.foldr
 .. autofunction:: aesara.scan
+   :noindex:
 .. autofunction:: aesara.scan.scan_checkpoints
doc/library/sparse/sandbox.rst

 .. ../../../../aesara/sparse/sandbox/sp.py
-.. ../../../../aesara/sparse/sandbox/truedot.py
+.. ../../../../aesara/sparse/basic/truedot.py

 .. _libdoc_sparse_sandbox:
...
@@ -19,5 +19,3 @@ API
    :members:
 .. automodule:: aesara.sparse.sandbox.sp2
    :members:
-.. automodule:: aesara.sparse.sandbox.truedot
-   :members:
doc/library/tensor/basic.rst

@@ -465,7 +465,7 @@ TensorVariable
    you'll want to call.

-.. autoclass:: var._tensor_py_operators
+.. autoclass:: aesara.tensor.var._tensor_py_operators
    :members:

    This mix-in class adds convenient attributes, methods, and support
...
@@ -478,16 +478,19 @@ TensorVariable
    values that might be associated with this variable.

 .. attribute:: ndim
+    :noindex:

    The number of dimensions of this tensor. Aliased to
    :attr:`TensorType.ndim`.

 .. attribute:: dtype
+    :noindex:

    The numeric type of this tensor. Aliased to
    :attr:`TensorType.dtype`.

 .. method:: reshape(shape, ndim=None)
+    :noindex:

    Returns a view of this tensor that has been reshaped as in
    numpy.reshape. If the shape is a Variable argument, then you might
...
@@ -498,6 +501,7 @@ TensorVariable
    See :func:`reshape`.

 .. method:: dimshuffle(*pattern)
+    :noindex:

    Returns a view of this tensor with permuted dimensions. Typically the
    pattern will include the integers 0, 1, ... ndim-1, and any number of
...
@@ -549,13 +553,19 @@ TensorVariable
 .. method:: copy() Return a new symbolic variable that is a copy of the variable. Does not copy the tag.
 .. method:: norm(L, axis=None)
 .. method:: nonzero(self, return_matrix=False)
+    :noindex:
 .. method:: nonzero_values(self)
+    :noindex:
 .. method:: sort(self, axis=-1, kind='quicksort', order=None)
+    :noindex:
 .. method:: argsort(self, axis=-1, kind='quicksort', order=None)
+    :noindex:
 .. method:: clip(self, a_min, a_max) with a_min <= a_max
 .. method:: conf()
 .. method:: repeat(repeats, axis=None)
+    :noindex:
 .. method:: round(mode="half_away_from_zero")
+    :noindex:
 .. method:: trace()
 .. method:: get_scalar_constant_value()
 .. method:: zeros_like(model, dtype=None)
...
@@ -577,6 +587,7 @@ dimensions, see :meth:`_tensor_py_operators.dimshuffle`.

    Returns an lvector representing the shape of `x`.

 .. function:: reshape(x, newshape, ndim=None)
+    :noindex:

    :type x: any TensorVariable (or compatible)
    :param x: variable to be reshaped
...
@@ -810,6 +821,7 @@ Creating Tensor
    (2, 2, 2, 3, 2)

 .. function:: stack(*tensors)
+    :noindex:

    .. warning::
...
@@ -1175,7 +1187,7 @@ Bitwise
 >>> ~a # aet.invert(a) bitwise invert (alias aet.bitwise_not)

 Inplace
--------------
+-------

 In-place operators are *not* supported. Aesara's graph-optimizations
 will determine which intermediate values to use for in-place
...
@@ -1183,10 +1195,10 @@ computations. If you would like to update the value of a
 :term:`shared variable`, consider using the ``updates`` argument to
 :func:`Aesara.function`.

-.. _libdoc_tensor_elementwise:
+.. _libdoc_tensor_elemwise:

-Elementwise
-===========
+:class:`Elemwise`
+=================

 Casting
 -------
...
@@ -1220,7 +1232,7 @@ Casting
 Comparisons
------------
+-----------

 The six usual equality and inequality operators share the same interface.

    :Parameter: *a* - symbolic Tensor (or compatible)
...
@@ -1456,6 +1468,7 @@ Mathematical
    Returns a variable representing the floor of a (for example floor(2.9) is 2).

 .. function:: round(a, mode="half_away_from_zero")
+    :noindex:

    Returns a variable representing the rounding of a in the same dtype as a. Implemented rounding mode are half_away_from_zero and half_to_even.
...
doc/library/tensor/index.rst

@@ -25,7 +25,8 @@ They are grouped into the following sections:
   elemwise
   extra_ops
   io
-  opt
+  basic_opt
   slinalg
   nlinalg
   fft
+  math_opt
doc/library/tensor/nnet/bn.rst → doc/library/tensor/nnet/batchnorm.rst

-.. _libdoc_tensor_nnet_bn:
+.. _libdoc_tensor_nnet_batchnorm:

-================================
-:mod:`bn` -- Batch Normalization
-================================
+=======================================
+:mod:`batchnorm` -- Batch Normalization
+=======================================

-.. module:: tensor.nnet.bn
+.. module:: tensor.nnet.batchnorm
    :platform: Unix, Windows
    :synopsis: Batch Normalization
 .. moduleauthor:: LISA

-.. autofunction:: aesara.tensor.nnet.bn.batch_normalization_train
+.. autofunction:: aesara.tensor.nnet.batchnorm.batch_normalization_train
-.. autofunction:: aesara.tensor.nnet.bn.batch_normalization_test
+.. autofunction:: aesara.tensor.nnet.batchnorm.batch_normalization_test

 .. seealso:: cuDNN batch normalization: :class:`aesara.gpuarray.dnn.dnn_batch_normalization_train`, :class:`aesara.gpuarray.dnn.dnn_batch_normalization_test`.

-.. autofunction:: aesara.tensor.nnet.bn.batch_normalization
+.. autofunction:: aesara.tensor.nnet.batchnorm.batch_normalization
doc/library/tensor/nnet/index.rst

@@ -17,8 +17,8 @@ and ops which are particular to neural networks and deep learning.
   :maxdepth: 1

   conv
-  nnet
+  basic
   neighbours
-  bn
+  batchnorm
   blocksparse
   ctc
doc/library/tensor/random/basic.rst

-.. _libdoc_tensor_random:
+.. _libdoc_tensor_random_basic:

 =============================================
-:mod:`random` -- Low-level random numbers
+:mod:`basic` -- Low-level random numbers
 =============================================

 .. module:: aesara.tensor.random
...
@@ -17,9 +17,9 @@ Reference

 .. class:: RandomStream()

-   A helper class that tracks changes in a shared ``numpy.random.RandomState``
-   and behaves like ``numpy.random.RandomState`` by managing access
-   to `RandomVariable`s. For example:
+   A helper class that tracks changes in a shared :class:`numpy.random.RandomState`
+   and behaves like :class:`numpy.random.RandomState` by managing access
+   to :class:`RandomVariable`\s. For example:

 .. testcode:: constructors
...
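For readers unfamiliar with the NumPy class being cross-referenced here: a seeded :class:`numpy.random.RandomState` yields a reproducible stream of draws, which is the behavior `RandomStream` is documented to mirror. A minimal reminder in plain NumPy (no Aesara involved):

```python
import numpy as np

# Two RandomState instances with the same seed produce identical streams;
# RandomStream is described above as managing access to such a shared state.
rng_a = np.random.RandomState(42)
rng_b = np.random.RandomState(42)

draw_a = rng_a.uniform(size=(2, 2))
draw_b = rng_b.uniform(size=(2, 2))
assert np.allclose(draw_a, draw_b)  # same seed -> same draws
```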
doc/library/tensor/random/index.rst

@@ -7,11 +7,8 @@
 Low-level random numbers
 ------------------------

-.. module:: aesara.tensor.random
-   :synopsis: symbolic random variables
-
-The `aesara.tensor.random` module provides random-number drawing functionality
-that closely resembles the `numpy.random` module.
+The :mod:`aesara.tensor.random` module provides random-number drawing functionality
+that closely resembles the :mod:`numpy.random` module.

 .. toctree::
   :maxdepth: 2
...
doc/library/tensor/signal/conv.rst

@@ -16,7 +16,7 @@
    present in convolutional neural networks (where filters are 3D and pool
    over several input channels).

-.. module:: conv
+.. module:: aesara.tensor.signal.conv
    :platform: Unix, Windows
    :synopsis: ops for performing convolutions
 .. moduleauthor:: LISA
...
doc/library/tests.rst (deleted)

-.. _libdoc_tests:
-
-=====================
-:mod:`tests` -- Tests
-=====================
-
-.. automodule:: tests.breakpoint
-   :members:
doc/sandbox/sandbox.rst

@@ -33,9 +33,7 @@ you compute the gradient, **WRITEME**.
 Gradients for a particular variable can be one of four kinds:
 1) forgot to implement it

-   You will get an exception of the following form.
-
-   .. code-block:: python
+   You will get an exception of the following form::

       aesara.graph.utils.MethodNotDefined: ('grad', <class 'pylearn.algorithms.sandbox.cost.LogFactorial'>, 'LogFactorial')
...
doc/tutorial/examples.rst

@@ -30,8 +30,8 @@ the logistic curve, which is given by:
    A plot of the logistic function, with x on the x-axis and s(x) on the
    y-axis.

-You want to compute the function :ref:`elementwise
-<libdoc_tensor_elementwise>` on matrices of doubles, which means that
+You want to compute the function :ref:`element-wise
+<libdoc_tensor_elemwise>` on matrices of doubles, which means that
 you want to apply this function to each individual element of the
 matrix.
...
@@ -75,7 +75,7 @@ Computing More than one Thing at the Same Time
 ==============================================

 Aesara supports functions with multiple outputs. For example, we can
-compute the :ref:`elementwise <libdoc_tensor_elementwise>` difference, absolute difference, and
+compute the :ref:`element-wise <libdoc_tensor_elemwise>` difference, absolute difference, and
 squared difference between two matrices *a* and *b* at the same time:

 .. If you modify this code, also change :
...
@@ -373,7 +373,7 @@ Here's a brief example. The setup code is:
 Here, 'rv_u' represents a random stream of 2x2 matrices of draws from a uniform
 distribution. Likewise, 'rv_n' represents a random stream of 2x2 matrices of
 draws from a normal distribution. The distributions that are implemented are
-defined as :class:`RandomVariable`s
+defined as :class:`RandomVariable`\s
 in :ref:`basic<libdoc_tensor_random_basic>`. They only work on CPU.
 See `Other Implementations`_ for GPU version.
...
tests/tensor/test_basic_opt.py

@@ -1065,7 +1065,7 @@ class TestFusion:
         self.do(self.mode, self._shared, shp)

     def test_fusion_35_inputs(self):
-        """Make sure we don't fuse too many `Op`s and go past the 31 function arguments limit."""
+        r"""Make sure we don't fuse too many `Op`\s and go past the 31 function arguments limit."""
         inpts = vectors(["i%i" % i for i in range(35)])

         # Make an elemwise graph looking like:
...
@@ -1228,7 +1228,7 @@ class TestFusion:
     @pytest.mark.skipif(not config.cxx, reason="No cxx compiler")
     def test_no_c_code(self):
-        """Make sure we avoid fusions for `Op`s without C code implementations."""
+        r"""Make sure we avoid fusions for `Op`\s without C code implementations."""

         # This custom `Op` has no `c_code` method
         class NoCCodeOp(aes.basic.UnaryScalarOp):
...
tests/tensor/test_type.py

@@ -68,7 +68,7 @@ def test_filter_float_subclass():

 def test_filter_memmap():
-    """Make sure `TensorType.filter` can handle NumPy `memmap`s subclasses."""
+    r"""Make sure `TensorType.filter` can handle NumPy `memmap`\s subclasses."""
     data = np.arange(12, dtype=config.floatX)
     data.resize((3, 4))
     filename = path.join(mkdtemp(), "newfile.dat")
...
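The `r"""` prefixes added in both test files are not cosmetic: `\s` is not a valid Python escape sequence, so a plain string literal containing it triggers a `DeprecationWarning` at compile time (a `SyntaxWarning` on newer Pythons), whereas a raw string keeps the backslash intact for Sphinx to render. A minimal illustration:

```python
import warnings

# In a raw string the backslash survives verbatim, which is what the
# `Op`\s markup needs when Sphinx later renders the docstring.
raw_doc = r"""Make sure we avoid fusions for `Op`\s without C code."""
assert "\\s" in raw_doc  # the backslash is still there

# Without the r-prefix, the same text produces an "invalid escape
# sequence" warning when the source is compiled.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile('"""fusions for `Op`\\s without C code."""', "<doc>", "exec")
assert any(
    issubclass(w.category, (DeprecationWarning, SyntaxWarning)) for w in caught
)
```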