Commit cae78759
Authored Apr 10, 2021 by Brandon T. Willard; committed Apr 11, 2021 by Brandon T. Willard

Add missing formatting to extending_aesara.txt and inplace.txt

Parent: 5057f618
Showing 2 changed files with 106 additions and 113 deletions:

- doc/extending/extending_aesara.txt (+87, -88)
- doc/extending/inplace.txt (+19, -25)
doc/extending/extending_aesara.txt

@@ -7,14 +7,14 @@ Creating a new Op: Python implementation
So suppose you have looked through the library documentation and you don't see
a function that does what you want. If you can implement something in terms of
existing ``Op``s, you should do that. Odds are your function that uses existing
Aesara expressions is short, has no bugs, and potentially profits from
optimizations that have already been implemented. However, if you cannot
implement an ``Op`` in terms of existing ``Op``s, you have to write a new one.
Don't worry, Aesara was designed to make it easy to add new ``Op``s, Types, and
Optimizations.

.. These first few pages will walk you through the definition of a new :ref:`type`,
.. ``double``, and a basic arithmetic :ref:`operations <op>` on that `Type`.
@@ -23,23 +23,23 @@ As an illustration, this tutorial shows how to write a simple Python-based
:ref:`operations <op>` which performs operations on
:ref:`type`, ``double<Double>``.

.. It also shows how to implement tests that
.. ensure the proper working of an ``Op``.

.. note::

    This is an introductory tutorial and as such it does not cover how to make
    an ``Op`` that returns a view or modifies the values in its inputs. Thus, all
    ``Op``s created with the instructions described here MUST return newly
    allocated memory or reuse the memory provided in the parameter
    ``output_storage`` of the :func:`perform` function. See
    :ref:`views_and_inplace` for an explanation on how to do this.

    If your ``Op`` returns a view or changes the value of its inputs
    without doing as prescribed in that page, Aesara will run, but will
    return correct results for some graphs and wrong results for others.

    It is recommended that you run your tests in DebugMode (Aesara *flag*
    ``mode=DebugMode``) since it verifies if your ``Op`` behaves correctly in
    this regard.
@@ -57,7 +57,7 @@ intermediary values. As such, Inputs and Outputs of a graph are lists of Aesara
:ref:`variable` nodes. :ref:`apply` nodes perform computation on these
variables to produce new variables. Each :ref:`apply` node has a link to an
instance of :ref:`Op` which describes the computation to perform. This tutorial
details how to write such an ``Op`` instance. Please refer to
:ref:`graphstructures` for a more detailed explanation about the graph
structure.
@@ -65,9 +65,9 @@ structure.
Op's basic methods
------------------

An ``Op`` is any Python object which inherits from :class:`Op`.
This section provides an overview of the basic methods you typically have to
implement to make a new ``Op``. It does not provide extensive coverage of all the
possibilities you may encounter or need. For that refer to
:ref:`op_contract`.
@@ -119,46 +119,46 @@ possibilities you may encounter or need. For that refer to
    def infer_shape(self, fgraph, node, input_shapes):
        pass

An ``Op`` has to implement some methods defined in the interface of
:class:`Op`. More specifically, it is mandatory for an ``Op`` to define either
the method :func:`make_node` or :attr:`itypes`, :attr:`otypes` and one of the
implementation methods, either :func:`perform`, :meth:`COp.c_code`
or :func:`make_thunk`.

:func:`make_node` method creates an Apply node representing the application
of the ``Op`` on the inputs provided. This method is responsible for three things:

- it first checks that the input ``Variable``s types are compatible
  with the current ``Op``. If the ``Op`` cannot be applied on the provided
  input types, it must raise an exception (such as :class:`TypeError`).
- it operates on the ``Variable``s found in
  ``*inputs`` in Aesara's symbolic language to infer the type of
  the symbolic output ``Variable``s. It creates output ``Variable``s of a
  suitable symbolic `Type` to serve as the outputs of this ``Op``'s
  application.
- it creates an Apply instance with the input and output ``Variable``s, and
  returns the Apply instance.

:func:`perform` method defines the Python implementation of an ``Op``.
It takes several arguments:

- ``node`` is a reference to an Apply node which was previously
  obtained via the ``Op``'s :func:`make_node` method. It is typically not
  used in simple ``Op``s, but it contains symbolic information that
  could be required for complex ``Op``s.
- ``inputs`` is a list of references to data which can be operated on using
  non-symbolic statements (i.e., statements in Python, NumPy).
- ``output_storage`` is a list of storage cells where the output
  is to be stored. There is one storage cell for each output of the ``Op``.
  The data put in ``output_storage`` must match the type of the
  symbolic output. It is forbidden to change the length of the list(s)
  contained in ``output_storage``.
  A function Mode may allow ``output_storage`` elements to persist
  between evaluations, or it may reset ``output_storage`` cells to
  hold a value of ``None``. It can also pre-allocate some memory
  for the ``Op`` to use. This feature can allow ``perform`` to reuse
  memory between calls, for example. If there is something
  preallocated in ``output_storage``, it will be of the correct
  dtype, but can have the wrong shape and have any stride pattern.
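
To tie these methods together, here is a minimal sketch consistent with the
description above and with the ``DoubleOp1`` example discussed later in this
file. It is a reconstruction, not the verbatim example; in particular the
import paths ``aesara.graph.op`` and ``aesara.graph.basic`` are assumptions:

.. code-block:: python

    import numpy as np
    import aesara.tensor as at
    from aesara.graph.basic import Apply
    from aesara.graph.op import Op


    class DoubleOp1(Op):
        """Toy ``Op`` that doubles its input elementwise."""

        __props__ = ()

        def make_node(self, x):
            # Convert (or reject) the input, then build the Apply node.
            x = at.as_tensor_variable(x)
            # The output has the same type as the input.
            return Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            (x,) = inputs
            # Write the result into the storage cell for output 0.
            output_storage[0][0] = np.asarray(x) * 2

        def infer_shape(self, fgraph, node, input_shapes):
            # Elementwise doubling preserves the input shape.
            return input_shapes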
@@ -166,17 +166,17 @@ or :func:`make_thunk`.
:func:`perform` method must be determined by the inputs. That is to say,
when applied to identical inputs the method must return the same outputs.

:class:`Op` allows some other ways to define the ``Op`` implementation.
For instance, it is possible to define :meth:`COp.c_code` to provide a
C-implementation to the ``Op``. Please refer to the tutorial
:ref:`extending_aesara_c` for a description of :meth:`COp.c_code` and other
related c_methods. Note that an ``Op`` can provide both Python and C
implementations.

:func:`make_thunk` method is another alternative to :func:`perform`.
It returns a thunk. A thunk is defined as a zero-arguments
function which encapsulates the computation to be performed by an
``Op`` on the arguments of its corresponding node. It takes several parameters:

- ``node`` is the Apply instance for which a thunk is requested,
- ``storage_map`` is a dict of lists which maps variables to a one-element
@@ -198,28 +198,28 @@ or :func:`make_thunk`.
:func:`make_thunk` is useful if you want to generate code and compile
it yourself.

If :func:`make_thunk()` is defined by an ``Op``, it will be used by Aesara
to obtain the ``Op``'s implementation.
:func:`perform` and :meth:`COp.c_code` will be ignored.

If :func:`make_node` is not defined, the :attr:`itypes` and :attr:`otypes`
are used by the ``Op``'s :func:`make_node` method to implement the
functionality of the :func:`make_node` method mentioned above.

Op's auxiliary methods
----------------------

There are other methods that can be optionally defined by the ``Op``:

The :func:`__str__` method provides a meaningful string representation of
your ``Op``.

:func:`__eq__` and :func:`__hash__` define, respectively, equality
between two ``Op``s and the hash of an ``Op`` instance.
They will be used by the optimization
phase to merge nodes that are doing equivalent computations (same
inputs, same operation).
Two ``Op``s that are equal according to :func:`__eq__`
should return the same output when they are applied on the same inputs.

The :attr:`__props__` lists the properties
@@ -231,19 +231,19 @@ There are other methods that can be optionally defined by the op:
:attr:`__props__` enables the automatic generation of appropriate
:func:`__eq__` and :func:`__hash__`.
Given the method :func:`__eq__`, automatically generated from
:attr:`__props__`, two ``Op``s will be equal if they have the same values for
all the properties listed in :attr:`__props__`.
Similarly, with the method :func:`__hash__` automatically generated from
:attr:`__props__`, two ``Op``s will have the same hash if they have the same
values for all the properties listed in :attr:`__props__`.
:attr:`__props__` will also generate a suitable :func:`__str__` for your ``Op``.
This requires a development version after September 1st, 2014, or version 0.7.

The :func:`infer_shape` method allows an `Op` to infer the shape of its
output variables without actually computing them.
It takes as input ``fgraph``, a `FunctionGraph`; ``node``, a reference to the
``Op`` Apply node; and a list of Aesara symbolic Variables (``i0_shape``,
``i1_shape``, ...) which are the shapes of the ``Op`` input ``Variable``s.
:func:`infer_shape` returns a list where each element is a tuple representing
the shape of one output.
This could be helpful if one only
@@ -251,12 +251,12 @@ There are other methods that can be optionally defined by the op:
can be useful, for instance, for optimization procedures.

The :func:`grad` method is required if you want to differentiate some cost
whose expression includes your ``Op``. The gradient may be
specified symbolically in this method. It takes two arguments, ``inputs`` and
``output_gradients``, which are both lists of symbolic Aesara ``Variable``s and
those must be operated on using Aesara's symbolic language. The grad
method must return a list containing one ``Variable`` for each
input. Each returned ``Variable`` represents the gradient with respect
to that input computed based on the symbolic gradients with respect
to each output.
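
For instance, a ``grad`` for the elementwise-doubling sketch above could look
like this (a sketch only: for :math:`f(x) = 2x` the derivative is 2, so the
gradient is twice the incoming output gradient):

.. code-block:: python

    # A grad method for the DoubleOp1 sketch above (add it to that class):
    def grad(self, inputs, output_gradients):
        # f(x) = 2 * x, so df/dx = 2; scale the incoming output gradient.
        return [2.0 * output_gradients[0]]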
If the output is not differentiable with respect to an input then

@@ -275,8 +275,8 @@ There are other methods that can be optionally defined by the op:
point, namely: :math:`\frac{\partial f}{\partial x} v`.

The optional boolean :attr:`check_input` attribute is used to specify
if you want the types used in your ``COp`` to check their inputs in their
``COp.c_code``. It can be used to speed up compilation, reduce overhead
(particularly for scalars) and reduce the number of generated C files.
@@ -356,22 +356,22 @@ At a high level, the code fragment declares a class (e.g., ``DoubleOp1``) and then
creates one instance of it (e.g., ``doubleOp1``).
We often gloss over this distinction, but will be precise here:
``doubleOp1`` (the instance) is an ``Op``, not ``DoubleOp1`` (the class which is a
subclass of ``Op``). You can call ``doubleOp1(tensor.vector())`` on a
``Variable`` to build an expression, and in the expression there will be
a ``.op`` attribute that refers to ``doubleOp1``.

.. The first two methods in the ``Op`` are relatively boilerplate: ``__eq__``
.. and ``__hash__``.
.. When two ``Op``s are equal, Aesara will merge their outputs if they are applied to the same inputs.
.. The base class (Op) says two objects are equal if (and only if)
.. they are the same object.
.. Writing these boilerplate definitions ensures that the logic of the equality comparison is always explicit.
.. It is an essential part of the :ref:`op_contract` that if two ``Op``s compare
.. equal, then they must compute the same result when presented with the same
.. inputs. Here, if we allocated another instance of ``Fibby`` by typing ``fibby2
.. = Fibby()`` then we would have two ``Op``s that behave identically.
..
.. When should the implementation of ``__eq__`` be more complicated?
.. If ``Fibby.__init__`` had parameters, then we could
@@ -379,27 +379,27 @@ a ``.op`` attribute that refers to ``doubleOp1``.
.. arguments to the constructor. If we had done that, and if that different
.. configuration made ``fibby2`` compute different results from ``fibby`` (for the
.. same inputs) then we would have to add logic to the ``__eq__`` and ``__hash__``
.. function so that the two ``Fibby`` ``Op``s would *not be equal*. The reason why: Aesara's merge
.. optimization looks for ``Op``s comparing equal and merges them. If two ``Op``s compare
.. equal but don't always produce equal results from equal inputs, then you might
.. see wrong calculations.

The ``make_node`` method creates a node to be included in the expression graph.
It runs when we apply our ``Op`` (``doubleOp1``) to the ``Variable`` (``x``), as
in ``doubleOp1(tensor.vector())``.
When an ``Op`` has multiple inputs, their order in the inputs argument to ``Apply``
is important: Aesara will call ``make_node(*inputs)`` to copy the graph,
so it is important not to change the semantics of the expression by changing
the argument order.

All the ``inputs`` and ``outputs`` arguments to ``Apply`` must be ``Variable``s.
A common and easy way to ensure inputs are variables is to run them through
``as_tensor_variable``. This function leaves TensorType variables alone, raises
an error for non-TensorType variables, and copies any ``numpy.ndarray`` into
the storage for a TensorType Constant. The ``make_node`` method dictates the
appropriate `Type` for all output variables.

The ``perform`` method implements the ``Op``'s mathematical logic in Python.
The inputs (here ``x``) are passed by value, but a single output is returned
indirectly as the first element of single-element lists. If ``doubleOp1`` had
a second output, it would be stored in ``output_storage[1][0]``.
@@ -408,9 +408,9 @@ a second output, it would be stored in ``output_storage[1][0]``.
In some execution modes, the output storage might contain the return value of
a previous call. That old value can be reused to avoid memory re-allocation,
but it must not influence the semantics of the ``Op`` output.

You can try the new ``Op`` as follows:

.. testcode:: example
...
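The body of the ``testcode`` block is elided in this view; a usage sketch in
its spirit (names and shapes assumed, reusing the ``DoubleOp1`` sketch above):

.. code-block:: python

    import numpy as np
    import aesara
    import aesara.tensor as at

    x = at.matrix("x")
    f = aesara.function([x], DoubleOp1()(x))

    inp = np.random.random_sample((5, 4)).astype(aesara.config.floatX)
    # The Op doubles its input elementwise.
    assert np.allclose(f(inp), inp * 2)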
@@ -482,8 +482,8 @@ Example: __props__ definition
We can modify the previous piece of code in order to demonstrate
the usage of the :attr:`__props__` attribute.

We create an ``Op`` that takes a variable ``x`` and returns ``a*x+b``.
We want to say that two such ``Op``s are equal when their values of ``a``
and ``b`` are equal.

.. testcode:: properties
...
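The ``testcode`` body is again elided; a sketch of such an ``Op`` (the class
name ``AXPBOp`` and the exact structure are assumptions):

.. code-block:: python

    import aesara.tensor as at
    from aesara.graph.basic import Apply
    from aesara.graph.op import Op


    class AXPBOp(Op):
        """Computes ``a*x + b``; two instances with equal ``a`` and ``b``
        compare equal and hash identically thanks to ``__props__``."""

        __props__ = ("a", "b")

        def __init__(self, a, b):
            self.a = a
            self.b = b
            super().__init__()

        def make_node(self, x):
            x = at.as_tensor_variable(x)
            return Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            (x,) = inputs
            output_storage[0][0] = self.a * x + self.b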
@@ -556,7 +556,7 @@ in a file and execute it with the ``pytest`` program.
Basic Tests
^^^^^^^^^^^

Basic tests are done by you just by using the ``Op`` and checking that it
returns the right answer. If you detect an error, you must raise an
*exception*. You can use the ``assert`` keyword to automatically raise an
``AssertionError``.
@@ -593,8 +593,8 @@ Testing the infer_shape
^^^^^^^^^^^^^^^^^^^^^^^

When a class inherits from the ``InferShapeTester`` class, it gets the
``self._compile_and_check`` method that tests the ``Op``'s ``infer_shape``
method. It tests that the ``Op`` gets optimized out of the graph if only
the shape of the output is needed and not the output
itself. Additionally, it checks that the optimized graph computes
the correct shape, by comparing it to the actual shape of the computed
@@ -603,7 +603,7 @@ output.
``self._compile_and_check`` compiles an Aesara function. It takes as
parameters the lists of input and output Aesara variables, as would be
provided to ``aesara.function``, and a list of real values to pass to the
compiled function. It also takes the ``Op`` class as a parameter
in order to verify that no instance of it appears in the shape-optimized graph.

If there is an error, the function raises an exception. If you want to
@@ -617,7 +617,7 @@ same value have been mixed up. For instance, if the infer_shape uses
the width of a matrix instead of its height, then testing with only
square matrices will not detect the problem. This is why the
``self._compile_and_check`` method prints a warning in such a case. If
your ``Op`` works only with such matrices, you can disable the warning with the
``warn=False`` parameter.

.. testcode:: tests
@@ -641,7 +641,7 @@ Testing the gradient
^^^^^^^^^^^^^^^^^^^^

The function :ref:`verify_grad <validating_grad>`
verifies the gradient of an ``Op`` or Aesara graph. It compares the
analytic (symbolically computed) gradient and the numeric
gradient (computed through the Finite Difference Method).
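
A usage sketch (it is assumed here that ``verify_grad`` lives in
``aesara.gradient`` and accepts a ``numpy.random.RandomState``; the test
point is arbitrary):

.. code-block:: python

    import numpy as np
    from aesara.gradient import verify_grad

    rng = np.random.RandomState(42)
    # Compare DoubleOp1's symbolic gradient against finite differences
    # at a random test point; raises if they disagree.
    verify_grad(DoubleOp1(), [rng.random_sample((3, 4))], rng=rng)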
@@ -663,7 +663,7 @@ Testing the Rop
The class :class:`RopLop_checker` defines the functions
:func:`RopLop_checker.check_mat_rop_lop`, :func:`RopLop_checker.check_rop_lop` and
:func:`RopLop_checker.check_nondiff_rop`. These allow testing the
implementation of the Rop method of a particular ``Op``.

For instance, to verify the Rop method of the DoubleOp, you can use this:
@@ -745,9 +745,9 @@ as_op
-----

``as_op`` is a Python decorator that converts a Python function into a
basic Aesara ``Op`` that will call the supplied function during execution.

This isn't the recommended way to build an ``Op``, but allows for a quick
implementation.
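
A sketch of the decorator in use, based on the ``numpy_dot`` example
mentioned below (the import path ``aesara.compile.ops.as_op`` is an
assumption carried over from Theano's layout):

.. code-block:: python

    import numpy as np
    import aesara.tensor as at
    from aesara.compile.ops import as_op


    @as_op(itypes=[at.dmatrix, at.dmatrix], otypes=[at.dmatrix])
    def numpy_dot(a, b):
        # Plain NumPy code runs at execution time; no grad is defined.
        return np.dot(a, b)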
It takes an optional :func:`infer_shape` parameter that must have this

@@ -766,14 +766,14 @@ signature:
.. note::

    Not providing the `infer_shape` method prevents shape-related
    optimizations from working with this ``Op``. For example,
    `your_op(inputs, ...).shape` will need the ``Op`` to be executed just
    to get the shape.

.. note::

    As no grad is defined, this means you won't be able to
    differentiate paths that include this ``Op``.

.. note::
@@ -818,12 +818,11 @@ You can try it as follows:
Exercise
^^^^^^^^

Run the code of the ``numpy_dot`` example above.

Modify and execute it to compute ``numpy.add`` and ``numpy.subtract``.

Modify and execute the example to return two outputs: ``x + y`` and ``x - y``.
.. _Documentation:

@@ -835,14 +834,14 @@ will not be accepted.
NanGuardMode and AllocEmpty
---------------------------

``NanGuardMode`` helps users find where in the graph NaNs appear. But
sometimes, we want some variables to not be checked. For example, in
the old GPU back-end, we use a float32 CudaNdarray to store the MRG
random number generator state (they are integers). So if ``NanGuardMode``
checks it, it will generate false positives. Another case is related to
``[Gpu]AllocEmpty`` or some computation on it (like done by ``Scan``).

You can tell ``NanGuardMode`` not to check a variable with
``variable.tag.nan_guard_mode_check``. Also, this tag automatically
follows that variable during optimization. This means that if you tag a
variable that gets replaced by an inplace version, it will keep that
@@ -855,7 +854,7 @@ Final Note
A more extensive discussion of this section's content may be found in
the advanced tutorial :ref:`Extending Aesara<extending>`.

The section :ref:`Other ``Op``s <other_ops>` includes more instructions for
the following specific cases:

- :ref:`scalar_ops`

...
doc/extending/inplace.txt

@@ -5,23 +5,17 @@
Views and inplace operations
============================

Aesara allows the definition of ``Op``s which return a :term:`view` on one
of their inputs or operate :term:`inplace` on one or several
inputs. This allows more efficient operations on NumPy's ``ndarray``
data type than would be possible otherwise.
However, in order to work correctly, these ``Op``s need to
implement an additional interface.

Aesara recognizes views and inplace operations specially. It ensures
that they are used in a consistent manner and it ensures that
operations will be carried out in a compatible order.
.. _views:

Views

@@ -50,7 +44,7 @@ range ``0xDEADBEFF - 0xDEADBFDF`` and z the range ``0xCAFEBABE -
0xCAFEBBBE``. Since the ranges for ``x`` and ``y`` overlap, ``y`` is
considered to be a view of ``x`` and vice versa.

Suppose you had an ``Op`` which took ``x`` as input and returned
``y``. You would need to tell Aesara that ``y`` is a view of ``x``. For this
purpose, you would set the ``view_map`` field as follows:
...
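The snippet itself is elided above; the standard idiom is a dict mapping
output indices to the inputs they view, sketched below (a hypothetical ``Op``
whose single output views its single input):

.. code-block:: python

    from aesara.graph.op import Op

    class ViewOfInput(Op):  # hypothetical
        # Output 0 is a view of input 0.
        view_map = {0: [0]}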
@@ -126,7 +120,7 @@ operation on ``x``.
    r4 = log(r2)

Needless to say, this goes for user-defined inplace operations as
well; the modified input must figure in the list of outputs you
give to ``Apply`` in the definition of ``make_node``.
Also, for technical reasons but also because they are slightly

@@ -140,13 +134,13 @@ operation on ``x``.
introduces inconsistencies.

Take the previous definitions of ``x``, ``y`` and ``z`` and suppose an ``Op`` which
adds one to every byte of its input. If we give ``x`` as an input to
that ``Op``, it can either allocate a new buffer of the same size as ``x``
(that could be ``z``) and set that new buffer's bytes to the value of
the addition. That would be a normal, :term:`pure` ``Op``. Alternatively,
it could add one to each byte *in* the buffer ``x``, therefore
changing it. That would be an inplace ``Op``.

Aesara needs to be notified of this fact. The syntax is similar to
that of ``view_map``:
...
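Again the snippet is elided; a sketch of the analogous idiom (a hypothetical
``Op`` whose output 0 destroys input 0):

.. code-block:: python

    from aesara.graph.op import Op

    class AddOneInplace(Op):  # hypothetical
        # Output 0 is computed by overwriting (destroying) input 0.
        destroy_map = {0: [0]}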
@@ -181,11 +175,11 @@ Destructive Operations
======================

While some operations will operate inplace on their inputs, some might
simply destroy or corrupt them. For example, an ``Op`` could do temporary
calculations right in its inputs. If that is the case, Aesara also
needs to be notified. The way to notify Aesara is to assume that some
output operated inplace on whatever inputs are changed or corrupted by
the ``Op`` (even if the output does not technically reuse any of the
input(s)'s memory). From there, go to the previous section.
@@ -203,24 +197,24 @@ input(s)'s memory). From there, go to the previous section.
certainly lead to erroneous computations.

You can often identify an incorrect ``view_map`` or ``destroy_map``
by using :ref:`DebugMode`. *Be sure to use ``DebugMode`` when developing
a new ``Op`` that uses ``view_map`` and/or ``destroy_map``.*

Inplace optimization and DebugMode
==================================

It is recommended that during graph construction all ``Op``s are not inplace;
an optimization then replaces them with inplace ones. Currently ``DebugMode``
checks all optimizations that were tried even if they got rejected. One reason
an inplace optimization can get rejected is when there is another ``Op`` that
is already being applied inplace on the same input. Another reason to reject
an inplace optimization is if it would introduce a cycle into the graph.

The problem with ``DebugMode`` is that it will trigger a useless error when
checking a rejected inplace optimization, since it will lead to wrong results.
In order to be able to use ``DebugMode`` in more situations, your inplace
optimization can pre-check whether it will get rejected by using the
``aesara.graph.destroyhandler.fast_inplace_check()`` function, which will tell
which ``Op``s can be performed inplace. You may then skip the optimization if
it is incompatible with this check. Note however that this check does not
cover all cases where an optimization may be rejected (it will not detect
cycles).