Commit 261bfadf authored by nouiz

Merge pull request #137 from pascanur/op_documentations

Op documentation
@@ -10,8 +10,8 @@ Theano graphs
- Theano works with symbolic graphs
- Those graphs are bi-partite graphs (graphs with 2 types of nodes)
- The 2 types are Apply nodes and Variable nodes
- Apply nodes have a link to the Op they execute
Inputs and Outputs are lists of Theano variables (see the inspection sketch below)
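As a rough illustration of these two node types (a sketch added for clarity, not part of the original changeset; the variable names are just examples), the snippet below builds a tiny expression and inspects the Apply node that owns its output:

.. code-block:: python

    # Every Variable produced by an Op has an `owner`: the Apply node that
    # links it to the Op and to the input Variables.
    import theano.tensor as T

    x = T.vector('x')        # a Variable node
    y = x * 2                # building the expression creates an Apply node
    print(y.owner)           # the Apply node that produced y
    print(y.owner.op)        # the Op that this Apply node executes
    print(y.owner.inputs)    # the list of input Variables of the Apply node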
@@ -50,33 +50,35 @@ Op contract
.. ../extending/op.txt
There are 2 mandatory methods that one needs to implement. The first one
is :func:`make_node`. The second one describes the computations that have
to be done at run time. Currently there are 2 different possibilities:
implement :func:`perform` and/or :func:`c_code <Op.c_code>` (and the other
related :ref:`c methods <cop>`), or implement the :func:`make_thunk` method.
``perform`` allows you to easily wrap an existing python function into
Theano. ``c_code`` and the related methods let the op generate c code that
will be compiled and linked by Theano. On the other hand, the ``make_thunk``
method is called only once, during compilation, and should generate a
``thunk``: a standalone function that, when called, does the wanted
computations. This is useful if you want to generate code and compile it
yourself; for example, it allows you to use PyCUDA to compile gpu code.
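As a hedged sketch of these two mandatory methods, consider a hypothetical ``DoubleOp`` (not an op defined in Theano) that multiplies its input by 2; the later snippets in this document extend this same hypothetical class:

.. code-block:: python

    import theano
    import theano.tensor as T

    class DoubleOp(theano.Op):
        """Hypothetical example op that multiplies its input by 2."""

        def make_node(self, x):
            x = T.as_tensor_variable(x)
            # Build the Apply node: this op, its input Variables and the
            # output Variables (here, one output of the same type as x).
            return theano.Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            # Called at run time with numeric inputs; results are written
            # into the provided output storage cells.
            x, = inputs
            output_storage[0][0] = x * 2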
There are also 2 methods that are highly recommended to be implemented. They
are needed by the optimization that merges duplicate computations involving
your op, so if you do not want Theano to execute your op multiple times on
the same inputs, do implement them. Those methods are :func:`__eq__` and
:func:`__hash__`.
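For the hypothetical ``DoubleOp`` above, which has no parameters, a minimal sketch of these two methods compares instances by type only; an op with parameters should include them in both methods:

.. code-block:: python

    # Sketch, continuing the hypothetical DoubleOp above.
    def __eq__(self, other):
        # Two parameter-free instances compute exactly the same thing.
        return type(self) == type(other)

    def __hash__(self):
        # Must be consistent with __eq__.
        return hash(type(self))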
The :func:`infer_shape` method allows Theano to infer the shape of a
variable somewhere in the middle of the computational graph without
actually computing the outputs (when possible). This is helpful if one only
needs the shape of an output instead of the actual output.
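As a usage sketch (``my_op`` is a hypothetical op instance, assumed to implement ``infer_shape``), a function that only returns the output's shape can often be compiled so that the op itself is never executed:

.. code-block:: python

    import theano
    import theano.tensor as T

    x = T.matrix('x')
    f = theano.function([x], my_op(x).shape)   # my_op is hypothetical
    # With infer_shape implemented, the compiled graph can usually answer
    # this query from the input's shape alone, without running the op.
    theano.printing.debugprint(f)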
The :func:`grad` method is required if you want to differentiate some cost
whose expression includes your op.
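A hedged sketch for the hypothetical ``DoubleOp`` above: since :math:`f(x) = 2x`, the gradient of a cost with respect to the input is twice the gradient with respect to the output, and ``grad`` must return one term per input:

.. code-block:: python

    # Sketch, continuing the hypothetical DoubleOp above.
    def grad(self, inputs, output_grads):
        x, = inputs
        gz, = output_grads
        return [gz * 2]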
The :func:`__str__` method is useful to generate a better name for your op when printing.
The :func:`R_op` is needed if you want theano.tensor.Rop to work with your op.
@@ -142,13 +142,14 @@ following methods:
Optional.
This function is needed for the shape optimization. ``shapes`` is a list
with one tuple for each input of the Apply node (which corresponds to the
inputs of the op). Each tuple contains 1 element for each dimension of the
corresponding input. The value is the shape (number of elements) along the
corresponding dimension of that specific input.

While this might sound complicated, it is nothing more than the shape of
each input expressed as symbolic variables (one per dimension).
The function should return a list with one tuple for each output.
Each tuple should contain the corresponding output's shape.
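For the hypothetical element-wise ``DoubleOp`` sketched earlier, the output has exactly the same shape as the input, so a minimal ``infer_shape`` can simply pass the input shape through:

.. code-block:: python

    # Sketch, continuing the hypothetical DoubleOp above.
    def infer_shape(self, node, shapes):
        input_shape, = shapes     # one tuple of symbolic scalars per input
        return [input_shape]      # one tuple per output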
@@ -161,9 +162,30 @@ following methods:
Optional.
This function implements the application of the R-operator on the function
represented by your op. Let us assume that function is :math:`f`, with
input :math:`x`; applying the R-operator means computing the Jacobian of
:math:`f` and right-multiplying it by :math:`v`, the evaluation point,
namely: :math:`\frac{\partial f}{\partial x} v`.

``inputs`` are the symbolic variables corresponding to the value of the
input where you want to evaluate the Jacobian, and ``eval_points`` are the
symbolic variables corresponding to the value you want to right-multiply
the Jacobian with.

The same conventions as for the :func:`grad` method hold. If your op is not
differentiable, you can return None. Note that in contrast to :func:`grad`,
:func:`R_op` needs to return the same number of outputs as there are
outputs of the op. You can think of it in the following terms. You have all
your inputs concatenated into a single vector :math:`x`. You do the same
with the evaluation points (which are as many as the inputs and of the same
shape) and obtain another vector :math:`v`. For each output, you reshape it
into a vector, compute the Jacobian of that vector with respect to
:math:`x` and multiply it by :math:`v`. As a last step you reshape each of
these vectors (which have the same shape as the outputs) back to their
corresponding shapes and return them as the output of the :func:`R_op`
method.
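A hedged sketch for the hypothetical ``DoubleOp`` above: its Jacobian is twice the identity, so right-multiplying it by the evaluation point :math:`v` simply gives :math:`2v`:

.. code-block:: python

    # Sketch, continuing the hypothetical DoubleOp above.
    def R_op(self, inputs, eval_points):
        # eval_points can contain None for inputs that are not perturbed.
        if eval_points[0] is None:
            return [None]
        return [eval_points[0] * 2]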
.. attribute:: default_output

@@ -180,15 +202,15 @@ following methods:
Syntactic shortcut to make_node which returns the output Variables of the Op.
*Default:* this is implemented in the parent class and you do not need to change it.
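Usage sketch (with the hypothetical ``DoubleOp`` from earlier): instantiating the op and calling it on a Variable goes through ``__call__``, which invokes ``make_node`` and returns the output Variable(s):

.. code-block:: python

    import theano
    import theano.tensor as T

    x = T.matrix('x')
    y = DoubleOp()(x)             # __call__ -> make_node -> output Variable
    f = theano.function([x], y)   # compile and use it like any other op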
.. function:: __str__()

*Default:* python default: module_path_to_your_class.CLASSNAME
This allows for better printing of the Op. If the Op is parameterizable, it
is highly recommended to implement this method, showing the value of the
different parameters in the current instance's name.
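A sketch for a hypothetical op parameterized by ``self.axis`` (the attribute name is just an example):

.. code-block:: python

    def __str__(self):
        # Show the class name together with the parameter values.
        return "%s{axis=%s}" % (self.__class__.__name__, self.axis)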
At a bare minimum, a new Op must define ``make_node`` and ``perform``, which have no defaults.