Commit 7409025b authored by Pascal Lamblin

more doc formatting

Parent 5c231a77
@@ -5,6 +5,8 @@

List of Env Features
====================

See :api:`gof.env.Env`.

WRITEME

.. _nodefinder:

@@ -12,4 +14,6 @@ WRITEME

NodeFinder
==========

See :api:`gof.toolbox.NodeFinder`.

WRITEME
@@ -35,7 +35,7 @@ There are fewer methods to define for an Op than for a Type:

   This must return C code that cleans up whatever c_code allocated and
   that we must free.

   *Default:* The default behavior is to do nothing.

.. function:: c_compile_args()
              c_headers()
@@ -118,14 +118,14 @@ version that it produces in the code I gave above.

.. code-block:: python

   from theano import gof

   class BinaryDoubleOp(gof.Op):

       def __init__(self, name, fn, ccode):
           self.name = name
           self.fn = fn
           self.ccode = ccode

       def make_node(self, x, y):
           if isinstance(x, (int, float)):
               x = gof.Constant(double, x)
@@ -134,29 +134,29 @@ version that it produces in the code I gave above.

           if x.type != double or y.type != double:
               raise TypeError('%s only works on doubles' % self.name)
           return gof.Apply(self, [x, y], [double()])

       def perform(self, node, (x, y), (z, )):
           z[0] = self.fn(x, y)

       def __str__(self):
           return self.name

       def c_code(self, node, name, (x, y), (z, ), sub):
           return self.ccode % locals()

   add = BinaryDoubleOp(name = 'add',
                        fn = lambda x, y: x + y,
                        ccode = "%(z)s = %(x)s + %(y)s;")

   sub = BinaryDoubleOp(name = 'sub',
                        fn = lambda x, y: x - y,
                        ccode = "%(z)s = %(x)s - %(y)s;")

   mul = BinaryDoubleOp(name = 'mul',
                        fn = lambda x, y: x * y,
                        ccode = "%(z)s = %(x)s * %(y)s;")

   div = BinaryDoubleOp(name = 'div',
                        fn = lambda x, y: x / y,
                        ccode = "%(z)s = %(x)s / %(y)s;")
@@ -151,7 +151,7 @@ Op instance is written so that:

.. code-block:: python

   e = dscalar('x') + 1

builds the following graph:
@@ -7,8 +7,8 @@ Views and inplace operations

Theano allows the definition of Ops which return a :term:`view` on one
of their inputs or operate :term:`inplace` on one or several
inputs. This allows more efficient operations on numpy's ``ndarray``
data type than would be possible otherwise.
However, in order to work correctly, these Ops need to
implement an additional interface.
@@ -29,7 +29,7 @@ Views

A "view" on an object ``x`` is an object ``y`` which shares memory
with ``x`` in some way. In other words, changing ``x`` might also
change ``y`` and vice versa. For example, imagine a ``vector`` structure
which contains two fields: an integer length and a pointer to a memory
buffer. Suppose we have:
@@ -51,7 +51,7 @@ range ``0xDEADBEFF - 0xDEADBFDF`` and z the range ``0xCAFEBABE -

considered to be a view of ``x`` and vice versa.

Suppose you had an Op which took ``x`` as input and returned
``y``. You would need to tell Theano that ``y`` is a view of ``x``. For this
purpose, you would set the ``view_map`` field as follows:
@@ -103,7 +103,7 @@ operation on ``x``.

.. code-block:: python

   x, y = dscalars('x', 'y')
   r1 = log(x)
   # r2 is x AFTER the add_inplace - x still represents the value before adding y
@@ -119,7 +119,7 @@ operation on ``x``.

Needless to say, this goes for user-defined inplace operations as
well: the modified input must figure in the list of outputs you
give to ``Apply`` in the definition of ``make_node``.

Also, for technical reasons, but also because they are slightly
confusing to use, as evidenced by the previous code, Theano does not
@@ -132,7 +132,7 @@ operation on ``x``.

introduces inconsistencies.

Take the previous definitions of ``x``, ``y`` and ``z`` and suppose an Op which
adds one to every byte of its input. If we give ``x`` as an input to
that Op, it can either allocate a new buffer of the same size as ``x``
(that could be ``z``) and set that new buffer's bytes to the variable of
@@ -141,7 +141,7 @@ it could add one to each byte *in* the buffer ``x``, therefore

changing it. That would be an inplace Op.

Theano needs to be notified of this fact. The syntax is similar to
that of ``view_map``:

.. code-block:: python

@@ -160,10 +160,10 @@ first input (position 0).

   myop.destroy_map = {1: [0]} # second output operates inplace on first input

   myop.destroy_map = {0: [0], # first output operates inplace on first input
                       1: [1]} # *AND* second output operates inplace on second input

   myop.destroy_map = {0: [0], # first output operates inplace on first input
                       1: [0]} # *AND* second output *ALSO* operates inplace on first input

   myop.destroy_map = {0: [0, 1]} # first output operates inplace on both the first and second input
                                  # unlike for views, the previous line is legal and supported
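All of the examples above share one shape: a dict mapping an output index to a list of input indices. A small sketch of a checker for that shape (``check_destroy_map`` is a hypothetical helper for illustration, not part of Theano):

```python
# Hypothetical helper: verify a destroy_map has the documented shape
# {output_index: [input_index, ...]} for an Op with the given arity.
def check_destroy_map(destroy_map, n_inputs, n_outputs):
    for out_idx, in_list in destroy_map.items():
        assert 0 <= out_idx < n_outputs, "output index out of range"
        assert isinstance(in_list, list), "values must be lists of input indices"
        for in_idx in in_list:
            assert 0 <= in_idx < n_inputs, "input index out of range"
    return True

# The last example above: the first output destroys both inputs.
assert check_destroy_map({0: [0, 1]}, n_inputs=2, n_outputs=1)
```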
@@ -194,7 +194,7 @@ input(s)'s memory). From there, go to the previous section.

the value of ``x`` it might invert the order and that will
certainly lead to erroneous computations.

You can often identify an incorrect ``view_map`` or ``destroy_map``
by using :ref:`DebugMode`. *Be sure to use DebugMode when developing
a new Op that uses ``view_map`` and/or ``destroy_map``.*
@@ -12,16 +12,17 @@ computations. We'll start by defining multiplication.

Op's contract
=============

An Op (:api:`gof.op.Op`) is any object which defines the
following methods:

.. function:: make_node(*inputs)

   This method is responsible for creating output Variables of a
   suitable Type to serve as the outputs of this Op's application.
   This method should put these outputs into an Apply instance, and
   return the Apply instance.

   This method creates an Apply node representing the application of
   the Op on the inputs provided. If the Op cannot be applied on
   these inputs, it must raise an appropriate exception.
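The contract above can be mimicked in a few lines of plain Python (a toy model, not Theano code; the real ``Variable`` and ``Apply`` classes live in ``gof`` and carry more state):

```python
# Toy mimic of the make_node contract: type-check the inputs, create
# fresh output Variables, and tie everything together in an Apply node.
class Variable:
    def __init__(self, type_name):
        self.type = type_name
        self.owner = None            # set by the Apply that produces it

class Apply:
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs
        for out in outputs:
            out.owner = self         # each output remembers its node

class AddOp:
    def make_node(self, x, y):
        if x.type != 'double' or y.type != 'double':
            raise TypeError('AddOp only works on doubles')
        return Apply(self, [x, y], [Variable('double')])

node = AddOp().make_node(Variable('double'), Variable('double'))
assert node.outputs[0].owner is node
```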
@@ -30,13 +31,13 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   ordered correctly: a subsequent ``self.make_node(*apply.inputs)``
   must produce something equivalent to the first ``apply``.

.. attribute:: default_output

   *Default:* None

   If this member variable is an integer, then the default
   implementation of ``__call__`` will return
   ``node.outputs[self.default_output]``, where ``node`` was returned
   by ``make_node``. Otherwise, the entire list of outputs will be
   returned.
@@ -45,7 +46,7 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   Syntactic shortcut to ``make_node`` which returns the output
   Variables of the Op.

   *Default:* this is done for you by Op.

.. function:: perform(node, inputs, output_storage)
@@ -64,26 +65,26 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   - ``output_storage``: This is a list of storage cells.
     A storage cell is a one-element list. It is forbidden to change
     the length of the list(s) contained in ``output_storage``. There is
     one storage cell for each output of the Op.

     The data you put in ``output_storage`` must match the type of the
     symbolic output. This is a situation where the ``node`` argument
     can come in handy.

     A function Mode may allow ``output_storage`` elements to persist between
     evaluations, or it may reset ``output_storage`` cells to hold a value of
     ``None``. This feature can allow ``perform`` to reuse memory between
     calls, for example.
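The storage-cell protocol is easy to demonstrate outside Theano. The sketch below (a stand-alone toy, not the real dispatch machinery) shows a ``perform`` that writes its result into ``cell[0]`` rather than replacing the cell:

```python
# Sketch of the storage-cell protocol: output_storage is a list of
# one-element lists, and perform writes its result into cell[0].
def perform(node, inputs, output_storage):
    x, y = inputs
    z, = output_storage       # one cell per output
    z[0] = x * y              # fill the cell; never replace the cell itself

cell = [None]                 # the Mode may hand perform a reset cell
perform(None, (3.0, 4.0), [cell])
assert cell == [12.0]
```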
   This method must be determined by the inputs. That is to say, if
   it is evaluated once on inputs A and returned B, then if ever
   inputs C, equal to A, are presented again, then outputs equal to
   B must be returned again.

   You must be careful about aliasing outputs to inputs, and making
   modifications to any of the inputs. See :ref:`Views and inplace
   operations <views_and_inplace>` before writing a ``perform``
   implementation that does either of these things.

.. function:: __eq__(other)
@@ -95,20 +96,21 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   (from perform) as this one, given identical inputs. This means it
   will produce the same output values, it will destroy the same
   inputs (same destroy_map), and will alias outputs to the same
   inputs (same view_map). For more details, see
   :ref:`views_and_inplace`.

.. function:: __hash__()

   If two Op instances compare equal, then they **must** return the
   same hash value.

   Equally important, this hash value must not change during the
   lifetime of self. Op instances should be immutable in this
   sense.

.. function:: __ne__(other)

   *Default:* ``(not (self==other))``

.. function:: grad(inputs, output_gradients)
@@ -116,30 +118,28 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   If the Op you are defining is differentiable, you can define its
   gradient symbolically in this method.

   Both the ``inputs`` and ``output_gradients`` will be
   Variables. This method must return a list containing one Variable
   (or ``None``) for each input. Each returned Variable represents the
   gradient with respect to that input given the symbolic gradients
   with respect to each output.

   If the output is not differentiable with respect to any inputs,
   then this method should be defined to return ``[None for i in
   inputs]``.
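For a non-differentiable Op the contract collapses to one line, as the text says. A minimal sketch (``ArgmaxLikeOp`` is a made-up name for illustration, not a Theano class):

```python
# Sketch: grad for a non-differentiable Op returns one None per input,
# exactly as the contract above prescribes.
class ArgmaxLikeOp:
    def grad(self, inputs, output_gradients):
        return [None for i in inputs]

g = ArgmaxLikeOp().grad(['x', 'y'], ['gz'])
assert g == [None, None]
```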
   If this method is not defined, then Theano assumes it has been
   forgotten. Symbolic differentiation will fail on a graph that
   includes this Op.

For each method, the *default* is what :api:`theano.gof.op.Op` defines
for you. At a bare minimum, a new Op must define ``make_node`` and
``perform``, which have no defaults.

For more details, including the interface for providing a C
implementation of ``perform()``, refer to the documentation for :ref:`op`.

Defining an Op: ``mul``
@@ -252,7 +252,7 @@ AttributeError: 'int' object has no attribute 'type'

Automatic Constant Wrapping
---------------------------

Well, OK. We'd like our Op to be a bit more flexible. This can be done
by modifying ``make_node`` to accept Python ``int`` or ``float`` as
``x`` and/or ``y``:
@@ -18,7 +18,7 @@ Env is a wrapper around a whole computation graph, you can see its

:ref:`documentation <env>` for more details) and navigates through it
in a suitable way, replacing some Variables by others in the process. A
local optimization, on the other hand, is defined as a function on a
*single* :ref:`apply` node and must return either ``False`` (to mean that
nothing is to be done) or a list of new Variables that we would like to
replace the node's outputs with. A :ref:`navigator` is a special kind
of global optimization which navigates the computation graph in some
@@ -49,7 +49,7 @@ methods:

   This method takes an Env object and adds :ref:`features
   <envfeature>` to it. These features are "plugins" that are needed
   for the ``apply`` method to do its job properly.

.. function:: optimize(env)
@@ -69,7 +69,7 @@ A local optimization is an object which defines the following methods:

.. function:: transform(node)

   This method takes an :ref:`apply` node and returns either ``False`` to
   signify that no changes are to be done or a list of Variables which
   matches the length of the node's ``outputs`` list. When the
   LocalOptimizer is applied by a Navigator, the outputs of the node
@@ -99,9 +99,9 @@ Here is the code for a global optimization implementing the

simplification described above:

.. code-block:: python

   from theano.gof import toolbox

   class Simplify(gof.Optimizer):
       def add_requirements(self, env):
           env.extend(toolbox.ReplaceValidate())

@@ -116,38 +116,39 @@ simplification described above:

                   env.replace_validate(z, b)
               elif y == b:
                   env.replace_validate(z, a)

   simplify = Simplify()
Here's how it works: first, in ``add_requirements``, we add the
``ReplaceValidate`` :ref:`envfeature` located in
:api:`theano.gof.toolbox`. This feature adds the ``replace_validate``
method to ``env``, which is an enhanced version of ``replace`` that
does additional checks to ensure that we are not messing up the
computation graph (note: if ``ReplaceValidate`` was already added by
another optimizer, ``extend`` will do nothing). In a nutshell,
``toolbox.ReplaceValidate`` grants access to ``env.replace_validate``,
and ``env.replace_validate`` allows us to replace a Variable with
another while respecting certain validation constraints. You can
browse the list of :ref:`features <envfeaturelist>` and see if some of
them might be useful to write optimizations with. For example, as an
exercise, try to rewrite Simplify using :ref:`nodefinder`. (Hint: you
want to use the method it publishes instead of the call to ``toposort``!)
Then, in ``apply`` we do the actual job of simplification. We start by
iterating through the graph in topological order. For each node
encountered, we check if it's a ``div`` node. If not, we have nothing
to do here. If so, we put in ``x``, ``y`` and ``z`` the numerator,
denominator and quotient (output) of the division.
The simplification only occurs when the numerator is a multiplication,
so we check for that. If the numerator is a multiplication we put the
two operands in ``a`` and ``b``, so
we can now say that ``z == (a*b)/y``. If ``y==a`` then ``z==b`` and if
``y==b`` then ``z==a``. When either case happens then we can replace
``z`` by either ``a`` or ``b`` using ``env.replace_validate`` - else we do
nothing. You might want to check the documentation about :ref:`variable`
and :ref:`apply` to get a better understanding of the
pointer-following game you need to get ahold of the nodes of interest
for the simplification (``x``, ``y``, ``z``, ``a``, ``b``, etc.).
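The same pointer-following game can be played on a toy expression representation (nested tuples instead of Apply nodes; this is a stand-alone model, not Theano's API):

```python
# Toy model of the (a*b)/y simplification: expressions are nested tuples
# ('div', numerator, denominator) and ('mul', a, b); leaves are strings.
def simplify(expr):
    if isinstance(expr, tuple) and expr[0] == 'div':
        num, den = expr[1], expr[2]
        if isinstance(num, tuple) and num[0] == 'mul':
            a, b = num[1], num[2]
            if den == a:          # (a*b)/a -> b
                return b
            if den == b:          # (a*b)/b -> a
                return a
    return expr                   # nothing to do

assert simplify(('div', ('mul', 'a', 'b'), 'a')) == 'b'
assert simplify(('div', ('mul', 'a', 'b'), 'b')) == 'a'
assert simplify(('div', 'a', 'b')) == ('div', 'a', 'b')
```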
Test time:

@@ -217,7 +218,7 @@ The local version of the above code would be the following:

.. code-block:: python

   class LocalSimplify(gof.LocalOptimizer):
       def transform(self, node):
           if node.op == div:

@@ -234,7 +235,7 @@ The local version of the above code would be the following:

               # but it isn't now
               # TODO: do this and explain it
               return [] # that's not what you should do

   local_simplify = LocalSimplify()
The definition of ``transform`` is the inner loop of the global optimizer,
@@ -39,30 +39,30 @@ default values.

   ``filter(value, strict = True)`` does not raise an exception, the
   value is compatible with the Type.

   *Default:* True iff ``filter(value, strict = True)`` does not raise
   an exception.

.. function:: values_eq(a, b)

   Returns True iff ``a`` and ``b`` are equal.

   *Default:* ``a == b``

.. function:: values_eq_approx(a, b)

   Returns True iff ``a`` and ``b`` are approximately equal, for a
   definition of "approximately" which varies from Type to Type.

   *Default:* ``values_eq(a, b)``

.. function:: make_variable(name=None)

   Makes a :term:`Variable` of this Type with the specified name, if
   ``name`` is not ``None``. If ``name`` is ``None``, then the Variable does
   not have a name. The Variable will have its ``type`` field set to
   the Type object.

   *Default:* there is a generic definition of this in Type. The
   Variable's ``type`` will be the object that defines this method (in
   other words, ``self``).

@@ -70,21 +70,21 @@ default values.

   Syntactic shortcut to ``make_variable``.

   *Default:* ``make_variable``

.. function:: __eq__(other)

   Used to compare Type instances themselves.

   *Default:* ``object.__eq__``

.. function:: __hash__()

   Types should not be mutable, so it should be OK to define a hash
   function. Typically this function should hash all of the terms
   involved in ``__eq__``.

   *Default:* ``id(self)``

For each method, the *default* is what ``Type`` defines
for you. So, if you create an instance of ``Type`` or an
@@ -99,7 +99,7 @@ For more details you can go see the documentation for :ref:`type`.

Defining double
===============

We are going to base Type ``double`` on Python's ``float``. We
must define ``filter`` and shall override ``values_eq_approx``.
@@ -139,17 +139,17 @@ graph in such a way that it produces slightly different variables, for

example because of numerical instability like rounding errors at the
end of the mantissa. For instance, ``a + a + a + a + a + a`` might not
actually produce the exact same output as ``6 * a`` (try with a=0.1),
but with ``values_eq_approx`` we don't necessarily mind.

We added an extra ``tolerance`` argument here. Since this argument is
not part of the API, it must have a default value, which we
chose to be 1e-4.
.. note::

   ``values_eq`` is never actually used by Theano, but it might be used
   internally in the future. Equality testing in
   :ref:`DebugMode <debugmode>` is done using ``values_eq_approx``.

**Putting them together**
@@ -160,7 +160,7 @@ the Type is to instantiate a plain Type and set the needed fields:

.. code-block:: python

   from theano import gof

   double = gof.Type()
   double.filter = filter
@@ -175,19 +175,19 @@ and define ``filter`` and ``values_eq_approx`` in the subclass:

   from theano import gof

   class Double(gof.Type):

       def filter(self, x, strict=False):
           if strict and not isinstance(x, float):
               raise TypeError('Expected a float!')
           return float(x)

       def values_eq_approx(self, x, y, tolerance=1e-4):
           return abs(x - y) / (abs(x) + abs(y)) < tolerance

   double = Double()
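To see what ``filter`` buys us, here is the same pair of methods on a plain class (the ``gof.Type`` base is omitted so the snippet runs without Theano; this is a sketch of the behavior, not the real class):

```python
# Double's two methods on a plain class, to exercise strict vs
# non-strict filtering (gof.Type base omitted for self-containment).
class Double:
    def filter(self, x, strict=False):
        if strict and not isinstance(x, float):
            raise TypeError('Expected a float!')
        return float(x)

    def values_eq_approx(self, x, y, tolerance=1e-4):
        return abs(x - y) / (abs(x) + abs(y)) < tolerance

d = Double()
assert d.filter(1) == 1.0          # non-strict: ints are cast to float
try:
    d.filter(1, strict=True)       # strict: non-floats are rejected
except TypeError:
    pass
assert d.values_eq_approx(0.3, 0.1 + 0.2)
```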
``double`` is then an instance of Type ``Double``, which in turn is a
subclass of ``Type``.

There is a small issue with defining ``double`` this way. All
instances of ``Double`` are technically the same Type. However, different
@@ -199,7 +199,7 @@ instances of ``Double`` are technically the same Type. However, different

   False

Theano compares Types using ``==`` to see if they are the same.
This happens in DebugMode. Also, Ops can (and should) ensure that their inputs
have the expected Type by checking something like ``if x.type == lvector``.

There are several ways to make sure that equality testing works properly:
@@ -243,7 +243,7 @@ attempt to clear up the confusion:

  that Type instance. If you were to parse the C expression ``c = a +
  b;``, ``a``, ``b`` and ``c`` would all be Variable instances.

* A **subclass of Type** is a way of implementing
  a set of Type instances that share
  structural similarities. In the ``double`` example that we are doing,
  there is actually only one Type in that set, therefore the subclass
@@ -265,18 +265,18 @@ Final version

   from theano import gof

   class Double(gof.Type):

       def filter(self, x, strict=False):
           if strict and not isinstance(x, float):
               raise TypeError('Expected a float!')
           return float(x)

       def values_eq_approx(self, x, y, tolerance=1e-4):
           return abs(x - y) / (abs(x) + abs(y)) < tolerance

       def __str__(self):
           return "double"

   double = Double()