Commit b1f7979f authored by Arnaud Bergeron

Fixup extending/* and delete associated tests.

Parent 3e303fc9
@@ -253,8 +253,10 @@ We will be defining C code for the multiplication Op on doubles.

 **c_code**

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2
+.. testsetup::
+
+    from theano import Op
+    mul = Op()

 .. testcode::
@@ -298,10 +300,6 @@ As before, I tried to organize the code in order to minimize
 repetition. You can check that mul produces the same C code in this
 version that it produces in the code I gave above.

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2
-
 .. testcode::

     from theano import gof
...
@@ -159,9 +159,7 @@ Defining the methods

 .. testsetup::

     import theano
+    double = theano.Type()

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2

 **c_declare**
@@ -193,9 +191,6 @@ your Type. If you wish people to develop operations that make use of
 it, it's best to publish it somewhere.

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2
-
 **c_init**

 .. testcode::
@@ -222,9 +217,6 @@ you should only assume that either ``c_init`` or ``c_extract`` has been
 called, without knowing for sure which of the two.

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2
-
 **c_extract**

 .. testcode::
@@ -261,9 +253,6 @@ using the ``PyFloat_AsDouble`` function (yet again provided by CPython's C
 API) and we put it in our double variable that we declared previously.

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2
-
 **c_sync**

 .. testcode::
@@ -323,9 +312,6 @@ than sorry.
     do *NOT* decrease its reference count!

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2
-
 **c_cleanup**

 .. testcode::
@@ -374,14 +360,8 @@ depends on the relationship between Python and C with respect to
 that Variable. For instance, imagine you define the following function
 and call it:

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2
-
-.. testcode::
-
-    from theano import function
-    from theano.tensor import double
+.. code-block:: python

     x, y, z = double('x'), double('y'), double('z')
     a = add(x, y)
     b = mul(a, z)
@@ -463,9 +443,6 @@ multiplication block.

 Final version
 =============

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2
-
 .. testcode::

     from theano import gof
@@ -530,7 +507,7 @@ know how to generate C code.

 You can implement c_code for this op. You register it like this:

-.. testcode::
+.. code-block:: python

     theano.compile.ops.register_deep_copy_op_c_code(YOUR_TYPE_CLASS, THE_C_CODE, version=())
@@ -552,7 +529,7 @@ ViewOp to generate C code when working with this type, as
 otherwise it will use Python code instead. This is achieved by
 calling:

-.. testcode::
+.. code-block:: python

     theano.compile.ops.register_view_op_c_code(YOUR_TYPE_CLASS, THE_C_CODE, version=())
@@ -572,7 +549,7 @@ Theano Variable that has a shape attribute (Shape_i returns only one of
 the elements of the shape).

-.. testcode::
+.. code-block:: python

     theano.compile.ops.register_shape_c_code(YOUR_TYPE_CLASS, THE_C_CODE, version=())
     theano.compile.ops.register_shape_i_c_code(YOUR_TYPE_CLASS, THE_C_CODE, CHECK_INPUT, version=())
...
@@ -7,7 +7,7 @@ So suppose you have looked through the library documentation and you don't see a
 function that does what you want.

 If you can implement something in terms of existing Ops, you should do that.
 Odds are your function that uses existing Theano expressions is short,
 has no bugs, and potentially profits from optimizations that have already been
 implemented.
@@ -18,7 +18,7 @@ Theano was designed to make it easy to add new Ops, Types, and Optimizations.

 This section walks through a non-trivial example Op that does something pretty
 weird and unrealistic, that is hard to express with existing Ops.
 (Technically, we could use ``Scan`` to implement the Op we're about to describe,
 but we ignore that possibility for the sake of example.)
 The following code works, but important error-checking has been omitted for
@@ -26,55 +26,52 @@ clarity. For example, when you write C code that assumes memory is contiguous,
 you should check the strides and alignment.

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_fibby.test_fibby_1
-
 .. testcode::

     import theano

     class Fibby(theano.Op):
         """
         An arbitrarily generalized Fibonacci sequence
         """
         __props__ = ()

         def make_node(self, x):
             x_ = tensor.as_tensor_variable(x)
             assert x_.ndim == 1
             return theano.Apply(self,
                 inputs=[x_],
                 outputs=[x_.type()])
             # using x_.type() is dangerous, it copies x's broadcasting behaviour

         def perform(self, node, inputs, output_storage):
             x, = inputs
             y = output_storage[0][0] = x.copy()
             for i in range(2, len(x)):
                 y[i] = y[i-1] * y[i-2] + x[i]

         def c_code(self, node, name, inames, onames, sub):
             x, = inames
             y, = onames
             fail = sub['fail']
             return """
 Py_XDECREF(%(y)s);
 %(y)s = (PyArrayObject*)PyArray_FromArray(
     %(x)s, 0, NPY_ARRAY_ENSURECOPY);
 if (!%(y)s)
     %(fail)s;
 {//New scope needed to make compilation work
     dtype_%(y)s * y = (dtype_%(y)s*)PyArray_DATA(%(y)s);
     dtype_%(x)s * x = (dtype_%(x)s*)PyArray_DATA(%(x)s);
     for (int i = 2; i < PyArray_DIMS(%(x)s)[0]; ++i)
         y[i] = y[i-1]*y[i-2] + x[i];
 }
 """ % locals()

         def c_code_cache_version(self):
             return (1,)

     fibby = Fibby()

 At a high level, the code fragment declares a class (``Fibby``) and then
 creates one instance of it (``fibby``).
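The recurrence that ``perform`` implements is easy to check in isolation. As a dependency-free sketch (plain lists instead of ndarrays; the helper name ``fibby_reference`` is hypothetical):

```python
def fibby_reference(x):
    """Pure-Python reference for the Fibby recurrence:
    y[i] = y[i-1] * y[i-2] + x[i], with y[0] and y[1] copied from x."""
    y = list(x)  # simulate x.copy()
    for i in range(2, len(x)):
        y[i] = y[i - 1] * y[i - 2] + x[i]
    return y

print(fibby_reference([1, 1, 1, 1, 1]))  # → [1, 1, 2, 3, 7]
```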
@@ -82,7 +79,7 @@ We often gloss over this distinction, but will be precise here:
 ``fibby`` (the instance) is an Op, not ``Fibby`` (the class which is a subclass of ``theano.Op``).

 You can call ``fibby(tensor.vector())`` on a Variable to build an
 expression, and in the expression there will be a ``.op`` attribute that refers
 to ``fibby``.

 The first two methods in the Op are relatively boilerplate: ``__eq__`` and ``__hash__``.
 When two Ops are equal, Theano will merge their outputs if they are applied to the same inputs.
@@ -110,14 +107,14 @@ see wrong calculation.

 The ``make_node`` method creates a node to be included in the expression graph.
 It runs when we apply our Op (``fibby``) to Variable (``x``), as in ``fibby(tensor.vector())``.
 When an Op has multiple inputs, their order in the inputs argument to ``Apply``
 is important: Theano will call ``make_node(*inputs)`` to copy the graph,
 so it is important not to change the semantics of the expression by changing the argument order.

 All the ``inputs`` and ``outputs`` arguments to ``Apply`` must be Variables.
 A common and easy way to ensure inputs are variables is to run them through
 ``as_tensor_variable``.
 This function leaves TensorType variables alone, raises an
 error for non-TensorType variables, and copies any ``numpy.ndarray`` into the
 storage for a TensorType Constant.
@@ -125,7 +122,7 @@ The ``make_node`` method dictates the appropriate Type for all output
 variables.

 The ``perform`` method implements the Op's mathematical logic in Python.
 The inputs (here ``x``) are passed by value,
 but a single output is returned indirectly as the first element of
 single-element lists. If ``fibby`` had a second output, it would be stored
 in ``output_storage[1][0]``.
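That storage convention can be mimicked without Theano. A minimal sketch (function and variable names hypothetical) of how a ``perform``-style function writes its result through ``output_storage``:

```python
def perform_like(inputs, output_storage):
    # Each output cell is a one-element list; the result goes in
    # slot 0 of the first cell, just as Fibby's perform does.
    x, = inputs
    y = output_storage[0][0] = list(x)  # simulate x.copy()
    for i in range(2, len(x)):
        y[i] = y[i - 1] * y[i - 2] + x[i]

storage = [[None]]            # one output, initially [None]
perform_like([[1, 1, 1]], storage)
print(storage[0][0])          # the caller reads the result back here
```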
@@ -145,7 +142,7 @@ the correct size for the output. This is essentially simulating the line
 ``y = x.copy()``.

-.. testcode::
+.. code-block:: c

     Py_XDECREF(%(y)s);
     %(y)s = (PyArrayObject*)PyArray_FromArray(
@@ -155,7 +152,7 @@ The first line reduces the reference count of the data that y originally
 pointed to. The second line allocates the new data and makes y point to it.

 In C code for a theano op, numpy arrays are represented as ``PyArrayObject`` C
 structs. This is part of the numpy/scipy C API documented at
 http://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html

 TODO: NEEDS MORE EXPLANATION.
@@ -163,7 +160,7 @@ TODO: NEEDS MORE EXPLANATION.

 There are some important restrictions to remember when implementing an Op.
 Unless your Op correctly defines a ``view_map`` attribute, the ``perform`` and ``c_code`` must not
 produce outputs whose memory is aliased to any input (technically, if changing the
 output could change the input object in some sense, they are aliased).

 Unless your Op correctly defines a ``destroy_map`` attribute, ``perform`` and ``c_code`` must
 not modify any of the inputs.
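What "aliased" means here can be illustrated without Theano. A sketch (hypothetical helper names) contrasting an output that shares memory with its input against one that returns a fresh copy:

```python
def op_view(x):
    # Aliased: the "output" is the same object as the input.  An Op
    # behaving like this must declare it via view_map.
    return x

def op_copy(x):
    # Safe default: return fresh memory, no view_map needed.
    return list(x)

data = [1, 2, 3]
view, copy = op_view(data), op_copy(data)
data[0] = 99
print(view[0], copy[0])  # → 99 1: the view sees the mutation, the copy does not
```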
@@ -210,19 +207,19 @@ TODO: talk about OPTIMIZATION STAGES

 .. testcode::

     from theano.tensor.opt import get_scalar_constant_value, NotScalarConstantError

     # Remove any fibby(zeros(...))
     @theano.tensor.opt.register_specialize
     @theano.gof.local_optimizer([fibby])
     def fibby_of_zero(node):
         if node.op == fibby:
             x = node.inputs[0]
             try:
                 if numpy.all(0 == get_scalar_constant_value(x)):
                     return [x]
             except NotScalarConstantError:
                 pass

 The ``register_specialize`` decorator is what activates our optimization, and
 tells Theano to use it in the specialization stage.
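The shape of such a local rewrite is independent of Theano: inspect one node, and either return replacement outputs or ``None`` when the rewrite does not apply. A toy sketch with hypothetical stand-in node objects:

```python
class Node:
    """Minimal stand-in for an Apply node: an op tag plus inputs."""
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs

def fibby_of_zero_like(node):
    # Mirror of the optimization above: fibby applied to an all-zero
    # constant can be replaced by that constant itself.
    if node.op == "fibby":
        x = node.inputs[0]
        if isinstance(x, list) and all(v == 0 for v in x):
            return [x]
    return None  # no rewrite applies

print(fibby_of_zero_like(Node("fibby", [[0, 0, 0]])))  # → [[0, 0, 0]]
print(fibby_of_zero_like(Node("fibby", [[0, 1, 0]])))  # → None
```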
@@ -241,35 +238,33 @@ Here is some code to test that the optimization is applied only when needed.

 .. testcode::

     import numpy
     import theano.tensor as T
     from theano import function
     from theano import tensor

     # Test it does not apply when not needed
     x = T.dvector()
     f = function([x], fibby(x))
-    #theano.printing.debugprint(f)

     # We call the function to make sure it runs.
     # If you run in DebugMode, it will compare the C and Python outputs.
     f(numpy.random.rand(5))
     topo = f.maker.fgraph.toposort()
     assert len(topo) == 1
     assert isinstance(topo[0].op, Fibby)

     # Test that the optimization gets applied.
     f_zero = function([], fibby(T.zeros([5])))
-    #theano.printing.debugprint(f_zero)

     # If you run in DebugMode, it will compare the output before
     # and after the optimization.
     f_zero()

     # Check that the optimization removes the Fibby Op.
     # For security, the Theano memory interface ensures that the output
     # of the function is always memory not aliased to the input.
     # That is why there is a DeepCopyOp op.
     topo = f_zero.maker.fgraph.toposort()
     assert len(topo) == 1
     assert isinstance(topo[0].op, theano.compile.ops.DeepCopyOp)
@@ -20,11 +20,11 @@ should help you understand how these pieces fit together:

 .. testcode::

     import theano.tensor as T

     x = T.dmatrix('x')
     y = T.dmatrix('y')
     z = x + y

 **Diagram**
@@ -71,73 +71,67 @@ without any shortcuts, that will make the graph construction very explicit.
 This is what you would normally type:

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_graphstructures.test_graphstructures_1
-
 .. testcode::

     # create 3 Variables with owner = None
     x = T.matrix('x')
     y = T.matrix('y')
     z = T.matrix('z')

     # create 2 Variables (one for 'e', one intermediate for y*z)
     # create 2 Apply instances (one for '+', one for '*')
     e = x + y * z
 **Long example**

 This is what you would type to build the graph explicitly:

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_graphstructures.test_graphstructures_1
-
 .. testcode::

-    from theano.tensor import add, mul, Apply, Variable, TensorType
+    from theano.tensor import add, mul, Apply, Variable, Constant, TensorType

     # Instantiate a type that represents a matrix of doubles
     float64_matrix = TensorType(dtype='float64',             # double
                                 broadcastable=(False, False))  # matrix

     # We make the Variable instances we need.
     x = Variable(type=float64_matrix, name='x')
     y = Variable(type=float64_matrix, name='y')
     z = Variable(type=float64_matrix, name='z')

     # This is the Variable that we want to symbolically represent y*z
     mul_variable = Variable(type=float64_matrix)
     assert mul_variable.owner is None

     # Instantiate a symbolic multiplication
     node_mul = Apply(op=mul,
                      inputs=[y, z],
                      outputs=[mul_variable])
     # Fields 'owner' and 'index' are set by Apply
     assert mul_variable.owner is node_mul
     # 'index' is the position of mul_variable in node_mul's outputs
     assert mul_variable.index == 0

     # This is the Variable that we want to symbolically represent x+(y*z)
     add_variable = Variable(type=float64_matrix)
     assert add_variable.owner is None

     # Instantiate a symbolic addition
     node_add = Apply(op=add,
                      inputs=[x, mul_variable],
                      outputs=[add_variable])
     # Fields 'owner' and 'index' are set by Apply
     assert add_variable.owner is node_add
     assert add_variable.index == 0

     e = add_variable

     # We have access to x, y and z through pointers
     assert e.owner.inputs[0] is x
     assert e.owner.inputs[1] is mul_variable
     assert e.owner.inputs[1].owner.inputs[0] is y
     assert e.owner.inputs[1].owner.inputs[1] is z
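The owner/index bookkeeping that ``Apply`` performs can be sketched with two tiny stand-in classes (hypothetical, much simpler than Theano's real ones):

```python
class Var:
    """Stand-in for a Variable: owner/index are filled in by Apply."""
    def __init__(self, name=None):
        self.name, self.owner, self.index = name, None, None

class Apply:
    """Stand-in for an Apply node."""
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs
        for i, out in enumerate(outputs):
            out.owner, out.index = self, i  # point outputs back at this node

x, y, z = Var('x'), Var('y'), Var('z')
mul_out = Var()
node_mul = Apply('mul', [y, z], [mul_out])
add_out = Var()
node_add = Apply('add', [x, mul_out], [add_out])

# Walking owner pointers recovers the whole expression x + (y * z).
assert add_out.owner is node_add and add_out.index == 0
assert add_out.owner.inputs[1].owner.inputs[0] is y
```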
 Note how the call to ``Apply`` modifies the ``owner`` and ``index``
@@ -163,12 +157,11 @@ builds the following graph:

 .. testcode::

-    node = Apply(op = add,
-                 inputs = [Variable(type = dscalar, name = 'x'),
-                           Constant(type = lscalar, data = 1)],
-                 outputs = [Variable(type = dscalar)])
+    node = Apply(op=add,
+                 inputs=[Variable(type=T.dscalar, name='x'),
+                         Constant(type=T.lscalar, data=1)],
+                 outputs=[Variable(type=T.dscalar)])
     e = node.outputs[0]
 Graph Structures
@@ -402,39 +395,34 @@ In both types of pairs, the second element of the tuple is an index,
 such that: ``var.clients[*][0].inputs[index]`` or
 ``fgraph.outputs[index]`` is that variable.

-.. testcode::
-
-    import theano
-    v = theano.tensor.vector()
-    f = theano.function([v], (v+1).sum())
-    theano.printing.debugprint(f)
-    # Sorted list of all nodes in the compiled graph.
-    topo = f.maker.fgraph.toposort()
-    topo[0].outputs[0].clients
-    # [(Sum(Elemwise{add,no_inplace}.0), 0)]
-    topo[1].outputs[0].clients
-    # [('output', 0)]
-
-    # An internal variable
-    var = topo[0].outputs[0]
-    client = var.clients[0]
-    client
-    # (Sum(Elemwise{add,no_inplace}.0), 0)
-    type(client[0])
-    # <class 'theano.gof.graph.Apply'>
-    assert client[0].inputs[client[1]] is var
-
-    # An output of the graph
-    var = topo[1].outputs[0]
-    client = var.clients[0]
-    client
-    # ('output', 0)
-    assert f.maker.fgraph.outputs[client[1]] is var
-
-.. testoutput::
-
-    Sum{acc_dtype=float64} [@A] '' 1
-     |Elemwise{add,no_inplace} [@B] '' 0
-       |TensorConstant{(1,) of 1.0} [@C]
-       |<TensorType(float64, vector)> [@D]
+>>> import theano
+>>> v = theano.tensor.vector()
+>>> f = theano.function([v], (v+1).sum())
+>>> theano.printing.debugprint(f)
+Sum{acc_dtype=float64} [@A] ''   1
+ |Elemwise{add,no_inplace} [@B] ''   0
+   |TensorConstant{(1,) of 1.0} [@C]
+   |<TensorType(float64, vector)> [@D]
+>>> # Sorted list of all nodes in the compiled graph.
+>>> topo = f.maker.fgraph.toposort()
+>>> topo[0].outputs[0].clients
+[(Sum{acc_dtype=float64}(Elemwise{add,no_inplace}.0), 0)]
+>>> topo[1].outputs[0].clients
+[('output', 0)]

+>>> # An internal variable
+>>> var = topo[0].outputs[0]
+>>> client = var.clients[0]
+>>> client
+(Sum{acc_dtype=float64}(Elemwise{add,no_inplace}.0), 0)
+>>> type(client[0])
+<class 'theano.gof.graph.Apply'>
+>>> assert client[0].inputs[client[1]] is var

+>>> # An output of the graph
+>>> var = topo[1].outputs[0]
+>>> client = var.clients[0]
+>>> client
+('output', 0)
+>>> assert f.maker.fgraph.outputs[client[1]] is var
\ No newline at end of file
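The invariant quoted above, ``client[0].inputs[client[1]] is var``, is just back-pointer bookkeeping, and can be sketched with hypothetical stand-in classes that record clients as the graph is built:

```python
class Var:
    """Stand-in variable: clients is a list of (consumer node, input position)."""
    def __init__(self):
        self.owner, self.index, self.clients = None, None, []

class Apply:
    """Stand-in node that wires up both directions of the graph."""
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs
        for i, inp in enumerate(inputs):
            inp.clients.append((self, i))   # who consumes inp, and where
        for i, out in enumerate(outputs):
            out.owner, out.index = self, i

v, s = Var(), Var()
node = Apply('sum', [v], [s])

# The client tuple points straight back at the variable it indexes.
client = v.clients[0]
assert client[0].inputs[client[1]] is v
```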
@@ -55,6 +55,11 @@ Suppose you had an Op which took ``x`` as input and returned
 purpose, you would set the ``view_map`` field as follows:

+.. testsetup::
+
+    from theano import Op
+    myop = Op()
+
 .. testcode::

     myop.view_map = {0: [0]}
...
@@ -541,9 +541,6 @@ multiplication Op could take an arbitrary number of arguments.

 First, we'll instantiate a ``mul`` Op:

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_1
-
 .. testcode:: mul

     from theano import gof
@@ -558,9 +555,6 @@ two. This function ensures that both inputs have the ``double`` type.
 Since multiplying two doubles yields a double, this function makes an
 Apply node with an output Variable of type ``double``.

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_1
-
 .. testcode:: mul

     def make_node(x, y):
@@ -594,8 +588,6 @@ built-in type ``float`` because this is the type that ``double.filter()``
 will always return, per our own definition. ``output_storage`` will
 contain a single storage cell for the multiplication's variable.

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_1

 .. testcode:: mul

     def perform(node, inputs, output_storage):
@@ -626,31 +618,32 @@ Here, ``z`` is a list of one element. By default, ``z == [None]``.

 Trying out our new Op
 =====================

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_1
-
 In the following code, we use our new Op:

+.. doctest:: mul
+
 >>> import theano
 >>> x, y = double('x'), double('y')
 >>> z = mul(x, y)
 >>> f = theano.function([x, y], z)
 >>> f(5, 6)
 30.0
 >>> f(5.6, 6.7)
 37.519999999999996

 Note that there is an implicit call to
 ``double.filter()`` on each argument, so if we give integers as inputs
 they are magically cast to the right type. Now, what if we try this?

+.. doctest:: mul
+
 >>> x = double('x')
 >>> z = mul(x, 2)
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "/u/breuleuo/hg/theano/theano/gof/op.py", line 207, in __call__
   File "<stdin>", line 2, in make_node
 AttributeError: 'int' object has no attribute 'type'

 Automatic Constant Wrapping
 ---------------------------
@@ -659,8 +652,6 @@ Well, OK. We'd like our Op to be a bit more flexible. This can be done
 by modifying ``make_node`` to accept Python ``int`` or ``float`` as
 ``x`` and/or ``y``:

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_1

 .. testcode:: mul

     def make_node(x, y):
@@ -677,16 +668,15 @@ Whenever we pass a Python int or float instead of a Variable as ``x`` or
 ``y``, ``make_node`` will convert it to :ref:`constant` for us. ``gof.Constant``
 is a :ref:`variable` we statically know the value of.

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_op.test_op_1
+.. doctest:: mul

 >>> x = double('x')
 >>> z = mul(x, 2)
 >>> f = theano.function([x], z)
 >>> f(10)
 20.0
 >>> f(3.4)
-6.7999999999999998
+6.8

 Now the code works the way we want it to.
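The wrapping idea itself is a one-liner worth seeing in isolation. A sketch (class and helper names hypothetical, not Theano's real API) of how a ``make_node`` can promote raw Python numbers to constants:

```python
class Constant:
    """Stand-in for gof.Constant: a value known statically."""
    def __init__(self, data):
        self.data = data

def as_variable(x):
    # Mirror of the conversion described above: raw Python numbers get
    # wrapped into a Constant; anything already graph-aware passes through.
    if isinstance(x, (int, float)):
        return Constant(float(x))
    return x

c = as_variable(2)
print(type(c).__name__, c.data)  # → Constant 2.0
```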
...@@ -707,9 +697,6 @@ operations ``add``, ``sub`` and ``div``, code for ``make_node`` can be ...@@ -707,9 +697,6 @@ operations ``add``, ``sub`` and ``div``, code for ``make_node`` can be
shared between these Ops. Here is revised implementation of these four shared between these Ops. Here is revised implementation of these four
arithmetic operators: arithmetic operators:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
from theano import gof
......
...@@ -119,9 +119,6 @@ Global optimization
Here is the code for a global optimization implementing the
simplification described above:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
.. testcode::
import theano
...@@ -182,9 +179,6 @@ pointer-following game you need to get ahold of the nodes of interest
for the simplification (``x``, ``y``, ``z``, ``a``, ``b``, etc.).
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
Test time:
>>> from theano.scalar import float64, add, mul, true_div
...@@ -222,8 +216,8 @@ computation, using the ``merge_optimizer`` defined in
``theano.gof.opt``.
>>> from theano.gof.opt import merge_optimizer
>>> merge_optimizer.optimize(e) # doctest: +ELLIPSIS
(0, ..., None, None, {}, 1, 0)
>>> e
[true_div(mul(*1 -> add(y, z), x), *1)]
>>> simplify.optimize(e)
...@@ -254,9 +248,6 @@ Local optimization
The local version of the above code would be the following:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
.. testcode::
...@@ -295,9 +286,6 @@ with a :ref:`navigator`. Basically, a :ref:`navigator` is a global
optimizer that loops through all nodes in the graph (or a well-defined
subset of them) and applies one or several local optimizers on them.
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
>>> x = float64('x')
>>> y = float64('y')
>>> z = float64('z')
...@@ -307,7 +295,7 @@ subset of them) and applies one or several local optimizers on them.
[add(z, mul(true_div(mul(y, x), y), true_div(z, x)))]
>>> simplify = gof.TopoOptimizer(local_simplify)
>>> simplify.optimize(e)
(<theano.gof.opt.TopoOptimizer object at 0x...>, 1, 5, 3, ..., ..., ...)
>>> e
[add(z, mul(x, true_div(z, x)))]
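The ``div(mul(a, b), b) -> a`` rewrite exercised in this doctest can be sketched on a toy expression tree (a stand-in ``Node`` class, not Theano's Apply/Variable graph machinery):

```python
# Minimal expression tree plus a local rewrite implementing the
# simplification discussed above: div(mul(a, b), b) -> a.
class Node:
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs
    def __repr__(self):
        return f"{self.op}({', '.join(map(repr, self.inputs))})"

def local_simplify(node):
    # Look only at a single node, like a LocalOptimizer's transform().
    if isinstance(node, Node) and node.op == 'div':
        num, den = node.inputs
        if isinstance(num, Node) and num.op == 'mul':
            a, b = num.inputs
            if den == a:
                return b
            if den == b:
                return a
    return node  # no rewrite applies

e = Node('div', [Node('mul', ['y', 'x']), 'y'])
print(local_simplify(e))  # prints: x
```

As in the real thing, the rewrite inspects one node at a time; a driver such as ``TopoOptimizer`` would be responsible for walking the whole graph and re-applying it.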
...@@ -334,6 +322,9 @@ Theano defines some shortcuts to make LocalOptimizers:
Replaces all occurrences of the first pattern by the second pattern.
See :class:`PatternSub`.
.. testsetup::
from theano.scalar import identity
.. testcode::
...@@ -438,9 +429,9 @@ Query
A Query is built by the following call:
.. code-block:: python
theano.gof.Query(include, require=None, exclude=None, subquery=None)
.. class:: Query
...@@ -481,20 +472,21 @@ Optimizer:
.. testcode::
from theano.gof import Query
from theano.compile import optdb
# This is how the optimizer for the fast_run mode is defined
fast_run = optdb.query(Query(include=['fast_run']))
# This is how the optimizer for the fast_compile mode is defined
fast_compile = optdb.query(Query(include=['fast_compile']))
# This is the same as fast_run but no optimizations will replace
# any operation by an inplace version. This assumes, of course,
# that all inplace operations are tagged as 'inplace' (as they
# should!)
fast_run_no_inplace = optdb.query(Query(include=['fast_run'],
                                        exclude=['inplace']))
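The tag filtering a ``Query`` performs can be approximated in a few lines (illustrative sketch only; the registry, names, and ``query`` helper below are invented, and Theano's ``optdb.query`` is considerably richer):

```python
# Toy registry mapping optimization names to their tag sets
# (names and tags are made up for illustration).
opts = {
    'merge': {'fast_run', 'fast_compile'},
    'inplace_elemwise': {'fast_run', 'inplace'},
    'constant_folding': {'fast_run'},
}

def query(include, exclude=()):
    # Keep entries carrying every 'include' tag and none of the
    # 'exclude' tags -- the selection sketched in the example above.
    return [name for name, tags in opts.items()
            if set(include) <= tags and not set(exclude) & tags]

print(query(include=['fast_run']))                       # all three entries
print(query(include=['fast_run'], exclude=['inplace']))  # drops the inplace one
```

This mirrors how ``fast_run_no_inplace`` above differs from ``fast_run``: same inclusion tag, with one extra exclusion.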
Registering an Optimizer
......
...@@ -90,7 +90,7 @@ and (like in SciPy) they do not support broadcasting operations by default
formats for sparse type: ``csr`` and ``csc``. So in ``make_node()``,
you can create output variables like this:
.. code-block:: python
out_format = inputs[0].format # or 'csr' or 'csc' if the output format is fixed
SparseType(dtype=inputs[0].dtype, format=out_format).make_variable()
......
...@@ -176,8 +176,6 @@ must define ``filter`` and shall override ``values_eq_approx``.
**filter**
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
# Note that we shadow Python's function ``filter`` with this
...@@ -246,8 +244,6 @@ contract. Recall that Type defines default implementations for all
required methods of the interface, except ``filter``. One way to make
the Type is to instantiate a plain Type and set the needed fields:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
from theano import gof
...@@ -260,8 +256,6 @@ the Type is to instantiate a plain Type and set the needed fields:
Another way to make this Type is to make a subclass of ``gof.Type``
and define ``filter`` and ``values_eq_approx`` in the subclass:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. code-block:: python
from theano import gof
...@@ -331,9 +325,6 @@ There are several ways to make sure that equality testing works properly:
#. Define ``Double.__eq__`` so that instances of type Double
are equal. For example:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
def __eq__(self, other):
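In plain Python this recommendation looks like the sketch below (a minimal stand-in class, not Theano's ``gof.Type``); note that defining ``__eq__`` suppresses the inherited ``__hash__`` in Python 3, so a matching ``__hash__`` is added:

```python
class Double:
    """Stand-in for the tutorial's Double Type (illustration only)."""
    def __eq__(self, other):
        # All instances of Double compare equal, so two separately
        # created Double types are interchangeable.
        return type(self) is Double and type(other) is Double
    def __hash__(self):
        # Equal objects must hash equal, or sets/dicts misbehave.
        return hash(Double)

assert Double() == Double()
assert len({Double(), Double()}) == 1  # both instances collapse to one
```

Without the ``__hash__``, the class would be unhashable in Python 3 and the instances could not be used as dictionary keys.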
...@@ -387,8 +378,6 @@ attempt to clear up the confusion:
Final version
=============
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
from theano import gof
......
...@@ -236,16 +236,16 @@ Example:
def test_validity(self):
a = T.dmatrix('a')
b = T.dmatrix('b')
c = T.dot(a, b)
f = theano.function([a, b], [c])
cmp = f(self.avals, self.bvals) == numpy.dot(self.avals, self.bvals)
self.assertTrue(numpy.all(cmp))
Avoid hard-coding variables, as in the following case:
.. code-block:: python
self.assertTrue(numpy.all(f(self.avals, self.bvals) == numpy.array([[25, 25, 30, 28], [21, 18, 14, 25]])))
This makes the test case less manageable and forces the user to update
the variables each time the input is changed or possibly when the
......
...@@ -22,475 +22,6 @@ from theano.sandbox.rng_mrg import MRG_RandomStreams
from theano.tensor.shared_randomstreams import RandomStreams
class T_extending(unittest.TestCase):
# All tests here belong to files in
# http://deeplearning.net/software/theano/extending
# Theano/doc/extending/*.txt
# Any change you do here also add it to the tutorial!
# This belongs to an entire folder since code-snippets are connected
# from one file to another .. and they do not make sense on their
# own.
def test_extending_1(self):
# Note that we shadow Python's function ``filter`` with this
# definition.
def filter(x, strict=False, allow_downcast=None):
if strict:
if isinstance(x, float):
return x
else:
raise TypeError('Expected a float!')
else:
return float(x)
def values_eq_approx(x, y, tolerance=1e-4):
return abs(x - y) / (abs(x) + abs(y)) < tolerance
from theano import gof
double = gof.Type()
double.filter = filter
double.values_eq_approx = values_eq_approx
from theano import gof
class Double(gof.Type):
def filter(self, x, strict=False):
if strict and not isinstance(x, float):
raise TypeError('Expected a float!')
return float(x)
def values_eq_approx(self, x, y, tolerance=1e-4):
return abs(x - y) / (abs(x) + abs(y)) < tolerance
# Added to make those tests pass in DebugMode
@staticmethod
def may_share_memory(a, b):
return a is b
double = Double()
def __eq__(self, other):
return type(self) is Double and type(other) is Double
from theano import gof
class Double(gof.Type):
def filter(self, x, strict=False, allow_downcast=None):
if strict and not isinstance(x, float):
raise TypeError('Expected a float!')
return float(x)
def values_eq_approx(self, x, y, tolerance=1e-4):
return abs(x - y) / (abs(x) + abs(y)) < tolerance
def __str__(self):
return "double"
# Added to make those tests pass in DebugMode
@staticmethod
def may_share_memory(a, b):
return a is b
double = Double()
from theano import gof
mul = gof.Op()
def make_node(x, y):
if x.type != double or y.type != double:
raise TypeError('mul only works on doubles')
return gof.Apply(mul, [x, y], [double()])
mul.make_node = make_node
def perform(node, inputs, output_storage):
x, y = inputs[0], inputs[1]
z = output_storage[0]
z[0] = x * y
mul.perform = perform
x, y = double('x'), double('y')
z = mul(x, y)
f = theano.function([x, y], z)
assert f(5, 6) == 30.0
assert f(5.6, 6.7) == 37.519999999999996
x = double('x')
self.assertRaises(AttributeError, mul, x, 2)
def make_node(x, y):
if isinstance(x, (int, float)):
x = gof.Constant(double, x)
if isinstance(y, (int, float)):
y = gof.Constant(double, y)
if x.type != double or y.type != double:
raise TypeError('mul only works on doubles')
return gof.Apply(mul, [x, y], [double()])
mul.make_node = make_node
x = double('x')
z = mul(x, 2)
f = theano.function([x], z)
assert f(10) == 20.0
assert f(3.4) == 6.7999999999999998
from theano import gof
class BinaryDoubleOp(gof.Op):
__props__ = ("name", "fn")
def __init__(self, name, fn):
self.name = name
self.fn = fn
def make_node(self, x, y):
if isinstance(x, (int, float)):
x = gof.Constant(double, x)
if isinstance(y, (int, float)):
y = gof.Constant(double, y)
if x.type != double or y.type != double:
raise TypeError('%s only works on doubles' % self.name)
return gof.Apply(self, [x, y], [double()])
def perform(self, node, inp, out):
x, y = inp
z, = out
z[0] = self.fn(x, y)
def __str__(self):
return self.name
add = BinaryDoubleOp(name='add',
fn=lambda x, y: x + y)
sub = BinaryDoubleOp(name='sub',
fn=lambda x, y: x - y)
mul = BinaryDoubleOp(name='mul',
fn=lambda x, y: x * y)
div = BinaryDoubleOp(name='div',
fn=lambda x, y: x / y)
def test_extending_2(self):
'''
This test fails in DebugMode for the same reasons the test in
tensor/tests/test_basic.py:T_scalarfromtensor.test0
fails on debug mode ( as much as I could tell - Razvan )
'''
from theano import gof
class Double(gof.Type):
def filter(self, x, strict=False, allow_downcast=None):
if strict and not isinstance(x, float):
raise TypeError('Expected a float!')
return float(x)
def values_eq_approx(self, x, y, tolerance=1e-4):
return abs(x - y) / (abs(x) + abs(y)) < tolerance
def __str__(self):
return "double"
# Added to make those tests pass in DebugMode
@staticmethod
def may_share_memory(a, b):
return a is b
double = Double()
class BinaryDoubleOp(gof.Op):
__props__ = ("name", "fn")
def __init__(self, name, fn):
self.name = name
self.fn = fn
def make_node(self, x, y):
if isinstance(x, (int, float)):
x = gof.Constant(double, x)
if isinstance(y, (int, float)):
y = gof.Constant(double, y)
if x.type != double or y.type != double:
raise TypeError('%s only works on doubles' % self.name)
return gof.Apply(self, [x, y], [double()])
def perform(self, node, inp, out):
x, y = inp
z, = out
z[0] = self.fn(x, y)
def __str__(self):
return self.name
add = BinaryDoubleOp(name='add',
fn=lambda x, y: x + y)
sub = BinaryDoubleOp(name='sub',
fn=lambda x, y: x - y)
mul = BinaryDoubleOp(name='mul',
fn=lambda x, y: x * y)
div = BinaryDoubleOp(name='div',
fn=lambda x, y: x / y)
def c_declare(name, sub, check_input=True):
return """
double %(name)s;
""" % dict(name=name)
double.c_declare = c_declare
def c_init(name, sub):
return """
%(name)s = 0.0;
""" % dict(name=name)
double.c_init = c_init
def c_extract(name, sub, check_input=True):
if(check_input):
pre = """
if (!PyFloat_Check(py_%(name)s)) {
PyErr_SetString(PyExc_TypeError, "expected a float");
%(fail)s
}""" % dict(name=name, fail=sub['fail'])
else:
pre = ""
return pre + """
%(name)s = PyFloat_AsDouble(py_%(name)s);
""" % dict(name=name, fail=sub['fail'])
double.c_extract = c_extract
def c_sync( name, sub):
return """
Py_XDECREF(py_%(name)s);
py_%(name)s = PyFloat_FromDouble(%(name)s);
if (!py_%(name)s) {
printf("PyFloat_FromDouble failed on: %%f\\n", %(name)s);
Py_XINCREF(Py_None);
py_%(name)s = Py_None;
}
""" % dict(name=name)
double.c_sync = c_sync
def c_cleanup(name, sub):
return ""
double.c_cleanup = c_cleanup
from theano import function
x, y, z = double('x'), double('y'), double('z')
a = add(x, y)
b = mul(a, z)
f = function([x, y, z], b)
assert f(1.0, 2.0, 3.0) == 9.0
from theano import gof
class Double(gof.Type):
def filter(self, x, strict=False, allow_downcast=None):
if strict and not isinstance(x, float):
raise TypeError('Expected a float!')
return float(x)
def values_eq_approx(self, x, y, tolerance=1e-4):
return abs(x - y) / (x + y) < tolerance
def __str__(self):
return "double"
def c_declare(self, name, sub, check_input=True):
return """
double %(name)s;
""" % dict(name=name)
def c_init(self, name, sub):
return """
%(name)s = 0.0;
""" % dict(name=name)
def c_extract(self, name, sub, check_input=True):
if(check_input):
pre = """
if (!PyFloat_Check(py_%(name)s)) {
PyErr_SetString(PyExc_TypeError, "expected a float");
%(fail)s
}
""" % dict(sub, name=name)
else:
pre = ""
return pre + """
%(name)s = PyFloat_AsDouble(py_%(name)s);
""" % dict(sub, name=name)
def c_sync(self, name, sub):
return """
Py_XDECREF(py_%(name)s);
py_%(name)s = PyFloat_FromDouble(%(name)s);
if (!py_%(name)s) {
printf("PyFloat_FromDouble failed on: %%f\\n", %(name)s);
Py_XINCREF(Py_None);
py_%(name)s = Py_None;
}
""" % dict(name=name)
def c_cleanup(self, name, sub):
return ""
# Added to make those tests pass in DebugMode
@staticmethod
def may_share_memory(a, b):
return a is b
double = Double()
def c_code(node, name, input_names, output_names, sub):
x_name, y_name = input_names[0], input_names[1]
output_name = output_names[0]
return """
%(output_name)s = %(x_name)s * %(y_name)s;
""" % locals()
mul.c_code = c_code
from theano import gof
class BinaryDoubleOp(gof.Op):
__props__ = ("name", "fn", "ccode")
def __init__(self, name, fn, ccode):
self.name = name
self.fn = fn
self.ccode = ccode
def make_node(self, x, y):
if isinstance(x, (int, float)):
x = gof.Constant(double, x)
if isinstance(y, (int, float)):
y = gof.Constant(double, y)
if x.type != double or y.type != double:
raise TypeError('%s only works on doubles' % self.name)
return gof.Apply(self, [x, y], [double()])
def perform(self, node, inp, out):
x, y = inp
z, = out
z[0] = self.fn(x, y)
def __str__(self):
return self.name
def c_code(self, node, name, inp, out, sub):
x, y = inp
z, = out
return self.ccode % locals()
add = BinaryDoubleOp(name='add',
fn=lambda x, y: x + y,
ccode="%(z)s = %(x)s + %(y)s;")
sub = BinaryDoubleOp(name='sub',
fn=lambda x, y: x - y,
ccode="%(z)s = %(x)s - %(y)s;")
mul = BinaryDoubleOp(name='mul',
fn=lambda x, y: x * y,
ccode="%(z)s = %(x)s * %(y)s;")
div = BinaryDoubleOp(name='div',
fn=lambda x, y: x / y,
ccode="%(z)s = %(x)s / %(y)s;")
from theano.gof import toolbox
class Simplify(gof.Optimizer):
def add_requirements(self, fgraph):
fgraph.attach_feature(toolbox.ReplaceValidate())
def apply(self, fgraph):
for node in fgraph.toposort():
if node.op == div:
x, y = node.inputs
z = node.outputs[0]
if x.owner and x.owner.op == mul:
a, b = x.owner.inputs
if y == a:
fgraph.replace_validate(z, b)
elif y == b:
fgraph.replace_validate(z, a)
simplify = Simplify()
x = double('x')
y = double('y')
z = double('z')
a = add(z, mul(div(mul(y, x), y), div(z, x)))
e = gof.FunctionGraph([x, y, z], [a])
simplify.optimize(e)
class LocalSimplify(gof.LocalOptimizer):
def transform(self, node):
if node.op == div:
x, y = node.inputs
if x.owner and x.owner.op == mul:
a, b = x.owner.inputs
if y == a:
return [b]
elif y == b:
return [a]
return False
def tracks(self):
# This should be needed for the EquilibriumOptimizer
# but it isn't now
# TODO: do this and explain it
return [] # that's not what you should do
local_simplify = LocalSimplify()
x = double('x')
y = double('y')
z = double('z')
a = add(z, mul(div(mul(y, x), y), div(z, x)))
e = gof.FunctionGraph([x, y, z], [a])
simplify = gof.TopoOptimizer(local_simplify)
simplify.optimize(e)
def test_as_op(self):
import theano
import numpy
from theano.compile.ops import as_op
def infer_shape_numpy_dot(node, input_shapes):
ashp, bshp = input_shapes
return [ashp[:-1] + bshp[-1:]]
@as_op(itypes=[theano.tensor.fmatrix, theano.tensor.fmatrix],
otypes=[theano.tensor.fmatrix],
infer_shape=infer_shape_numpy_dot)
def numpy_add(a, b):
return numpy.add(a, b)
def infer_shape_numpy_add_sub(node, input_shapes):
ashp, bshp = input_shapes
# Both inputs should have that same shape, so we just
# return one of them.
return [ashp[0]]
@as_op(itypes=[theano.tensor.fmatrix, theano.tensor.fmatrix],
otypes=[theano.tensor.fmatrix],
infer_shape=infer_shape_numpy_add_sub)
def numpy_add(a, b):
return numpy.add(a, b)
@as_op(itypes=[theano.tensor.fmatrix, theano.tensor.fmatrix],
otypes=[theano.tensor.fmatrix],
infer_shape=infer_shape_numpy_add_sub)
def numpy_sub(a, b):
return numpy.sub(a, b)
class T_using_gpu(unittest.TestCase):
# All tests here belog to
# http://deeplearning.net/software/theano/tutorial/using_gpu.html
...@@ -684,127 +215,6 @@ class Fibby(theano.Op):
return (1,)
class T_fibby(unittest.TestCase):
# All tests here belong to
# http://deeplearning.net/software/theano/extending/fibby.html
# Theano/doc/extending/fibby.txt
# Any change you do here also add it to the tutorial !
def test_fibby_1(self):
# The definition of class Fibby is done outside of the test,
# so the object can be pickled.
fibby = Fibby()
from theano.tensor.opt import (get_scalar_constant_value,
NotScalarConstantError)
# Remove any fibby(zeros(...))
@theano.tensor.opt.register_specialize
@theano.gof.local_optimizer([fibby])
def fibby_of_zero(node):
if node.op == fibby:
x = node.inputs[0]
try:
if numpy.all(0 == get_scalar_constant_value(x)):
return [x]
except NotScalarConstantError:
pass
# Test it does not apply when not needed
x = T.dvector()
f = function([x], fibby(x))
# theano.printing.debugprint(f)
# We call the function to make sure it runs.
# If you run in DebugMode, it will compare the C and Python outputs.
f(numpy.random.rand(5))
topo = f.maker.fgraph.toposort()
assert len(topo) == 1
assert isinstance(topo[0].op, Fibby)
# Test that the optimization gets applied.
f_zero = function([], fibby(T.zeros([5])))
# theano.printing.debugprint(f_zero)
# If you run in DebugMode, it will compare the output before
# and after the optimization.
f_zero()
# Check that the optimization removes the Fibby Op.
# For security, the Theano memory interface ensures that the output
# of the function is always memory not aliased to the input.
# That is why there is a DeepCopyOp op.
topo = f_zero.maker.fgraph.toposort()
assert len(topo) == 1
assert isinstance(topo[0].op, theano.compile.ops.DeepCopyOp)
class T_graphstructures(unittest.TestCase):
# All tests here belong to
# http://deeplearning.net/software/theano/extending/graphstructures.html
# Theano/doc/extending/graphstructures.txt
# Any change you do here also add it to the tutorial !
def test_graphstructures_1(self):
x = T.dmatrix('x')
y = T.dmatrix('y')
z = x + y
x = T.matrix('x')
y = T.matrix('y')
z = T.matrix('z')
# create 2 Variables (one for 'e', one intermediate for y*z)
# create 2 Apply instances (one for '+', one for '*')
e = x + y * z
from theano.tensor import add, mul, Apply, Variable, TensorType
# Instantiate a type that represents a matrix of doubles
float64_matrix = TensorType(dtype='float64', # double
broadcastable=(False, False)) # matrix
# We make the Variable instances we need.
x = Variable(type=float64_matrix, name='x')
y = Variable(type=float64_matrix, name='y')
z = Variable(type=float64_matrix, name='z')
# This is the Variable that we want to symbolically represents y*z
mul_variable = Variable(type=float64_matrix)
assert mul_variable.owner is None
# Instantiate a symbolic multiplication
node_mul = Apply(op=mul,
inputs=[y, z],
outputs=[mul_variable])
# Fields 'owner' and 'index' are set by Apply
assert mul_variable.owner is node_mul
# 'index' is the position of mul_variable in mode_mul's outputs
assert mul_variable.index == 0
# This is the Variable that we want to symbolically represents x+(y*z)
add_variable = Variable(type=float64_matrix)
assert add_variable.owner is None
# Instantiate a symbolic addition
node_add = Apply(op=add,
inputs=[x, mul_variable],
outputs=[add_variable])
# Fields 'owner' and 'index' are set by Apply
assert add_variable.owner is node_add
assert add_variable.index == 0
e = add_variable
# We have access to x, y and z through pointers
assert e.owner.inputs[0] is x
assert e.owner.inputs[1] is mul_variable
assert e.owner.inputs[1].owner.inputs[0] is y
assert e.owner.inputs[1].owner.inputs[1] is z
class T_scan(unittest.TestCase):
# All tests here belong to
# http://deeplearning.net/software/theano/tutorial/loop.html
......