Commit 3d91144f authored by Olivier Breuleux

Result -> Variable, NDArray* -> Tensor*

Parent 99647a3b
......@@ -117,12 +117,12 @@ Example:
>>> cmp = f(self.avals,self.bvals) == numpy.dot(self.avals,self.bvals)
>>> self.failUnless(numpy.all(cmp))
Avoid hard-coding results, as in the following case:
>>> self.failUnless(numpy.all(f(self.avals,self.bvals)==numpy.array([[25,25,30,28],[21,18,14,25]])))
This makes the test case less manageable and forces the user to update the
results each time the input is changed or possibly when the module being
tested changes (after a bug fix for example). It also constrains the test case
to specific input/output data pairs. The section on random values covers why this
might not be such a good idea.
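The "algorithmic" alternative is to compute the baseline from an independent reference implementation rather than pasting in numbers. A minimal pure-Python sketch of the idea (the function names here are illustrative, not Theano APIs):

```python
def matmul(A, B):
    # Implementation under test: zip-based matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def reference_dot(A, B):
    # Independent reference implementation using explicit loops.
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
# Compare against a computed baseline instead of hard-coded numbers:
assert matmul(A, B) == reference_dot(A, B)
```

Changing the inputs now requires no edit to the expected values, which is exactly the manageability the paragraph above argues for.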
......@@ -148,7 +148,7 @@ Example:
>>> ...
>>> def test_3D_dot_fail(self):
>>> def func():
>>> a = T.NDArrayType('float64', (False,False,False)) # create 3d tensor
>>> a = T.TensorType('float64', (False,False,False)) # create 3d tensor
>>> b = T.dmatrix()
>>> c = T.dot(a,b) # we expect this to fail
>>> # above should fail as dot operates on 2D tensors only
......@@ -209,7 +209,7 @@ The main advantage of using unittest_tools.seed_rng is that it allows us to
change the seed used in the unittests, without having to manually edit all the
files. For example, this allows the nightly build to run nosetests repeatedly,
changing the seed on every run (hence achieving a higher confidence that the
results are correct), while still making sure unittests are deterministic.
Users who prefer their unittests to be random (when run on their local machine)
can simply undefine THEANO_UNITTEST_SEED.
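The mechanism can be pictured as follows; this is an illustrative stand-in using only the standard library, not the actual unittest_tools implementation:

```python
import os
import random

def fetch_seed(default=666):
    # Illustrative stand-in: read the seed from the environment if set,
    # else fall back to a default (mirrors the THEANO_UNITTEST_SEED idea).
    seed = os.environ.get("THEANO_UNITTEST_SEED")
    return int(seed) if seed is not None else default

os.environ["THEANO_UNITTEST_SEED"] = "42"
rng = random.Random(fetch_seed())
# The same seed always yields the same draw, so the test is deterministic:
assert rng.random() == random.Random(42).random()
```

When the environment variable is undefined, the default seed is used, which is how a user gets reproducible local runs while the nightly build can vary the seed.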
......@@ -220,4 +220,4 @@ Similarly, to provide a seed to numpy.random.RandomState, simply use:
>>> # OR providing an explicit seed
>>> rng = numpy.random.RandomState(unittest_tools.fetch_seed(1231))
Note that the ability to change the seed from one nosetest to another is incompatible with the method of hard-coding the baseline results (against which we compare the theano outputs). These must then be determined "algorithmically". Although this represents more work, the test suite will be better for it.
......@@ -18,7 +18,7 @@ have been calculated by another operation. For each of the outputs,
the variables associated to them will be declared and initialized.
The operation then has to compute what it needs to using the
input variables and place the results in the output variables.
What needs to be defined
......@@ -54,13 +54,13 @@ application of the current Op on a list of inputs, producing a list of
outputs. ``input_names`` and ``output_names`` arguments contain as
many strings as there are inputs and outputs to the application of the
Op and they correspond to the ``name`` that is passed to the type of
each Result in these lists. For example, if ``node.inputs[0].type ==
each Variable in these lists. For example, if ``node.inputs[0].type ==
double``, then ``input_names[0]`` is the ``name`` argument passed to
``double.c_declare`` etc. when the first input is processed by Theano.
In a nutshell, ``input_names`` and ``output_names`` parameterize the
names of the inputs your operation needs to use and the outputs it
needs to put results into. But this will be clear with the examples.
Defining the methods
......@@ -92,7 +92,7 @@ had more than one output, you would just set the variable(s) for
each output to what they should be.
.. warning::
Do *NOT* use C's ``return`` statement to return the result(s) of
the computations. Set the output variables directly as shown
above. Theano will pick them up for you.
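To see why assignment suffices, here is an illustrative sketch (plain Python with hypothetical names, not Theano's exact signature) of how a c_code template is parameterized with ``input_names`` and ``output_names``: the generated C assigns to the named output variable instead of returning.

```python
def c_code(node, name, input_names, output_names, sub):
    # Build the C snippet for z = x * y; an assignment, not a `return`.
    x, y = input_names
    z, = output_names
    return "%(z)s = %(x)s * %(y)s;" % locals()

# The compiler would pass mangled names such as "V1", "V3", "V5":
snippet = c_code(None, "node0", ["V1", "V3"], ["V5"], {})
assert snippet == "V5 = V1 * V3;"
```

Theano then picks up the value left in the output variable, which is why a C ``return`` would bypass the machinery entirely.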
......
......@@ -18,7 +18,7 @@ Python data that satisfy the constraints it puts forward. In other
words, it must define C code that can convert a Python reference into
some type suitable for manipulation in C and it must define C code
that can convert some C structure in which the C implementation of an
operation stores its results into a reference to an object that can be
used from Python and is a valid value for the Type.
For example, in the current example, we have a Type which represents a
......@@ -65,7 +65,7 @@ the most important ones:
- **c_sync(name, sub)**
- When the computations are done, transfer the results from the C
structure we put them in to the destination Python object. This
will only be called for the outputs.
......@@ -82,12 +82,12 @@ the most important ones:
Each of these functions takes two arguments, ``name`` and ``sub``, which
must be used to parameterize the C code they return. ``name`` is a
string which is chosen by the compiler to represent a :ref:`result` of
string which is chosen by the compiler to represent a :ref:`variable` of
the Type in such a way that there are no name conflicts between
different pieces of data. Therefore, all variables declared in
``c_declare`` should have a name which includes ``name``. Furthermore,
the name of the variable containing a pointer to the Python object
associated to the Result is ``py_<name>``.
associated to the Variable is ``py_<name>``.
``sub``, on the other hand, is a dictionary containing bits of C code
suitable for use in certain situations. For instance, ``sub['fail']``
......@@ -129,7 +129,7 @@ double. That double will be named whatever is passed to our function
in the "name" argument. That will usually be some mangled name like
"V0", "V2" or "V92" depending on how many nodes there are in the
computation graph and what rank the current node has. This function
will be called for all Results whose type is ``double``.
will be called for all Variables whose type is ``double``.
You can declare as many variables as you want there and you can also
do typedefs. Make sure that the name of each variable contains the
......@@ -157,15 +157,15 @@ it, it's best to publish it somewhere.
This function has to initialize the
double we declared previously to a suitable value. This is useful if
we want to avoid dealing with garbage values, especially if our data
type is a pointer. This is not going to be called for all Results with
the ``double`` type. Indeed, if a Result is an input which we pass
type is a pointer. This is not going to be called for all Variables with
the ``double`` type. Indeed, if a Variable is an input which we pass
from Python we will want to extract that input from a Python object,
therefore it is the c_extract method that will be called instead of
c_init. You therefore cannot assume, when writing c_extract, that the
initialization has been done (in fact you can assume that it *hasn't*
been done).
``c_init`` will typically be called on output Results, but in general
``c_init`` will typically be called on output Variables, but in general
you should only assume that either c_init or c_extract has been
called, without knowing for sure which of the two.
......@@ -190,7 +190,7 @@ we have a reference to a Python object which Theano has placed in
given in the inputs. This special variable is declared by Theano as
``PyObject* py_%(name)s`` where ``PyObject*`` is a pointer to a Python
object as defined by CPython's C API. This is the reference that
corresponds, on the Python side of things, to a Result with the
corresponds, on the Python side of things, to a Variable with the
``double`` type. It is what the end user will give and what he or she
expects to get back.
......@@ -223,7 +223,7 @@ API) and we put it in our double variable that we declared previously.
double.c_sync = c_sync
This function is probably the trickiest. What happens here is that we
have computed some operation on doubles and we have put the result
into the double variable ``%(name)s``. Now, we need to put this data
into a Python object that we can manipulate on the Python side of
things. This Python object must be put into the ``py_%(name)s``
......@@ -306,9 +306,9 @@ object on which we want to apply computations using C
code. Conversely, ``c_sync`` will only be called if we want to
communicate the values we have computed to Python and ``c_cleanup``
will only be called when we don't need to process the data with C
anymore. In other words, the use of these functions for a given Result
anymore. In other words, the use of these functions for a given Variable
depends on the relationship between Python and C with respect to
that Result. For instance, imagine you define the following function
that Variable. For instance, imagine you define the following function
and call it:
.. code-block:: python
......
......@@ -14,7 +14,7 @@ An Op is any object which defines the following methods:
- **make_node(*inputs)**
- This method is responsible for creating output Results of a suitable Type
- This method is responsible for creating output Variables of a suitable Type
to serve as the outputs of this Op's application. This method should put these
outputs into an Apply instance, and return the Apply instance.
......@@ -38,7 +38,7 @@ An Op is any object which defines the following methods:
- **__call__(*inputs)**
- Syntactic shortcut to make_node which returns the output Results
- Syntactic shortcut to make_node which returns the output Variables
of the Op.
- *Default*: this is done for you by Op.
......@@ -48,7 +48,7 @@ An Op is any object which defines the following methods:
- This method computes the function associated to this Op. The
``node`` is an Apply node created by the Op's ``make_node``
method, ``inputs`` is a list of references to data to operate on,
and ``output_storage`` is a list of storage cells where the results of
the computation must be put. More specifically:
- ``node``: This is a reference to an Apply node which was previously
......@@ -112,9 +112,9 @@ An Op is any object which defines the following methods:
- If the Op you are defining is differentiable, you can define its
gradient symbolically in this method.
- Both the ``inputs`` and ``output_gradients`` will be Results. This
method must return a list containing one Result (or None) for each
input. Each returned Result represents the gradient with respect to
- Both the ``inputs`` and ``output_gradients`` will be Variables. This
method must return a list containing one Variable (or None) for each
input. Each returned Variable represents the gradient with respect to
that input given the symbolic gradients with respect to each output.
- If the output is not differentiable with respect to any inputs, then this
......@@ -193,7 +193,7 @@ two.
This function ensures that both inputs have the ``double``
type.
Since multiplying two doubles yields a double,
this function makes an Apply node with an output Result of type
this function makes an Apply node with an output Variable of type
``double``.
.. code-block:: python
......@@ -205,14 +205,14 @@ this function makes an Apply node with an output Result of type
mul.make_node = make_node
The first two lines make sure that both inputs are Results of the
The first two lines make sure that both inputs are Variables of the
``double`` type that we created in the previous section. We would not
want to multiply two arbitrary types; it would not make much sense
(and we'd be screwed when we implement this in C!)
The last line is the meat of the definition. There we create an Apply
node representing the application of Op ``mul`` to inputs ``x`` and
``y``, giving a Result instance of type ``double`` as the output.
``y``, giving a Variable instance of type ``double`` as the output.
.. note::
Theano relies on the fact that if you call the ``make_node`` method
......@@ -228,7 +228,7 @@ This code actually computes the function.
In our example, the data in ``inputs`` will be instances of Python's
built-in type ``float`` because this is the type that ``double.filter()``
will always return, per our own definition. ``output_storage`` will
contain a single storage cell for the multiplication's result.
.. code-block:: python
......@@ -296,9 +296,9 @@ by modifying ``make_node`` to accept Python ``int`` or ``float`` as
return gof.Apply(mul, [x, y], [double()])
mul.make_node = make_node
Whenever we pass a Python int or float instead of a Result as ``x`` or
Whenever we pass a Python int or float instead of a Variable as ``x`` or
``y``, ``make_node`` will convert it to :ref:`constant` for us. ``gof.Constant``
is a :ref:`result` we statically know the value of.
is a :ref:`variable` we statically know the value of.
>>> x = double('x')
>>> z = mul(x, 2)
......@@ -365,7 +365,7 @@ arithmetic operators:
Instead of working directly on an instance of Op, we create a subclass of
Op that we can parametrize. All the operations we define are binary. They
all work on two inputs with type ``double``. They all return a single
Result of type ``double``. Therefore, ``make_node`` does the same thing
Variable of type ``double``. Therefore, ``make_node`` does the same thing
for all these operations, except for the Op reference ``self`` passed
as first argument to Apply. We define ``perform`` using the function
``fn`` passed in the constructor.
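The pattern can be sketched with plain Python stand-ins (illustrative only, not the real theano.gof classes):

```python
class BinaryOp:
    # Minimal stand-in for a parametrized binary Op.
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn          # the Python function used by perform

    def perform(self, node, inputs, output_storage):
        # Compute fn on the two inputs and place the result in the
        # single output storage cell.
        x, y = inputs
        output_storage[0][0] = self.fn(x, y)

mul = BinaryOp('mul', lambda x, y: x * y)
add = BinaryOp('add', lambda x, y: x + y)

storage = [[None]]
mul.perform(None, [3.0, 4.0], storage)
assert storage[0][0] == 12.0
```

Each instance shares the same ``perform`` shape and differs only in the ``fn`` it was constructed with, which is the point of the subclass-and-parametrize design.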
......
......@@ -53,22 +53,22 @@ default values.
- *Default*: ``values_eq(a, b)``
- **make_result(name=None)**
- **make_variable(name=None)**
- Makes a :term:`Result` of this Type with the specified name, if
``name is not None``. If ``name is ``None``, then the Result does
not have a name. The Result will have its ``type`` field set to the
- Makes a :term:`Variable` of this Type with the specified name, if
``name is not None``. If ``name`` is ``None``, then the Variable does
not have a name. The Variable will have its ``type`` field set to the
Type object.
- *Default*: there is a generic definition of this in Type. The Result's
- *Default*: there is a generic definition of this in Type. The Variable's
``type`` will be the object that defines this method (in other words,
``self``).
- **__call__(name=None)**:
- Syntactic shortcut to ``make_result``.
- Syntactic shortcut to ``make_variable``.
- *Default*: ``make_result``
- *Default*: ``make_variable``
For each method, the *default* is what :api:`theano.gof.Type` defines
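A minimal stand-in sketch of ``make_variable`` and the ``__call__`` shortcut (plain Python, not the real gof classes):

```python
class Variable:
    # Minimal stand-in: only the fields relevant here.
    def __init__(self, type, name=None):
        self.type = type
        self.name = name

class Type:
    def make_variable(self, name=None):
        # The Variable's `type` field is set to this Type object (self).
        return Variable(self, name)

    # __call__ is a syntactic shortcut to make_variable:
    __call__ = make_variable

double = Type()
x = double('x')
assert x.type is double
assert x.name == 'x'
```

Calling the Type instance directly is thus the same as calling ``make_variable`` on it.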
......@@ -120,7 +120,7 @@ so if ``x`` is an ``int`` it we will return an equivalent ``float``.
The second method we define is ``values_eq_approx``. This method
allows approximate comparison between two values respecting our Type's
constraints. It might happen that an optimization changes the computation
graph in such a way that it produces slightly different results, for
example because of numerical instability like rounding errors at the
end of the mantissa. For instance, ``a + a + a + a + a + a`` might not
actually produce the exact same output as ``6 * a`` (try with a=0.1),
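The rounding effect is easy to reproduce with ordinary Python floats, and ``math.isclose`` shows the kind of approximate comparison ``values_eq_approx`` is meant to provide (a sketch of the idea, not Theano's exact tolerance):

```python
import math

a = 0.1
summed = a + a + a + a + a + a
scaled = 6 * a
# Bit-for-bit, the two computations differ:
assert summed != scaled
# ...but an approximate comparison accepts them as equal:
assert math.isclose(summed, scaled, rel_tol=1e-9)
```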
......@@ -209,7 +209,7 @@ Untangling some concepts
========================
Initially, confusion is common on what an instance of Type is versus
a subclass of Type or an instance of Result. Some of this confusion is
a subclass of Type or an instance of Variable. Some of this confusion is
syntactic. A Type is any object which has fields corresponding to the
functions defined above. The Type class provides sensible defaults for
all of them except ``filter``, so when defining new Types it is natural
......@@ -222,17 +222,17 @@ attempt to clear up the confusion:
akin to a primitive type or class in C. It is a *static*
annotation.
* An **instance of Result** symbolizes data nodes in a data flow
* An **instance of Variable** symbolizes data nodes in a data flow
graph. If you were to parse the C expression ``int x;``, ``int``
would be a Type instance and ``x`` would be a Result instance of
would be a Type instance and ``x`` would be a Variable instance of
that Type instance. If you were to parse the C expression ``c = a +
b;``, ``a``, ``b`` and ``c`` would all be Result instances.
b;``, ``a``, ``b`` and ``c`` would all be Variable instances.
* A **subclass of Type** represents a set of Type instances that share
structural similarities. In the ``double`` example that we are doing,
there is actually only one Type in that set, therefore the subclass
doesn't represent anything that one of its instances doesn't. In this
case it is a singleton, a set with one element. However, the NDArrayType
case it is a singleton, a set with one element. However, the TensorType
class which is a subclass of Type represents a set of types of tensors
parametrized by their data type or number of dimensions. We could say
that subclassing Type builds a hierarchy of Types which is based upon
......
......@@ -6,7 +6,7 @@ Graph Structures
================
Theano represents symbolic mathematical computations as graphs. These
graphs are composed of interconnected :ref:`apply` and :ref:`result`
graphs are composed of interconnected :ref:`apply` and :ref:`variable`
nodes. They are associated to *function application* and *data*,
respectively. Operations are represented by :ref:`op` instances and data
types are represented by :ref:`type` instances. Here is a piece of code
......@@ -31,19 +31,19 @@ should help you understand how these pieces fit together:
-----------------------
Arrows represent references to the Python objects pointed at. The blue
box is an :ref:`apply` node. Red boxes are :ref:`result` nodes. Green
box is an :ref:`apply` node. Red boxes are :ref:`variable` nodes. Green
circles are :ref:`Ops <op>`. Purple boxes are :ref:`Types <type>`.
When we create :ref:`Results <result>` and then :ref:`apply`
:ref:`Ops <op>` to them to make more Results, we build a
bi-partite, directed, acyclic graph. Results point to the Apply nodes
When we create :ref:`Variables <variable>` and then :ref:`apply`
:ref:`Ops <op>` to them to make more Variables, we build a
bi-partite, directed, acyclic graph. Variables point to the Apply nodes
representing the function application producing them via their
``owner`` field. These Apply nodes point in turn to their input and
output Results via their ``inputs`` and ``outputs`` fields.
output Variables via their ``inputs`` and ``outputs`` fields.
The ``owner`` field of both ``x`` and ``y`` points to ``None`` because
they are not the result of another computation. If they were the
result of another computation, they would point to another blue box
like ``z`` does, and so on.
Note that the ``Apply`` instance's outputs points to
......@@ -67,12 +67,12 @@ This is what you would normally type:
from theano.tensor import *
# create 3 Results with owner = None
# create 3 Variables with owner = None
x = matrix('x')
y = matrix('y')
z = matrix('z')
# create 2 Results (one for 'e', one intermediate for y*z)
# create 2 Variables (one for 'e', one intermediate for y*z)
# create 2 Apply instances (one for '+', one for '*')
e = x + y * z
......@@ -86,45 +86,45 @@ This is what you would type to build the graph explicitly:
from theano.tensor import *
# Instantiate a type that represents a matrix of doubles
float64_matrix = NDArrayType(dtype = 'float64', # double
float64_matrix = TensorType(dtype = 'float64', # double
broadcastable = (False, False)) # matrix
# We make the Result instances we need.
x = Result(type = float64_matrix, name = 'x')
y = Result(type = float64_matrix, name = 'y')
z = Result(type = float64_matrix, name = 'z')
# We make the Variable instances we need.
x = Variable(type = float64_matrix, name = 'x')
y = Variable(type = float64_matrix, name = 'y')
z = Variable(type = float64_matrix, name = 'z')
# This is the Result that we want to symbolically represents y*z
mul_result = Result(type = float64_matrix)
assert mul_result.owner is None
# This is the Variable that we want to symbolically represent y*z
mul_variable = Variable(type = float64_matrix)
assert mul_variable.owner is None
# Instantiate a symbolic multiplication
node_mul = Apply(op = mul,
inputs = [y, z],
outputs = [mul_result])
assert mul_result.owner is node_mul and mul_result.index == 0 # these fields are set by Apply
outputs = [mul_variable])
assert mul_variable.owner is node_mul and mul_variable.index == 0 # these fields are set by Apply
# This is the Result that we want to symbolically represents x+(y*z)
add_result = Result(type = float64_matrix)
assert add_result.owner is None
# This is the Variable that we want to symbolically represent x+(y*z)
add_variable = Variable(type = float64_matrix)
assert add_variable.owner is None
# Instantiate a symbolic addition
node_add = Apply(op = add,
inputs = [x, mul_result],
outputs = [add_result])
assert add_result.owner is node_add and add_result.index == 0 # these fields are set by Apply
inputs = [x, mul_variable],
outputs = [add_variable])
assert add_variable.owner is node_add and add_variable.index == 0 # these fields are set by Apply
e = add_result
e = add_variable
# We have access to x, y and z through pointers
assert e.owner.inputs[0] is x
assert e.owner.inputs[1] is mul_result
assert e.owner.inputs[1] is mul_variable
assert e.owner.inputs[1].owner.inputs[0] is y
assert e.owner.inputs[1].owner.inputs[1] is z
Note how the call to ``Apply`` modifies the ``owner`` and ``index``
fields of the :ref:`Results <result>` passed as outputs to point to
fields of the :ref:`Variables <variable>` passed as outputs to point to
itself and the rank they occupy in the output list. This whole
machinery builds a DAG (Directed Acyclic Graph) representing the
computation, a graph that theano can compile and optimize.
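The owner/index bookkeeping that ``Apply`` performs can be sketched with minimal stand-in classes (illustrative only, not the real theano.gof implementations):

```python
class Variable:
    def __init__(self, name=None):
        self.name = name
        self.owner = None    # set by Apply when used as an output
        self.index = None

class Apply:
    def __init__(self, op, inputs, outputs):
        self.op = op
        self.inputs = inputs
        self.outputs = outputs
        # Point each output back at this node and record its rank:
        for i, out in enumerate(outputs):
            out.owner = self
            out.index = i

x, y = Variable('x'), Variable('y')
out = Variable()
node = Apply('mul', [x, y], [out])
assert out.owner is node and out.index == 0
assert x.owner is None          # inputs are left untouched
```

Only the outputs are rewired; the inputs keep whatever ``owner`` they already had, which is what makes the graph a DAG.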
......@@ -135,7 +135,7 @@ Graph Structures
The following section outlines each type of structure that may be used
in a Theano-built computation graph. The following structures are
explained: :ref:`apply`, :ref:`constant`, :ref:`op`, :ref:`result` and
explained: :ref:`apply`, :ref:`constant`, :ref:`op`, :ref:`variable` and
:ref:`type`.
......@@ -151,14 +151,14 @@ Apply
An *Apply node* is a type of internal node used to represent a
:term:`computation graph <graph>` in Theano. Unlike
:ref:`Result nodes <result>`, Apply nodes are usually not
:ref:`Variable nodes <variable>`, Apply nodes are usually not
manipulated directly by the end user. They may be accessed via
a Result's ``owner`` field.
a Variable's ``owner`` field.
An Apply node is typically an instance of the :api:`Apply
<theano.gof.graph.Apply>` class. It represents the application
of an :ref:`op` on one or more inputs, where each input is a
:ref:`result`. By convention, each Op is responsible for
:ref:`variable`. By convention, each Op is responsible for
knowing how to build an Apply node from a list of
inputs. Therefore, an Apply node may be obtained from an Op
and a list of inputs by calling ``Op.make_node(*inputs)``.
......@@ -174,99 +174,17 @@ An Apply instance has three important fields:
applied here.
**inputs**
A list of :ref:`Results <result>` that represent the arguments of
A list of :ref:`Variables <variable>` that represent the arguments of
the function.
**outputs**
A list of :ref:`Results <result>` that represent the return values
A list of :ref:`Variables <variable>` that represent the return values
of the function.
An Apply instance can be created by calling ``gof.Apply(op, inputs,
outputs)``.
.. index::
single: Result
single: graph construct; Result
.. _result:
------
Result
------
A :ref:`result` is the main data structure you work with when using
Theano. The symbolic inputs that you operate on are Results and what
you get from applying various Ops to these inputs are also
Results. For example, when I type
>>> x = theano.tensor.ivector()
>>> y = -x
``x`` and ``y`` are both Results, i.e. instances of the :api:`Result
<theano.gof.graph.Result>` class. The :ref:`type` of both ``x`` and
``y`` is ``theano.tensor.ivector``.
Despite what the name might suggest, a Result is not necessarily
produced by a computation. Indeed, in the example above, ``x`` is only
an input. However, it is still called a Result for historical reasons
(and because the data structure is identical).
Now, unlike ``x``, ``y`` is indeed produced by a computation (in this
case, negation of x). ``y`` is the Result corresponding to the output
of the computation, while ``x`` is the Result corresponding to its
input. The computation itself is represented by another type of node,
an :ref:`apply` node, and may be accessed through ``y.owner``.
More specifically, a Result is a basic structure in Theano that
represents a datum at a certain point in computation. It is typically
an instance of the class :api:`Result <theano.gof.graph.Result>` or
one of its subclasses.
A Result ``r`` contains four important fields:
**type**
a :ref:`type` defining the kind of value this Result can hold in
computation.
**owner**
this is either None or an :ref:`apply` node of which the Result is
an output.
**index**
the integer such that ``owner.outputs[index] is r`` (ignored if
``owner`` is None)
**name**
a string to use in pretty-printing and debugging.
Result has one special subclass: :ref:`constant <constant>`.
.. index::
single: Constant
single: graph construct; Constant
.. _constant:
Constant
^^^^^^^^
A constant is a :ref:`Result` with one extra field, *data* (only
settable once). When used in a computation graph as the input of an
:ref:`Op` :ref:`application <Apply>`, it is assumed that said input
will *always* take the value contained in the constant's data
field. Furthermore, it is assumed that the :ref:`Op` will not under
any circumstances modify the input. This means that a constant is
eligible to participate in numerous optimizations: constant inlining
in C code, constant folding, etc.
A constant does not need to be specified in a :ref:`function`'s list
of inputs.
.. index::
single: Op
......@@ -281,7 +199,7 @@ Op
An :ref:`op` in Theano defines a certain computation on some types of
inputs, producing some types of outputs. It is equivalent to a
function definition in most programming languages. From a list of
input :ref:`Results <result>` and an Op, you can build an :ref:`apply`
input :ref:`Variables <variable>` and an Op, you can build an :ref:`apply`
node representing the application of the Op to the inputs.
It is important to understand the distinction between an Op (the
......@@ -309,7 +227,7 @@ A :ref:`type` in Theano represents a set of constraints on potential
data objects. These constraints allow Theano to tailor C code to handle
them and to statically optimize the computation graph. For instance,
the :ref:`irow <predefinedtypes>` type in the ``theano.tensor`` package
gives the following constraints on the data the Results of type ``irow``
gives the following constraints on the data the Variables of type ``irow``
may contain:
#. Must be an instance of ``numpy.ndarray``: ``isinstance(x, numpy.ndarray)``
......@@ -339,3 +257,80 @@ the case. Unless specified otherwise, when we say "Type" we mean a
Theano Type.
.. index::
single: Variable
single: graph construct; Variable
.. _variable:
--------
Variable
--------
A :ref:`variable` is the main data structure you work with when using
Theano. The symbolic inputs that you operate on are Variables and what
you get from applying various Ops to these inputs are also
Variables. For example, when I type
>>> x = theano.tensor.ivector()
>>> y = -x
``x`` and ``y`` are both Variables, i.e. instances of the :api:`Variable
<theano.gof.graph.Variable>` class. The :ref:`type` of both ``x`` and
``y`` is ``theano.tensor.ivector``.
Unlike ``x``, ``y`` is a Variable produced by a computation (in this
case, it is the negation of x). ``y`` is the Variable corresponding to
the output of the computation, while ``x`` is the Variable
corresponding to its input. The computation itself is represented by
another type of node, an :ref:`apply` node, and may be accessed
through ``y.owner``.
More specifically, a Variable is a basic structure in Theano that
represents a datum at a certain point in computation. It is typically
an instance of the class :api:`Variable <theano.gof.graph.Variable>` or
one of its subclasses.
A Variable ``r`` contains four important fields:
**type**
a :ref:`type` defining the kind of value this Variable can hold in
computation.
**owner**
this is either None or an :ref:`apply` node of which the Variable is
an output.
**index**
the integer such that ``owner.outputs[index] is r`` (ignored if
``owner`` is None)
**name**
a string to use in pretty-printing and debugging.
Variable has one special subclass: :ref:`constant <constant>`.
.. index::
single: Constant
single: graph construct; Constant
.. _constant:
Constant
^^^^^^^^
A constant is a :ref:`Variable` with one extra field, *data* (only
settable once). When used in a computation graph as the input of an
:ref:`Op` :ref:`application <Apply>`, it is assumed that said input
will *always* take the value contained in the constant's data
field. Furthermore, it is assumed that the :ref:`Op` will not under
any circumstances modify the input. This means that a constant is
eligible to participate in numerous optimizations: constant inlining
in C code, constant folding, etc.
A constant does not need to be specified in a :ref:`function`'s list
of inputs.
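The write-once ``data`` field can be sketched like this (a stand-in, not the real gof.Constant):

```python
class Constant:
    # Minimal stand-in: `data` is settable only once, at construction.
    def __init__(self, data):
        self._data = data

    @property
    def data(self):
        return self._data   # read-only: no setter is defined

c = Constant(2.0)
assert c.data == 2.0
try:
    c.data = 3.0            # no setter: further assignment is rejected
except AttributeError:
    pass
else:
    raise AssertionError("data should be write-once")
```

Because the value can never change after construction, optimizations such as constant folding can safely bake it into the generated code.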
......@@ -90,8 +90,8 @@ operation on ``x``.
Inplace operations in theano still work in a functional setting:
they need to return the modified input. Symbolically, Theano
requires one Result standing for the input *before* being modified
and *another* Result representing the input *after* being
requires one Variable standing for the input *before* being modified
and *another* Variable representing the input *after* being
modified. Therefore, code using inplace operations would look like
this:
......@@ -129,7 +129,7 @@ operation on ``x``.
Take the previous definitions of x, y and z and suppose an Op which
adds one to every byte of its input. If we give ``x`` as an input to
that Op, it can either allocate a new buffer of the same size as ``x``
(that could be ``z``) and set that new buffer's bytes to the result of
the addition. That would be a normal, :term:`pure` Op. Alternatively,
it could add one to each byte *in* the buffer ``x``, therefore
changing it. That would be an inplace Op.
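The two behaviours can be contrasted directly in Python (an illustrative stand-in; bytearrays play the role of the buffers):

```python
def add_one_pure(buf):
    # Pure: allocate a fresh buffer, leave the input untouched.
    return bytearray((b + 1) % 256 for b in buf)

def add_one_inplace(buf):
    # Inplace: mutate the input buffer itself.
    for i in range(len(buf)):
        buf[i] = (buf[i] + 1) % 256
    return buf

x = bytearray(b'\x00\x01\x02')
z = add_one_pure(x)
assert x == bytearray(b'\x00\x01\x02')   # input unchanged
assert z == bytearray(b'\x01\x02\x03')
add_one_inplace(x)
assert x == bytearray(b'\x01\x02\x03')   # input was modified
```

The pure version pays for a new allocation; the inplace version saves memory but destroys the old value of ``x``, which is exactly the trade-off the paragraph describes.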
......
......@@ -15,10 +15,10 @@ two types of optimizations: *global* optimizations and *local*
optimizations. A global optimization takes an :ref:`env` object (an
Env is a wrapper around a whole computation graph, you can see its
:ref:`documentation <env>` for more details) and navigates through it
in a suitable way, replacing some Results by others in the process. A
in a suitable way, replacing some Variables by others in the process. A
local optimization, on the other hand, is defined as a function on a
*single* :ref:`apply` node and must return either False (to mean that
nothing is to be done) or a list of new Results that we would like to
nothing is to be done) or a list of new Variables that we would like to
replace the node's outputs with. A :ref:`navigator` is a special kind
of global optimization which navigates the computation graph in some
fashion (in topological order, reverse-topological order, random
......@@ -68,7 +68,7 @@ A local optimization is an object which defines the following methods:
- **transform(node)**
- This method takes an :ref:`apply` node and returns either False to
signify that no changes are to be done or a list of Results which
signify that no changes are to be done or a list of Variables which
matches the length of the node's ``outputs`` list. When the
LocalOptimizer is applied by a Navigator, the outputs of the node
passed as argument to the LocalOptimizer will be replaced by the
......@@ -125,7 +125,7 @@ does additional checks to ensure that we are not messing up the
computation graph (note: if ReplaceValidate was already added by
another optimizer, ``extend`` will do nothing). In a nutshell,
``toolbox.ReplaceValidate`` grants access to ``env.replace_validate``
and ``env.replace_validate`` allows us to replace a Result with
and ``env.replace_validate`` allows us to replace a Variable with
another while respecting certain validation constraints. You can
browse the list of :ref:`features <envfeaturelist>` and see if some of
them might be useful to write optimizations with. For example, as an
......@@ -142,7 +142,7 @@ numerator is a multiplication we put the two operands in a and b, so
we can now say that ``z == (a*b)/y``. If ``y==a`` then ``z==b`` and if
``y==b`` then ``z==a``. When either case happens then we can replace z
by either a or b using ``env.replace_validate`` - else we do
nothing. You might want to check the documentation about :ref:`result`
nothing. You might want to check the documentation about :ref:`variable`
and :ref:`apply` to get a better understanding of the
pointer-following game you need to get ahold of the nodes of interest
for the simplification (x, y, z, a, b, etc.)
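The pointer-following itself can be sketched with plain-Python stand-ins for Variable and Apply (toy classes for illustration, not the real Theano API):

```python
class Var:
    def __init__(self, name, owner=None):
        self.name, self.owner = name, owner

class Apply:
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs
        self.outputs = [Var(op + "_out", owner=self)]

def simplify_div(z):
    """If z == (a*b)/y and y is a (or b), return b (or a); else None."""
    node = z.owner
    if node is None or node.op != "div":
        return None
    numerator, y = node.inputs
    if numerator.owner is not None and numerator.owner.op == "mul":
        a, b = numerator.owner.inputs
        if y is a:
            return b
        if y is b:
            return a
    return None

a, b = Var("a"), Var("b")
product = Apply("mul", [a, b]).outputs[0]
z = Apply("div", [product, a]).outputs[0]
assert simplify_div(z) is b
```

A real optimizer would then call ``env.replace_validate(z, b)`` with the replacement instead of returning it.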
......
......@@ -12,7 +12,7 @@ analogue in Python:
Theano Python
=============== ===========================================================
Apply function application / function call
Result function data / variable
Variable function data / variable
Op operations carried out in computation / function definition
Type data types
Module ??? class?
......
......@@ -40,7 +40,7 @@ Theano provides some generic Op classes which allow you to generate a
lot of ops at a lesser effort. For instance, Elemwise can be used to
make :term:`elementwise` operations easily whereas DimShuffle can be
used to make transpose-like transformations. These higher order Ops
are mostly NDArray-related, as this is Theano's specialty. An exposé of
are mostly Tensor-related, as this is Theano's specialty. An exposé of
them can therefore be found in :ref:`tensoroptools`.
......
......@@ -25,9 +25,9 @@ array(28.4)
Let's break this down into several steps. The first step is to define
two symbols, or Results, representing the quantities that you want
to add. Note that from now on, we will use the term :term:`Result`
to mean "symbol" (in other words, ``x``, ``y``, ``z`` are all Result
two symbols, or Variables, representing the quantities that you want
to add. Note that from now on, we will use the term :term:`Variable`
to mean "symbol" (in other words, ``x``, ``y``, ``z`` are all Variable
objects). The output of the function ``f`` is a :api:`numpy.ndarray`
with zero dimensions.
......@@ -50,16 +50,16 @@ is the type we assign to "0-dimensional arrays (`scalar`) of doubles
``dscalar`` is not a class. Therefore, neither ``x`` nor ``y``
are actually instances of ``dscalar``. They are instances of
:api:`NDArrayResult <theano.tensor.basic.NDArrayResult>`. ``x`` and ``y``
:api:`TensorVariable <theano.tensor.basic.TensorVariable>`. ``x`` and ``y``
are, however, assigned the theano Type ``dscalar`` in their ``type``
field, as you can see here:
>>> type(x)
<class 'theano.tensor.basic.NDArrayResult'>
<class 'theano.tensor.basic.TensorVariable'>
>>> x.type
NDArrayType(float64, scalar)
TensorType(float64, scalar)
>>> T.dscalar
NDArrayType(float64, scalar)
TensorType(float64, scalar)
>>> x.type == T.dscalar
True
......@@ -67,7 +67,7 @@ You can learn more about the structures in Theano in
the :ref:`advtutorial` and in :ref:`graphstructures`.
By calling ``T.dscalar`` with a string argument, you create a
:term:`Result` representing a floating-point scalar quantity with the
:term:`Variable` representing a floating-point scalar quantity with the
given name. If you provide no argument, the symbol will be unnamed. Names
are not required, but they can aid debugging.
......@@ -79,7 +79,7 @@ The second step is to combine ``x`` and ``y`` into their sum ``z``:
>>> z = x + y
``z`` is yet another :term:`Result` which represents the addition of
``z`` is yet another :term:`Variable` which represents the addition of
``x`` and ``y``. You can use the :api:`pp <theano.printing.pp>`
function to pretty-print out the computation associated to ``z``.
......@@ -95,9 +95,9 @@ and giving ``z`` as output:
>>> f = function([x, y], z)
The first argument to ``function`` is a list of :term:`Results <Result>`
The first argument to ``function`` is a list of :term:`Variables <Variable>`
that will be provided as inputs to the function. The second argument
is a single Result *or* a list of Results. For either case, the second
is a single Variable *or* a list of Variables. For either case, the second
argument is what we want to see as output when we apply the function.
``f`` may then be used like a normal Python function.
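What the compiled ``f`` computes can be imitated with an ordinary numpy function (a stand-in sketch, not the compiled Theano function):

```python
import numpy as np

def f(x, y):
    # mirrors function([x, y], z) where z = x + y
    return np.asarray(x) + np.asarray(y)

assert np.allclose(f(16.3, 12.1), 28.4)
```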
......@@ -122,7 +122,7 @@ our new function on 2D arrays:
array([[ 11., 22.],
[ 33., 44.]])
The result is a numpy array. We can also use numpy arrays directly as
The result is a numpy array. We can also use numpy arrays directly as
inputs:
>>> import numpy
......
......@@ -67,7 +67,7 @@ squared difference between two matrices ``x`` and ``y`` at the same time:
>>> diff_squared = diff**2
>>> f = function([x, y], [diff, abs_diff, diff_squared])
When we use the function, it will return the two results (the printing
When we use the function, it will return the three results (the printing
was reformatted for readability):
>>> f([[1, 1], [1, 1]], [[0, 1], [2, 3]])
......@@ -136,7 +136,7 @@ with respect to the second. In this way, Theano can be used for
.. note::
The result of ``T.grad`` has the same dimensions as the
The result of ``T.grad`` has the same dimensions as the
second argument. This is exactly like the first derivative if the
first argument is a scalar or a tensor of size 1 but not if it is
larger. For more information on the semantics when the first
......@@ -205,11 +205,11 @@ First let's define the accumulator function:
The first argument is a pair. As we saw in the previous section, this
means that ``inc`` is an input with a default value of 1. The second
argument has syntax that creates an internal state. The syntax is
``((state_result, new_state_result), initial_value)``.
The internal storage associated with ``state_result`` is initialized to
``((state_variable, new_state_variable), initial_value)``.
The internal storage associated with ``state_variable`` is initialized to
``initial_value``. Every time ``accumulator`` is called, the value
of the internal ``state`` will be replaced by the value computed as
``new_state``. In this case, the state will be replaced by the result
``new_state``. In this case, the state will be replaced by the result
of incrementing it by ``inc``.
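The update semantics can be mimicked in plain Python with a closure (an analogy only; nothing below is Theano API):

```python
def make_accumulator(initial_value=0):
    state = {"value": initial_value}      # stands in for the internal storage

    def accumulator(inc=1):
        # the new state replaces the old one after every call,
        # just as new_state replaces state in the Theano version
        state["value"] = state["value"] + inc
        return state["value"]

    return accumulator

acc = make_accumulator()
assert acc() == 1       # default inc of 1
assert acc(10) == 11    # state persists between calls
```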
We recommend (insist?) that internal state arguments occur after any
......@@ -223,8 +223,8 @@ of other inputs.
Anyway, let's try it out! The state can be accessed using the square
brackets notation ``[]``. You may access the state either by using
the :ref:`result` representing it or the name of that
:ref:`result`. In our example we can access the state either with the
the :ref:`variable` representing it or the name of that
:ref:`variable`. In our example we can access the state either with the
``state`` object or the string 'state'.
>>> accumulator[state]
......
......@@ -26,17 +26,17 @@ The idea here is that we've compiled the symbolic graph (``2*x``) into a functio
Inputs
======
The ``inputs`` argument to ``theano.function`` is a list, containing the ``Result`` instances for which values will be specified at the time of the function call. But inputs can be more than just Results.
``In`` instances let us attach properties to ``Results`` to tell function more about how to use them.
The ``inputs`` argument to ``theano.function`` is a list, containing the ``Variable`` instances for which values will be specified at the time of the function call. But inputs can be more than just Variables.
``In`` instances let us attach properties to ``Variables`` to tell function more about how to use them.
**In(result, name=None, value=None, update=None, mutable=False)** returns an ``In`` instance:
**In(variable, name=None, value=None, update=None, mutable=False)** returns an ``In`` instance:
- ``result``: a Result instance.
- ``variable``: a Variable instance.
This will be assigned a value before running the function,
not computed from its owner.
- ``name``: Any type. (If autoname_input=True, defaults to result.name).
- ``name``: Any type. (If autoname_input=True, defaults to variable.name).
If name is a valid Python identifier, this input can be set by
``kwarg``, and its value can be accessed by ``self.<name>``.
......@@ -49,9 +49,9 @@ The ``inputs`` argument to ``theano.function`` is a list, containing the ``Resul
Default: ``None``
- ``update``: Result instance
- ``update``: Variable instance
This expression Result will replace ``value`` after each function call.
This expression Variable will replace ``value`` after each function call.
Default: ``None``
......@@ -63,7 +63,7 @@ The ``inputs`` argument to ``theano.function`` is a list, containing the ``Resul
- ``autoname``: Bool
``True``: if ``name`` is None and the Result has a name, it will be taken
``True``: if ``name`` is None and the Variable has a name, it will be taken
as the input's name.
``False``: the name is the exact value passed as the name parameter
......@@ -121,7 +121,7 @@ Advanced: Sharing Storage Between Functions
-------------------------------------------
``value`` can be a :api:`theano.gof.Container` as well as a literal.
This permits linking a value of a Result in one function to the value of a Result in another function.
This permits linking a value of a Variable in one function to the value of a Variable in another function.
By using a ``Container`` as a value we can implement shared variables between functions.
For example, consider the following program.
......@@ -141,7 +141,7 @@ For example, consider the following program.
The functions ``inc`` and ``dec`` operate on a shared internal value for ``s``.
Theano's Module system uses this mechanism to share storage between Methods.
The container being shared doesn't have to correspond to the same Result in both functions,
The container being shared doesn't have to correspond to the same Variable in both functions,
but that's usually how this mechanism is used.
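The mechanism amounts to two functions closing over one mutable storage cell, roughly as follows (a plain-Python sketch of the idea, not the ``theano.gof.Container`` API):

```python
class Container:
    """A single storage cell shared by several functions."""
    def __init__(self, value):
        self.value = value

s = Container(0)

def inc(amount):
    s.value += amount
    return s.value

def dec(amount):
    s.value -= amount
    return s.value

inc(3)               # s.value is now 3
assert dec(1) == 2   # both functions saw the same storage
```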
Input Argument Restrictions
......@@ -161,11 +161,11 @@ The following restrictions apply to the inputs to ``theano.function``:
have the same name, then the function will raise an exception. [**Which
exception?**]
- Two ``In`` instances may not name the same Result. I.e. you cannot
- Two ``In`` instances may not name the same Variable. I.e. you cannot
give the same parameter multiple times.
If no name is specified explicitly for an In instance, then its name
will be taken from the Result's name. Note that this feature can cause
will be taken from the Variable's name. Note that this feature can cause
harmless-looking input lists to not satisfy the two conditions above.
In such cases, Inputs should be named explicitly to avoid problems
such as duplicate names, and named arguments preceding unnamed ones.
......@@ -198,7 +198,7 @@ Both ``value`` and ``container`` properties provide dictionary-like access based
- integer keys: you can look up a value/container by its position in the input list;
- name keys: you can look up a value/container by its name;
- Result keys: you can look up a value/container by the Result it corresponds to.
- Variable keys: you can look up a value/container by the Variable it corresponds to.
In addition to these access mechanisms, there is an even more convenient
method to access values by indexing a Function directly by typing
......@@ -234,7 +234,7 @@ Input Shortcuts
Every element of the inputs list will be upgraded to an In instance if necessary.
- a Result instance ``r`` will be upgraded like ``In(r)``
- a Variable instance ``r`` will be upgraded like ``In(r)``
- a tuple ``(name, r)`` will be ``In(r, name=name)``
......@@ -285,13 +285,13 @@ Outputs
The ``outputs`` argument to function can be one of
- ``None``, or
- a Result or ``Out`` instance, or
- a list of Results or ``Out`` instances.
- a Variable or ``Out`` instance, or
- a list of Variables or ``Out`` instances.
An ``Out`` instance is a structure that lets us attach options to individual output ``Result`` instances,
similarly to how ``In`` lets us attach options to individual input ``Result`` instances.
An ``Out`` instance is a structure that lets us attach options to individual output ``Variable`` instances,
similarly to how ``In`` lets us attach options to individual input ``Variable`` instances.
**Out(result, borrow=False)** returns an ``Out`` instance:
**Out(variable, borrow=False)** returns an ``Out`` instance:
* ``borrow``
......@@ -304,9 +304,9 @@ similarly to how ``In`` lets us attach options to individual input ``Result`` in
If a single ``Result`` or ``Out`` instance is given as argument, then the compiled function will return a single value.
If a single ``Variable`` or ``Out`` instance is given as argument, then the compiled function will return a single value.
If a list of ``Result`` or ``Out`` instances is given as argument, then the compiled function will return a list of their values.
If a list of ``Variable`` or ``Out`` instances is given as argument, then the compiled function will return a list of their values.
.. code-block:: python
......
......@@ -44,7 +44,7 @@ Here we instantiate an empty Module.
>>> m.state = Member(T.dscalar())
Then we declare a Result for use with our Module. That Result will
Then we declare a Variable for use with our Module. That Variable will
be a :ref:`member` of the Module, which means that it will be
accessible as a field of the object we will create later (for reading
and writing). It will also be accessible from any :ref:`method`
......@@ -52,7 +52,7 @@ defined in our Module.
.. note::
There is no need to name the Result explicitly here. ``m.state`` will
There is no need to name the Variable explicitly here. ``m.state`` will
be given the name 'state' automatically.
......@@ -82,7 +82,7 @@ This line describes how to compute the new state.
.. note::
Here new_state is implicitly declared as External since it is
illegal to declare a Result as a Member if it is the result of
illegal to declare a Variable as a Member if it is the result of
previous computations.
......@@ -90,10 +90,10 @@ This line describes how to compute the new state.
Here we declare a Method. The three arguments are as follows:
* **inputs**: a list of input Results
* **outputs**: a list of output Results
* **updates**: a dictionary mapping a Result declared as a Member to a
Result representing the computation of the next state of the member.
* **inputs**: a list of input Variables
* **outputs**: a list of output Variables
* **updates**: a dictionary mapping a Variable declared as a Member to a
Variable representing the computation of the next state of the member.
If possible, you may also give the updates as keyword arguments, as
in: ``Method(m.inc, m.new_state, state = m.new_state)``. This implies
......@@ -206,7 +206,7 @@ give a method called ``_instance_print_state`` to our Module.
acc.print_state() # --> prints "state is: 0.0"
Any method called like ``_instance_XXX`` will result in the object
Any method called like ``_instance_XXX`` will result in the object
obtained through a call to ``make`` to gain an ``XXX`` method. Note
that when we define ``_instance_print_state`` there are two "self"
arguments: ``self`` which is *symbolic* and ``obj`` which contains
......
......@@ -60,23 +60,23 @@ as ``theano.tensor.frow``. If you want a matrix of unsigned
Each of the types described above can be constructed by two methods:
a singular version (e.g., ``dmatrix``) and a plural version
(``dmatrices``). When called, the singular version takes a single
argument which is the name of the :term:`Result` we want to make and it
makes a single Result of that type. The plural version can either take
argument which is the name of the :term:`Variable` we want to make and it
makes a single Variable of that type. The plural version can either take
an integer or several strings. If an integer is provided, the method
will return that many Results and if strings are provided, it will
create one Result for each string, using the string as the Result's
will return that many Variables and if strings are provided, it will
create one Variable for each string, using the string as the Variable's
name. For example:
.. code-block:: python
from theano.tensor import *
x = dmatrix() # creates one Result with no name
x = dmatrix('x') # creates one Result with name 'x'
xyz = dmatrix('xyz') # creates one Result with name 'xyz'
x = dmatrix() # creates one Variable with no name
x = dmatrix('x') # creates one Variable with name 'x'
xyz = dmatrix('xyz') # creates one Variable with name 'xyz'
x, y, z = dmatrices(3) # creates three Results with no names
x, y, z = dmatrices('x', 'y', 'z') # creates three Results named 'x', 'y' and 'z'
x, y, z = dmatrices(3) # creates three Variables with no names
x, y, z = dmatrices('x', 'y', 'z') # creates three Variables named 'x', 'y' and 'z'
Custom tensor types
......@@ -84,7 +84,7 @@ Custom tensor types
If you wish to use a type of tensor which is not already available here
(for example, a 3D tensor) you can build an appropriate type using
``theano.tensor.NDArrayType``. The first argument you pass is the ``dtype``
``theano.tensor.TensorType``. The first argument you pass is the ``dtype``
and the second is the ``broadcastable pattern``.
Where ``dtype`` is one of:
......@@ -110,7 +110,7 @@ complex128 complex 128 (two float64)
Even though ``theano.tensor`` does not define any type
using ``complex`` dtypes (``complex64`` or ``complex128``),
you can define them explicitly with ``NDArrayType`` (see example
you can define them explicitly with ``TensorType`` (see example
below). However, few operations are fully supported for complex
types: as of version 0.1, only elementary operations (``+-*/``)
have C implementations. Additionally, complex types have received
......@@ -154,11 +154,11 @@ bytes, we would do:
.. code-block:: python
# 3D tensor of signed bytes
mytype = theano.tensor.NDArrayType('uint8', [False]*3)
mytype = theano.tensor.TensorType('uint8', [False]*3)
# complex types (based on complex64)
my_cscalar = theano.tensor.NDArrayType('complex64', [])
my_cmatrix = theano.tensor.NDArrayType('complex64', [False, False])
my_cscalar = theano.tensor.TensorType('complex64', [])
my_cmatrix = theano.tensor.TensorType('complex64', [False, False])
Ops
......
......@@ -35,7 +35,7 @@ Glossary of terminology
Unlike numpy which does broadcasting dynamically, Theano needs
to know, for any operation which supports broadcasting, which
dimensions will need to be broadcasted. When applicable, this
information is given in the :term:`Type` of a :term:`Result`.
information is given in the :term:`Type` of a :term:`Variable`.
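For contrast, here is numpy's dynamic behaviour; Theano records the equivalent information statically, in the broadcastable pattern of the Variable's Type:

```python
import numpy as np

row = np.ones((1, 3))   # first dimension broadcastable  -> pattern (True, False)
col = np.ones((2, 1))   # second dimension broadcastable -> pattern (False, True)

# numpy discovers at run time that the shapes line up:
assert (row + col).shape == (2, 3)
```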
See also:
......@@ -84,24 +84,24 @@ Glossary of terminology
pure
WRITEME
Result
A :ref:`result` is the main data structure you work with when
type
WRITEME
Variable
A :ref:`variable` is the main data structure you work with when
using Theano. The symbolic inputs that you operate on are
Results and what you get from applying various operations to
these inputs are also Results. For example, when I type
Variables and what you get from applying various operations to
these inputs are also Variables. For example, when I type
>>> x = theano.tensor.ivector()
>>> y = -x
``x`` and ``y`` are both Results, i.e. instances of the
:api:`Result <theano.gof.graph.Result>` class. The
``x`` and ``y`` are both Variables, i.e. instances of the
:api:`Variable <theano.gof.graph.Variable>` class. The
:term:`Type` of both ``x`` and ``y`` is
``theano.tensor.ivector``.
For more information, see: :ref:`result`.
type
WRITEME
For more information, see: :ref:`variable`.
view
WRITEME
......@@ -177,14 +177,14 @@ Glossary of terminology
A :term:`Tensor` is for storing a number of objects that
all have the same type. In computations, the storage for
:term:`TensorResult` instances is a ``numpy.ndarray``.
:term:`TensorVariable` instances is a ``numpy.ndarray``.
Instances of ``numpy.ndarray`` have a ``dtype`` property
to indicate which data type (i.e., byte, float, double, python
object) can be stored in each element. The ``dtype`` property
of Tensors is a little different: it is a string which can be
converted to a numpy ``dtype`` object. Still the meaning
is pretty much the same: elements of the ``numpy.ndarray``
corresponding to a :term:`TensorResult` in a particular
corresponding to a :term:`TensorVariable` in a particular
computation must have the corresponding data type.
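The correspondence is easy to check, since numpy accepts the same dtype strings (a minimal sketch):

```python
import numpy as np

dtype_string = 'float64'        # the kind of dtype string a Tensor carries
dt = np.dtype(dtype_string)     # ...converted to a numpy dtype object

arr = np.zeros(3, dtype=dt)
assert arr.dtype.name == dtype_string
```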
......@@ -266,30 +266,30 @@ Glossary of terminology
WRITEME.
Result
Variable
a Type-related graph node (a variable)
A Result
(`Result API <http://lgcm.iro.umontreal.ca/epydoc/theano.gof.graph.Result-class.html>`_)
A Variable
(`Variable API <http://lgcm.iro.umontreal.ca/epydoc/theano.gof.graph.Variable-class.html>`_)
is theano's variable. It symbolically represents a value (which
can be a number, vector, matrix, tensor, etc.).
The inputs and outputs of every :term:`Op` are Result instances.
The input and output arguments to create a :term:`function` are also Results.
A Result is like a strongly-typed variable in some other languages; each Result contains a reference to a :term:`TTI` (Theano Type Instance) that defines the kind of value that can be associated to the Result by a :term:`function`.
The inputs and outputs of every :term:`Op` are Variable instances.
The input and output arguments to create a :term:`function` are also Variables.
A Variable is like a strongly-typed variable in some other languages; each Variable contains a reference to a :term:`TTI` (Theano Type Instance) that defines the kind of value that can be associated to the Variable by a :term:`function`.
A Result is a container for four important fields:
A Variable is a container for four important fields:
type
a :term:`TTI` defining the kind of value this Result can have,
a :term:`TTI` defining the kind of value this Variable can have,
owner
either None (for graph roots) or the :term:`Apply` instance (i.e. result of applying an :term:`Op`) of which ``self`` is an output,
either None (for graph roots) or the :term:`Apply` instance (i.e. result of applying an :term:`Op`) of which ``self`` is an output,
index
the integer such that ``owner.outputs[index] is this_result`` (ignored if ``owner`` is None)
the integer such that ``owner.outputs[index] is this_variable`` (ignored if ``owner`` is None)
name
a string to use in pretty-printing and debugging.
There are two subclasses related to Result:
There are two subclasses related to Variable:
:term:`Value`
a Result with a data field.
a Variable with a data field.
:term:`Constant`
like ``Value``, but the data it contains cannot be modified.
......@@ -320,10 +320,10 @@ Glossary of terminology
e = d + b
theano.function([d,b], [e]) # this works. d's default value of 1.5 is ignored.
The python variables ``a,b,c`` all refer to instances of type Result.
The Result refered to by ``a`` is also an instance of ``Constant``.
The python variables ``a,b,c`` all refer to instances of type Variable.
The Variable referred to by ``a`` is also an instance of ``Constant``.
Theano.:term:`function` uses the :term:`Apply` instances' ``inputs`` field together with each Result's ``owner`` field to determine which inputs are necessary to compute the function's outputs.
Theano.:term:`function` uses the :term:`Apply` instances' ``inputs`` field together with each Variable's ``owner`` field to determine which inputs are necessary to compute the function's outputs.
Scalar
......@@ -344,7 +344,7 @@ Glossary of terminology
Stabilizations are like :term:`optimizations <Optimization>` in the sense that they are often pattern-based sub-graph substitutions.
Stabilizations are unlike :term:`optimizations <Optimization>` in that
- they are typically applied even when intermediate results in the subgraph have external :term:`clients`,
- they are typically applied even when intermediate variables in the subgraph have external :term:`clients`,
- they are typically prioritized over transformations which improve run-time speed, and
- they are typically not faster than the naive implementation.
......@@ -359,17 +359,17 @@ Glossary of terminology
* :term:`broadcastable <Broadcasting>` - which dimensions are broadcastable
* :term:`dtype` - what kind of elements will the tensor contain
See also :term:`TensorResult`.
See also :term:`TensorVariable`.
TensorResult
:term:`Results <Result>` of type :term:`Tensor` are of class
TensorResult (`TensorResult API <http://lgcm.iro.umontreal.ca/epydoc/theano.tensor.TensorResult-class.html>`_).
``TensorResult`` adds operator overloading so that ``TensorResult`` instances can be used
in mathematical expressions. When any input to an expression is a ``TensorResult`` then the
expression will evaluate to an ``TensorResult`` and a :term:`graph` corresponding to
TensorVariable
:term:`Variables <Variable>` of type :term:`Tensor` are of class
TensorVariable (`TensorVariable API <http://lgcm.iro.umontreal.ca/epydoc/theano.tensor.TensorVariable-class.html>`_).
``TensorVariable`` adds operator overloading so that ``TensorVariable`` instances can be used
in mathematical expressions. When any input to an expression is a ``TensorVariable`` then the
expression will evaluate to a ``TensorVariable`` and a :term:`graph` corresponding to
the expression.
Many shortcuts exist for creating ``TensorResult`` instances:
Many shortcuts exist for creating ``TensorVariable`` instances:
* ``<t>scalar`` - create a tensor of rank 0
* ``<t>vector`` - create a tensor of rank 1
......@@ -397,24 +397,24 @@ Glossary of terminology
# declare a symbolic floating-point vector using __call__
b = tensor.fvector()
# create a second Result with the same TTI
# create a second Variable with the same TTI
c = tensor.fvector()
(``tensor.fvector``) is a TTI because it is an instance of the (``theano.tensor.Tensor``) class, which is a subclass of (``theano.Type``).
Whenever you create a variable in theano (technically, a :term:`Result`) it will contain a reference to a TTI.
That reference is typically constant during the lifetime of the Result.
Whenever you create a variable in theano (technically, a :term:`Variable`) it will contain a reference to a TTI.
That reference is typically constant during the lifetime of the Variable.
Many variables can refer to a single TTI, as do ``b`` and ``c`` above.
The TTI defines the kind of value which might end up in that variable when executing a :term:`function`.
In this sense, theano is like a strongly-typed language.
In our example above, ``b`` is a result which is guaranteed to corresond to a ``numpy.ndarray`` of rank 1 when we try to do some computations with it.
In our example above, ``b`` is a variable which is guaranteed to correspond to a ``numpy.ndarray`` of rank 1 when we try to do some computations with it.
Many :term:`Ops <Op>` will raise an exception if their inputs do not have the correct types (TTI references).
TTI references are also useful to do type-checking in pattern-based optimizations.
Type
:term:`Results <Result>` are strongly typed by :term:`Type` instances
:term:`Variables <Variable>` are strongly typed by :term:`Type` instances
`theano.Type <http://lgcm.iro.umontreal.ca/epydoc/theano.gof.type.Type-class.html>`_
is an important abstract class in the creation and compilation of theano graphs.
......@@ -422,7 +422,7 @@ Glossary of terminology
Type instances are mainly responsible for three things:
* filtering potential values to conform to the restrictions imposed by the type (also known as casting),
* creating :term:`Result` instances whose type is ``self`` (conventionally, ``__call__`` does this), and
* creating :term:`Variable` instances whose type is ``self`` (conventionally, ``__call__`` does this), and
* providing the C code that interfaces python objects with C :term:`Op` implementations.
Theano comes with several subclasses of ``theano.type`` such as:
......@@ -446,11 +446,11 @@ Glossary of terminology
Value
:term:`Value` (`Value API <http://lgcm.iro.umontreal.ca/epydoc/theano.gof.graph.Value-class.html doc>`_)
and :term:`Constant` are subclasses of
:term:`Result`, which means they serve more or less the
:term:`Variable`, which means they serve more or less the
same purpose. There is however one important difference:
whereas :term:`Result` is purely symbolic, :term:`Value` and
whereas :term:`Variable` is purely symbolic, :term:`Value` and
:term:`Constant` can hold data in their ``data`` pointer and
''must'' have a ''None'' owner (can't be the result of some other
''must'' have a ''None'' owner (can't be the result of some other
Theano computation). This can be practical because the compiler
knows how to assign values to those nodes, thereby creating a
sort of closure.
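A rough Python analogy (not Theano code): compiling a graph that contains a Constant is like closing over a fixed value:

```python
def compile_with_constant(constant_value):
    # the compiler "knows how to assign values to those nodes":
    # the constant is baked into the returned function, like a closure
    def f(x):
        return x + constant_value
    return f

f = compile_with_constant(1.5)
assert f(2.0) == 3.5
```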
......
......@@ -43,7 +43,7 @@ Extending Theano
================
- Read about `How Theano Works <UserAdvanced.html>`__. This introduces the
major interface data structures: Op, Type, Result, Apply.
major interface data structures: Op, Type, Variable, Apply.
- Read about `Extending theano <extending.html>`__.
......
......@@ -18,7 +18,7 @@ Examples of parameterized Ops in theano:
``Reduce(<scalar op>, <axes>)``
reduces the specified axes using the provided scalar op.
``Add(<output type inferrer>)``
adds scalars and puts the result in a scalar whose type is inferred from the input types using ``output_type_inferrer(*inputs)``
adds scalars and puts the result in a scalar whose type is inferred from the input types using ``output_type_inferrer(*inputs)``
``Composite(<graph>)``
makes a single Op out of a graph of scalar operations.
......@@ -46,14 +46,14 @@ The ``make_node`` method is expected to have the following signature:
make_node(self, *inputs)
``inputs`` may be a list of anything that the user wants to provide as symbolic input (symbolic: standing for the actual values that will be passed when the graph is compiled into an executable function). [*The Theano intro should describe symbolic in greater depth, and we should link to that from here.*] This may or may not include Result instances (but if you want the inputs of this Op to sometimes be outputs of another Op, then the inputs should be Result instances). [*What else could they be? Constant, Values, ...*] The return value should be an instance of [GraphStructures Apply] (see the example below). Here are the tasks typically handled in ``make_node``.
``inputs`` may be a list of anything that the user wants to provide as symbolic input (symbolic: standing for the actual values that will be passed when the graph is compiled into an executable function). [*The Theano intro should describe symbolic in greater depth, and we should link to that from here.*] This may or may not include Variable instances (but if you want the inputs of this Op to sometimes be outputs of another Op, then the inputs should be Variable instances). [*What else could they be? Constant, Values, ...*] The return value should be an instance of [GraphStructures Apply] (see the example below). Here are the tasks typically handled in ``make_node``.
* Check that the inputs are valid (type checking, etc.). [*Since we don't actually have values, what can we do besides type checking?*]
* If needed, wrap the inputs in Result instances with the proper type.
* Make the Result instances that will serve as the outputs of the node.
* If needed, wrap the inputs in Variable instances with the proper type.
* Make the Variable instances that will serve as the outputs of the node.
* ``return Apply(self, <wrapped inputs>, <outputs>)``
The ``inputs`` and ``outputs`` arguments to ``Apply`` must be lists of ``Result`` instances (or instances of subclasses of ``Result``). The inputs given to ``Apply`` do not have to be the same as the inputs passed to ``make_node``, but it is recommended that the order corresponds. [*why?*] The behavior of ``make_node`` should not depend on the structure of the graph of [*or?*] its inputs: it may look at the type and type fields of its inputs, but not at their owner field, because modifications to the graph structure do not use ``make_node``. [*???*]
The ``inputs`` and ``outputs`` arguments to ``Apply`` must be lists of ``Variable`` instances (or instances of subclasses of ``Variable``). The inputs given to ``Apply`` do not have to be the same as the inputs passed to ``make_node``, but it is recommended that the order corresponds. [*why?*] The behavior of ``make_node`` should not depend on the structure of the graph of [*or?*] its inputs: it may look at the type and type fields of its inputs, but not at their owner field, because modifications to the graph structure do not use ``make_node``. [*???*]
Example:
......@@ -66,14 +66,14 @@ Example:
def make_node(self, x, y):
# note 1: constant, int64 and Scalar are defined in theano.scalar
# note 2: constant(x) is equivalent to Constant(type = int64, data = x)
# note 3: the call int64() is equivalent to Result(type = int64) or Result(type = Scalar(dtype = 'int64'))
# note 3: the call int64() is equivalent to Variable(type = int64) or Variable(type = Scalar(dtype = 'int64'))
if isinstance(x, int):
x = constant(x)
elif not isinstance(x, Result) or not x.type == int64:
elif not isinstance(x, Variable) or not x.type == int64:
raise TypeError("expected an int64 Scalar")
if isinstance(y, int):
y = constant(y)
elif not isinstance(y, Result) or not x.type == int64:
elif not isinstance(y, Variable) or not x.type == int64:
raise TypeError("expected an int64 Scalar")
inputs = [x, y]
outputs = [int64()]
......@@ -82,12 +82,12 @@ Example:
#...
add = Add() # I make an instance of Add
node1 = add.make_node(int64(), int64()) # I make a node with two Result inputs
node1 = add.make_node(int64(), int64()) # I make a node with two Variable inputs
node2 = add.make_node(1, 2) # this works too
node3 = add.make_node(int64(), 79) # this works three
node4 = add.make_node(float64(), int64()) # this raises a TypeError
[*What type is an instance of Add? It's an Apply? But that's not a Result, and cannot be used as input for another Op.*]
[*What type is an instance of Add? It's an Apply? But that's not a Variable, and cannot be used as input for another Op.*]
Two Apply nodes ``node1`` and ``node2`` are *assumed* by the compiler to represent the same behavior if:
1. ``node1.op == node2.op``
......@@ -99,7 +99,7 @@ It is considered an *error* to have conditions 1 and 2 but not condition 3. A co
``__call__``
----------------
In ``Op``, ``__call__`` is defined in terms of ``make_node``. Instead of returning a node, it returns the output Results directly, which is practical from a UI standpoint. Here is pseudocode:
In ``Op``, ``__call__`` is defined in terms of ``make_node``. Instead of returning a node, it returns the output Variables directly, which is practical from a UI standpoint. Here is pseudocode:
.. code-block:: python
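The body of this pseudocode is collapsed in the diff view. A plausible reconstruction of the idea, using hypothetical stand-in classes (not the real ``gof.Apply``/``gof.Op``) and assuming ``default_output`` selects which output to return:

```python
# Hypothetical stand-ins; only the __call__ logic reflects the text above.
class Apply:
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs

class Op:
    default_output = None  # subclasses may set this to an output index

    def make_node(self, *inputs):
        raise NotImplementedError

    def __call__(self, *inputs):
        node = self.make_node(*inputs)
        if self.default_output is not None:
            return node.outputs[self.default_output]
        # with a single output, return it directly rather than in a list
        if len(node.outputs) == 1:
            return node.outputs[0]
        return node.outputs

class AddLike(Op):
    def make_node(self, x, y):
        return Apply(self, [x, y], ["sum"])

print(AddLike()("a", "b"))  # sum
```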
......@@ -122,7 +122,7 @@ perform(self, node, inputs, output_storage)
Where:
* *node*: a pointer to an Apply instance - ``node`` is assumed to be produced by a previous call to ``self.make_node``.
* *inputs*: *not* the same as ``node.inputs`` - it is a list of values. [*i.e. actually data, not just symbolic stuff?*]
* *output_storage*: *not* the same as ``node.outputs`` - it is a list of lists of length 1 where the results of the computation must be put.
[*Can you explain better how inputs is not node.inputs and output_storage is not node.outputs?*]
......@@ -138,7 +138,7 @@ Here is an example of a properly defined ``perform``:
# this does z = x + y
x, y = inputs # extract the two inputs
z, = output_storage # extract the one storage (the comma after z is not optional)
z[0] = x + y # we must put the result in z[0]
...
add = Add() # I make an instance of Add
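To make the ``output_storage`` convention concrete, here is a tiny self-contained stand-in (plain Python, no Theano) showing why the result must go into slot 0 of a one-element list: the caller keeps a reference to the list, so writing into ``z[0]`` is visible to it, while rebinding ``z`` would not be.

```python
# Minimal stand-in illustrating the output_storage convention (not real Theano):
# each output gets a one-element list, and perform writes into slot 0.
def perform_add(inputs, output_storage):
    x, y = inputs
    z, = output_storage          # one storage cell per output
    z[0] = x + y                 # the result goes into z[0]

storage = [[None]]
perform_add([2, 3], storage)
print(storage[0][0])  # 5
```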
......@@ -175,8 +175,8 @@ grad
where:
* ``inputs`` is a list of Result instances. It is assumed to be the ``inputs`` field of a node produced by ``make_node``.
* ``output_gradients`` is a list of Result instances. They have the same properties as the outputs of the node, but are filled with gradient values.
* ``inputs`` is a list of Variable instances. It is assumed to be the ``inputs`` field of a node produced by ``make_node``.
* ``output_gradients`` is a list of Variable instances. They have the same properties as the outputs of the node, but are filled with gradient values.
Essentially, the semantics are:
......@@ -192,7 +192,7 @@ Essentially, the semantics are:
return gz*dz/dx + gw*dw/dx, gz*dz/dy + gw*dw/dy
More specifically,
``grad`` must return a list or tuple of input gradients, as many as there are inputs. Let C be a Result (currently assumed to be a scalar) that depends through a theano symbolic expression on the node outputs. Then each output_gradients[i] represents symbolically dC/doutputs[i]. The returned input gradients should represent symbolically dC/dinputs[i].
``grad`` must return a list or tuple of input gradients, as many as there are inputs. Let C be a Variable (currently assumed to be a scalar) that depends through a theano symbolic expression on the node outputs. Then each output_gradients[i] represents symbolically dC/doutputs[i]. The returned input gradients should represent symbolically dC/dinputs[i].
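As a concrete illustration of this contract, here is a sketch of the chain-rule bookkeeping ``grad`` performs, for a hypothetical ``z = x * y`` op, done with plain numbers rather than symbolic Variables:

```python
# Given gz = dC/dz for a hypothetical z = x * y op, grad returns
# (dC/dx, dC/dy) = (gz * y, gz * x): one gradient per input.
def mul_grad(inputs, output_gradients):
    x, y = inputs
    gz, = output_gradients
    return gz * y, gz * x

print(mul_grad((3.0, 4.0), (1.0,)))  # (4.0, 3.0)
```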
Example:
......@@ -253,7 +253,7 @@ Example: if we expect to call the op repeatedly on incrementally bigger inputs,
"""
default_output = 0
def make_node(self, x, y):
return Apply(self, [x,y], [x.type.make_result(), x.type.make_result()])
return Apply(self, [x,y], [x.type.make_variable(), x.type.make_variable()])
def perform(self, node, (x, y), (z, stor)):
if z[0] is None or stor[0] is None:
......
......@@ -50,12 +50,12 @@ Usage:
.. code-block:: python
#module.state = result
#module.state = variable
module.state = T.scalar()
A ``Member`` represents a state variable (i.e., whose value remains after a ``Method`` is called). It will be named automatically after that field and it will be an implicit input of all ``Methods`` of the ``Module``. Its storage (i.e. where the value is stored) will be shared by all ``Methods`` of the ``Module``.
A ``Result`` which is the result of a previous computation (by opposition to being ``updated``) is not a ``Member``. Internally this is called an External. You should not need to care about this.
A ``Variable`` which is the result of a previous computation (as opposed to being ``updated``) is not a ``Member``. Internally this is called an External. You should not need to care about this.
For sharing state between modules, see ``Inner Module`` section.
......@@ -100,7 +100,7 @@ Module Interface
def resolve(self, symbol, filter = None)
Resolves a symbol in this module. The symbol can be a string or a ``Result``. If the string contains dots (eg ``"x.y"``), the module will resolve the symbol hierarchically in its inner modules. The filter argument is None or a class and it can be used to restrict the search to ``Member`` or ``Method`` instances for example.
Resolves a symbol in this module. The symbol can be a string or a ``Variable``. If the string contains dots (eg ``"x.y"``), the module will resolve the symbol hierarchically in its inner modules. The filter argument is None or a class and it can be used to restrict the search to ``Member`` or ``Method`` instances for example.
.. code-block:: python
......
......@@ -30,14 +30,14 @@ you compute the gradient, then there is no problem.
If an Op does not define ``grad``, and this Op *does* appear in the path when
you compute the gradient, **WRITEME**.
Gradients for a particular result can be one of four kinds:
Gradients for a particular variable can be one of four kinds:
1) forgot to implement it
You will get an exception of the following form.
theano.gof.utils.MethodNotDefined: ('grad', <class 'pylearn.algorithms.sandbox.cost.LogFactorial'>, 'LogFactorial')
2) a symbolic result
2) a symbolic variable
3) None / zero
4) undefined mathematically
currently, there is no way for a grad() method to distinguish between cases 3
......@@ -123,7 +123,7 @@ Guillaume can you make sure to hit these points:
* There are a lot of tests that define their own epsilon, but this should be standardized. e.g. in test_elemwise.py ``self.failUnless((numpy.abs(f(xv) - zv) < 1e-10).all())``
* If the expected result of a test is that an Exception is thrown, how do we correctly detect and handle that?
nosetests has ``failUnlessRaises``
......
......@@ -2,7 +2,7 @@
.. _tensoroptools:
================
NDArray Op Tools
Tensor Op Tools
================
WRITEME - describe how to use Elemwise here
......
......@@ -2,7 +2,7 @@
Theano is an optimizing compiler in Python, built to evaluate complicated expressions
(especially matrix-valued ones) as quickly as possible.
Theano compiles expression graphs (see :doc:`graph` ) that are built by Python code.
The expressions in these graphs are called `Apply` nodes and the variables in these graphs are called `Result` nodes.
The expressions in these graphs are called `Apply` nodes and the variables in these graphs are called `Variable` nodes.
You compile a graph by calling `function`, which takes a graph, and returns a callable object.
One of theano's most important features is that `function` can transform your graph before
......@@ -29,7 +29,7 @@ from gof import \
CLinker, OpWiseCLinker, DualLinker, Linker, LocalLinker, PerformLinker, \
Container, \
InconsistencyError, Env, \
Apply, Result, Constant, Value, \
Apply, Variable, Constant, Value, \
Op, \
opt, \
toolbox, \
......
......@@ -6,8 +6,8 @@ from function_module import function
class OpFromGraph(gof.Op):
"""
This create an L{Op} from a list of input results and a list of output
results.
This creates an L{Op} from a list of input variables and a list of output
variables.
The signature is the same as the signature of L{FunctionFactory}
and/or function and the resulting L{Op}'s perform will do the same
......@@ -62,9 +62,9 @@ class OpFromGraph(gof.Op):
[type() for type in self.output_types])
def perform(self, node, inputs, outputs):
results = self.fn(*inputs)
for output, result in zip(outputs, results):
output[0] = result
def grad(self, inputs, output_grads):
if hasattr(self, 'grad_ops'):
......
......@@ -28,7 +28,7 @@ class BadClinkerOutput(DebugModeError):
"""Exception: an Op's c_code and perform implementations don't agree."""
r = None
"""The `Result` instance for which conflicting values were computed"""
"""The `Variable` instance for which conflicting values were computed"""
val_py = None
"""The value computed by `r.owner.op.perform`"""
......@@ -48,14 +48,14 @@ class BadClinkerOutput(DebugModeError):
return type(self.r.owner.op)
class BadOptimization(DebugModeError):
"""Exception: some result and its substitute take different runtime values.
"""Exception: some variable and its substitute take different runtime values.
"""
new_r = None
"""A `Result` instance that took a different value from `old_r`, but which replaced `old_r`."""
"""A `Variable` instance that took a different value from `old_r`, but which replaced `old_r`."""
old_r = None
"""A `Result` instance that was replaced by `new_r`."""
"""A `Variable` instance that was replaced by `new_r`."""
old_r_val = None
"""The value computed for `old_r`."""
......@@ -93,7 +93,7 @@ class BadOptimization(DebugModeError):
"""Return a pretty multiline string representing the cause of the exception"""
sio = StringIO()
print >> sio, "BadOptimization Error", super(BadOptimization, self).__str__()
print >> sio, " Result: id", id(self.new_r), self.new_r
print >> sio, " Variable: id", id(self.new_r), self.new_r
print >> sio, " Op", self.new_r.owner
print >> sio, " Value Type:", type(self.new_r_val)
print >> sio, " Old Value: ", self.old_r_val
......@@ -144,13 +144,13 @@ class InvalidValueError(DebugModeError):
def __str__(self):
r, v = self.r, self.v
return "InvalidValueError: Result %s, Type %s, type(Value) %s, Value %s"\
return "InvalidValueError: Variable %s, Type %s, type(Value) %s, Value %s"\
% (str(r), str(r.type), str(type(v)), str(v)[0:100])
def _debugprint(r, prefix='', depth=-1, done=None, file=sys.stdout):
"""Print the graph leading to `r` to given depth.
:param r: Result instance
:param r: Variable instance
:param prefix: prefix to each line (typically some number of spaces)
:param depth: maximum recursion depth (Default -1 for unlimited).
:param done: set of Apply instances that have already been printed
......@@ -161,7 +161,7 @@ def _debugprint(r, prefix='', depth=-1, done=None, file=sys.stdout):
return
done = set() if done is None else done
if hasattr(r.owner, 'op'):
# this result is the output of computation,
# this variable is the output of computation,
# so just print out the apply
a = r.owner
print >> file, prefix, a.op, id(a)
......@@ -170,7 +170,7 @@ def _debugprint(r, prefix='', depth=-1, done=None, file=sys.stdout):
for i in a.inputs:
_debugprint(i, prefix+' ', depth=depth-1, done=done, file=file)
else:
#this is a result
#this is a variable
print >> file, prefix, r, id(r)
return file
......@@ -188,15 +188,15 @@ def _optcheck_env(input_specs, output_specs, accept_inplace = False):
:returns: a new Env with a cloned graph, with debugging `Feature` instances already installed.
"""
orig_inputs = [spec.result for spec in input_specs]
orig_inputs = [spec.variable for spec in input_specs]
updates = [spec.update for spec in input_specs if spec.update]
orig_outputs = [spec.result for spec in output_specs] + updates
orig_outputs = [spec.variable for spec in output_specs] + updates
inputs, outputs = gof.graph.clone(orig_inputs, orig_outputs)
equivalence_tracker = _ResultEquivalenceTracker()
equivalence_tracker = _VariableEquivalenceTracker()
env = gof.env.Env(inputs, outputs,
#DestroyHandler is not needed because it is actually installed by an optimization
# after canonicalization. This results in a big speed gain.
#features=[equivalence_tracker, gof.DestroyHandler(do_imports_on_attach=False)])
features=[equivalence_tracker])
......@@ -225,7 +225,7 @@ def _check_inputs(node, storage_map, r_vals, dr_vals, active_nodes, clobber_dr_v
# ok, we expected r to be destroyed
if node in active_nodes:
if dr_vals.get(r, (0, node))[1] is not node:
# bad: there should only be one active node that destroys any result
# bad: there should only be one active node that destroys any variable
raise Exception('failure in topological ordering')
if clobber_dr_vals:
dr_vals[r] = (storage_map[r][0], node) #no copy, this is the last use of this variable
......@@ -249,8 +249,8 @@ def _find_bad_optimizations0(order, reasons, r_vals):
understand, but sometimes when there's a problem it identifies the wrong optimization as
the culprit.
"""
# iterate over results looking for values that don't match the values of the
# results they replaced. This is the sign of a broken optimization.
# iterate over variables looking for values that don't match the values of the
# variables they replaced. This is the sign of a broken optimization.
for i, node in enumerate(order):
for new_r in node.outputs:
for reason, r, old_graph_str, new_graph_str in reasons[new_r]:
......@@ -271,10 +271,10 @@ def _find_bad_optimizations0(order, reasons, r_vals):
new_graph=new_graph_str)
def _find_bad_optimizations1(order, reasons, r_vals):
# iterate over results looking for values that don't match the values of the
# results they replaced. This is the sign of a broken optimization.
# iterate over variables looking for values that don't match the values of the
# variables they replaced. This is the sign of a broken optimization.
#identify sets of results that are supposed to be equivalent
#identify sets of variables that are supposed to be equivalent
equivalence_sets = {}
program_position = {} #node -> order idx
......@@ -293,7 +293,7 @@ def _find_bad_optimizations1(order, reasons, r_vals):
for r, r_equiv in equivalence_sets.iteritems():
if id(r_equiv) not in equivalence_sets_broken:
equivalence_sets_broken[id(r_equiv)] = False
#loop over the results in the set comparing them to be equal enough
#loop over the variables in the set comparing them to be equal enough
re0 = None
for re in r_equiv:
if re0:
......@@ -336,7 +336,7 @@ class _EnvEvent(object):
"""Either 'output' or an Op instance"""
idx = None
"""change events involve an position index of the input result"""
"""change events involve a position index of the input variable"""
reason = None
"""change events sometimes have a reason"""
......@@ -374,7 +374,7 @@ class _EnvEvent(object):
def __ne__(self, other):
return not (self == other)
class _ResultEquivalenceTracker(object):
class _VariableEquivalenceTracker(object):
"""A Env Feature that keeps tabs on an Env and tries to detect problems."""
env = None
......@@ -389,7 +389,7 @@ class _ResultEquivalenceTracker(object):
inactive_nodes = None
"""WRITEME"""
all_results_ever = None
all_variables_ever = None
"""WRITEME"""
reasons = None
......@@ -410,7 +410,7 @@ class _ResultEquivalenceTracker(object):
self.active_nodes = set()
self.inactive_nodes = set()
self.env = env
self.all_results_ever = []
self.all_variables_ever = []
self.reasons = {}
self.replaced_by = {}
self.event_list = []
......@@ -442,7 +442,7 @@ class _ResultEquivalenceTracker(object):
for r in node.outputs:
assert r not in self.equiv
self.equiv[r] = set([r])
self.all_results_ever.append(r)
self.all_variables_ever.append(r)
self.reasons.setdefault(r, [])
self.replaced_by.setdefault(r, [])
for r in node.inputs:
......@@ -474,13 +474,13 @@ class _ResultEquivalenceTracker(object):
r_set = self.equiv[r]
else:
r_set = self.equiv.setdefault(r, set([r]))
self.all_results_ever.append(r)
self.all_variables_ever.append(r)
if new_r in self.equiv:
new_r_set = self.equiv[new_r]
else:
new_r_set = self.equiv.setdefault(new_r, set([new_r]))
self.all_results_ever.append(new_r)
self.all_variables_ever.append(new_r)
assert new_r in new_r_set
assert r in r_set
......@@ -525,7 +525,7 @@ class _Linker(gof.link.LocalLinker):
#Compute a topological ordering that IGNORES the destroy_map of destructive Ops.
#This will be OK, because every thunk is evaluated on a copy of its input.
order_outputs = copy.copy(env.equivalence_tracker.all_results_ever)
order_outputs = copy.copy(env.equivalence_tracker.all_variables_ever)
order_outputs.reverse()
order = graph.io_toposort(env.inputs, order_outputs)
......@@ -603,14 +603,14 @@ class _Linker(gof.link.LocalLinker):
equiv_vals = {}
problematic = set()
# r_vals are the true values associated with each result in the graph
# r_vals are the true values associated with each variable in the graph
# they should not change during the evaluation of this function, even when the
# graph has destructive ops in it
#
# This dictionary is used to populate the storage_map as necessary
r_vals = {}
# dr_vals are the values taken by results after being destroyed
# dr_vals are the values taken by variables after being destroyed
dr_vals = {}
assert len(thunks_py) == len(order)
......@@ -630,13 +630,13 @@ class _Linker(gof.link.LocalLinker):
assert s[0] is None
try:
# compute the value of all results
# compute the value of all variables
for i, (thunk_py, thunk_c, node) in enumerate(zip(thunks_py, thunks_c, order)):
this_node_destroyed_results = set()
this_node_destroyed_variables = set()
# put a copy of each input into the storage_map
for r in node.inputs:
assert isinstance(r, gof.Result)
assert isinstance(r, gof.Variable)
assert r in r_vals
storage_map[r][0] = _lessbroken_deepcopy(r_vals[r])
if not r.type.is_valid_value(storage_map[r][0]):
......@@ -688,7 +688,7 @@ class _Linker(gof.link.LocalLinker):
_find_bad_optimizations0(order, env.equivalence_tracker.reasons, r_vals)
#####
# Postcondition: the input and output results are in the storage map, nothing more
# Postcondition: the input and output variables are in the storage map, nothing more
#####
# Nothing should be in storage map after evaluating each the thunk (specifically the
......@@ -697,7 +697,7 @@ class _Linker(gof.link.LocalLinker):
assert type(s) is list
assert s[0] is None
# store our output results to their respective storage lists
# store our output variables to their respective storage lists
for output, storage in zip(env.outputs, output_storage):
storage[0] = r_vals[output]
......@@ -758,7 +758,7 @@ class _Maker(FunctionMaker): #inheritance buys a few helper functions
:type inputs: a list of SymbolicInput instances
:type outputs: a list of SymbolicOutput instances
outputs may also be a single Result (not a list), in which
outputs may also be a single Variable (not a list), in which
case the functions produced by FunctionMaker will return
their output value directly
......@@ -766,7 +766,7 @@ class _Maker(FunctionMaker): #inheritance buys a few helper functions
in the graph from the inputs to the outputs
"""
# Handle the case where inputs and/or outputs is a single Result (not in a list)
# Handle the case where inputs and/or outputs is a single Variable (not in a list)
unpack_single = False
if not isinstance(outputs, (list, tuple)):
unpack_single = True
......@@ -776,7 +776,7 @@ class _Maker(FunctionMaker): #inheritance buys a few helper functions
# Wrap them in In or Out instances if needed.
inputs, outputs = map(self.wrap_in, inputs), map(self.wrap_out, outputs)
_inputs = gof.graph.inputs([o.result for o in outputs] + [i.update for i in inputs if getattr(i, 'update', False)])
_inputs = gof.graph.inputs([o.variable for o in outputs] + [i.update for i in inputs if getattr(i, 'update', False)])
indices = [[input] + self.expand_in(input, _inputs) for input in inputs]
expanded_inputs = reduce(list.__add__, [list(z) for x, y, z in indices], [])
......@@ -935,14 +935,14 @@ class DebugMode(Mode):
- inconsistent c_code and perform implementations (see `BadClinkerOutput`)
- a result replacing another when their runtime values don't match. This is a symptom of
- a variable replacing another when their runtime values don't match. This is a symptom of
an incorrect optimization step, or faulty Op implementation (raises `BadOptimization`)
- stochastic optimization ordering (raises `StochasticOrder`)
- incomplete `destroy_map` specification (raises `BadDestroyMap`)
- an op that returns an illegal value not matching the output Result Type (raises
- an op that returns an illegal value not matching the output Variable Type (raises
InvalidValueError)
Each of these exceptions inherits from the more generic `DebugModeError`.
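The first check (comparing ``c_code`` against ``perform``) boils down to running both implementations on identical inputs and comparing the values they produce. A simplified sketch of that idea, where ``check_implementations`` is a hypothetical helper, not DebugMode's actual code:

```python
# Hypothetical helper: evaluate two implementations of the same node and
# flag mismatching outputs -- the symptom that BadClinkerOutput reports.
def check_implementations(perform_py, perform_c, inputs, rtol=1e-8):
    val_py = perform_py(*inputs)
    val_c = perform_c(*inputs)
    if abs(val_py - val_c) > rtol * max(abs(val_py), abs(val_c), 1.0):
        raise AssertionError("c_code and perform disagree: %r vs %r"
                             % (val_c, val_py))
    return val_py

print(check_implementations(lambda x: x + x, lambda x: 2.0 * x, (1.5,)))  # 3.0
```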
......@@ -954,7 +954,7 @@ class DebugMode(Mode):
diagnostic information to a file.
:remark: The work of debugging is implemented by the `_Maker`, `_Linker`, and
`_ResultEquivalenceTracker` classes.
`_VariableEquivalenceTracker` classes.
"""
# This function will be used to create a FunctionMaker in
......
......@@ -18,9 +18,9 @@ from io import *
def infer_reuse_pattern(env, outputs_to_disown):
"""
Given an env and a list of results, returns the list of all
results which may share the same underlying data storage as any of
the specified results. Used internally by function, FunctionMaker.
Given an env and a list of variables, returns the list of all
variables which may share the same underlying data storage as any of
the specified variables. Used internally by function, FunctionMaker.
This list is also sometimes referred to as no_recycling.
"""
......@@ -46,7 +46,7 @@ def infer_reuse_pattern(env, outputs_to_disown):
class Supervisor:
"""
Listener for Env events which makes sure that no operation overwrites the
contents of protected Results. The outputs of the Env are protected by default.
contents of protected Variables. The outputs of the Env are protected by default.
"""
def __init__(self, protected):
......@@ -57,7 +57,7 @@ class Supervisor:
return True
for r in self.protected + list(env.outputs):
if env.destroyers(r):
raise gof.InconsistencyError("Trying to destroy a protected Result.", r)
raise gof.InconsistencyError("Trying to destroy a protected Variable.", r)
def std_env(input_specs, output_specs, accept_inplace = False):
......@@ -76,9 +76,9 @@ def std_env(input_specs, output_specs, accept_inplace = False):
The returned Env is a clone of the graph between the provided
inputs and outputs.
"""
orig_inputs = [spec.result for spec in input_specs]
orig_inputs = [spec.variable for spec in input_specs]
updates = [spec.update for spec in input_specs if spec.update]
orig_outputs = [spec.result for spec in output_specs] + updates
orig_outputs = [spec.variable for spec in output_specs] + updates
inputs, outputs = gof.graph.clone(orig_inputs, orig_outputs)
env = gof.env.Env(inputs, outputs)
......@@ -185,11 +185,11 @@ class Function(object):
c.provided = 0 # this is a count of how many times the input has been provided (reinitialized to 0 on __call__)
# We set an entry in finder for:
# - the index of the input
# - the result instance the input is based on
# - the variable instance the input is based on
# - the name of the input
# All entries map to the container or to DUPLICATE if an ambiguity is detected
finder[i] = c
finder[input.result] = c
finder[input.variable] = c
finder[input.name] = c if input.name not in finder else DUPLICATE
# inv_finder maps the container to the input (useful for one error message)
inv_finder[c] = input
......@@ -212,7 +212,7 @@ class Function(object):
# This allows the user to micro-manage elements of the kit if need be.
# All containers inherit the required field and have their own "provided" counter
for c, sin in zip(cs, sinputs):
finder[sin.result] = c
finder[sin.variable] = c
finder[sin.name] = c
finder[sin.name] = c if sin.name not in finder else DUPLICATE
inv_finder[c] = input
......@@ -296,9 +296,9 @@ class Function(object):
# Check if inputs are missing or if inputs were set more than once
for c in self.input_storage:
if c.required and not c.provided:
raise TypeError("Missing required input: %s" % getattr(self.inv_finder[c], 'result', self.inv_finder[c]))
raise TypeError("Missing required input: %s" % getattr(self.inv_finder[c], 'variable', self.inv_finder[c]))
if c.provided > 1:
raise TypeError("Multiple values for input: %s" % getattr(self.inv_finder[c], 'result', self.inv_finder[c]))
raise TypeError("Multiple values for input: %s" % getattr(self.inv_finder[c], 'variable', self.inv_finder[c]))
# Do the actual work
self.fn()
......@@ -315,9 +315,9 @@ class Function(object):
# storage cells
if getattr(self.fn, 'allow_gc', False):
assert len(self.output_storage) == len(self.maker.env.outputs)
for o_container, o_result in zip(self.output_storage, self.maker.env.outputs):
if o_result.owner is not None:
# this node is the result of computation
for o_container, o_variable in zip(self.output_storage, self.maker.env.outputs):
if o_variable.owner is not None:
# this node is the result of computation
# WARNING: This circumvents the 'readonly' attribute in x
o_container.storage[0] = None
......@@ -415,7 +415,7 @@ class SanityCheckFunction(Function):
for stor1, stor2 in zip(self.input_storage, fn.input_storage):
stor2.value = copy(stor1.value)
results = super(SanityCheckFunction, self).__call__(*args, **kwargs)
all_outputs = [copy(c.value) for c in self.output_storage] # we keep a copy to make sure it's not overwritten
for fn in self.others:
......@@ -433,18 +433,18 @@ class SanityCheckFunction(Function):
c1.value, c2.value)
# This checks all output storage (this includes state variables that we updated)
# This is ok because the results of a call stick around in their storage
for i, (r1, c2) in enumerate(zip(all_outputs, fn.output_storage)):
r2 = c2.value
if not self.check_equal(r1, r2):
name = c2.name
raise ValueError("Result #%i%s using %s and %s differs."
raise ValueError("Variable #%i%s using %s and %s differs."
% (i,
" (%s)" % name if name else "",
self.maker.mode,
fn.maker.mode),
r1, r2)
return results
......@@ -458,7 +458,7 @@ class FunctionMaker(object):
This class has the env, the optimizer, and the linker. When copying a `Function`, there is
no need to duplicate the `FunctionMaker` instance. Deepcopy still copies both, which can
result in re-compilation.
"""
......@@ -466,23 +466,23 @@ class FunctionMaker(object):
def wrap_in(input):
if isinstance(input, (SymbolicInput, SymbolicInputKit)):
return input
elif isinstance(input, gof.Result):
# r -> SymbolicInput(result=r)
elif isinstance(input, gof.Variable):
# r -> SymbolicInput(variable=r)
return SymbolicInput(input)
elif isinstance(input, (list, tuple)):
# (r, u) -> SymbolicInput(result=r, update=u)
# (r, u) -> SymbolicInput(variable=r, update=u)
if len(input) == 2:
return SymbolicInput(input[0], update = input[1])
else:
raise TypeError("Expected two elements in the list or tuple.", input)
else:
raise TypeError("Unknown input type: %s (%s), expected Result instance", type(input), input)
raise TypeError("Unknown input type: %s (%s), expected Variable instance", type(input), input)
@staticmethod
def expand_in(sinput, rinputs):
# For SymbolicInputKits, this extracts a list of SymbolicInput instances
# and corresponding indices such that these SymbolicInputs are representative
# of some of the Result instances in inputs.
# of some of the Variable instances in inputs.
# For SymbolicInput, this returns None as the list of indices and a list with
# just the SymbolicInput.
if isinstance(sinput, SymbolicInputKit):
......@@ -494,7 +494,7 @@ class FunctionMaker(object):
def wrap_out(output):
if isinstance(output, SymbolicOutput):
return output
elif isinstance(output, gof.Result):
elif isinstance(output, gof.Variable):
return SymbolicOutput(output)
else:
raise TypeError("Unknown output type: %s (%s)", type(output), output)
......@@ -505,7 +505,7 @@ class FunctionMaker(object):
:type inputs: a list of SymbolicInput instances
:type outputs: a list of SymbolicOutput instances
outputs may also be a single Result (not a list), in which
outputs may also be a single Variable (not a list), in which
case the functions produced by FunctionMaker will return
their output value directly
......@@ -516,7 +516,7 @@ class FunctionMaker(object):
"""
# Handle the case where inputs and/or outputs is a single Result (not in a list)
# Handle the case where inputs and/or outputs is a single Variable (not in a list)
unpack_single = False
if not isinstance(outputs, (list, tuple)):
unpack_single = True
......@@ -526,7 +526,7 @@ class FunctionMaker(object):
# Wrap them in In or Out instances if needed.
inputs, outputs = map(self.wrap_in, inputs), map(self.wrap_out, outputs)
_inputs = gof.graph.inputs([o.result for o in outputs] + [i.update for i in inputs if getattr(i, 'update', False)])
_inputs = gof.graph.inputs([o.variable for o in outputs] + [i.update for i in inputs if getattr(i, 'update', False)])
indices = [[input] + self.expand_in(input, _inputs) for input in inputs]
expanded_inputs = reduce(list.__add__, [list(z) for x, y, z in indices], [])
@@ -722,7 +722,7 @@ def function(inputs, outputs, mode=default_mode, accept_inplace = False):
Similarly, every element of the output list will be upgraded to an
`Out` instance if necessary:
* a `Result` instance r will be upgraded like `Out`(r)
* a `Variable` instance r will be upgraded like `Out`(r)
Random Numbers
@@ -776,7 +776,7 @@ def convert_function_input(input):
The rules for upgrading are as follows:
- a `Result` instance r will be upgraded like `In`(r)
- a `Variable` instance r will be upgraded like `In`(r)
- a tuple (name, r) will be `In`(r, name=name)
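The upgrade rules above can be sketched as a small stand-alone function. This is a minimal sketch only; `Variable` and `In` here are hypothetical stand-ins for `gof.Variable` and `io.In`, not Theano's actual classes.

```python
# Hypothetical minimal stand-ins for gof.Variable and io.In.
class Variable:
    def __init__(self, name=None):
        self.name = name

class In:
    def __init__(self, variable, name=None, value=None):
        self.variable, self.name, self.value = variable, name, value

def convert_function_input(spec):
    # An In instance passes through unchanged.
    if isinstance(spec, In):
        return spec
    # A bare Variable r is upgraded like In(r).
    if isinstance(spec, Variable):
        return In(spec)
    # A (name, r) tuple is upgraded like In(r, name=name).
    if isinstance(spec, tuple) and len(spec) == 2 and isinstance(spec[0], str):
        name, r = spec
        return In(r, name=name)
    raise TypeError("Unknown input type: %s" % type(spec))

x = Variable("x")
assert convert_function_input(x).variable is x
assert convert_function_input(("lr", x)).name == "lr"
```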
@@ -794,7 +794,7 @@ def convert_function_input(input):
return input
elif isinstance(input, gof.Constant):
raise TypeError('A Constant instance is not a legal function input', input)
elif isinstance(input, gof.Result):
elif isinstance(input, gof.Variable):
return In(input)
elif isinstance(input, (list, tuple)):
orig = input
@@ -808,12 +808,12 @@ def convert_function_input(input):
if isinstance(input[0], (list, tuple)):
if len(input[0]) != 2 or len(input) != 2:
raise TypeError("Invalid input syntax: %s (check documentation or use an In instance)" % orig)
(result, update), value = input
elif isinstance(input[0], gof.Result):
(variable, update), value = input
elif isinstance(input[0], gof.Variable):
if len(input) == 1:
result, update, value = input[0], None, None
variable, update, value = input[0], None, None
elif len(input) == 2:
(result, value), update = input, None
(variable, value), update = input, None
else:
raise TypeError("Invalid input syntax: %s (check documentation or use an In instance)" % orig)
elif isinstance(input[0], (SymbolicInput, SymbolicInputKit)):
@@ -827,14 +827,14 @@ def convert_function_input(input):
else:
raise TypeError("The input specification is not valid: %s" % input)
if not isinstance(result, gof.Result):
raise TypeError("Unknown input type: %s, expected Result instance" % type(result), result)
if update is not None and not isinstance(update, gof.Result):
raise TypeError("Unknown update type: %s, expected Result instance" % type(update), update)
if value is not None and isinstance(value, (gof.Result, SymbolicInput)):
raise TypeError("The value for input %s should not be a Result or SymbolicInput instance (got: %s)" % (result, value))
if not isinstance(variable, gof.Variable):
raise TypeError("Unknown input type: %s, expected Variable instance" % type(variable), variable)
if update is not None and not isinstance(update, gof.Variable):
raise TypeError("Unknown update type: %s, expected Variable instance" % type(update), update)
if value is not None and isinstance(value, (gof.Variable, SymbolicInput)):
raise TypeError("The value for input %s should not be a Variable or SymbolicInput instance (got: %s)" % (variable, value))
return In(result, name=name, value=value, update=update)
return In(variable, name=name, value=value, update=update)
else:
raise TypeError("Unknown input type: %s, expected Result instance" % type(input), input)
raise TypeError("Unknown input type: %s, expected Variable instance" % type(input), input)
@@ -5,16 +5,16 @@ class SymbolicInput(object):
"""
Represents a symbolic input for use with function or FunctionMaker.
result: a Result instance.
variable: a Variable instance.
This will be assigned a value before running the function,
not computed from its owner.
name: Any type. (If autoname=True, defaults to result.name).
name: Any type. (If autoname=True, defaults to variable.name).
If name is a valid Python identifier, this input can be set by kwarg, and its value
can be accessed by self.<name>.
update: Result instance (default: None)
value (see previous) will be replaced with this expression result after each function call.
update: Variable instance (default: None)
value (see previous) will be replaced with this expression variable after each function call.
If update is None, the update will be the default value of the input.
mutable: Bool (default: False if update is None, True if update is not None)
@@ -29,9 +29,9 @@ class SymbolicInput(object):
See the name option.
"""
def __init__(self, result, name=None, update=None, mutable=None, strict=False, autoname=True):
self.result = result
self.name = result.name if (autoname and name is None) else name
def __init__(self, variable, name=None, update=None, mutable=None, strict=False, autoname=True):
self.variable = variable
self.name = variable.name if (autoname and name is None) else name
if self.name is not None and not isinstance(self.name, str):
raise TypeError("name must be a string! (got: %s)" % self.name)
self.update = update
@@ -40,9 +40,9 @@ class SymbolicInput(object):
def __str__(self):
if self.update:
return "In(%s -> %s)" % (self.result, self.update)
return "In(%s -> %s)" % (self.variable, self.update)
else:
return "In(%s)" % self.result
return "In(%s)" % self.variable
def __repr__(self):
return str(self)
@@ -64,7 +64,7 @@ class SymbolicInputKit(object):
raise TypeError('name must be a string (got: %s)' % name)
self.name = name
self.sinputs = []
self.results = []
self.variables = []
def add_input(self, sinput):
"""
@@ -72,7 +72,7 @@ class SymbolicInputKit(object):
next available index.
"""
self.sinputs.append(sinput)
self.results.append(sinput.result)
self.variables.append(sinput.variable)
def distribute(self, value, indices, containers):
"""
@@ -84,10 +84,10 @@ class SymbolicInputKit(object):
def complete(self, inputs):
"""
Given inputs (a list of Result instances), checks through all
Given inputs (a list of Variable instances), checks through all
the SymbolicInputs in the kit and return a sorted list of
indices and a list of their corresponding SymbolicInputs such
that each of them represents some result in the inputs list.
that each of them represents some variable in the inputs list.
Not all the provided inputs will have a corresponding
SymbolicInput in the kit.
@@ -95,7 +95,7 @@ class SymbolicInputKit(object):
ret = []
for input in inputs:
try:
i = self.results.index(input)
i = self.variables.index(input)
ret.append((i, self.sinputs[i]))
except ValueError:
pass
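The matching loop above can be sketched in isolation. This is a hedged sketch of `SymbolicInputKit.complete` using plain lists as hypothetical stand-ins for the kit's `variables` and `sinputs` fields; the real method operates on `Variable` and `SymbolicInput` instances.

```python
def complete(kit_variables, kit_sinputs, inputs):
    # For each provided input, look up its position in the kit; inputs the
    # kit does not know about are simply skipped.
    ret = []
    for inp in inputs:
        try:
            i = kit_variables.index(inp)
            ret.append((i, kit_sinputs[i]))
        except ValueError:
            pass  # not all inputs have a corresponding SymbolicInput
    ret.sort()  # return indices in sorted order
    return ret

# "z" is unknown to the kit, so only "c" and "a" are matched.
assert complete(["a", "b", "c"], ["SA", "SB", "SC"], ["c", "a", "z"]) == [(0, "SA"), (2, "SC")]
```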
@@ -109,11 +109,11 @@ class In(SymbolicInput):
"""
Represents a symbolic input for use with function or FunctionMaker.
result: a Result instance.
variable: a Variable instance.
This will be assigned a value before running the function,
not computed from its owner.
name: Any type. (If autoname=True, defaults to result.name).
name: Any type. (If autoname=True, defaults to variable.name).
If name is a valid Python identifier, this input can be set by kwarg, and its value
can be accessed by self.<name>.
@@ -122,8 +122,8 @@ class In(SymbolicInput):
an argument with a default value in Python. If update is not None, changes to this
value will "stick around", whether due to an update or a user's explicit action.
update: Result instance (default: None)
value (see previous) will be replaced with this expression result after each function call.
update: Variable instance (default: None)
value (see previous) will be replaced with this expression variable after each function call.
If update is None, the update will be the default value of the input.
mutable: Bool (default: False if update is None, True if update is not None)
@@ -137,8 +137,8 @@ class In(SymbolicInput):
autoname: Bool (default: True)
See the name option.
"""
def __init__(self, result, name=None, value=None, update=None, mutable=None, strict=False, autoname=True):
super(In, self).__init__(result, name, update, mutable, strict, autoname)
def __init__(self, variable, name=None, value=None, update=None, mutable=None, strict=False, autoname=True):
super(In, self).__init__(variable, name, update, mutable, strict, autoname)
self.value = value
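The `update` contract documented above (the input's shared storage is overwritten with the value of the update expression after each call) can be mimicked with plain Python callables. This is a hypothetical sketch, not Theano's implementation; `Container` here stands in for `gof.Container`.

```python
class Container:
    # Stand-in for gof.Container: one mutable storage cell.
    def __init__(self, value):
        self.value = value

def make_function(container, update):
    # Each call reads the shared storage, then applies the update to it,
    # mirroring In(variable, value=container, update=expr).
    def call():
        out = container.value
        container.value = update(container.value)
        return out
    return call

c = Container(0)
step = make_function(c, update=lambda v: v + 1)  # update: v -> v + 1
assert [step(), step(), step()] == [0, 1, 2]  # value "sticks around"
assert c.value == 3
```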
@@ -152,12 +152,12 @@ class SymbolicOutput(object):
the function again, but the function might be faster.
"""
def __init__(self, result, borrow=False):
self.result = result
def __init__(self, variable, borrow=False):
self.variable = variable
self.borrow = borrow
def __str__(self):
return "Out(%s)" % self.result
return "Out(%s)" % self.variable
Out = SymbolicOutput
@@ -38,7 +38,7 @@ Compilation via make
Conversion from a Component graph to a ComponentInstance graph is performed by `Component.make`.
This method traverses the Component graph in two passes.
In the first pass (the allocate pass), it creates storage for all Results that are contained in the graph (see
In the first pass (the allocate pass), it creates storage for all Variables that are contained in the graph (see
`Component.allocate`). These are the module variables.
In the second pass (the build pass), it creates functions that (in general) operate on these module variables.
@@ -100,7 +100,7 @@ def name_split(sym, n=-1):
class AllocationError(Exception):
"""
Exception raised when a Result has no associated storage.
Exception raised when a Variable has no associated storage.
"""
pass
@@ -117,11 +117,11 @@ class Component(object):
def allocate(self, memo):
"""
Populates the memo dictionary with gof.Result -> io.In
Populates the memo dictionary with gof.Variable -> io.In
pairings. The value field of the In instance should contain a
gof.Container instance. The memo dictionary is meant to tell
the build method of Components where the values associated to
certain results are stored and how they should behave if they
certain variables are stored and how they should behave if they
are implicit inputs to a Method (needed to compute its
output(s) but not in the inputs or updates lists).
"""
@@ -198,19 +198,19 @@ class Component(object):
class _RComponent(Component):
"""
Base class for a Component wrapping a Result. For internal use.
Base class for a Component wrapping a Variable. For internal use.
"""
def __init__(self, r):
super(_RComponent, self).__init__()
self.r = r
# If self.owns_name is True, then the name of the result
# If self.owns_name is True, then the name of the variable
# may be adjusted when the name of the Component is. Else,
# the result will always keep its original name. The component
# will only be allowed to own a result's name if it has no
# the variable will always keep its original name. The component
# will only be allowed to own a variable's name if it has no
# original name to begin with. This allows the user to opt out
# of the automatic naming scheme if he or she wants to. It is
# also usually the case that a Result used in more than one
# also usually the case that a Variable used in more than one
# Component should only retain the first name it gets.
self.owns_name = r.name is None
@@ -229,7 +229,7 @@ class _RComponent(Component):
class External(_RComponent):
"""
External represents a Result which comes from somewhere else
External represents a Variable which comes from somewhere else
(another module) or is a temporary calculation.
"""
@@ -252,8 +252,8 @@ class External(_RComponent):
class Member(_RComponent):
"""
Member represents a Result which is a state of a Composite. That
Result will be accessible from a built Composite and it is
Member represents a Variable which is a state of a Composite. That
Variable will be accessible from a built Composite and it is
possible to do updates on Members.
Member builds a gof.Container.
@@ -262,22 +262,22 @@ class Member(_RComponent):
def allocate(self, memo):
"""
If the memo does not have a Container associated to this
Member's Result, instantiates one and sets it in the memo.
Member's Variable, instantiates one and sets it in the memo.
"""
r = self.r
if memo and r in memo:
return memo[r]
assert isinstance(r, gof.Result)
assert isinstance(r, gof.Variable)
rval = gof.Container(r, storage = [getattr(r, 'data', None)],
readonly=isinstance(r, gof.Constant))
memo[r] = io.In(result=r,
memo[r] = io.In(variable=r,
value=rval,
mutable=False)
return memo[r]
def build(self, mode, memo):
"""
Returns the Container associated to this Member's Result.
Returns the Container associated to this Member's Variable.
"""
return memo[self.r].value
@@ -314,11 +314,11 @@ class Method(Component):
update expression must be given in this dictionary.
Keys in this dictionary must be members of the module graph--results for which this Method
Keys in this dictionary must be members of the module graph--variables for which this Method
will use the shared storage.
The value associated with each key should be a Result (or a string that can be resolved to
a Result) representing the computation of a new value for this shared storage after
The value associated with each key should be a Variable (or a string that can be resolved to
a Variable) representing the computation of a new value for this shared storage after
each function call.
"""
@@ -336,12 +336,12 @@ class Method(Component):
:param mode: value for `Method.mode`
:type inputs: list of (str or `Result` or `io.In`)
:type inputs: list of (str or `Variable` or `io.In`)
:type outputs: None or str or `Result` or `io.Out` or list of (str or `Result` or
:type outputs: None or str or `Variable` or `io.Out` or list of (str or `Variable` or
`io.Out`)
:type updates: dict of `Result` or str -> `Result` or str
:type updates: dict of `Variable` or str -> `Variable` or str
:type mode: None or any mode accepted by `compile.function`
@@ -353,11 +353,11 @@ class Method(Component):
self.mode = mode
def resolve_all(self):
"""Convert all inputs, outputs, and updates specified as strings to Results.
"""Convert all inputs, outputs, and updates specified as strings to Variables.
This works by searching the attribute list of the Module to which this Method is bound.
"""
def resolve_result(x, passthrough=(gof.Result)):
def resolve_variable(x, passthrough=(gof.Variable)):
if isinstance(x, passthrough):
return x
elif isinstance(x, _RComponent):
@@ -367,28 +367,28 @@ class Method(Component):
# return self.resolve(x).r
def resolve_inputs():
if isinstance(self.inputs, (io.In, gof.Result, str)):
if isinstance(self.inputs, (io.In, gof.Variable, str)):
inputs = [self.inputs]
else:
inputs = list(self.inputs)
self.inputs = [resolve_result(input,
passthrough=(gof.Result, io.In)) for input in inputs]
self.inputs = [resolve_variable(input,
passthrough=(gof.Variable, io.In)) for input in inputs]
def resolve_outputs():
if isinstance(self.outputs, (io.Out, gof.Result, str, type(None))):
if isinstance(self.outputs, (io.Out, gof.Variable, str, type(None))):
output = self.outputs
self.outputs = resolve_result(output,
passthrough=(gof.Result, io.Out, type(None)))
self.outputs = resolve_variable(output,
passthrough=(gof.Variable, io.Out, type(None)))
else:
outputs = list(self.outputs)
self.outputs = [resolve_result(output,
passthrough=(gof.Result, io.Out)) for output in outputs]
self.outputs = [resolve_variable(output,
passthrough=(gof.Variable, io.Out)) for output in outputs]
def resolve_updates():
updates = self.updates
self.updates = {}
for k, v in updates.iteritems():
k, v = resolve_result(k), resolve_result(v)
k, v = resolve_variable(k), resolve_variable(v)
self.updates[k] = v
resolve_inputs()
@@ -405,9 +405,9 @@ class Method(Component):
"""Compile a function for this Method.
:param allocate_all: if True, storage will be
allocated for all needed Results even if there is no
allocated for all needed Variables even if there is no
associated storage for them in the memo. If allocate_all is
False, storage will only be allocated for Results that are
False, storage will only be allocated for Variables that are
reachable from the inputs list.
:returns: a function that implements this method
@@ -428,7 +428,7 @@ class Method(Component):
' Verify that it is indeed a Member of the'
' enclosing module or of one of its submodules.' % (r, self.name, self))
else:
return io.In(result=r,
return io.In(variable=r,
value=gof.Container(r,
storage=[getattr(r, 'data', None)],
readonly=(isinstance(r, gof.Constant))),
@@ -440,9 +440,9 @@ class Method(Component):
for input in self.inputs:
if type(input) is io.In:
inputs.append(input)
elif isinstance(input, gof.Result):
elif isinstance(input, gof.Variable):
input_in = io.In(
result=input,
variable=input,
mutable=False)
inputs.append(input_in)
else:
@@ -450,15 +450,15 @@ class Method(Component):
# Deal with updates to shared storage
for k, v in self.updates.iteritems():
assert isinstance(k, gof.Result)
assert isinstance(k, gof.Variable)
if isinstance(k, gof.Constant):
raise TypeError('Module Constants cannot be updated', k)
assert isinstance(v, gof.Result)
assert isinstance(v, gof.Variable)
#identify an input for result k
#identify an input for variable k
input_k = None
for input in inputs:
if input.result == k:
if input.variable == k:
input_k = input
#print 'METHOD UPDATE', k, v, input_k
@@ -466,24 +466,24 @@ class Method(Component):
# this is an implicit input,
# use shared storage
input_k = io.In(
result=k,
variable=k,
update=v,
value=get_storage(k, not allocate_all).value,
mutable=True)
inputs.append(input_k)
else:
raise ValueError(('Result listed in both inputs and updates.'
raise ValueError(('Variable listed in both inputs and updates.'
' Use inputs to use your own storage, use updates to '
'work on module-shared storage'), k)
# Deal with module inputs that are not updated
outputs = self.outputs
_inputs = [x.result for x in inputs]
# Grab the results that are not accessible from either the inputs or the updates.
_inputs = [x.variable for x in inputs]
# Grab the variables that are not accessible from either the inputs or the updates.
outputs_list = list(outputs) if isinstance(outputs, (list, tuple)) else [outputs]
outputs_result_list = [o.result if isinstance(o, io.Out) else o for o in outputs_list]
for input in gof.graph.inputs(outputs_result_list
outputs_variable_list = [o.variable if isinstance(o, io.Out) else o for o in outputs_list]
for input in gof.graph.inputs(outputs_variable_list
+ [x.update for x in inputs if getattr(x, 'update', False)],
blockers = _inputs):
if input not in _inputs:
@@ -709,7 +709,7 @@ class ComponentList(Composite):
return self._components[item]
def set(self, item, value):
if isinstance(value, gof.Result):
if isinstance(value, gof.Variable):
value = Member(value)
elif not isinstance(value, Component):
raise TypeError('ComponentList may only contain Components.', value, type(value))
@@ -718,7 +718,7 @@ class ComponentList(Composite):
self._components[item] = value
def append(self, c):
if isinstance(c, gof.Result):
if isinstance(c, gof.Variable):
c = Member(c)
elif not isinstance(c, Component):
raise TypeError('ComponentList may only contain Components.', c, type(c))
@@ -900,20 +900,20 @@ def dict_wrap(d):
register_wrapper(lambda x: isinstance(x, Component),
lambda x: x)
# Result -> Member
register_wrapper(lambda x: isinstance(x, gof.Result) and not x.owner,
# Variable -> Member
register_wrapper(lambda x: isinstance(x, gof.Variable) and not x.owner,
lambda x: Member(x))
# Result -> External
register_wrapper(lambda x: isinstance(x, gof.Result) and x.owner,
# Variable -> External
register_wrapper(lambda x: isinstance(x, gof.Variable) and x.owner,
lambda x: External(x))
# [[Result1], {Result2}, Result3...] -> ComponentList(Member(Result1), Member(Result2), ...)
# [[Variable1], {Variable2}, Variable3...] -> ComponentList(Member(Variable1), Member(Variable2), ...)
register_wrapper(lambda x: isinstance(x, (list, tuple)) \
and all(wrapper(r) is not None for r in x),
lambda x: ComponentList(*map(wrap, x)))
#{ "name1":{Component,Result,list,tuple,dict},...} -> ComponentDict({Component,Result,list,tuple,dict},...)
#{ "name1":{Component,Variable,list,tuple,dict},...} -> ComponentDict({Component,Variable,list,tuple,dict},...)
register_wrapper(lambda x: isinstance(x, dict) \
and all(wrapper(r) is not None for r in x.itervalues()),
lambda x: ComponentDict(dict_wrap(x)))
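The `register_wrapper` calls above implement dispatch by predicate: the first registered condition that matches decides how a value is wrapped. A minimal, self-contained sketch of that registry pattern (the wrapped tags here are hypothetical stand-ins for `Member`, `External`, `ComponentList`, etc.):

```python
_wrappers = []

def register_wrapper(condition, wrap_fn):
    # Register (predicate, constructor) pairs; order of registration matters.
    _wrappers.append((condition, wrap_fn))

def wrap(x):
    # First matching predicate wins; unmatched values pass through unchanged.
    for condition, wrap_fn in _wrappers:
        if condition(x):
            return wrap_fn(x)
    return x

register_wrapper(lambda x: isinstance(x, int), lambda x: ("Member", x))
register_wrapper(lambda x: isinstance(x, (list, tuple)),
                 lambda x: ("ComponentList", [wrap(e) for e in x]))

assert wrap(3) == ("Member", 3)
# Lists are wrapped recursively, like ComponentList(Member(...), ...).
assert wrap([1, 2]) == ("ComponentList", [("Member", 1), ("Member", 2)])
```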
@@ -999,9 +999,9 @@ class Module(ComponentDict):
if isinstance(v, (Member, External)):
print >> sys.stderr, ("WARNING: assignment of Member or External "
"objects (either directly or indirectly) to Module "
"is deprecated. Just use Result.")
"is deprecated. Just use Variable.")
return v.r
elif isinstance(v, (gof.Result,Method,Module)):
elif isinstance(v, (gof.Variable,Method,Module)):
return v
elif isinstance(v,(int,bool)):
return v
@@ -22,7 +22,7 @@ class BROKEN_ON_PURPOSE_StructuredDotCSC(gof.Op):
def __hash__(self):
return 29834 ^ hash(type(self)) ^ hash(self.py_offset)
def make_node(self, a_val, a_ind, a_ptr, a_nrows, b):
a_nrows = theano.tensor.as_ndarray_result(a_nrows)
a_nrows = theano.tensor.as_tensor_variable(a_nrows)
assert a_val.type.dtype == b.type.dtype
r = gof.Apply(self, [a_val, a_ind, a_ptr, a_nrows, b],
[theano.tensor.tensor(a_val.type.dtype, (False, False))])
@@ -18,7 +18,7 @@ class StochasticGradientDescent(module.FancyModule):
def __init__(self, args, cost, params, gradients=None, stepsize=None, WEIRD_STUFF=True):
"""
:param stepsize: the step to take in (negative) gradient direction
:type stepsize: None, scalar value, or scalar NDArrayResult
:type stepsize: None, scalar value, or scalar TensorVariable
"""
super(StochasticGradientDescent, self).__init__()
self.WEIRD_STUFF = WEIRD_STUFF
@@ -26,7 +26,7 @@ class StochasticGradientDescent(module.FancyModule):
if stepsize is None:
self.stepsize = (T.dscalar())
elif isinstance(stepsize, T.NDArrayResult):
elif isinstance(stepsize, T.TensorVariable):
self.stepsize = stepsize
else:
if self.WEIRD_STUFF:
@@ -89,10 +89,10 @@ class TanhRnn(Op):
:type A: matrix (M by M)
"""
x = T.as_ndarray_result(x)
z0 = T.as_ndarray_result(z0)
A = T.as_ndarray_result(A)
z = x.type() #make a new symbolic result with the same type as x
x = T.as_tensor_variable(x)
z0 = T.as_tensor_variable(z0)
A = T.as_tensor_variable(A)
z = x.type() #make a new symbolic variable with the same type as x
return Apply(self, [x, z0, A], [z])
def perform(self, node, (x,z0,A), out):
@@ -43,9 +43,9 @@ class T_module(unittest.TestCase):
m1.x=x()
m1.y=y()
m1.emtpylist = []
m1.lx=[x()]#cast Result]
m1.lx=[x()]#cast Variable]
m1.ly=[y()]
m1.llx=[[x()]]#cast Result]
m1.llx=[[x()]]#cast Variable]
m1.lly=[[y()]]
m1.ltx=[(x(),)]
m1.lty=[(y(),)]
@@ -68,8 +68,8 @@ class T_module(unittest.TestCase):
m1.ddx={"x":{"x":x()}}
m1.ddy={"y":{"y":y()}}
assert isinstance(m1.x,(gof.Result))
assert isinstance(m1.y,(gof.Result))
assert isinstance(m1.x,(gof.Variable))
assert isinstance(m1.y,(gof.Variable))
for i, obj in enumerate([
m1.lx[0], #0
m1.llx[0][0],
@@ -86,7 +86,7 @@ class T_module(unittest.TestCase):
m1.dy['y'], m1.dlx['x'][0], m1.dly['y'][0],
m1.dtx['x'][0], m1.dty['y'][0], m1.ddx['x']['x'],
m1.ddy['y']['y']]):
assert isinstance(obj,(gof.Result))
assert isinstance(obj,(gof.Variable))
inst=m1.make()
@@ -136,7 +136,7 @@ class T_module(unittest.TestCase):
def local_test(x,y):
m1=Module()
#create a list with some results in it
#create a list with some variables in it
m1.l=[x(), y()]
# create a Method that makes the second list element a shared Member
@@ -144,7 +144,7 @@ class T_module(unittest.TestCase):
m1.g=Method([], m1.l[0])
m = m1.make()
#assign 4 and 5 to the two results' containers in m
#assign 4 and 5 to the two variables' containers in m
m.l = [4, 5]
print 'm.f', m.f()
assert numpy.all(5 == m.f())
@@ -164,7 +164,7 @@ class T_module(unittest.TestCase):
m1.f=Method([], m1.l[1])
m = m1.make()
#assign 4 and 5 to the two results' containers in m
#assign 4 and 5 to the two variables' containers in m
m.l = (4, 5)
assert 5 == m.f()
assert 4 == m.g()
@@ -184,7 +184,7 @@ class T_module(unittest.TestCase):
m1.g=Method([], m1.l['x'])
m = m1.make()
#assign 4 and 5 to the two results' containers in m
#assign 4 and 5 to the two variables' containers in m
m.l = dict(x=4, y=5)
assert 5 == m.f()
assert 4 == m.g()
@@ -198,7 +198,7 @@ class T_module(unittest.TestCase):
def test_method_in_list_or_dict(self):
"""Test that a Method which is only included via a list or dictionary is still treated as if it
were a toplevel attribute
Fred: why don't we do this for direct functions of results?
Fred: why don't we do this for direct functions of variables?
"""
m1=Module()
x=T.dscalar()
@@ -255,7 +255,7 @@ class T_module(unittest.TestCase):
assert isinstance(f,theano.compile.function_module.Function)
def test_shared_members(self):
"""Test that under a variety of tricky conditions, the shared-ness of Results and Members
"""Test that under a variety of tricky conditions, the shared-ness of Variables and Members
is respected."""
def populate_module(m,x):
@@ -352,7 +352,7 @@ class T_module(unittest.TestCase):
assert f==4
def test_shared_method(self):
"""Test that under a variety of tricky conditions, the shared-ness of Results and Methods
"""Test that under a variety of tricky conditions, the shared-ness of Variables and Methods
is respected.
Fred: the test creates different methods even if they are shared. What do we want?
"""
@@ -463,7 +463,7 @@ class T_module(unittest.TestCase):
assert numpy.all(v0 != v0_copy)
def test_member_value(self):
"""Test that module Members of Value work correctly. As Result?"""
"""Test that module Members of Value work correctly. As Variable?"""
M = Module()
x = T.dscalar()
M.y = T.value(40)
@@ -474,7 +474,7 @@ class T_module(unittest.TestCase):
def test_member_constant(self):
"""Test that module Members of Constant work correctly.
As Result with more optimization?"""
As Variable with more optimization?"""
M = Module()
x = T.dscalar()
M.y = T.constant(40)
@@ -601,7 +601,7 @@ def test_method_updates():
assert numpy.all(xval == [0, 1])
# when a result is listed explicitly and in an update, then there's a problem.
# when a variable is listed explicitly and in an update, then there's a problem.
M = Module()
M.x = T.dvector()
x = T.dvector()
@@ -611,7 +611,7 @@ def test_method_updates():
m = M.make()
assert False
except ValueError, e:
if str(e[0]).startswith('Result listed in both inputs and up'):
if str(e[0]).startswith('Variable listed in both inputs and up'):
pass
else:
raise
@@ -12,7 +12,7 @@ from destroyhandler import \
DestroyHandler
from graph import \
Apply, Result, Constant, Value, view_roots
Apply, Variable, Constant, Value, view_roots
from link import \
Container, Linker, LocalLinker, PerformLinker, WrapLinker, WrapLinkerMany
@@ -255,8 +255,8 @@ def get_c_sync(r, name, sub):
def apply_policy(policy, r, name, sub):
"""WRITEME
@param policy: list of functions that map a L{Result} to a string, or a single such function
@type r: L{Result}
@param policy: list of functions that map a L{Variable} to a string, or a single such function
@type r: L{Variable}
@return: C{policy[0](r) + policy[1](r) + ...}
"""
if isinstance(policy, (list, tuple)):
@@ -268,35 +268,35 @@ def apply_policy(policy, r, name, sub):
def struct_result_codeblocks(result, policies, id, symbol_table, sub):
def struct_variable_codeblocks(variable, policies, id, symbol_table, sub):
"""WRITEME
result -> a Result
variable -> a Variable
policies -> a pair of tuples ((declare_policy, behavior_policy, cleanup_policy), -- at construction
(declare_policy, behavior_policy, cleanup_policy)) -- at execution
the first list will produce an element of the 'struct_builders' argument in struct_gen
the second list will produce an element of the 'blocks' argument in struct_gen
id -> the id assigned to this result's task in the computation
symbol_table -> a dict that maps results to variable names. It is not read
by this function but a variable name for the result is computed and added
id -> the id assigned to this variable's task in the computation
symbol_table -> a dict that maps variables to variable names. It is not read
by this function but a variable name for the variable is computed and added
to the table.
sub -> dictionary for use by L{CodeBlock}.
"""
name = "V%i" % id
symbol_table[result] = name
symbol_table[variable] = name
sub = dict(sub)
# sub['name'] = name
sub['id'] = id
sub['fail'] = failure_code(sub)
sub['py_ptr'] = "py_%s" % name
sub['stor_ptr'] = "storage_%s" % name
struct_builder = CodeBlock(*[apply_policy(policy, result, name, sub)
struct_builder = CodeBlock(*[apply_policy(policy, variable, name, sub)
for policy in policies[0]]+[sub]) # struct_declare, struct_behavior, struct_cleanup, sub)
sub['id'] = id + 1
sub['fail'] = failure_code(sub)
sub['py_ptr'] = "py_%s" % name
sub['stor_ptr'] = "storage_%s" % name
block = CodeBlock(*[apply_policy(policy, result, name, sub)
block = CodeBlock(*[apply_policy(policy, variable, name, sub)
for policy in policies[1]]+[sub]) # run_declare, run_behavior, run_cleanup, sub)
return struct_builder, block
@@ -309,8 +309,8 @@ class CLinker(link.Linker):
through make_thunk and make_function that make use of the compiled
code.
no_recycling can contain a list of Results that belong to the env.
If a Result is in no_recycling, CLinker will clear the output storage
no_recycling can contain a list of Variables that belong to the env.
If a Variable is in no_recycling, CLinker will clear the output storage
associated to it during the computation (to avoid reusing it).
"""
@@ -323,21 +323,21 @@ class CLinker(link.Linker):
return type(self)().accept(env, no_recycling)
#raise Exception("Cannot accept from a Linker that is already tied to another Env.")
self.env = env
self.fetch_results()
self.fetch_variables()
self.no_recycling = no_recycling
return self
def fetch_results(self):
def fetch_variables(self):
"""WRITEME
Fills the inputs, outputs, results, orphans, temps and node_order fields.
Fills the inputs, outputs, variables, orphans, temps and node_order fields.
"""
env = self.env
self.inputs = env.inputs
self.outputs = env.outputs
self.results = graph.results(self.inputs, self.outputs) # list(env.results)
self.variables = graph.variables(self.inputs, self.outputs) # list(env.variables)
# The orphans field is listified to ensure a consistent order.
self.orphans = list(r for r in self.results if isinstance(r, graph.Value) and r not in self.inputs) #list(env.orphans.difference(self.outputs))
self.temps = list(set(self.results).difference(self.inputs).difference(self.outputs).difference(self.orphans))
self.orphans = list(r for r in self.variables if isinstance(r, graph.Value) and r not in self.inputs) #list(env.orphans.difference(self.outputs))
self.temps = list(set(self.variables).difference(self.inputs).difference(self.outputs).difference(self.orphans))
self.node_order = env.toposort()
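The partitioning done by `fetch_variables` above amounts to simple set arithmetic: orphans are the non-input `Value`s, and temps are whatever is left once inputs, outputs, and orphans are removed. A minimal sketch with strings as hypothetical stand-ins for graph variables:

```python
# All variables reachable in the graph (stand-ins for Variable instances).
variables = {"x", "y", "z", "t1", "t2", "k"}
inputs = {"x", "y"}
outputs = {"z"}
# Orphans: Value instances in the graph that are not inputs.
orphans = {"k"}

# Temps: neither inputs, outputs, nor orphans (intermediate storage).
temps = variables - inputs - outputs - orphans
assert temps == {"t1", "t2"}
```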
def code_gen(self):
@@ -365,7 +365,7 @@ class CLinker(link.Linker):
symbol = {}
# (init_)tasks contains a list of pairs (Op/Result, task_name)
# (init_)tasks contains a list of pairs (Op/Variable, task_name)
# e.g. (x, 'get') or (x+y, 'code')
init_tasks = []
tasks = []
@@ -380,46 +380,46 @@ class CLinker(link.Linker):
sub = dict(failure_var = failure_var)
for result in self.results:
for variable in self.variables:
# it might be possible to inline constant results as C literals
## if getattr(result, 'constant', False):
# it might be possible to inline constant variables as C literals
## if getattr(variable, 'constant', False):
# policy = [[what to declare in the struct, what to do at construction, what to do at destruction],
# [what to declare in each run, what to do at the beginning of each run, what to do at the end of each run]]
if result in self.inputs:
if variable in self.inputs:
# we need to extract the new inputs at each run
# they do not need to be relayed to Python, so we don't sync
# if isinstance(result, Constant):
# raise TypeError("Inputs to CLinker cannot be Constant.", result)
# if isinstance(variable, Constant):
# raise TypeError("Inputs to CLinker cannot be Constant.", variable)
policy = [[get_nothing, get_nothing, get_nothing],
[get_c_declare, get_c_extract, get_c_cleanup]]
elif result in self.orphans:
if not isinstance(result, graph.Value):
raise TypeError("All orphans to CLinker must be Value instances.", result)
if isinstance(result, graph.Constant):
elif variable in self.orphans:
if not isinstance(variable, graph.Value):
raise TypeError("All orphans to CLinker must be Value instances.", variable)
if isinstance(variable, graph.Constant):
try:
symbol[result] = "(" + result.type.c_literal(result.data) + ")"
consts.append(result)
self.orphans.remove(result)
symbol[variable] = "(" + variable.type.c_literal(variable.data) + ")"
consts.append(variable)
self.orphans.remove(variable)
continue
except (utils.MethodNotDefined, NotImplementedError):
pass
# orphans are not inputs, so we just fetch them when we initialize the struct and assume they stay the same
policy = [[get_c_declare, get_c_extract, get_c_cleanup],
[get_nothing, get_nothing, get_nothing]]
elif result in self.temps:
elif variable in self.temps:
# temps don't need to be extracted from Python, so we call c_init rather than c_extract
# they do not need to be relayed to Python, so we don't sync
if result.type.c_is_simple() or result in no_recycling:
if variable.type.c_is_simple() or variable in no_recycling:
policy = [[get_nothing, get_nothing, get_nothing],
[get_c_declare, get_c_init, get_c_cleanup]]
else:
# it is useful for complex temps to reuse storage at each run, so we only clean up in the destructor
policy = [[get_c_declare, get_c_init, get_c_cleanup],
[get_nothing, get_nothing, get_nothing]]
elif result in self.outputs:
elif variable in self.outputs:
# outputs don't need to be extracted from Python, so we call c_init rather than c_extract
if result.type.c_is_simple() or result in no_recycling:
if variable.type.c_is_simple() or variable in no_recycling:
policy = [[get_nothing, get_nothing, get_nothing],
[get_c_declare, get_c_init, (get_c_sync, get_c_cleanup)]]
else:
@@ -429,16 +429,16 @@ class CLinker(link.Linker):
else:
raise Exception("Variable is not an input, orphan, temp, or output; cannot generate code for it")
builder, block = struct_result_codeblocks(result, policy, id, symbol, sub)
builder, block = struct_variable_codeblocks(variable, policy, id, symbol, sub)
# each Result generates two CodeBlocks, one to declare/initialize/destroy struct variables
# each Variable generates two CodeBlocks, one to declare/initialize/destroy struct variables
# and the other to declare/extract/cleanup each time the function is run.
# Typically, only one of the two actually does anything (see all the possible combinations above)
init_tasks.append((result, 'init', id))
init_tasks.append((variable, 'init', id))
init_blocks.append(builder)
tasks.append((result, 'get', id + 1))
tasks.append((variable, 'get', id + 1))
blocks.append(block)
id += 2
@@ -449,8 +449,8 @@ class CLinker(link.Linker):
# method to the actual variable names that we will use.
## ivnames, ovnames = op.c_var_names()
sub = dict(failure_var = failure_var)
## for result, vname in zip(op.inputs + op.outputs, ivnames + ovnames):
## sub[vname] = symbol[result]
## for variable, vname in zip(op.inputs + op.outputs, ivnames + ovnames):
## sub[vname] = symbol[variable]
name = "<invalid_c_thing>"
isyms, osyms = [symbol[r] for r in node.inputs], [symbol[r] for r in node.outputs]
@@ -479,7 +479,7 @@ class CLinker(link.Linker):
# List of arg names for use in struct_gen. Note the call to uniq: duplicate inputs
# must only be passed once because they are mapped to the same name.
args = []
args += ["storage_%s" % symbol[result] for result in utils.uniq(self.inputs + self.outputs + self.orphans)]
args += ["storage_%s" % symbol[variable] for variable in utils.uniq(self.inputs + self.outputs + self.orphans)]
struct_code = struct_gen(args, init_blocks, blocks, dict(failure_var = failure_var, name = "<<<<NAME>>>>"))
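The call to `utils.uniq` above is what keeps a duplicated input from being passed twice: duplicates map to the same symbol, hence the same `storage_...` argument name. As a rough sketch (the real `utils.uniq` may differ), an order-preserving de-duplication looks like:

```python
def uniq(seq):
    # Order-preserving de-duplication: keep the first occurrence of each
    # element, drop later repeats.  Hypothetical stand-in for utils.uniq.
    seen = set()
    out = []
    for x in seq:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```

With such a helper, a variable that appears both as an input and as an output contributes a single argument to the generated struct.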
@@ -509,13 +509,13 @@ class CLinker(link.Linker):
def support_code(self):
"""WRITEME
Returns a list of support code strings that are needed by
one or more Results or Ops. The support code from Results is
one or more Variables or Ops. The support code from Variables is
added before the support code from Ops.
This might contain duplicates.
"""
ret = []
for x in [y.type for y in self.results] + [y.op for y in self.node_order]:
for x in [y.type for y in self.variables] + [y.op for y in self.node_order]:
try: ret.append(x.c_support_code())
except utils.MethodNotDefined: pass
return ret
@@ -523,12 +523,12 @@ class CLinker(link.Linker):
def compile_args(self):
"""WRITEME
Returns a list of compile args that are needed by one
or more Results or Ops.
or more Variables or Ops.
This might contain duplicates.
"""
ret = []
for x in [y.type for y in self.results] + [y.op for y in self.node_order]:
for x in [y.type for y in self.variables] + [y.op for y in self.node_order]:
try: ret += x.c_compile_args()
except utils.MethodNotDefined: pass
return ret
@@ -536,12 +536,12 @@ class CLinker(link.Linker):
def headers(self):
"""WRITEME
Returns a list of headers that are needed by one
or more Results or Ops.
or more Variables or Ops.
This might contain duplicates.
"""
ret = []
for x in [y.type for y in self.results] + [y.op for y in self.node_order]:
for x in [y.type for y in self.variables] + [y.op for y in self.node_order]:
try: ret += x.c_headers()
except utils.MethodNotDefined: pass
return ret
@@ -549,12 +549,12 @@ class CLinker(link.Linker):
def libraries(self):
"""WRITEME
Returns a list of libraries that are needed by one
or more Results or Ops.
or more Variables or Ops.
This might contain duplicates.
"""
ret = []
for x in [y.type for y in self.results] + [y.op for y in self.node_order]:
for x in [y.type for y in self.variables] + [y.op for y in self.node_order]:
try: ret += x.c_libraries()
except utils.MethodNotDefined: pass
return ret
@@ -568,21 +568,21 @@ class CLinker(link.Linker):
the thunk returned by __compile__, the inputs must be put in
that storage. If None, storage will be allocated.
@param output_storage: list of lists of length 1. The thunk returned
by __compile__ will put the results of the computation in these
by __compile__ will put the outputs of the computation in these
lists. If None, storage will be allocated.
Returns: thunk, input_storage, output_storage, error_storage
"""
error_storage = [None, None, None]
if input_storage is None:
input_storage = tuple([None] for result in self.inputs)
input_storage = tuple([None] for variable in self.inputs)
if output_storage is None:
map = {}
output_storage = []
for result in self.outputs:
if result not in map:
map[result] = [None]
output_storage.append(map[result])
for variable in self.outputs:
if variable not in map:
map[variable] = [None]
output_storage.append(map[variable])
input_storage = tuple(input_storage)
output_storage = tuple(output_storage)
thunk = self.cthunk_factory(error_storage,
@@ -604,7 +604,7 @@ class CLinker(link.Linker):
the thunk returned by __compile__, the inputs must be put in
that storage. If None, storage will be allocated.
@param output_storage: list of lists of length 1. The thunk returned
by __compile__ will put the results of the computation in these
by __compile__ will put the outputs of the computation in these
lists. If None, storage will be allocated.
Returns: thunk, input_storage, output_storage
@@ -760,16 +760,16 @@ def _execute(cthunk, init_tasks, tasks, error_storage):
class OpWiseCLinker(link.LocalLinker):
"""WRITEME
Uses CLinker on the individual Ops that comprise an env and loops
over them in Python. The result is slower than a compiled version of
over them in Python. This is slower than a compiled version of
the whole env, but saves on compilation time because small changes
in the computation graph won't necessarily trigger any recompilation,
only local changes in the Results or Ops that are used.
only local changes in the Variables or Ops that are used.
If fallback_on_perform is True, OpWiseCLinker will use an op's
perform method if no C version can be generated.
no_recycling can contain a list of Results that belong to the env.
If a Result is in no_recycling, CLinker will clear the output storage
no_recycling can contain a list of Variables that belong to the env.
If a Variable is in no_recycling, CLinker will clear the output storage
associated to it prior to computation (to avoid reusing it).
"""
@@ -878,7 +878,7 @@ class OpWiseCLinker(link.LocalLinker):
def _default_checker(x, y):
"""WRITEME
Default checker for DualLinker. This checks that the
results contain the same data using ==.
outputs contain the same data using ==.
"""
if x[0] != y[0]:
raise Exception("Output mismatch.", {'performlinker': x[0], 'clinker': y[0]})
@@ -890,7 +890,7 @@ class DualLinker(link.Linker):
The thunk/function produced by DualLinker uses PerformLinker as the
"main" implementation: the inputs and outputs are fed to/taken from
the Ops' perform. However, DualLinker also instantiates a copy of
the env on which it runs OpWiseCLinker. At each step, the results
the env on which it runs OpWiseCLinker. At each step, the outputs
of perform and of the C implementation are verified using a checker
function.
"""
@@ -903,7 +903,7 @@ class DualLinker(link.Linker):
of length 1. The first one passed will contain the output
computed by PerformLinker and the second one the output
computed by OpWiseCLinker. The checker should compare the data
fields of the two results to see if they match. By default,
fields of the two outputs to see if they match. By default,
DualLinker uses ==. A custom checker can be provided to
compare up to a certain error tolerance.
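For instance, a tolerance-based checker for scalar float outputs might look like the following sketch (`x` and `y` are the length-1 storage lists described above; the exception format mimics `_default_checker`):

```python
import math

def allclose_checker(x, y, rel_tol=1e-5, abs_tol=1e-8):
    # x[0] holds PerformLinker's output, y[0] holds OpWiseCLinker's output.
    # Accept small floating-point discrepancies instead of requiring ==.
    if not math.isclose(x[0], y[0], rel_tol=rel_tol, abs_tol=abs_tol):
        raise Exception("Output mismatch.",
                        {'performlinker': x[0], 'clinker': y[0]})
```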
@@ -914,8 +914,8 @@ class DualLinker(link.Linker):
careful not to share data between the two outputs (or inplace
operations that use them will interfere).
no_recycling can contain a list of Results that belong to the env.
If a Result is in no_recycling, CLinker will clear the output storage
no_recycling can contain a list of Variables that belong to the env.
If a Variable is in no_recycling, CLinker will clear the output storage
associated to it during the computation (to avoid reusing it).
"""
self.env = None
......
@@ -14,7 +14,7 @@ except ImportError:
# The following function takes a PyCObject instance that contains
# a void*->int function in its VoidPtr field. It then calls that
# function on the object's Desc field and returns the int result.
# function on the object's Desc field and returns the resulting int.
single_runner = """
if (!PyCObject_Check(py_cthunk)) {
PyErr_SetString(PyExc_ValueError,
......
@@ -42,7 +42,7 @@ class DestroyHandler(object):
def getroot(r, view_i):
"""
For views: Return non-view result which is ultimatly viewed by r.
For views: Return the non-view variable which is ultimately viewed by r.
For non-views: return self.
"""
try:
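The body is elided in this hunk, but the idea of `getroot` can be sketched as a walk up the view chain, assuming `view_i` maps each view to the variable it views (as the attribute comments below suggest):

```python
def getroot_sketch(r, view_i):
    # Follow the view chain until we reach a variable that is not itself
    # a view; that variable is the root ultimately viewed by r.
    # (Sketch only -- the real getroot is recursive and uses try/except.)
    while r in view_i:
        r = view_i[r]
    return r
```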
@@ -52,10 +52,10 @@ def getroot(r, view_i):
def add_impact(r, view_o, impact):
"""
In opposition to getroot, which finds the result that is viewed *by* r, this function
returns all the results that are views of r.
In opposition to getroot, which finds the variable that is viewed *by* r, this function
returns all the variables that are views of r.
:param impact: is a set of results that are views of r
:param impact: is a set of variables that are views of r
:param droot: a dictionary mapping views -> r
"""
for v in view_o.get(r,[]):
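The rest of the body is elided above; a self-contained sketch of the recursive collection, under the same assumption that `view_o` maps a variable to the set of its direct views, would be:

```python
def add_impact_sketch(r, view_o, impact):
    # Recursively add every direct and indirect view of r to the impact set.
    for v in view_o.get(r, []):
        impact.add(v)
        add_impact_sketch(v, view_o, impact)
```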
@@ -94,10 +94,10 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
self.env = env
self.destroyers = set() #set of Apply instances with non-null destroy_map
self.view_i = {} # result -> result
self.view_o = {} # result -> set of results
#clients: how many times does an apply use a given result
self.clients = {} # result -> apply -> ninputs
self.view_i = {} # variable -> variable
self.view_o = {} # variable -> set of variables
#clients: how many times does an apply use a given variable
self.clients = {} # variable -> apply -> ninputs
self.stale_droot = True
self.debug_all_apps = set()
@@ -111,8 +111,8 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
return self.droot, self.impact, self.root_destroyer
def _build_droot_impact(self):
droot = {} # destroyed view + nonview results -> foundation
impact = {} # destroyed nonview result -> it + all views of it
droot = {} # destroyed view + nonview variables -> foundation
impact = {} # destroyed nonview variable -> it + all views of it
root_destroyer = {} # root -> destroyer apply
for app in self.destroyers:
@@ -286,7 +286,7 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
"""Return orderings induced by destructive operations.
Raise InconsistencyError when
a) attempting to destroy indestructable result, or
a) attempting to destroy an indestructible variable, or
b) attempting to destroy a value multiple times, or
c) an Apply destroys (illegally) one of its own inputs by aliasing
@@ -309,23 +309,23 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
isinstance(r, graph.Constant)]
if illegal_destroy:
#print 'destroying illegally'
raise InconsistencyError("Attempting to destroy indestructible results: %s" %
raise InconsistencyError("Attempting to destroy indestructible variables: %s" %
illegal_destroy)
# add destroyed result clients as computational dependencies
# add destroyed variable clients as computational dependencies
for app in self.destroyers:
# for each destroyed input...
for output_idx, input_idx_list in app.op.destroy_map.items():
destroyed_idx = input_idx_list[0]
destroyed_result = app.inputs[destroyed_idx]
root = droot[destroyed_result]
destroyed_variable = app.inputs[destroyed_idx]
root = droot[destroyed_variable]
root_impact = impact[root]
# we generally want to put all clients of things which depend on root
# as pre-requisites of app.
# But, app is itself one such client!
# App will always be a client of the node we're destroying
# (destroyed_result, but the tricky thing is when it is also a client of
# *another result* viewing on the root. Generally this is illegal, (e.g.,
# (destroyed_variable), but the tricky thing is when it is also a client of
# *another variable* viewing the root. Generally this is illegal (e.g.,
# add_inplace(x, x.T)). In some special cases, though, the in-place op will
# actually be able to work properly with multiple destroyed inputs (e.g.,
# add_inplace(x, x)). An Op that can still work in this case should declare
@@ -349,7 +349,7 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
#print 'tolerated', tolerated
for i, input in enumerate(app.inputs):
if input in root_impact \
and (i not in tolerated or input is not destroyed_result):
and (i not in tolerated or input is not destroyed_variable):
raise InconsistencyError("Input aliasing: %s (%i, %i)"
% (app, destroyed_idx, i))
......
@@ -18,13 +18,13 @@ class InconsistencyError(Exception):
class Env(utils.object2):
""" WRITEME
An Env represents a subgraph bound by a set of input results and a
set of output results. The inputs list should contain all the inputs
on which the outputs depend. Results of type Value or Constant are
An Env represents a subgraph bound by a set of input variables and a
set of output variables. The inputs list should contain all the inputs
on which the outputs depend. Variables of type Value or Constant are
not counted as inputs.
The Env supports the replace operation, which allows one to replace a
result in the subgraph by another, e.g. replace (x + x).out by (2
variable in the subgraph by another, e.g. replace (x + x).out by (2
* x).out. This is the basis for optimization in theano.
It can also be "extended" using env.extend(some_object). See the
@@ -65,7 +65,7 @@ class Env(utils.object2):
- feature.on_setup_node(env, node):
WRITEME
- feature.on_setup_result(env, result):
- feature.on_setup_variable(env, variable):
WRITEME
"""
@@ -90,8 +90,8 @@ class Env(utils.object2):
# All nodes in the subgraph defined by inputs and outputs are cached in nodes
self.nodes = set()
# Ditto for results
self.results = set()
# Ditto for variables
self.variables = set()
self.inputs = list(inputs)
self.outputs = outputs
@@ -104,17 +104,17 @@ class Env(utils.object2):
raise ValueError("One of the provided inputs is the output of an already existing node. " \
"If that is okay, either discard that input's owner or use graph.clone.")
self.__setup_r__(input)
self.results.add(input)
self.variables.add(input)
self.__import_r__(outputs)
for i, output in enumerate(outputs):
output.clients.append(('output', i))
self.node_locks = {}
self.result_locks = {}
self.variable_locks = {}
### Setup a Result ###
### Setup a Variable ###
def __setup_r__(self, r):
# sets up r so it belongs to this env
@@ -122,7 +122,7 @@ class Env(utils.object2):
raise Exception("%s is already owned by another env" % r)
r.env = self
r.clients = []
#self.execute_callbacks('on_setup_result', r)
#self.execute_callbacks('on_setup_variable', r)
def __setup_node__(self, node):
# sets up node so it belongs to this env
@@ -134,23 +134,23 @@ class Env(utils.object2):
def disown(self):
""" WRITEME
Cleans up all of this Env's nodes and results so they are not
Cleans up all of this Env's nodes and variables so they are not
associated with this Env anymore.
The Env should not be used anymore after disown is called.
This may not clean everything this Env's features set in the
nodes and results. If there are no features, this should set
nodes and variables. If there are no features, this should set
them back to what they were originally.
"""
for node in self.nodes:
del node.env
del node.deps
for result in self.results:
del result.env
del result.clients
for variable in self.variables:
del variable.env
del variable.clients
self.nodes = set()
self.results = set()
self.variables = set()
self.inputs = None
self.outputs = None
@@ -166,7 +166,7 @@ class Env(utils.object2):
def __add_clients__(self, r, new_clients):
""" WRITEME
r -> result
r -> variable
new_clients -> list of (node, i) pairs such that node.inputs[i] is r.
Updates the list of clients of r with new_clients.
@@ -179,7 +179,7 @@ class Env(utils.object2):
def __remove_clients__(self, r, clients_to_remove, prune = True):
""" WRITEME
r -> result
r -> variable
clients_to_remove -> list of (op, i) pairs such that node.inputs[i] is not r anymore.
Removes all from the clients list of r.
@@ -200,26 +200,26 @@ class Env(utils.object2):
### import ###
def __import_r__(self, results):
# Imports the owners of the results
def __import_r__(self, variables):
# Imports the owners of the variables
r_owner_done = set()
for node in [r.owner for r in results if r.owner is not None]:
for node in [r.owner for r in variables if r.owner is not None]:
if node not in r_owner_done:
r_owner_done.add(node)
self.__import__(node)
for r in results:
for r in variables:
if r.owner is None and not isinstance(r, graph.Value) and r not in self.inputs:
raise TypeError("Undeclared input", r)
if not getattr(r, 'env', None) is self:
self.__setup_r__(r)
self.results.add(r)
self.variables.add(r)
def __import__(self, node, check = True):
# We import the nodes in topological order. We only are interested
# in new nodes, so we use all results we know of as if they were the input set.
# in new nodes, so we use all variables we know of as if they were the input set.
# (the functions in the graph module only use the input set to
# know where to stop going down)
new_nodes = graph.io_toposort(self.results, node.outputs)
new_nodes = graph.io_toposort(self.variables, node.outputs)
if check:
for node in new_nodes:
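The comment above describes how `__import__` discovers new nodes: every variable the env already knows is treated as a stopping frontier for the topological walk. A minimal stand-in for `graph.io_toposort` (the names and the graph encoding here are hypothetical, not Theano's actual representation) could be:

```python
def io_toposort_sketch(frontier, outputs, owner):
    # owner maps a variable to the node that produced it, encoded here as a
    # (op_name, input_variables) pair; graph roots are absent from owner.
    # The walk stops at any variable in `frontier`, so only nodes that are
    # new relative to the frontier are returned, in computable order.
    order, seen = [], set()

    def visit(var):
        if var in seen or var in frontier:
            return
        seen.add(var)
        node = owner.get(var)
        if node is None:
            return
        for dep in node[1]:   # visit producers of this node's inputs first
            visit(dep)
        order.append(node)

    for out in outputs:
        visit(out)
    return order
```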
@@ -237,11 +237,11 @@ class Env(utils.object2):
self.nodes.add(node)
for output in node.outputs:
self.__setup_r__(output)
self.results.add(output)
self.variables.add(output)
for i, input in enumerate(node.inputs):
if input not in self.results:
if input not in self.variables:
self.__setup_r__(input)
self.results.add(input)
self.variables.add(input)
self.__add_clients__(input, [(node, i)])
assert node.env is self
self.execute_callbacks('on_import', node)
@@ -249,13 +249,13 @@ class Env(utils.object2):
### prune ###
def __prune_r__(self, results):
# Prunes the owners of the results.
for node in set(r.owner for r in results if r.owner is not None):
def __prune_r__(self, variables):
# Prunes the owners of the variables.
for node in set(r.owner for r in variables if r.owner is not None):
self.__prune__(node)
for r in results:
if not r.clients and r in self.results:
self.results.remove(r)
for r in variables:
if not r.clients and r in self.variables:
self.variables.remove(r)
def __prune__(self, node):
if node not in self.nodes:
@@ -270,7 +270,7 @@ class Env(utils.object2):
if self.clients(output) or output in self.outputs: #output in self.outputs or self.clients(output):
return
self.nodes.remove(node)
self.results.difference_update(node.outputs)
self.variables.difference_update(node.outputs)
self.execute_callbacks('on_prune', node)
for i, input in enumerate(node.inputs):
@@ -295,14 +295,14 @@ class Env(utils.object2):
if node == 'output':
r = self.outputs[i]
if not r.type == new_r.type:
raise TypeError("The type of the replacement must be the same as the type of the original Result.", r, new_r)
raise TypeError("The type of the replacement must be the same as the type of the original Variable.", r, new_r)
self.outputs[i] = new_r
else:
if node.env is not self:
raise Exception("Cannot operate on %s because it does not belong to this Env" % node)
r = node.inputs[i]
if not r.type == new_r.type:
raise TypeError("The type of the replacement must be the same as the type of the original Result.", r, new_r)
raise TypeError("The type of the replacement must be the same as the type of the original Variable.", r, new_r)
node.inputs[i] = new_r
self.__import_r__([new_r])
@@ -324,9 +324,9 @@ class Env(utils.object2):
if r.env is not self:
raise Exception("Cannot replace %s because it does not belong to this Env" % r, str(reason))
if not r.type == new_r.type:
raise TypeError("The type of the replacement must be the same as the type of the original Result.", r, new_r, r.type, new_r.type, str(reason))
if r not in self.results:
# this result isn't in the graph... don't raise an exception here, just return silently
raise TypeError("The type of the replacement must be the same as the type of the original Variable.", r, new_r, r.type, new_r.type, str(reason))
if r not in self.variables:
# this variable isn't in the graph... don't raise an exception here, just return silently
# because it makes it easier to implement some optimizations for multiple-output ops
return
@@ -464,30 +464,30 @@ class Env(utils.object2):
for node in nodes:
if node.env is not self:
raise Exception("Node should belong to the env.", node)
for i, result in enumerate(node.inputs):
if result.env is not self:
raise Exception("Input of node should belong to the env.", result, (node, i))
if (node, i) not in result.clients:
raise Exception("Inconsistent clients list.", (node, i), result.clients)
results = set(graph.results(self.inputs, self.outputs))
if set(self.results) != results:
missing = results.difference(self.results)
excess = self.results.difference(results)
raise Exception("The results are inappropriately cached. missing, in excess: ", missing, excess)
for result in results:
if result.owner is None and result not in self.inputs and not isinstance(result, graph.Value):
raise Exception("Undeclared input.", result)
if result.env is not self:
raise Exception("Result should belong to the env.", result)
for node, i in result.clients:
for i, variable in enumerate(node.inputs):
if variable.env is not self:
raise Exception("Input of node should belong to the env.", variable, (node, i))
if (node, i) not in variable.clients:
raise Exception("Inconsistent clients list.", (node, i), variable.clients)
variables = set(graph.variables(self.inputs, self.outputs))
if set(self.variables) != variables:
missing = variables.difference(self.variables)
excess = self.variables.difference(variables)
raise Exception("The variables are inappropriately cached. missing, in excess: ", missing, excess)
for variable in variables:
if variable.owner is None and variable not in self.inputs and not isinstance(variable, graph.Value):
raise Exception("Undeclared input.", variable)
if variable.env is not self:
raise Exception("Variable should belong to the env.", variable)
for node, i in variable.clients:
if node == 'output':
if self.outputs[i] is not result:
raise Exception("Inconsistent clients list.", result, self.outputs[i])
if self.outputs[i] is not variable:
raise Exception("Inconsistent clients list.", variable, self.outputs[i])
continue
if node not in nodes:
raise Exception("Client not in env.", result, (node, i))
if node.inputs[i] is not result:
raise Exception("Inconsistent clients list.", result, node.inputs[i])
raise Exception("Client not in env.", variable, (node, i))
if node.inputs[i] is not variable:
raise Exception("Inconsistent clients list.", variable, node.inputs[i])
def __str__(self):
return "[%s]" % ", ".join(graph.as_string(self.inputs, self.outputs))
......
"""
Node classes (`Apply`, `Result`) and expression graph algorithms.
Node classes (`Apply`, `Variable`) and expression graph algorithms.
To read about what theano graphs are from a user perspective, have a look at
`graph.html <../doc/graph.html>`__.
@@ -18,24 +18,24 @@ _creation_idx = [0]
class Apply(utils.object2):
"""
An :term:`Apply` instance is a node in an expression graph which represents the application
of an `Op` to some input `Result` nodes, producing some output `Result` nodes.
of an `Op` to some input `Variable` nodes, producing some output `Variable` nodes.
This class is typically instantiated by an Op's make_node() function, which is typically
called by that Op's __call__() function.
An Apply instance serves as a simple structure with three important attributes:
- :literal:`inputs` : a list of `Result` nodes that represent the arguments of the expression,
- :literal:`inputs` : a list of `Variable` nodes that represent the arguments of the expression,
- :literal:`outputs` : a list of `Result` nodes that represent the result of the expression, and
- :literal:`outputs` : a list of `Variable` nodes that represent the result of the expression, and
- :literal:`op` : an `Op` instance that determines the nature of the expression being applied.
The driver `compile.function` uses Apply's inputs attribute together with Result's owner
The driver `compile.function` uses Apply's inputs attribute together with Variable's owner
attribute to search the expression graph and determine which inputs are necessary to
compute the function's outputs.
A `Linker` uses the Apply instance's `op` field to compute the results.
A `Linker` uses the Apply instance's `op` field to compute the outputs.
Comparing with the Python language, an `Apply` instance is theano's version of a function
call (or expression instance) whereas `Op` is theano's version of a function definition.
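As an illustration of the owner/index bookkeeping described above, here is a toy pair of classes; `MiniVariable` and `MiniApply` are hypothetical stand-ins for the real graph classes, stripped to the three attributes named in this docstring:

```python
class MiniVariable(object):
    # Stand-in for Variable: records which Apply produced it, if any.
    def __init__(self, name):
        self.owner = None   # Apply node that outputs this variable, or None
        self.index = None   # position of this variable in owner.outputs
        self.name = name

class MiniApply(object):
    # Stand-in for Apply: ties an op to its input and output variables.
    def __init__(self, op, inputs, outputs):
        self.op = op
        self.inputs = list(inputs)
        self.outputs = list(outputs)
        for i, out in enumerate(self.outputs):
            out.owner = self
            out.index = i

x, y, z = MiniVariable('x'), MiniVariable('y'), MiniVariable('z')
app = MiniApply('add', [x, y], [z])
```

Walking `z.owner.inputs` recovers `[x, y]`, which is exactly how `compile.function` traces which inputs a graph output depends on.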
@@ -48,9 +48,9 @@ class Apply(utils.object2):
:Parameters:
`op` : `Op` instance
initialize self.op
`inputs` : list of Result instances
`inputs` : list of Variable instances
initialize self.inputs
`outputs` : list of Result instances
`outputs` : list of Variable instances
initialize self.outputs
:note:
@@ -65,24 +65,24 @@ class Apply(utils.object2):
self.inputs = []
self.tag = utils.scratchpad()
## filter inputs to make sure each element is a Result
## filter inputs to make sure each element is a Variable
for input in inputs:
if isinstance(input, Result):
if isinstance(input, Variable):
self.inputs.append(input)
else:
raise TypeError("The 'inputs' argument to Apply must contain Result instances, not %s" % input)
raise TypeError("The 'inputs' argument to Apply must contain Variable instances, not %s" % input)
self.outputs = []
## filter outputs to make sure each element is a Result
## filter outputs to make sure each element is a Variable
for i, output in enumerate(outputs):
if isinstance(output, Result):
if isinstance(output, Variable):
if output.owner is None:
output.owner = self
output.index = i
elif output.owner is not self or output.index != i:
raise ValueError("All output results passed to Apply must belong to it.")
raise ValueError("All output variables passed to Apply must belong to it.")
self.outputs.append(output)
else:
raise TypeError("The 'outputs' argument to Apply must contain Result instances with no owner, not %s" % output)
raise TypeError("The 'outputs' argument to Apply must contain Variable instances with no owner, not %s" % output)
self._creation_idx = _creation_idx[0]
_creation_idx[0] += 1
@@ -91,7 +91,7 @@ class Apply(utils.object2):
"""Returns the default output for this node.
:rtype:
Result instance
Variable instance
:return:
an element of self.outputs, typically self.outputs[0].
@@ -145,7 +145,7 @@ class Apply(utils.object2):
def clone_with_new_inputs(self, inputs, strict = True):
"""Duplicate this Apply instance in a new graph.
:param inputs: list of Result instances to use as inputs.
:param inputs: list of Variable instances to use as inputs.
:type strict: Bool
@@ -181,38 +181,38 @@ class Apply(utils.object2):
"""property: Number of outputs"""
class Result(utils.object2):
class Variable(utils.object2):
"""
A :term:`Result` is a node in an expression graph that represents a variable.
A :term:`Variable` is a node in an expression graph that represents a variable.
The inputs and outputs of every `Apply` are `Result` instances.
The input and output arguments to create a `function` are also `Result` instances.
A `Result` is like a strongly-typed variable in some other languages; each `Result` contains a
reference to a `Type` instance that defines the kind of value the `Result` can take in a
The inputs and outputs of every `Apply` are `Variable` instances.
The input and output arguments to create a `function` are also `Variable` instances.
A `Variable` is like a strongly-typed variable in some other languages; each `Variable` contains a
reference to a `Type` instance that defines the kind of value the `Variable` can take in a
computation.
A `Result` is a container for four important attributes:
A `Variable` is a container for four important attributes:
- :literal:`type` a `Type` instance defining the kind of value this `Result` can have,
- :literal:`type` a `Type` instance defining the kind of value this `Variable` can have,
- :literal:`owner` either None (for graph roots) or the `Apply` instance of which `self` is an output,
- :literal:`index` the integer such that :literal:`owner.outputs[index] is this_result` (ignored if `owner` is None)
- :literal:`index` the integer such that :literal:`owner.outputs[index] is this_variable` (ignored if `owner` is None)
- :literal:`name` a string to use in pretty-printing and debugging.
There are a few kinds of Results to be aware of: A Result which is the output of a symbolic
There are a few kinds of Variables to be aware of: A Variable which is the output of a symbolic
computation has a reference to the Apply instance to which it belongs (property: owner) and
the position of itself in the owner's output list (property: index).
- `Result` (this base type) is typically the output of a symbolic computation,
- `Variable` (this base type) is typically the output of a symbolic computation,
- `Value` (a subclass) adds a default :literal:`value`, and requires that owner == None
- `Constant` (a subclass) which adds a default and un-replaceable :literal:`value`, and
requires that owner == None
A Result which is the output of a symbolic computation will have an owner != None.
A Variable which is the output of a symbolic computation will have an owner != None.
Code Example
============
@@ -239,11 +239,11 @@ class Result(utils.object2):
e = d + b
theano.function([d,b], [e]) # this works. d's default value of 1.5 is ignored.
The python variables :literal:`a,b,c` all refer to instances of type `Result`.
The `Result` refered to by `a` is also an instance of `Constant`.
The python variables :literal:`a,b,c` all refer to instances of type `Variable`.
The `Variable` referred to by `a` is also an instance of `Constant`.
`compile.function` uses each `Apply` instance's `inputs` attribute
together with each Result's `owner` field to determine which inputs are necessary to compute the function's outputs.
together with each Variable's `owner` field to determine which inputs are necessary to compute the function's outputs.
"""
#__slots__ = ['type', 'owner', 'index', 'name']
......@@ -258,7 +258,7 @@ class Result(utils.object2):
:param owner: the Apply instance which computes the value for this variable
:type index: None or int
:param index: the position of this Result in owner.outputs
:param index: the position of this Variable in owner.outputs
:type name: None or str
:param name: a string for pretty-printing and debugging
......@@ -290,10 +290,10 @@ class Result(utils.object2):
def __repr__(self):
return str(self)
def clone(self):
"""Return a new Result like self.
"""Return a new Variable like self.
:rtype: Result instance
:return: a new Result instance (or subclass instance) with no owner or index.
:rtype: Variable instance
:return: a new Variable instance (or subclass instance) with no owner or index.
:note: tags are copied to the returned instance.
:note: name is copied to the returned instance.
......@@ -303,9 +303,9 @@ class Result(utils.object2):
cp.tag = copy(self.tag)
return cp
class Value(Result):
class Value(Variable):
"""
A :term:`Value` is a `Result` with a default value.
A :term:`Value` is a `Variable` with a default value.
Its owner field is always None. And since it has a default value, a `Value` instance need
not be named as an input to `compile.function`.
......@@ -325,7 +325,7 @@ class Value(Result):
WRITEME
"""
Result.__init__(self, type, None, None, name)
Variable.__init__(self, type, None, None, name)
self.data = type.filter(data)
def __str__(self):
"""WRITEME"""
......@@ -357,7 +357,7 @@ class Constant(Value):
def __init__(self, type, data, name = None):
Value.__init__(self, type, data, name)
def equals(self, other):
# this does what __eq__ should do, but Result and Apply should always be hashable by id
# this does what __eq__ should do, but Variable and Apply should always be hashable by id
return isinstance(other, Constant) and self.signature() == other.signature()
def signature(self):
return (self.type, self.data)
......@@ -378,7 +378,7 @@ def stack_search(start, expand, mode='bfs', build_inv = False):
:param expand:
when we get to a node, add expand(node) to the list of nodes to visit. This function
should return a list, or None
:rtype: list of `Result` or `Apply` instances (depends on `expend`)
:rtype: list of `Variable` or `Apply` instances (depends on `expend`)
:return: the list of nodes in order of traversal.
:note:
......@@ -414,16 +414,16 @@ def stack_search(start, expand, mode='bfs', build_inv = False):
return rval_list
def inputs(result_list, blockers = None):
"""Return the inputs required to compute the given Results.
def inputs(variable_list, blockers = None):
"""Return the inputs required to compute the given Variables.
:type result_list: list of `Result` instances
:param result_list:
output `Result` instances from which to search backward through owners
:rtype: list of `Result` instances
:type variable_list: list of `Variable` instances
:param variable_list:
output `Variable` instances from which to search backward through owners
:rtype: list of `Variable` instances
:returns:
input nodes with no owner, in the order found by a left-recursive depth-first search
started at the nodes in `result_list`.
started at the nodes in `variable_list`.
"""
def expand(r):
......@@ -431,13 +431,13 @@ def inputs(result_list, blockers = None):
l = list(r.owner.inputs)
l.reverse()
return l
dfs_results = stack_search(deque(result_list), expand, 'dfs')
rval = [r for r in dfs_results if r.owner is None]
dfs_variables = stack_search(deque(variable_list), expand, 'dfs')
rval = [r for r in dfs_variables if r.owner is None]
#print rval, _orig_inputs(o)
return rval
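The docstring above describes a depth-first search from the outputs back through owners, keeping only ownerless Variables. A standalone sketch of that behavior (toy classes, not the renamed Theano ones):

```python
# Toy version of `inputs`: walk owners depth-first from the given
# variables and collect the graph roots (owner is None), each once.
class Var(object):
    def __init__(self, name, owner=None):
        self.name, self.owner = name, owner

class App(object):
    def __init__(self, inputs):
        self.inputs = inputs
        self.out = Var('out', owner=self)

def toy_inputs(variable_list):
    seen, roots = set(), []
    def visit(r):
        if id(r) in seen:
            return
        seen.add(id(r))
        if r.owner is None:
            roots.append(r)
        else:
            for i in r.owner.inputs:
                visit(i)
    for r in variable_list:
        visit(r)
    return roots

x, y = Var('x'), Var('y')
z = App([x, y]).out
w = App([z, x]).out        # x feeds two nodes but is reported once
assert toy_inputs([w]) == [x, y]
```
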
def results_and_orphans(i, o):
def variables_and_orphans(i, o):
"""WRITEME
"""
def expand(r):
......@@ -445,72 +445,72 @@ def results_and_orphans(i, o):
l = list(r.owner.inputs) + list(r.owner.outputs)
l.reverse()
return l
results = stack_search(deque(o), expand, 'dfs')
orphans = [r for r in results if r.owner is None and r not in i]
return results, orphans
variables = stack_search(deque(o), expand, 'dfs')
orphans = [r for r in variables if r.owner is None and r not in i]
return variables, orphans
def ops(i, o):
""" WRITEME
:type i: list
:param i: input L{Result}s
:param i: input L{Variable}s
:type o: list
:param o: output L{Result}s
:param o: output L{Variable}s
:returns:
the set of ops that are contained within the subgraph that lies between i and o,
including the owners of the L{Result}s in o and intermediary ops between i and o, but
not the owners of the L{Result}s in i.
including the owners of the L{Variable}s in o and intermediary ops between i and o, but
not the owners of the L{Variable}s in i.
"""
ops = set()
results, orphans = results_and_orphans(i, o)
for r in results:
variables, orphans = variables_and_orphans(i, o)
for r in variables:
if r not in i and r not in orphans:
if r.owner is not None:
ops.add(r.owner)
return ops
def results(i, o):
def variables(i, o):
""" WRITEME
:type i: list
:param i: input L{Result}s
:param i: input L{Variable}s
:type o: list
:param o: output L{Result}s
:param o: output L{Variable}s
:returns:
the set of Results that are involved in the subgraph that lies between i and o. This
the set of Variables that are involved in the subgraph that lies between i and o. This
includes i, o, orphans(i, o) and all values of all intermediary steps from i to o.
"""
return results_and_orphans(i, o)[0]
return variables_and_orphans(i, o)[0]
def orphans(i, o):
""" WRITEME
:type i: list
:param i: input L{Result}s
:param i: input L{Variable}s
:type o: list
:param o: output L{Result}s
:param o: output L{Variable}s
:returns:
the set of Results which one or more Results in o depend on but are neither in i nor in
the set of Variables which one or more Variables in o depend on but are neither in i nor in
the subgraph that lies between i and o.
e.g. orphans([x], [(x+y).out]) => [y]
"""
return results_and_orphans(i, o)[1]
return variables_and_orphans(i, o)[1]
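The `orphans([x], [(x+y).out]) => [y]` example above can be reproduced with a small sketch: orphans are the ownerless variables reachable from the outputs that were not declared as inputs (toy classes, illustrative only):

```python
# Toy version of `orphans`: roots of the output graph not listed in i.
class V(object):
    def __init__(self, name, owner=None):
        self.name, self.owner = name, owner

class A(object):
    def __init__(self, inputs):
        self.inputs = inputs
        self.out = V('out', owner=self)

def toy_orphans(i, o):
    seen, orphan_list = set(), []
    stack = list(o)
    while stack:
        r = stack.pop()
        if id(r) in seen:
            continue
        seen.add(id(r))
        if r.owner is None:
            if r not in i:
                orphan_list.append(r)
        else:
            stack.extend(r.owner.inputs)
    return orphan_list

x, y = V('x'), V('y')
e = A([x, y]).out          # stands for (x + y)
assert toy_orphans([x], [e]) == [y]
```
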
def clone(i, o, copy_inputs = True):
""" WRITEME
:type i: list
:param i: input L{Result}s
:param i: input L{Variable}s
:type o: list
:param o: output L{Result}s
:param o: output L{Variable}s
:type copy_inputs: bool
:param copy_inputs: if True, the inputs will be copied (defaults to True)
......@@ -525,9 +525,9 @@ def clone_get_equiv(i, o, copy_inputs_and_orphans = True):
""" WRITEME
:type i: list
:param i: input L{Result}s
:param i: input L{Variable}s
:type o: list
:param o: output L{Result}s
:param o: output L{Variable}s
:type copy_inputs_and_orphans: bool
:param copy_inputs_and_orphans:
if True, the inputs and the orphans will be replaced in the cloned graph by copies
......@@ -536,7 +536,7 @@ def clone_get_equiv(i, o, copy_inputs_and_orphans = True):
:rtype: a dictionary
:return:
equiv mapping each L{Result} and L{Op} in the graph delimited by i and o to a copy
equiv mapping each L{Variable} and L{Op} in the graph delimited by i and o to a copy
(akin to deepcopy's memo).
"""
......@@ -629,7 +629,7 @@ def io_toposort(i, o, orderings = {}):
def deps(obj):
rval = []
if obj not in iset:
if isinstance(obj, Result):
if isinstance(obj, Variable):
if obj.owner:
rval = [obj.owner]
if isinstance(obj, Apply):
......@@ -660,11 +660,11 @@ def as_string(i, o,
"""WRITEME
:type i: list
:param i: input `Result` s
:param i: input `Variable` s
:type o: list
:param o: output `Result` s
:param o: output `Variable` s
:type leaf_formatter: function
:param leaf_formatter: takes a `Result` and returns a string to describe it
:param leaf_formatter: takes a `Variable` and returns a string to describe it
:type node_formatter: function
:param node_formatter:
takes an `Op` and the list of strings corresponding to its arguments and returns a
......
......@@ -52,14 +52,14 @@ class Linker(object):
def make_thunk(self):
"""
This function must return a triplet (function, input_results, output_results)
where function is a thunk that operates on the returned results. If inplace
is True, the input_results and output_results lists will be the same as the
This function must return a triplet (function, input_variables, output_variables)
where function is a thunk that operates on the returned variables. If inplace
is True, the input_variables and output_variables lists will be the same as the
inputs and outputs of the graph provided to the L{Linker}. Else, independent
results will be returned.
variables will be returned.
Example::
x, y = Result(Double), Result(Double)
x, y = Variable(Double), Variable(Double)
e = x + y
env = Env([x, y], [e])
fn, (new_x, new_y), (new_e, ) = MyLinker(env).make_thunk(inplace)
......@@ -98,13 +98,13 @@ class Linker(object):
% (takes, ['argument','arguments'][takes>1], got)
if (len(args) != len(inputs)):
raise TypeError(e_arity(len(inputs), len(args)))
for arg, result in zip(args, inputs):
result.data = arg
for arg, variable in zip(args, inputs):
variable.data = arg
thunk()
if unpack_single:
return utils.to_return_values([result.data for result in outputs])
return utils.to_return_values([variable.data for variable in outputs])
else:
return [result.data for result in outputs]
return [variable.data for variable in outputs]
execute.thunk = thunk
execute.inputs = inputs
execute.outputs = outputs
......@@ -114,14 +114,14 @@ class Linker(object):
#TODO: Move this class to the compile module, where it is used (and for which it exists).
class Container(object):
"""This class joins a result with its computed value.
"""This class joins a variable with its computed value.
It is used in linkers, especially for the inputs and outputs of a Function.
"""
def __init__(self, r, storage, readonly = False, strict = False, name = None):
"""WRITEME
:Parameters:
`r`: a result
`r`: a variable
`storage`: a list of length 1, whose element is the value for `r`
`readonly`: True indicates that this should not be settable by Function[r] = val
`strict`: if True, we don't allow type casting.
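The Container idea documented above — a variable joined to a length-1 storage list that linkers can share — can be sketched roughly like this (hypothetical `ToyContainer`, not the class in this diff):

```python
# Toy container: pairs a value with single-element storage; the storage
# list itself can be aliased by several containers/linkers.
class ToyContainer(object):
    def __init__(self, storage, readonly=False):
        assert len(storage) == 1
        self.storage, self.readonly = storage, readonly

    @property
    def value(self):
        return self.storage[0]

    @value.setter
    def value(self, v):
        if self.readonly:
            raise Exception('Cannot set readonly storage.')
        self.storage[0] = v

cell = ToyContainer([None])
cell.value = 1.5
cell.storage[0] = 2.0      # writing through the shared list is visible
ro = ToyContainer([3.0], readonly=True)
try:
    ro.value = 4.0         # rejected: container is read-only
except Exception:
    pass
```
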
......@@ -176,8 +176,8 @@ def map_storage(env, order, input_storage, output_storage):
This function iterates over the nodes in `order` and ensures that for every
input and output `Result`, there is a unique storage container. This is
returned as a dictionary Result->storage called the `storage_map`.
input and output `Variable`, there is a unique storage container. This is
returned as a dictionary Variable->storage called the `storage_map`.
This function also returns `input_storage` which is a list of storages corresponding to env.inputs.
This function also returns `output_storage` which is a list of storages corresponding to env.outputs.
......@@ -313,8 +313,8 @@ def gc_helper(node_list):
:param node_list: list of Apply instances in program execution order
:rtype: a 2-tuple
:returns: FIRST, the set of Result instances which are computed by node_list, and SECOND a
dictionary that maps each Result instance to a the last node to use Result as an input.
:returns: FIRST, the set of Variable instances which are computed by node_list, and SECOND a
dictionary that maps each Variable instance to the last node that uses that Variable as an input.
This is used to allow garbage collection within graphs.
"""
......@@ -434,7 +434,7 @@ class WrapLinker(Linker):
@note:
This linker ensures that each linker has its own storage for
inputs and outputs and intermediate results. There is no interference
inputs and outputs and intermediate variables. There is no interference
between linkers.
"""
......@@ -467,9 +467,9 @@ class WrapLinker(Linker):
@type env: gof.Env
@param env: the env which we will link
@type no_recycling: a list of Results that belong to env.
@type no_recycling: a list of Variables that belong to env.
@param no_recycling: If a Result is in no_recycling, L{WrapLinker} will clear
@param no_recycling: If a Variable is in no_recycling, L{WrapLinker} will clear
the output storage associated to it (for each linker in linkers) during
the computation to avoid reusing it.
......
......@@ -38,7 +38,7 @@ class CLinkerOp(object):
`PyObject` variable pointing to that input.
`outputs` : list of strings
Each string is the name of a `PyObject` pointer where the Op should store its
results. The `CLinker` guarantees that on entry to this code block, each pointer
variables. The `CLinker` guarantees that on entry to this code block, each pointer
is either NULL or is unchanged from the end of the previous execution.
`sub` : dict of strings
extra symbols defined in `CLinker` sub symbols (such as 'fail').
......@@ -68,7 +68,7 @@ class CLinkerOp(object):
`PyObject` variable pointing to that input.
`outputs` : list of strings
Each string is the name of a `PyObject` pointer where the Op should store its
results. The `CLinker` guarantees that on entry to this code block, each pointer
variables. The `CLinker` guarantees that on entry to this code block, each pointer
is either NULL or is unchanged from the end of the previous execution.
`sub` : dict of strings
extra symbols defined in `CLinker` sub symbols (such as 'fail').
......@@ -162,7 +162,7 @@ class PureOp(object):
- [optionally] building gradient-calculating graphs (via `grad`).
To see how `Op`, `Type`, `Result`, and `Apply` fit together see the page on :doc:`graph`.
To see how `Op`, `Type`, `Variable`, and `Apply` fit together see the page on :doc:`graph`.
For more specifications on how these methods should behave: see the `Op Contract` in the
sphinx docs (advanced tutorial on Op-making).
......@@ -229,7 +229,7 @@ class PureOp(object):
def perform(self, node, inputs, output_storage):
"""
Required: Calculate the function on the inputs and put the results in the
Required: Calculate the function on the inputs and put the variables in the
output storage. Return None.
:Parameters:
......
......@@ -179,7 +179,7 @@ class MergeOptimizer(Optimizer):
"""WRITEME
Merges parts of the graph that are identical, i.e. parts that
take the same inputs and carry out the same computations so we
can avoid doing them more than once. Also merges results that
can avoid doing them more than once. Also merges variables that
are constant.
"""
......@@ -188,8 +188,8 @@ class MergeOptimizer(Optimizer):
def apply_constant_merge(self, env):
seen_constants = set()
const_sig = _metadict() # result -> result.signature() (for constants)
const_sig_inv = _metadict() # signature -> result (for constants)
const_sig = _metadict() # variable -> variable.signature() (for constants)
const_sig_inv = _metadict() # signature -> variable (for constants)
for node in _list_of_nodes(env):
for i, c in enumerate([r for r in node.inputs if isinstance(r, graph.Constant)]):
if id(c) in seen_constants:
......@@ -211,13 +211,13 @@ class MergeOptimizer(Optimizer):
def exptime_apply_node_merge(self, env):
# we clear the dicts because the Constants signatures are not necessarily hashable
# and it's more efficient to give them an integer like the other Results
# and it's more efficient to give them an integer like the other Variables
symbol_idx = {} #result -> int
symbol_idx_inv = {} #int -> result (inverse of symbol_idx)
symbol_idx = {} #variable -> int
symbol_idx_inv = {} #int -> variable (inverse of symbol_idx)
#add all graph sources to the symbol_idx dictionaries (arbitrary order)
for i, r in enumerate(r for r in env.results if r.owner is None):
for i, r in enumerate(r for r in env.variables if r.owner is None):
symbol_idx[r] = i
symbol_idx_inv[i] = r
......@@ -246,7 +246,7 @@ class MergeOptimizer(Optimizer):
def apply_node_merge(self, env):
# we clear the dicts because the Constants signatures are not necessarily hashable
# and it's more efficient to give them an integer like the other Results
# and it's more efficient to give them an integer like the other Variables
nodes_seen = {}
......@@ -336,7 +336,7 @@ class LocalOptimizer(object):
- False to indicate that no optimization can be applied to this `node`; or
- <list of results> to use in place of `node`'s outputs in the greater graph.
- <list of variables> to use in place of `node`'s outputs in the greater graph.
:type node: an Apply instance
......@@ -487,13 +487,13 @@ class PatternSub(LocalOptimizer):
place. The input pattern cannot just be a string but the output
pattern can.
If you put a constant result in the input pattern, there will be a
match iff a constant result with the same value and the same type
If you put a constant variable in the input pattern, there will be a
match iff a constant variable with the same value and the same type
is found in its place.
You can add a constraint to the match by using the dict(...) form
described above with a 'constraint' key. The constraint must be a
function that takes the env and the current Result that we are
function that takes the env and the current Variable that we are
trying to match and returns True or False according to an
arbitrary criterion.
......@@ -718,7 +718,7 @@ class NavigatorOptimizer(Optimizer):
def process_node(self, env, node, lopt = None):
"""
This function will use `lopt` to `transform` the `node`. The `transform` method will
return either False or a list of Results that are intended to replace `node.outputs`.
return either False or a list of Variables that are intended to replace `node.outputs`.
If the env accepts the replacement, then the optimization is successful, and this
function returns True.
......
......@@ -38,16 +38,16 @@ class DB(object):
def __query__(self, q):
if not isinstance(q, Query):
raise TypeError('Expected a Query.', q)
results = set()
variables = set()
for tag in q.include:
results.update(self.__db__[tag])
variables.update(self.__db__[tag])
for tag in q.require:
results.intersection_update(self.__db__[tag])
variables.intersection_update(self.__db__[tag])
for tag in q.exclude:
results.difference_update(self.__db__[tag])
variables.difference_update(self.__db__[tag])
remove = set()
add = set()
for obj in results:
for obj in variables:
if isinstance(obj, DB):
sq = q.subquery.get(obj.name, q)
if sq:
......@@ -55,9 +55,9 @@ class DB(object):
replacement.name = obj.name
remove.add(obj)
add.add(replacement)
results.difference_update(remove)
results.update(add)
return results
variables.difference_update(remove)
variables.update(add)
return variables
def query(self, *tags, **kwtags):
if len(tags) >= 1 and isinstance(tags[0], Query):
......@@ -75,13 +75,13 @@ class DB(object):
subquery = kwtags))
def __getitem__(self, name):
results = self.__db__[name]
if not results:
variables = self.__db__[name]
if not variables:
raise KeyError("Nothing registered for '%s'" % name)
elif len(results) > 1:
elif len(variables) > 1:
raise ValueError('More than one match for %s (please use query)' % name)
for result in results:
return result
for variable in variables:
return variable
class Query(object):
......
......@@ -4,13 +4,13 @@ import unittest
from theano.gof.link import PerformLinker
from theano.gof.cc import *
from theano.gof.type import Type
from theano.gof.graph import Result, Apply, Constant
from theano.gof.graph import Variable, Apply, Constant
from theano.gof.op import Op
from theano.gof import env
from theano.gof import toolbox
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
class TDouble(Type):
......@@ -60,7 +60,7 @@ class TDouble(Type):
tdouble = TDouble()
def double(name):
return Result(tdouble, None, None, name = name)
return Variable(tdouble, None, None, name = name)
class MyOp(Op):
......@@ -71,7 +71,7 @@ class MyOp(Op):
def make_node(self, *inputs):
assert len(inputs) == self.nin
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if input.type is not tdouble:
raise Exception("Error 1")
......@@ -239,7 +239,7 @@ def test_duallinker_mismatch():
try:
# this runs OpWiseCLinker and PerformLinker in parallel and feeds
# results of matching operations to _my_checker to verify that they
# variables of matching operations to _my_checker to verify that they
# are the same.
res = fn(1.0, 2.0, 3.0)
raise Exception("An exception should have been raised here!")
......
......@@ -3,7 +3,7 @@ import unittest
from theano.gof.type import Type
from theano.gof import graph
from theano.gof.graph import Result, Apply
from theano.gof.graph import Variable, Apply
from theano.gof.op import Op
from theano.gof.opt import *
......@@ -17,8 +17,8 @@ PatternOptimizer = lambda p1, p2, ign=True: OpKeyOptimizer(PatternSub(p1, p2), i
OpSubOptimizer = lambda op1, op2, fail=NavigatorOptimizer.warn_ignore, ign=True: TopoOptimizer(OpSub(op1, op2), ignore_newtrees=ign, failure_callback = fail)
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
......@@ -31,8 +31,8 @@ class MyType(Type):
return isinstance(other, MyType)
def MyResult(name):
return Result(MyType(), None, None, name = name)
def MyVariable(name):
return Variable(MyType(), None, None, name = name)
def MyValue(data):
return graph.Value(MyType(), data = data)
......@@ -50,11 +50,11 @@ class MyOp(Op):
def make_node(self, *inputs):
assert len(inputs) == self.nin
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
raise Exception("Error 1")
outputs = [MyResult(self.name + "_R") for i in xrange(self.nout)]
outputs = [MyVariable(self.name + "_R") for i in xrange(self.nout)]
return Apply(self, inputs, outputs)
def __str__(self):
......@@ -70,9 +70,9 @@ dot = MyOp(2, 'Dot')
def inputs():
x = MyResult('x')
y = MyResult('y')
z = MyResult('z')
x = MyVariable('x')
y = MyVariable('y')
z = MyVariable('z')
return x, y, z
_Env = Env
......
......@@ -4,11 +4,11 @@ from theano.gof.graph import *
from theano.gof.op import Op
from theano.gof.type import Type
from theano.gof.graph import Result
from theano.gof.graph import Variable
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
......@@ -26,19 +26,19 @@ class MyType(Type):
def __repr__(self):
return 'R%s' % str(self.thingy)
def MyResult(thingy):
return Result(MyType(thingy), None, None)
def MyVariable(thingy):
return Variable(MyType(thingy), None, None)
class MyOp(Op):
def make_node(self, *inputs):
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
print input, input.type, type(input), type(input.type)
raise Exception("Error 1")
outputs = [MyResult(sum([input.type.thingy for input in inputs]))]
outputs = [MyVariable(sum([input.type.thingy for input in inputs]))]
return Apply(self, inputs, outputs)
def __str__(self):
......@@ -54,12 +54,12 @@ MyOp = MyOp()
class TestInputs:
def test_inputs(self):
r1, r2 = MyResult(1), MyResult(2)
r1, r2 = MyVariable(1), MyVariable(2)
node = MyOp.make_node(r1, r2)
assert inputs(node.outputs) == [r1, r2]
def test_inputs_deep(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], r5)
i = inputs(node2.outputs)
......@@ -86,26 +86,26 @@ class X:
class TestStr(X):
def test_as_string(self):
r1, r2 = MyResult(1), MyResult(2)
r1, r2 = MyVariable(1), MyVariable(2)
node = MyOp.make_node(r1, r2)
s = self.str([r1, r2], node.outputs)
assert s == ["MyOp(R1, R2)"]
def test_as_string_deep(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], r5)
s = self.str([r1, r2, r5], node2.outputs)
assert s == ["MyOp(MyOp(R1, R2), R5)"]
def test_multiple_references(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], node.outputs[0])
assert self.str([r1, r2, r5], node2.outputs) == ["MyOp(*1 -> MyOp(R1, R2), *1)"]
def test_cutoff(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], node.outputs[0])
assert self.str(node.outputs, node2.outputs) == ["MyOp(R3, R3)"]
......@@ -119,13 +119,13 @@ class TestStr(X):
class TestClone(X):
def test_accurate(self):
r1, r2 = MyResult(1), MyResult(2)
r1, r2 = MyVariable(1), MyVariable(2)
node = MyOp.make_node(r1, r2)
_, new = clone([r1, r2], node.outputs, False)
assert self.str([r1, r2], new) == ["MyOp(R1, R2)"]
def test_copy(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], r5)
_, new = clone([r1, r2, r5], node2.outputs, False)
......@@ -136,11 +136,11 @@ class TestClone(X):
def test_not_destructive(self):
# Checks that manipulating a cloned graph leaves the original unchanged.
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(MyOp.make_node(r1, r2).outputs[0], r5)
_, new = clone([r1, r2, r5], node.outputs, False)
new_node = new[0].owner
new_node.inputs = MyResult(7), MyResult(8)
new_node.inputs = MyVariable(7), MyVariable(8)
assert self.str(inputs(new_node.outputs), new_node.outputs) == ["MyOp(R7, R8)"]
assert self.str(inputs(node.outputs), node.outputs) == ["MyOp(MyOp(R1, R2), R5)"]
......@@ -150,7 +150,7 @@ class TestClone(X):
############
def prenode(obj):
if isinstance(obj, Result):
if isinstance(obj, Variable):
if obj.owner:
return [obj.owner]
if isinstance(obj, Apply):
......@@ -160,7 +160,7 @@ class TestToposort:
def test_0(self):
"""Test a simple graph"""
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
o = MyOp.make_node(r1, r2)
o2 = MyOp.make_node(o.outputs[0], r5)
......@@ -172,7 +172,7 @@ class TestToposort:
def test_1(self):
"""Test a graph with double dependencies"""
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
o = MyOp.make_node(r1, r1)
o2 = MyOp.make_node(o.outputs[0], r5)
all = general_toposort(o2.outputs, prenode)
......@@ -180,7 +180,7 @@ class TestToposort:
def test_2(self):
"""Test a graph where the inputs have owners"""
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
o = MyOp.make_node(r1, r1)
r2b = o.outputs[0]
o2 = MyOp.make_node(r2b, r2b)
......@@ -193,7 +193,7 @@ class TestToposort:
def test_3(self):
"""Test a graph which is not connected"""
r1, r2, r3, r4 = MyResult(1), MyResult(2), MyResult(3), MyResult(4)
r1, r2, r3, r4 = MyVariable(1), MyVariable(2), MyVariable(3), MyVariable(4)
o0 = MyOp.make_node(r1, r2)
o1 = MyOp.make_node(r3, r4)
all = io_toposort([r1, r2, r3, r4], o0.outputs + o1.outputs)
......@@ -201,7 +201,7 @@ class TestToposort:
def test_4(self):
"""Test inputs and outputs mixed together in a chain graph"""
r1, r2, r3, r4 = MyResult(1), MyResult(2), MyResult(3), MyResult(4)
r1, r2, r3, r4 = MyVariable(1), MyVariable(2), MyVariable(3), MyVariable(4)
o0 = MyOp.make_node(r1, r2)
o1 = MyOp.make_node(o0.outputs[0], r1)
all = io_toposort([r1, o0.outputs[0]], [o0.outputs[0], o1.outputs[0]])
......@@ -209,7 +209,7 @@ class TestToposort:
def test_5(self):
"""Test when outputs have clients"""
r1, r2, r3, r4 = MyResult(1), MyResult(2), MyResult(3), MyResult(4)
r1, r2, r3, r4 = MyVariable(1), MyVariable(2), MyVariable(3), MyVariable(4)
o0 = MyOp.make_node(r1, r2)
o1 = MyOp.make_node(o0.outputs[0], r4)
all = io_toposort([], o0.outputs)
......
from theano.gof import graph
from theano.gof.graph import Result, Apply, Constant
from theano.gof.graph import Variable, Apply, Constant
from theano.gof.type import Type
from theano.gof.op import Op
from theano.gof import env
......@@ -8,11 +8,11 @@ from theano.gof import toolbox
from theano.gof.link import *
#from _test_result import Double
#from _test_variable import Double
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
class TDouble(Type):
......@@ -22,7 +22,7 @@ class TDouble(Type):
tdouble = TDouble()
def double(name):
return Result(tdouble, None, None, name = name)
return Variable(tdouble, None, None, name = name)
class MyOp(Op):
......@@ -35,7 +35,7 @@ class MyOp(Op):
def make_node(self, *inputs):
assert len(inputs) == self.nin
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if input.type is not tdouble:
raise Exception("Error 1")
......
......@@ -2,10 +2,10 @@
from copy import copy
from theano.gof.op import *
from theano.gof.type import Type, Generic
from theano.gof.graph import Apply, Result
from theano.gof.graph import Apply, Variable
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
......@@ -27,7 +27,7 @@ class MyType(Type):
class MyOp(Op):
def make_node(self, *inputs):
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
raise Exception("Error 1")
......
from theano.gof.type import Type
from theano.gof.graph import Result, Apply, Constant
from theano.gof.graph import Variable, Apply, Constant
from theano.gof.op import Op
from theano.gof.opt import *
from theano.gof.env import Env
from theano.gof.toolbox import *
def as_result(x):
if not isinstance(x, Result):
raise TypeError("not a Result", x)
def as_variable(x):
if not isinstance(x, Variable):
raise TypeError("not a Variable", x)
return x
......@@ -25,8 +25,8 @@ class MyType(Type):
return hash(MyType)
def MyResult(name):
return Result(MyType(), None, None, name = name)
def MyVariable(name):
return Variable(MyType(), None, None, name = name)
class MyOp(Op):
......@@ -37,7 +37,7 @@ class MyOp(Op):
self.x = x
def make_node(self, *inputs):
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
raise Exception("Error 1")
......@@ -74,9 +74,9 @@ op_z = MyOp('OpZ', x = 1)
def inputs():
x = MyResult('x')
y = MyResult('y')
z = MyResult('z')
x = MyVariable('x')
y = MyVariable('y')
z = MyVariable('z')
return x, y, z
......@@ -188,7 +188,7 @@ class TestPatternOptimizer:
def test_constant_unification(self):
x = Constant(MyType(), 2, name = 'x')
y = MyResult('y')
y = MyVariable('y')
z = Constant(MyType(), 2, name = 'z')
e = op1(op1(x, y), y)
g = Env([y], [e])
......@@ -288,7 +288,7 @@ class TestMergeOptimizer:
assert str(g) == "[Op1(*1 -> Op2(x, y), *1, Op2(x, z))]"
def test_constant_merging(self):
x = MyResult('x')
x = MyVariable('x')
y = Constant(MyType(), 2, name = 'y')
z = Constant(MyType(), 2, name = 'z')
e = op1(op2(x, y), op2(x, y), op2(x, z))
......@@ -334,7 +334,7 @@ class TestMergeOptimizer:
or strg == "[Op1(*2 -> Op1(x, y), Op4(*1 -> Op2(Op3(x), y, z), *2), Op1(*1))]"
def test_identical_constant_args(self):
x = MyResult('x')
x = MyVariable('x')
y = Constant(MyType(), 2, name = 'y')
z = Constant(MyType(), 2, name = 'z')
e1 = op1(y, z)
......@@ -347,7 +347,7 @@ class TestMergeOptimizer:
class TestEquilibrium(object):
def test_1(self):
x, y, z = map(MyResult, 'xyz')
x, y, z = map(MyVariable, 'xyz')
e = op3(op4(x, y))
g = Env([x, y, z], [e])
print g
......@@ -362,7 +362,7 @@ class TestEquilibrium(object):
assert str(g) == '[Op2(x, y)]'
def test_2(self):
x, y, z = map(MyResult, 'xyz')
x, y, z = map(MyVariable, 'xyz')
e = op1(op1(op3(x, y)))
g = Env([x, y, z], [e])
print g
......@@ -378,7 +378,7 @@ class TestEquilibrium(object):
assert str(g) == '[Op2(x, y)]'
def test_low_use_ratio(self):
x, y, z = map(MyResult, 'xyz')
x, y, z = map(MyVariable, 'xyz')
e = op3(op4(x, y))
g = Env([x, y, z], [e])
print 'before', g
......
from theano.gof.graph import Result, Apply
from theano.gof.graph import Variable, Apply
from theano.gof.type import Type
from theano.gof.op import Op
......@@ -7,8 +7,8 @@ from theano.gof.env import Env, InconsistencyError
from theano.gof.toolbox import *
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
......@@ -27,8 +27,8 @@ class MyType(Type):
return isinstance(other, MyType)
def MyResult(name):
return Result(MyType(name), None, None)
def MyVariable(name):
return Variable(MyType(name), None, None)
class MyOp(Op):
......@@ -39,7 +39,7 @@ class MyOp(Op):
def make_node(self, *inputs):
assert len(inputs) == self.nin
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
raise Exception("Error 1")
......@@ -55,9 +55,9 @@ dot = MyOp(2, 'Dot')
def inputs():
x = MyResult('x')
y = MyResult('y')
z = MyResult('z')
x = MyVariable('x')
y = MyVariable('y')
z = MyVariable('z')
return x, y, z
......
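The test helpers renamed above (`MyResult` → `MyVariable`, plus `MyOp`) follow Theano's Variable/Apply/Op graph pattern. A minimal, self-contained sketch of that pattern — all classes below are simplified stand-ins, not the real `gof` classes, which also carry types and tags:

```python
# Simplified stand-ins for the gof.Variable / gof.Apply / gof.Op classes.

class Variable:
    def __init__(self, name):
        self.name = name
        self.owner = None   # the Apply node that produced this variable, if any

class Apply:
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs
        for out in outputs:
            out.owner = self   # link each output back to its producing node

class Op:
    def __init__(self, nin, name):
        self.nin, self.name = nin, name

    def __call__(self, *inputs):
        assert len(inputs) == self.nin
        out = Variable('%s(%s)' % (self.name, ', '.join(v.name for v in inputs)))
        Apply(self, list(inputs), [out])
        return out

op1 = Op(2, 'Op1')
x, y = Variable('x'), Variable('y')
e = op1(op1(x, y), y)       # mirrors the expressions built in the tests above
print(e.name)               # the nested expression structure
print(e.owner.op.name)      # e knows which Op produced it; x.owner stays None
```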
......@@ -5,7 +5,7 @@ __docformat__ = "restructuredtext en"
import copy
import utils
from utils import MethodNotDefined, object2
from graph import Result
from graph import Variable
import traceback
......@@ -71,7 +71,7 @@ class CLinkerType(object):
The code returned from this function must be templated using
"%(name)s", representing the name that the caller wants to
call this `Result`. The Python object self.data is in a
call this `Variable`. The Python object self.data is in a
variable called "py_%(name)s" and this code must set the
variables declared by c_declare to something representative
of py_%(name)s. If the data is improper, set an appropriate
......@@ -119,9 +119,9 @@ class CLinkerType(object):
"""Required: Return c code to pack C types back into a PyObject.
The code returned from this function must be templated using "%(name)s",
representing the name that the caller wants to call this Result. The
representing the name that the caller wants to call this Variable. The
returned code may set "py_%(name)s" to a PyObject* and that PyObject*
will be accessible from Python via result.data. Do not forget to adjust
will be accessible from Python via variable.data. Do not forget to adjust
reference counts if "py_%(name)s" is changed from its original value.
:Parameters:
......@@ -180,7 +180,7 @@ class CLinkerType(object):
raise MethodNotDefined("c_libraries", type(self), self.__class__.__name__)
def c_support_code(self):
"""Optional: Return utility code for use by a `Result` or `Op` to be
"""Optional: Return utility code for use by a `Variable` or `Op` to be
included at global scope prior to the rest of the code for this class.
QUESTION: How many times will this support code be emitted for a graph
......@@ -193,13 +193,13 @@ class CLinkerType(object):
raise MethodNotDefined("c_support_code", type(self), self.__class__.__name__)
class PureType(object):
"""Interface specification for result type instances.
"""Interface specification for variable type instances.
A :term:`Type` instance is mainly reponsible for two things:
- creating `Result` instances (conventionally, `__call__` does this), and
- creating `Variable` instances (conventionally, `__call__` does this), and
- filtering a value assigned to a `Result` so that the value conforms to restrictions
- filtering a value assigned to a `Variable` so that the value conforms to restrictions
imposed by the type (also known as casting, this is done by `filter`),
"""
......@@ -220,33 +220,33 @@ class PureType(object):
raise MethodNotDefined("filter", type(self), self.__class__.__name__)
def is_valid_value(self, a):
"""Required: Return True for any python object `a` that would be a legal value for a Result of this Type"""
"""Required: Return True for any python object `a` that would be a legal value for a Variable of this Type"""
try:
self.filter(a, True)
return True
except TypeError:
return False
def make_result(self, name = None):
"""Return a new `Result` instance of Type `self`.
def make_variable(self, name = None):
"""Return a new `Variable` instance of Type `self`.
:Parameters:
- `name`: None or str
A pretty string for printing and debugging.
"""
r = Result(self, name = name)
r = Variable(self, name = name)
return r
def __call__(self, name = None):
"""Return a new `Result` instance of Type `self`.
"""Return a new `Variable` instance of Type `self`.
:Parameters:
- `name`: None or str
A pretty string for printing and debugging.
"""
r = self.make_result(name)
r = self.make_variable(name)
r.tag.trace = traceback.extract_stack()[:-1]
return r
......@@ -262,9 +262,9 @@ class PureType(object):
"""
Return True if a and b can be considered approximately equal.
:param a: a potential value for a Result of this Type.
:param a: a potential value for a Variable of this Type.
:param b: a potential value for a Result of this Type.
:param b: a potential value for a Variable of this Type.
:rtype: Bool
......@@ -289,7 +289,7 @@ class Type(object2, PureType, CLinkerType):
- `Generic`: for any python type
- `NDArrayType`: for numpy.ndarray
- `TensorType`: for numpy.ndarray
- `SparseType`: for scipy.sparse
......@@ -301,15 +301,15 @@ class Type(object2, PureType, CLinkerType):
# Declare a symbolic floating-point vector using __call__
b = tensor.fvector()
# Create a second Result with the same Type instance
# Create a second Variable with the same Type instance
c = tensor.fvector()
Whenever you create a symbolic variable in theano (technically, `Result`) it will contain a
Whenever you create a symbolic variable in theano (technically, `Variable`) it will contain a
reference to a Type instance. That reference is typically constant during the lifetime of
the Result. Many variables can refer to a single Type instance, as do b and c above. The
the Variable. Many variables can refer to a single Type instance, as do b and c above. The
Type instance defines the kind of value which might end up in that variable when executing
a `Function`. In this sense, theano is like a strongly-typed language because the types
are included in the graph before the values. In our example above, b is a Result which is
are included in the graph before the values. In our example above, b is a Variable which is
guaranteed to corresond to a numpy.ndarray of rank 1 when we try to do some computations
with it.
......
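The `make_result` → `make_variable` rename above is the factory method that `Type.__call__` delegates to. A sketch of that delegation, with hypothetical stand-in classes (the real `__call__` also records a traceback in `r.tag.trace`):

```python
# Sketch of the PureType.make_variable / __call__ relationship.

class Variable:
    def __init__(self, type, name=None):
        self.type, self.name = type, name

class PureType:
    def make_variable(self, name=None):
        # Subclasses override this to return a richer Variable subclass
        # (e.g. ScalarVariable, SparseVariable in the diffs below).
        return Variable(self, name=name)

    def __call__(self, name=None):
        # __call__ is the conventional public way to create a Variable.
        return self.make_variable(name)

class DoubleType(PureType):   # hypothetical concrete Type
    pass

v = DoubleType()('v')
print(type(v).__name__, v.name)
```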
# import op
# import result
# import variable
import re
......@@ -316,16 +316,16 @@ def comm_guard(type1, type2):
raise
try:
result = f(arg1, arg2, *rest)
variable = f(arg1, arg2, *rest)
except:
raise
if result is FALL_THROUGH:
if variable is FALL_THROUGH:
try:
return old_f(arg1, arg2, *rest)
except:
raise
else:
return result
return variable
new_f.__name__ = f.__name__
def typename(type):
......@@ -345,11 +345,11 @@ def type_guard(type1):
old_f = f.func_globals[f.__name__]
def new_f(arg1, *rest):
if (type1 is ANY_TYPE or isinstance(arg1, type1)):
result = f(arg1, *rest)
if result is FALL_THROUGH:
variable = f(arg1, *rest)
if variable is FALL_THROUGH:
return old_f(arg1, *rest)
else:
return result
return variable
else:
return old_f(arg1, *rest)
......
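The `comm_guard`/`type_guard` hunks above rename the local from `result` to `variable`, but the control flow is unchanged: a guarded function may decline its arguments by returning `FALL_THROUGH`, in which case the previously registered handler is called. A self-contained sketch of that chaining (simplified; `chain` and `describe` are illustrative names, not the real API):

```python
# FALL_THROUGH chaining: a new handler falls back to the old binding.

FALL_THROUGH = object()

def chain(old_f):
    def decorator(f):
        def new_f(*args):
            result = f(*args)
            if result is FALL_THROUGH:
                return old_f(*args)   # defer to the previous handler
            return result
        return new_f
    return decorator

def describe(x):
    return 'something'

@chain(describe)          # captures the old binding before rebinding the name
def describe(x):
    if isinstance(x, int):
        return 'an int'
    return FALL_THROUGH   # decline: let the old handler answer

print(describe(3), describe('a'))
```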
import gof #, gof.result
import gof #, gof.variable
import numpy #for numeric_grad
from gof.python25 import all
......@@ -7,22 +7,10 @@ import gof.utils
_msg_retType = 'op.grad(...) returned a non-list'
_msg_badlen = 'op.grad(...) returned wrong number of gradients'
def _unpack_result(lst):
if len(lst) > 1:
return lst
else:
return lst[0]
def _pack_result(arg):
if isinstance(arg, gof.result.Result):
return [arg]
else:
return arg
def grad_sources_inputs(sources, graph_inputs):
"""
A gradient source is a pair (r, g_r), in which r is a result, and g_r is a
result that is a gradient wrt r.
A gradient source is a pair (r, g_r), in which r is a variable, and g_r is a
variable that is a gradient wrt r.
This function traverses the graph backward from the 'r' sources,
calling L{Op.grad}(...) when it is provided by an L{Op}, and at least one of the
......@@ -32,21 +20,21 @@ def grad_sources_inputs(sources, graph_inputs):
op.grad( op.inputs[0], grad(op.outputs[0]))
This function expects the L{Op.grad}(...) function to return the gradient
expression [results] associated with the inputs of the L{Op}. The L{Op} should
return a list of results corresponding to the gradients in the same order
expression [variables] associated with the inputs of the L{Op}. The L{Op} should
return a list of variables corresponding to the gradients in the same order
as the inputs. If it has a single output it should return a list or tuple
of length 1.
For each input wrt to which an L{Op} is not differentiable, it should return
None instead of a result instance.
None instead of a variable instance.
@type sources: list
@param sources: gradient sources (explained below)
@type graph_inputs: list
@param graph_inputs: results considered to be constant
@param graph_inputs: variables considered to be constant
@rtype: dictionary
@return: dictionary mapping each result necessary for a source to its gradient.
@return: dictionary mapping each variable necessary for a source to its gradient.
"""
gmap = {}
for (r, g_r) in sources:
......@@ -94,7 +82,7 @@ def grad_sources_inputs(sources, graph_inputs):
op_grad = node.op.grad(input_arg, output_arg)
if not isinstance(op_grad, (list,tuple)):
raise ValueError(_msg_retType, node.op)
g_inputs = op_grad #_pack_result(op_grad)
g_inputs = op_grad
assert isinstance(g_inputs, (list, tuple))
if len(g_inputs) != len(node.inputs):
raise ValueError(_msg_badlen,
......
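The `grad_sources_inputs` docstring above describes a backward traversal that builds a dictionary mapping each variable to its gradient, with `None` marking non-differentiable inputs and a length check on what each op's `grad` returns. A toy sketch of that accumulation, using plain floats as gradients and tuples as stand-ins for Apply nodes:

```python
# Toy gmap accumulation: walk nodes in reverse, ask each for input
# gradients, and sum contributions per variable.

def accumulate_grads(sources, nodes):
    """sources: list of (var, grad) pairs.
    nodes: list of (grad_fn, inputs, outputs), processed in reverse."""
    gmap = {}
    for r, g_r in sources:
        gmap[r] = gmap.get(r, 0.0) + g_r
    for grad_fn, inputs, outputs in reversed(nodes):
        out_grads = [gmap.get(o, 0.0) for o in outputs]
        in_grads = grad_fn(inputs, out_grads)
        assert len(in_grads) == len(inputs)   # same check as _msg_badlen
        for i, g in zip(inputs, in_grads):
            if g is not None:                 # None marks non-differentiable
                gmap[i] = gmap.get(i, 0.0) + g
    return gmap

# One node computing y = 2*x, so d(y)/d(x) = 2 when d(y) = 1.
node = (lambda ins, gouts: [2.0 * gouts[0]], ['x'], ['y'])
print(accumulate_grads([('y', 1.0)], [node]))
```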
......@@ -23,7 +23,7 @@ class Print(Op):
self.attrs=attrs
def make_node(self,xin):
xout = xin.type.make_result()
xout = xin.type.make_variable()
return Apply(op = self, inputs = [xin], outputs=[xout])
def perform(self,node,inputs,output_storage):
......@@ -68,7 +68,7 @@ class OperatorPrinter:
pprinter = pstate.pprinter
node = output.owner
if node is None:
raise TypeError("operator %s cannot represent a result with no associated operation" % self.operator)
raise TypeError("operator %s cannot represent a variable that is not the result of an operation" % self.operator)
outer_precedence = getattr(pstate, 'precedence', -999999)
outer_assoc = getattr(pstate, 'assoc', 'none')
if outer_precedence > self.precedence:
......@@ -105,7 +105,7 @@ class PatternPrinter:
pprinter = pstate.pprinter
node = output.owner
if node is None:
raise TypeError("Patterns %s cannot represent a result with no associated operation" % self.patterns)
raise TypeError("Patterns %s cannot represent a variable that is not the result of an operation" % self.patterns)
idx = node.outputs.index(output)
pattern, precedences = self.patterns[idx]
precedences += (1000,) * len(node.inputs)
......@@ -123,7 +123,7 @@ class FunctionPrinter:
pprinter = pstate.pprinter
node = output.owner
if node is None:
raise TypeError("function %s cannot represent a result with no associated operation" % self.names)
raise TypeError("function %s cannot represent a variable that is not the result of an operation" % self.names)
idx = node.outputs.index(output)
name = self.names[idx]
return "%s(%s)" % (name, ", ".join([pprinter.process(input, pstate.clone(precedence = -1000))
......@@ -138,7 +138,7 @@ class MemberPrinter:
pprinter = pstate.pprinter
node = output.owner
if node is None:
raise TypeError("function %s cannot represent a result with no associated operation" % self.function)
raise TypeError("function %s cannot represent a variable that is not the result of an operation" % self.function)
names = self.names
idx = node.outputs.index(output)
name = self.names[idx]
......@@ -152,7 +152,7 @@ class IgnorePrinter:
pprinter = pstate.pprinter
node = output.owner
if node is None:
raise TypeError("function %s cannot represent a result with no associated operation" % self.function)
raise TypeError("function %s cannot represent a variable that is not the result of an operation" % self.function)
input = node.inputs[0]
return "%s" % pprinter.process(input, pstate)
......
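Every printer class above raises the same (reworded) `TypeError` when asked to print a variable with no `owner`, i.e. one that is not the result of an operation. A minimal sketch of that guard, with hypothetical stand-in classes:

```python
# A printer can only render a variable produced by an Apply node.

class Var:
    def __init__(self, name, owner=None):
        self.name, self.owner = name, owner

class Node:
    def __init__(self, op_name, inputs, output):
        self.op_name, self.inputs = op_name, inputs
        output.owner = self   # mark the output as produced by this node

class FunctionPrinter:
    def __init__(self, name):
        self.name = name

    def process(self, output):
        if output.owner is None:
            raise TypeError(
                "function %s cannot represent a variable that is not the "
                "result of an operation" % self.name)
        return "%s(%s)" % (self.name,
                           ", ".join(i.name for i in output.owner.inputs))

x, y, z = Var('x'), Var('y'), Var('z')
Node('dot', [x, y], z)
print(FunctionPrinter('dot').process(z))   # z has an owner, so this works
```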
......@@ -14,7 +14,7 @@ class DebugLinker(gof.WrapLinker):
debug_post = [],
copy_originals = False,
check_types = True,
compare_results = True,
compare_variables = True,
compare_fn = lambda x, y: x == y):
gof.WrapLinker.__init__(self,
linkers = linkers,
......@@ -27,8 +27,8 @@ class DebugLinker(gof.WrapLinker):
self.copy_originals = copy_originals
if check_types not in [None, True]:
self.check_types = check_types
if compare_results not in [None, True]:
self.compare_results = compare_results
if compare_variables not in [None, True]:
self.compare_variables = compare_variables
if not isinstance(debug_pre, (list, tuple)):
debug_pre = [debug_pre]
......@@ -39,8 +39,8 @@ class DebugLinker(gof.WrapLinker):
self.debug_post = debug_post
if check_types is not None:
self.debug_post.append(self.check_types)
if compare_results is not None:
self.debug_post.append(self.compare_results)
if compare_variables is not None:
self.debug_post.append(self.compare_variables)
def accept(self, env, no_recycling = []):
return gof.WrapLinker.accept(self,
......@@ -75,13 +75,13 @@ class DebugLinker(gof.WrapLinker):
exc.linker = linker
raise DebugException, exc, exc_trace
def compare_results(self, i, node, *thunks):
def compare_variables(self, i, node, *thunks):
thunk0 = thunks[0]
linker0 = self.linkers[0]
for thunk, linker in zip(thunks[1:], self.linkers[1:]):
for o, output0, output in zip(node.outputs, thunk0.outputs, thunk.outputs):
if not self.compare_fn(output0[0], output[0]):
exc = DebugException(("The results from %s and %s for output %s are not the same. This happened at step %i." % (linker0, linker, o, step)) + \
exc = DebugException(("The variables from %s and %s for output %s are not the same. This happened at step %i." % (linker0, linker, o, step)) + \
"For more info, inspect this exception's 'debugger', 'output', 'output_value1', 'output_value2', " \
"'step', 'node', 'thunk1', 'thunk2', 'linker1' and 'linker2' fields.")
exc.debugger = self
......@@ -98,7 +98,7 @@ class DebugLinker(gof.WrapLinker):
def pre(self, f, inputs, order, thunk_groups):
env = f.env
for r in env.results:
for r in env.variables:
if r.owner is None:
r.step = "value" # this will be overwritten if r is an input
else:
......
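The renamed `compare_variables` hook above checks that several linkers produced the same outputs for a node, using a pluggable `compare_fn`. A sketch of that check in isolation (hypothetical simplification; the real thunks hold storage cells and the exception carries debugger fields):

```python
# Compare the outputs of one node as computed by several linkers.

def compare_variables(outputs_per_linker, compare_fn=lambda a, b: a == b):
    ref = outputs_per_linker[0]                       # first linker is the reference
    for i, outputs in enumerate(outputs_per_linker[1:], start=1):
        for j, (o0, o) in enumerate(zip(ref, outputs)):
            if not compare_fn(o0, o):
                raise AssertionError(
                    "The variables from linker 0 and linker %d for output %d "
                    "are not the same." % (i, j))
    return True

print(compare_variables([[1.0, 2.0], [1.0, 2.0]]))   # agreement: no exception
```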
......@@ -19,7 +19,7 @@ class InitGraph(type):
return True
if issubclass(v, SymbolicModule):
return True
return isinstance(v, theano.Result) and not k.startswith('_')
return isinstance(v, theano.Variable) and not k.startswith('_')
r = {}
for key, val in dct.items():
if filter(key, val):
......@@ -31,7 +31,7 @@ class InitGraph(type):
dct = just_symbolic(build_graph_rval)
for key, val in dct.items():
#print ' adding class attribute', key
if isinstance(val, theano.Result) and val.name is None:
if isinstance(val, theano.Variable) and val.name is None:
val.name = key
if callable(val):
setattr(cls, key, staticmethod(val))
......@@ -98,7 +98,7 @@ def compile_fn(f, path_locals, common_inputs):
def compile(smod, initial_values={}):
"""
:type values: dictionary Result -> value
:type values: dictionary Variable -> value
"""
def sym_items(mod):
for k in mod.__dict__:
......@@ -121,7 +121,7 @@ def compile(smod, initial_values={}):
yield s
elif isinstance(val, (str, int, float)):
pass
elif isinstance(val, theano.Result):
elif isinstance(val, theano.Variable):
pass
elif issymbolicmethod(val):
pass
......@@ -135,7 +135,7 @@ def compile(smod, initial_values={}):
#Locate all the starting nodes, and create containers entries for their values
inputs = {}
for path_locals, val in walker(smod):
if isinstance(val, theano.Result) and (val.owner is None) and (val not in inputs):
if isinstance(val, theano.Variable) and (val.owner is None) and (val not in inputs):
inputs[val] = theano.In(val, value=theano.gof.Container(val, ['a']))
assert len(inputs) == len([v for v in inputs.items()])
......@@ -172,7 +172,7 @@ def compile(smod, initial_values={}):
setattr(CMod, key, reflect(val))
elif isinstance(thing, (str, int, float)):
reflected[thing] = thing
elif isinstance(thing, theano.Result):
elif isinstance(thing, theano.Variable):
if thing.owner is None:
def getter(s):
return inputs[thing].value.value
......@@ -275,11 +275,11 @@ if 0:
locals_dict = f()
for key, val in locals_dict.items():
if isinstance(val, theano.Result):
if isinstance(val, theano.Variable):
try:
kres = klass.KlassMember(val)
except:
kres = klass.KlassResult(val)
kres = klass.KlassVariable(val)
setattr(SymMod, key, kres)
elif callable(val) and getattr(val, '__is_symbolic'):
setattr(SymMod, key, val)
......@@ -333,14 +333,14 @@ if 0:
class SymbolicModule(object):
name = "__no_name__" #name of this module
result_table = {} #map strings (names) to Results
variable_table = {} #map strings (names) to Variables
method_table = {} #map strings to compilable functions
include_list = []
constructor_fn = None
def build(self):
"""Run the body of the included modules in order, using the current results and imports
"""Run the body of the included modules in order, using the current variables and imports
"""
def include(self, symbolic_module, name=None):
......@@ -350,7 +350,7 @@ if 0:
def __init__(self, constructor_fn=None):
""" A constructor fn builds
- a graph on top of the result table, and
- a graph on top of the variable table, and
- compilable methods.
"""
......
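The `InitGraph` metaclass above filters class attributes, keeping only symbolic variables whose names do not start with `_`, and names any unnamed variable after its attribute. A toy version of that filter (`Variable` here is a hypothetical stand-in for `theano.Variable`):

```python
# Collect symbolic attributes from a class dict, naming unnamed ones.

class Variable:
    def __init__(self, name=None):
        self.name = name

def collect_symbolic(dct):
    out = {}
    for key, val in dct.items():
        if isinstance(val, Variable) and not key.startswith('_'):
            if val.name is None:
                val.name = key      # name unnamed variables after the attribute
            out[key] = val
    return out

d = collect_symbolic({'w': Variable(), '_hidden': Variable(), 'k': 3})
print(sorted(d))                    # '_hidden' and the plain int are skipped
```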
......@@ -20,7 +20,7 @@ if 0:
"""
class MisMatch(Exception): """Output mismatch"""
#define a comparison function, which works for all the results in a graph
#define a comparison function, which works for all the variables in a graph
#TODO: consider factoring this out (and maybe passing args explicitly
# instead of by closure)
def my_check_equal(x, y):
......
......@@ -5,7 +5,7 @@ from copy import copy
import numpy
from .. import gof
from ..gof import Op, utils, Result, Constant, Type, Apply, Env
from ..gof import Op, utils, Variable, Constant, Type, Apply, Env
from ..gof.python25 import partial
def upcast(dtype, *dtypes):
......@@ -20,9 +20,9 @@ def as_scalar(x, name = None):
raise ValueError("It is ambiguous which output of a multi-output Op has to be fetched.", x)
else:
x = x.outputs[0]
if isinstance(x, Result):
if isinstance(x, Variable):
if not isinstance(x.type, Scalar):
raise TypeError("Result type field must be a Scalar.", x, x.type)
raise TypeError("Variable type field must be a Scalar.", x, x.type)
return x
try:
return constant(x)
......@@ -82,8 +82,8 @@ class Scalar(Type):
def upcast(self, *others):
return upcast(*[x.dtype for x in [self]+list(others)])
def make_result(self, name = None):
return ScalarResult(self, name = name)
def make_variable(self, name = None):
return ScalarVariable(self, name = name)
def __str__(self):
return str(self.dtype)
......@@ -225,7 +225,7 @@ class _scalar_py_operators:
def __rmod__(self,other): return mod(other,self)
def __rpow__(self,other): return pow(other,self)
class ScalarResult(Result, _scalar_py_operators):
class ScalarVariable(Variable, _scalar_py_operators):
pass
class ScalarConstant(Constant, _scalar_py_operators):
......@@ -313,14 +313,14 @@ class ScalarOp(Op):
def output_types(self, types):
if hasattr(self, 'output_types_preference'):
results = self.output_types_preference(*types)
if not isinstance(results, (list, tuple)) or any(not isinstance(x, Type) for x in results):
raise TypeError("output_types_preference should return a list or a tuple of types", self.output_types_preference, results)
if len(results) != self.nout:
variables = self.output_types_preference(*types)
if not isinstance(variables, (list, tuple)) or any(not isinstance(x, Type) for x in variables):
raise TypeError("output_types_preference should return a list or a tuple of types", self.output_types_preference, variables)
if len(variables) != self.nout:
raise TypeError("Not the right number of outputs produced for %s(%s) by %s. Expected %s, got ?s."
% (self, ", ".join(str(input.type) for input in inputs),
self.output_types_preference, self.nout, len(results)))
return results
self.output_types_preference, self.nout, len(variables)))
return variables
else:
raise NotImplementedError("Cannot calculate the output types for %s" % self)
......@@ -328,10 +328,10 @@ class ScalarOp(Op):
if self.nout == 1:
output_storage[0][0] = self.impl(*inputs)
else:
results = utils.from_return_values(self.impl(*inputs))
assert len(results) == len(output_storage)
for storage, result in zip(output_storage, results):
storage[0] = result
variables = utils.from_return_values(self.impl(*inputs))
assert len(variables) == len(output_storage)
for storage, variable in zip(output_storage, variables):
storage[0] = variable
def impl(self, *inputs):
raise utils.MethodNotDefined("impl", type(self), self.__class__.__name__)
......@@ -822,7 +822,7 @@ class Composite(ScalarOp):
zip(outputs,
["%%(o%i)s"%i for i in range(len(outputs))]))
for orphan in env.results: #env.orphans:
for orphan in env.variables: #env.orphans:
if orphan.owner is None and orphan not in env.inputs:
if isinstance(orphan, Constant):
subd[orphan] = orphan.type.c_literal(orphan.data)
......
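The `ScalarOp.output_types` hunk above renames its local to `variables` but keeps the same validation: the `output_types_preference` callable must return a list or tuple of types whose length matches the op's `nout`. A standalone sketch of that validation (`Type` below is a stand-in, not `theano.gof.Type`):

```python
# Validate what an output_types_preference callable returns.

class Type:     # stand-in for a Theano Type
    pass

def output_types(preference, input_types, nout):
    variables = preference(*input_types)
    if not isinstance(variables, (list, tuple)) or \
            any(not isinstance(t, Type) for t in variables):
        raise TypeError("output_types_preference should return a list or "
                        "a tuple of types")
    if len(variables) != nout:
        raise TypeError("Not the right number of outputs produced")
    return variables

t = Type()
print(len(output_types(lambda a, b: [t], [t, t], 1)))
```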
......@@ -11,7 +11,7 @@ If you do want to rewrite these tests, bear in mind:
import unittest
from theano.gof import Result, Op, Env
from theano.gof import Variable, Op, Env
from theano import gof
from theano.scalar.basic import *
......
......@@ -32,22 +32,22 @@ import scipy
if scipy.__version__ != '0.7.0':
sys.stderr.write("WARNING: scipy version = %s. We prefer version >=0.7.0 because it has bugs fixed in the sparse matrix code.\n" % scipy.__version__)
def _is_sparse_result(x):
def _is_sparse_variable(x):
"""
@rtype: boolean
@return: True iff x is a L{SparseResult} (and not a L{tensor.NDArrayType})
@return: True iff x is a L{SparseVariable} (and not a L{tensor.TensorType})
"""
if not isinstance(x.type, SparseType) and not isinstance(x.type, tensor.NDArrayType):
raise NotImplementedError("this function should only be called on *results* (of type sparse.SparseType or tensor.NDArrayType), not,", x)
if not isinstance(x.type, SparseType) and not isinstance(x.type, tensor.TensorType):
raise NotImplementedError("this function should only be called on *variables* (of type sparse.SparseType or tensor.TensorType), not,", x)
return isinstance(x.type, SparseType)
def _is_dense_result(x):
def _is_dense_variable(x):
"""
@rtype: boolean
@return: True unless x is a L{SparseResult} (and not a L{tensor.NDArrayType})
@return: True unless x is a L{SparseVariable} (and not a L{tensor.TensorType})
"""
if not isinstance(x.type, SparseType) and not isinstance(x.type, tensor.NDArrayType):
raise NotImplementedError("this function should only be called on *results* (of type sparse.SparseType or tensor.NDArrayType), not,", x)
return isinstance(x.type, tensor.NDArrayType)
if not isinstance(x.type, SparseType) and not isinstance(x.type, tensor.TensorType):
raise NotImplementedError("this function should only be called on *variables* (of type sparse.SparseType or tensor.TensorType), not,", x)
return isinstance(x.type, tensor.TensorType)
def _is_sparse(x):
"""
......@@ -78,12 +78,12 @@ def _kmap_hash(a):
# Wrapper type
def as_sparse_result(x):
def as_sparse_variable(x):
"""
Wrapper around SparseResult constructor.
@param x: A sparse matrix. as_sparse_result reads dtype and format properties
Wrapper around SparseVariable constructor.
@param x: A sparse matrix. as_sparse_variable reads dtype and format properties
out of this sparse matrix.
@return: SparseResult version of sp.
@return: SparseVariable version of sp.
@todo Verify that sp is sufficiently sparse, and raise a warning if it is not
"""
......@@ -92,16 +92,16 @@ def as_sparse_result(x):
raise ValueError("It is ambiguous which output of a multi-output Op has to be fetched.", x)
else:
x = x.outputs[0]
if isinstance(x, gof.Result):
if isinstance(x, gof.Variable):
if not isinstance(x.type, SparseType):
raise TypeError("Result type field must be a SparseType.", x, x.type)
raise TypeError("Variable type field must be a SparseType.", x, x.type)
return x
try:
return constant(x)
except TypeError:
raise TypeError("Cannot convert %s to SparseType" % x, type(x))
as_sparse = as_sparse_result
as_sparse = as_sparse_variable
def constant(x):
if not isinstance(x, sparse.spmatrix):
......@@ -146,7 +146,7 @@ class SparseType(gof.Type):
Fundamental way to create a sparse node.
@param dtype: Type of numbers in the matrix.
@param format: The sparse storage strategy.
@return An empty SparseResult instance.
@return An empty SparseVariable instance.
"""
dtype = str(dtype)
......@@ -174,8 +174,8 @@ class SparseType(gof.Type):
raise NotImplementedError()
return sp
def make_result(self, name = None):
return SparseResult(self, name = name)
def make_variable(self, name = None):
return SparseVariable(self, name = name)
def __eq__(self, other):
return type(self) == type(other) and other.dtype == self.dtype and other.format == self.format
......@@ -216,7 +216,7 @@ class _sparse_py_operators:
def __rdot__(right, left): return structured_dot(left, right)
class SparseResult(gof.Result, _sparse_py_operators):
class SparseVariable(gof.Variable, _sparse_py_operators):
dtype = property(lambda self: self.type.dtype)
format = property(lambda self: self.type.format)
......@@ -250,8 +250,8 @@ class CSMProperties(gof.Op):
return 8234 ^ hash(type(self)) ^ _kmap_hash(self.kmap)
def make_node(self, csm):
csm = as_sparse_result(csm)
data = tensor.NDArrayType(dtype=csm.type.dtype, broadcastable = (False,)).make_result()
csm = as_sparse_variable(csm)
data = tensor.TensorType(dtype=csm.type.dtype, broadcastable = (False,)).make_variable()
return gof.Apply(self, [csm],
[data, tensor.ivector(), tensor.ivector(), tensor.ivector()])
......@@ -311,7 +311,7 @@ class CSM(gof.Op):
return self._hashval
def make_node(self, data, indices, indptr, shape):
"""Build a SparseResult from the internal parametrization
"""Build a SparseVariable from the internal parametrization
:param data:
:param indices:
......@@ -321,10 +321,10 @@ class CSM(gof.Op):
:type indptr: 1-d tensor of ints
"""
data = tensor.as_ndarray_result(data)
indices = tensor.as_ndarray_result(indices)
indptr = tensor.as_ndarray_result(indptr)
shape = tensor.as_ndarray_result(shape)
data = tensor.as_tensor_variable(data)
indices = tensor.as_tensor_variable(indices)
indptr = tensor.as_tensor_variable(indptr)
shape = tensor.as_tensor_variable(shape)
if data.type.ndim != 1:
raise TypeError('data argument must be a vector', data.type)
......@@ -338,7 +338,7 @@ class CSM(gof.Op):
return gof.Apply(self,
[data, indices, indptr, shape],
[SparseType(dtype = data.type.dtype,
format = self.format).make_result()])
format = self.format).make_variable()])
def perform(self, node, (data, indices, indptr, shape), (out,)):
"""Build a csc_matrix"""
......@@ -368,7 +368,7 @@ class CSM(gof.Op):
def grad(self, (data, indices, indptr, shape), (g_out,)):
"""Return a gradient on the data vector"""
#unpack the data vector and wrap it as a 1d NDArrayType
#unpack the data vector and wrap it as a 1d TensorType
g_data = csm_grad(self.kmap)(data, csm_data(g_out),csm_indices(g_out))
return [g_data, None, None, None]
......@@ -425,11 +425,11 @@ class DenseFromSparse(gof.op.Op):
"""WRITEME"""
def make_node(self, x):
x = as_sparse_result(x)
x = as_sparse_variable(x)
return gof.Apply(self,
[x],
[tensor.NDArrayType(dtype = x.type.dtype,
broadcastable = (False, False)).make_result()])
[tensor.TensorType(dtype = x.type.dtype,
broadcastable = (False, False)).make_variable()])
def perform(self, node, (x, ), (out, )):
if _is_dense(x):
print >> sys.stderr, "WARNING: You just called DenseFromSparse on a dense matrix."
......@@ -455,11 +455,11 @@ class SparseFromDense(gof.op.Op):
return 982374 ^ hash(self.format) ^ hash(DenseFromSparse)
def make_node(self, x):
x = tensor.as_ndarray_result(x)
x = tensor.as_tensor_variable(x)
return gof.Apply(self,
[x],
[SparseType(dtype = x.type.dtype,
format = self.format).make_result()])
format = self.format).make_variable()])
def perform(self, node, (x, ), (out, )):
out[0] = SparseType.format_cls[self.format](x)
def grad(self, (x, ), (gz, )):
......@@ -475,35 +475,35 @@ class Transpose(gof.op.Op):
format_map = {'csr' : 'csc',
'csc' : 'csr'}
def make_node(self, x):
x = as_sparse_result(x)
x = as_sparse_variable(x)
return gof.Apply(self,
[x],
[SparseType(dtype = x.type.dtype,
format = self.format_map[x.type.format]).make_result()])
format = self.format_map[x.type.format]).make_variable()])
def perform(self, node, (x, ), (out, )):
assert _is_sparse(x)
out[0] = x.transpose()
def grad(self, (x,), (gz,)):
assert _is_sparse_result(x) and _is_sparse_result(gz)
assert _is_sparse_variable(x) and _is_sparse_variable(gz)
return transpose(gz),
transpose = Transpose()
class Neg(gof.op.Op):
def make_node(self, x):
x = as_sparse_result(x)
x = as_sparse_variable(x)
return gof.Apply(self, [x], [x.type()])
def perform(self, node, (x, ), (out, )):
assert _is_sparse(x)
out[0] = -x
def grad(self, (x,), (gz,)):
assert _is_sparse_result(x) and _is_sparse_result(gz)
assert _is_sparse_variable(x) and _is_sparse_variable(gz)
return -gz,
neg = Neg()
class AddSS(gof.op.Op):
'''Add two sparse matrices '''
def make_node(self, x, y):
x, y = map(as_sparse_result, [x, y])
x, y = map(as_sparse_variable, [x, y])
if x.type.dtype != y.type.dtype:
raise NotImplementedError()
if x.type.format != y.type.format:
......@@ -512,20 +512,20 @@ class AddSS(gof.op.Op):
return gof.Apply(self,
[x, y],
[SparseType(dtype = x.type.dtype,
format = x.type.format).make_result()])
format = x.type.format).make_variable()])
def perform(self, node, (x, y), (out, )):
assert _is_sparse(x) and _is_sparse(y)
assert x.shape == y.shape
out[0] = x + y
def grad(self, (x, y), (gz,)):
assert _is_sparse_result(x) and _is_sparse_result(y)
assert _is_sparse_result(gz)
assert _is_sparse_variable(x) and _is_sparse_variable(y)
assert _is_sparse_variable(gz)
return gz, gz
add_s_s = AddSS()
class AddSD(gof.op.Op):
''' Add a sparse and a dense matrix '''
def make_node(self, x, y):
x, y = as_sparse_result(x), tensor.as_ndarray_result(y)
x, y = as_sparse_variable(x), tensor.as_tensor_variable(y)
if x.type.dtype != y.type.dtype:
raise NotImplementedError()
# The magic number two here arises because L{scipy.sparse}
......@@ -533,30 +533,30 @@ class AddSD(gof.op.Op):
assert y.type.ndim == 2
return gof.Apply(self,
[x, y],
[tensor.NDArrayType(dtype = y.type.dtype,
broadcastable = y.type.broadcastable).make_result()])
[tensor.TensorType(dtype = y.type.dtype,
broadcastable = y.type.broadcastable).make_variable()])
def perform(self, node, (x, y), (out, )):
assert _is_sparse(x) and _is_dense(y)
out[0] = x + y
def grad(self, (x, y), (gz,)):
assert _is_sparse_result(x) and _is_dense_result(y)
assert _is_dense_result(gz)
assert _is_sparse_variable(x) and _is_dense_variable(y)
assert _is_dense_variable(gz)
return sp_one_like(x) * gz, gz
add_s_d = AddSD()
def add(x,y):
"""
Add two matrices, at least one of which is sparse.
"""
if hasattr(x, 'getnnz'): x = as_sparse_result(x)
if hasattr(y, 'getnnz'): y = as_sparse_result(y)
if hasattr(x, 'getnnz'): x = as_sparse_variable(x)
if hasattr(y, 'getnnz'): y = as_sparse_variable(y)
x_is_sparse_result = _is_sparse_result(x)
y_is_sparse_result = _is_sparse_result(y)
x_is_sparse_variable = _is_sparse_variable(x)
y_is_sparse_variable = _is_sparse_variable(y)
assert x_is_sparse_result or y_is_sparse_result
if x_is_sparse_result and y_is_sparse_result: return add_s_s(x,y)
elif x_is_sparse_result and not y_is_sparse_result: return add_s_d(x,y)
elif y_is_sparse_result and not x_is_sparse_result: return add_s_d(y,x)
assert x_is_sparse_variable or y_is_sparse_variable
if x_is_sparse_variable and y_is_sparse_variable: return add_s_s(x,y)
elif x_is_sparse_variable and not y_is_sparse_variable: return add_s_d(x,y)
elif y_is_sparse_variable and not x_is_sparse_variable: return add_s_d(y,x)
else: raise NotImplementedError()
def sub(x,y):
return x + (-y)
......@@ -566,7 +566,7 @@ def sub(x,y):
class MulSS(gof.op.Op):
''' Elementwise multiply a sparse and a ndarray '''
def make_node(self, x, y):
x, y = as_sparse_result(x), as_sparse_result(y)
x, y = as_sparse_variable(x), as_sparse_variable(y)
if x.type != y.type:
raise NotImplementedError()
return gof.Apply(self, [x, y], [x.type()])
......@@ -585,7 +585,7 @@ mul_s_s = MulSS()
class MulSD(gof.op.Op):
''' Elementwise multiply a sparse and a ndarray '''
def make_node(self, x, y):
x, y = as_sparse_result(x), tensor.as_ndarray_result(y)
x, y = as_sparse_variable(x), tensor.as_tensor_variable(y)
if x.type.dtype != y.type.dtype:
raise NotImplementedError()
# The magic number two here arises because L{scipy.sparse}
......@@ -635,24 +635,24 @@ class MulSD(gof.op.Op):
out[0] = type(x)(x.toarray() * y)
def grad(self, (x, y), (gz,)):
assert _is_sparse_result(x) and _is_dense_result(y)
assert _is_sparse_result(gz)
assert _is_sparse_variable(x) and _is_dense_variable(y)
assert _is_sparse_variable(gz)
return y * gz, x * gz
mul_s_d = MulSD()
def mul(x,y):
"""
Multiply (elementwise) two matrices, at least one of which is sparse.
"""
if hasattr(x, 'getnnz'): x = as_sparse_result(x)
if hasattr(y, 'getnnz'): y = as_sparse_result(y)
if hasattr(x, 'getnnz'): x = as_sparse_variable(x)
if hasattr(y, 'getnnz'): y = as_sparse_variable(y)
x_is_sparse_result = _is_sparse_result(x)
y_is_sparse_result = _is_sparse_result(y)
x_is_sparse_variable = _is_sparse_variable(x)
y_is_sparse_variable = _is_sparse_variable(y)
assert x_is_sparse_result or y_is_sparse_result
if x_is_sparse_result and y_is_sparse_result: return mul_s_s(x,y)
elif x_is_sparse_result and not y_is_sparse_result: return mul_s_d(x,y)
elif y_is_sparse_result and not x_is_sparse_result: return mul_s_d(y,x)
assert x_is_sparse_variable or y_is_sparse_variable
if x_is_sparse_variable and y_is_sparse_variable: return mul_s_s(x,y)
elif x_is_sparse_variable and not y_is_sparse_variable: return mul_s_d(x,y)
elif y_is_sparse_variable and not x_is_sparse_variable: return mul_s_d(y,x)
else: raise NotImplementedError()
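Unlike addition, elementwise multiplication by a dense operand preserves the sparsity pattern of the sparse operand (a zero entry stays zero), which is why `MulSD.grad` can return sparse gradients. The scipy-level behaviour the ops wrap looks like this (a sketch, assuming a recent scipy where `multiply` returns a sparse matrix):

```python
import numpy as np
from scipy import sparse

a = sparse.csr_matrix(np.array([[1., 0], [3, 0]]))
b = np.array([[2., 2], [2, 2]])

c = a.multiply(b)  # result keeps the sparsity pattern of `a`
assert sparse.issparse(c)
assert np.allclose(c.toarray(), [[2, 0], [6, 0]])
```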
###############
@@ -663,12 +663,12 @@ class StructuredDot(gof.Op):
"""Structured Dot is like dot, except that only the gradient wrt non-zero elements of the
sparse matrix A are calculated and propagated.
The output is presumed to be a dense matrix, and is represented by a NDArrayType instance.
The output is presumed to be a dense matrix, and is represented by a TensorType instance.
"""
def make_node(self, a, b):
assert a.type.dtype == b.type.dtype
if type(a) is not SparseResult and type(a) is not SparseConstant:
raise TypeError('First argument must be of type SparseResult or SparseConstant');
if type(a) is not SparseVariable and type(a) is not SparseConstant:
raise TypeError('First argument must be of type SparseVariable or SparseConstant');
return gof.Apply(self, [a,b], [tensor.tensor(a.type.dtype, (False, False))])
@@ -676,28 +676,28 @@ class StructuredDot(gof.Op):
if a.shape[1] != b.shape[0]:
raise ValueError('shape mismatch in StructuredDot.perform', (a.shape, b.shape))
result = a.dot(b)
assert _is_dense(result) # scipy 0.7 automatically converts to dense
variable = a.dot(b)
assert _is_dense(variable) # scipy 0.7 automatically converts to dense
    # dot of an NxM sparse matrix with an Mx1 dense matrix returns a vector, not a matrix
if result.ndim == 1:
result = numpy.expand_dims(result,1)
elif result.ndim != 2:
if variable.ndim == 1:
variable = numpy.expand_dims(variable,1)
elif variable.ndim != 2:
raise Exception('Output of structured dot should be a matrix (ndim=2)')
assert result.ndim == 2
assert variable.ndim == 2
if result.shape != (a.shape[0], b.shape[1]):
if variable.shape != (a.shape[0], b.shape[1]):
if b.shape[0] == 1:
raise Exception("a.shape=%s, b.shape=%s, result.shape=%s ??? This is probably because scipy.csc_matrix dot has a bug with singleton dimensions (i.e. b.shape[0]=1), for scipy 0.6. Use scipy 0.7. NB you have scipy version %s" % (a.shape, b.shape, result.shape, scipy.__version__))
raise Exception("a.shape=%s, b.shape=%s, variable.shape=%s ??? This is probably because scipy.csc_matrix dot has a bug with singleton dimensions (i.e. b.shape[0]=1), for scipy 0.6. Use scipy 0.7. NB you have scipy version %s" % (a.shape, b.shape, variable.shape, scipy.__version__))
else:
raise Exception("a.shape=%s, b.shape=%s, result.shape=%s ??? I have no idea why")
                raise Exception("a.shape=%s, b.shape=%s, variable.shape=%s ??? I have no idea why" % (a.shape, b.shape, variable.shape))
## Commenting this out because result should be a numpy.ndarray since the assert above
## Commenting this out because variable should be a numpy.ndarray since the assert above
## (JB 20090109)
# out[0] = numpy.asarray(result) #TODO: fix this really bad implementation
# out[0] = numpy.asarray(variable) #TODO: fix this really bad implementation
#
out[0] = result
out[0] = variable
def grad(self, (a,b), (g_out,)):
# a is sparse, b is dense, g_out is dense
@@ -712,19 +712,19 @@ def structured_dot(x, y):
@todo: Maybe the triple-transposition formulation (when x is dense)
is slow. See if there is a direct way to do this.
"""
if hasattr(x, 'getnnz'): x = as_sparse_result(x)
if hasattr(y, 'getnnz'): y = as_sparse_result(y)
if hasattr(x, 'getnnz'): x = as_sparse_variable(x)
if hasattr(y, 'getnnz'): y = as_sparse_variable(y)
x_is_sparse_result = _is_sparse_result(x)
y_is_sparse_result = _is_sparse_result(y)
x_is_sparse_variable = _is_sparse_variable(x)
y_is_sparse_variable = _is_sparse_variable(y)
if not x_is_sparse_result and not y_is_sparse_result:
if not x_is_sparse_variable and not y_is_sparse_variable:
raise TypeError('structured_dot requires at least one sparse argument')
if x_is_sparse_result:
if x_is_sparse_variable:
return _structured_dot(x, y)
else:
assert y_is_sparse_result
assert y_is_sparse_variable
return _structured_dot(y.T, x.T).T
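The shape normalization in `StructuredDot.perform` exists because scipy's sparse dot returns a 1-d vector when the dense operand is 1-d, while the op's contract is a 2-d dense matrix. A scipy/numpy sketch of both cases and of the `expand_dims` fix-up (illustrative, not the op itself):

```python
import numpy as np
from scipy import sparse

a = sparse.csc_matrix(np.eye(3))

# 2-d right operand: sparse.dot returns a dense 3x1 result (scipy >= 0.7)
r = a.dot(np.ones((3, 1)))
assert r.shape == (3, 1)

# 1-d right operand: scipy returns a vector; perform() re-expands it
v = a.dot(np.ones(3))
assert v.ndim == 1
v2 = np.expand_dims(v, 1)  # the normalization done in perform()
assert v2.shape == (3, 1)
```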
class StructuredDotCSC(gof.Op):
@@ -9,7 +9,7 @@ class TrueDot(gof.op.Op):
    grad_preserves_dense - a boolean flag [default: True].
grad_preserves_dense controls whether gradients with respect to inputs
are converted to dense matrices when the corresponding input y is
dense (not in a L{SparseResult} wrapper). This is generally a good idea
dense (not in a L{SparseVariable} wrapper). This is generally a good idea
when L{Dot} is in the middle of a larger graph, because the types
of gy will match that of y. This conversion might be inefficient if
the gradients are graph outputs though, hence this mask.
@@ -26,12 +26,12 @@ class TrueDot(gof.op.Op):
return not (self == other)
def make_node(self, x, y):
"""
:note: Because of trickiness of implementing, we assume that the left argument x is SparseResult (not dense)
:note: Because of trickiness of implementing, we assume that the left argument x is SparseVariable (not dense)
"""
if x.type.dtype != y.type.dtype:
raise NotImplementedError()
if not _is_sparse_result(x):
if not _is_sparse_variable(x):
raise TypeError(x)
# These are the conversions performed by scipy.sparse.dot
@@ -43,7 +43,7 @@ class TrueDot(gof.op.Op):
raise NotImplementedError()
inputs = [x, y] # Need to convert? e.g. assparse
outputs = [Sparse(dtype = x.type.dtype, format = myformat).make_result()]
outputs = [Sparse(dtype = x.type.dtype, format = myformat).make_variable()]
return gof.Apply(self, inputs, outputs)
def perform(self, node, (x, y), (out, )):
"""
@@ -53,10 +53,10 @@ class TrueDot(gof.op.Op):
rval = x.dot(y)
out[0] = rval
def grad(self, (x, y), (gz,)):
assert _is_sparse_result(gz)
assert _is_sparse_result(x)
assert _is_sparse_variable(gz)
assert _is_sparse_variable(x)
rval = [true_dot(gz, y.T), true_dot(x.T, gz)]
if _is_dense_result(y):
if _is_dense_variable(y):
if self.grad_preserves_dense:
rval[1] = dense_from_sparse(rval[1])
return rval
@@ -66,17 +66,17 @@ def true_dot(x, y, grad_preserves_dense=True):
@todo: Maybe the triple-transposition formulation (when x is dense)
is slow. See if there is a direct way to do this.
"""
if hasattr(x, 'getnnz'): x = as_sparse_result(x)
if hasattr(y, 'getnnz'): y = as_sparse_result(y)
if hasattr(x, 'getnnz'): x = as_sparse_variable(x)
if hasattr(y, 'getnnz'): y = as_sparse_variable(y)
x_is_sparse_result = _is_sparse_result(x)
y_is_sparse_result = _is_sparse_result(y)
if not x_is_sparse_result and not y_is_sparse_result:
x_is_sparse_variable = _is_sparse_variable(x)
y_is_sparse_variable = _is_sparse_variable(y)
if not x_is_sparse_variable and not y_is_sparse_variable:
raise TypeError()
if x_is_sparse_result:
if x_is_sparse_variable:
return TrueDot(grad_preserves_dense)(x, y)
else:
assert y_is_sparse_result
assert y_is_sparse_variable
return transpose(TrueDot(grad_preserves_dense)(y.T, x.T))
@@ -86,16 +86,16 @@ class test_true_dot(unittest.TestCase):
def test_basicSS(self):
for mtype in _mtypes:
x = as_sparse_result(mtype((500,3)))
x = as_sparse_variable(mtype((500,3)))
x.data[(10, 1)] = 1
x.data[(20, 2)] = 2
self.failUnless(_is_sparse_result(x))
self.failUnless(_is_sparse_variable(x))
xT = x.T
self.failUnless(_is_sparse_result(xT))
self.failUnless(_is_sparse_variable(xT))
zop = true_dot(x,xT)
self.failUnless(_is_sparse_result(zop))
self.failUnless(_is_sparse_variable(zop))
z = eval_outputs([zop])
self.failUnless(_is_sparse(z))
self.failUnless(z.shape == (500,500))
@@ -117,16 +117,16 @@ class test_true_dot(unittest.TestCase):
def test_basicSD(self):
for mtype in _mtypes:
x = as_sparse_result(mtype((500,3)))
x = as_sparse_variable(mtype((500,3)))
x.data[(10, 1)] = 1
x.data[(20, 2)] = 2
self.failUnless(_is_sparse_result(x))
self.failUnless(_is_sparse_variable(x))
y = tensor.as_ndarray_result([[1., 2], [3, 4], [2, 1]])
self.failUnless(_is_dense_result(y))
y = tensor.as_tensor_variable([[1., 2], [3, 4], [2, 1]])
self.failUnless(_is_dense_variable(y))
zop = true_dot(x,y)
self.failUnless(_is_sparse_result(zop))
self.failUnless(_is_sparse_variable(zop))
z = eval_outputs([zop])
self.failUnless(_is_sparse(z))
self.failUnless(z.shape == (500,2))
@@ -150,20 +150,20 @@ class test_true_dot(unittest.TestCase):
def test_basicDS(self):
for mtype in _mtypes:
x = as_sparse_result(mtype((500,3)))
x = as_sparse_variable(mtype((500,3)))
x.data[(10, 1)] = 1
x.data[(20, 2)] = 2
self.failUnless(_is_sparse_result(x))
self.failUnless(_is_sparse_variable(x))
y = tensor.as_ndarray_result([[1., 2], [3, 4], [2, 1]])
self.failUnless(_is_dense_result(y))
y = tensor.as_tensor_variable([[1., 2], [3, 4], [2, 1]])
self.failUnless(_is_dense_variable(y))
x.data = x.data.T
y.data = y.data.T
zop = true_dot(y, x)
zop = transpose(true_dot(y, x))
self.failUnless(_is_sparse_result(zop))
self.failUnless(_is_sparse_variable(zop))
z = eval_outputs([zop])
self.failUnless(_is_sparse(z))
self.failUnless(z.shape == (500,2))
@@ -189,8 +189,8 @@ class test_true_dot(unittest.TestCase):
def test_graph_bprop0(self):
for mtype in _mtypes:
x = tensor.matrix('x') #NDArrayType('float64', broadcastable=[False,False], name='x')
w = Sparse(dtype = 'float64', format = _mtype_to_str[mtype]).make_result()
x = tensor.matrix('x') #TensorType('float64', broadcastable=[False,False], name='x')
w = Sparse(dtype = 'float64', format = _mtype_to_str[mtype]).make_variable()
xw = dense_from_sparse(true_dot(w, x))
y = dense_from_sparse(true_dot(w.T, xw))
diff = x-y
@@ -217,7 +217,7 @@ class test_true_dot(unittest.TestCase):
xorig = numpy.random.rand(3,2)
for mtype in _mtypes:
x = tensor.matrix('x')
w = Sparse(dtype = 'float64', format = _mtype_to_str[mtype]).make_result()
w = Sparse(dtype = 'float64', format = _mtype_to_str[mtype]).make_variable()
xw = dense_from_sparse(true_dot(w, x))
y = dense_from_sparse(true_dot(w.T, xw))
diff = x-y
@@ -8,7 +8,7 @@ from theano import compile
from theano import gradient
from theano import gof
from theano.sparse.basic import _is_dense, _is_sparse, _is_dense_result, _is_sparse_result
from theano.sparse.basic import _is_dense, _is_sparse, _is_dense_variable, _is_sparse_variable
from theano.sparse.basic import _mtypes, _mtype_to_str
from theano.tests import unittest_tools
@@ -22,7 +22,7 @@ class T_transpose(unittest.TestCase):
def test_transpose_csc(self):
sp = sparse.csc_matrix(sparse.eye(5,3))
a = as_sparse_result(sp)
a = as_sparse_variable(sp)
self.failUnless(a.data is sp)
self.failUnless(a.data.shape == (5,3))
self.failUnless(a.type.dtype == 'float64', a.type.dtype)
@@ -34,7 +34,7 @@ class T_transpose(unittest.TestCase):
vta = eval_outputs([ta])
self.failUnless(vta.shape == (3,5))
def test_transpose_csr(self):
a = as_sparse_result(sparse.csr_matrix(sparse.eye(5,3)))
a = as_sparse_variable(sparse.csr_matrix(sparse.eye(5,3)))
self.failUnless(a.data.shape == (5,3))
self.failUnless(a.type.dtype == 'float64')
self.failUnless(a.type.format == 'csr')
@@ -49,19 +49,19 @@ class T_Add(unittest.TestCase):
def testSS(self):
for mtype in _mtypes:
a = mtype(numpy.array([[1., 0], [3, 0], [0, 6]]))
aR = as_sparse_result(a)
aR = as_sparse_variable(a)
self.failUnless(aR.data is a)
self.failUnless(_is_sparse(a))
self.failUnless(_is_sparse_result(aR))
self.failUnless(_is_sparse_variable(aR))
b = mtype(numpy.asarray([[0, 2.], [0, 4], [5, 0]]))
bR = as_sparse_result(b)
bR = as_sparse_variable(b)
self.failUnless(bR.data is b)
self.failUnless(_is_sparse(b))
self.failUnless(_is_sparse_result(bR))
self.failUnless(_is_sparse_variable(bR))
apb = add(aR, bR)
self.failUnless(_is_sparse_result(apb))
self.failUnless(_is_sparse_variable(apb))
self.failUnless(apb.type.dtype == aR.type.dtype, apb.type.dtype)
self.failUnless(apb.type.dtype == bR.type.dtype, apb.type.dtype)
@@ -76,19 +76,19 @@ class T_Add(unittest.TestCase):
def testSD(self):
for mtype in _mtypes:
a = numpy.array([[1., 0], [3, 0], [0, 6]])
aR = tensor.as_ndarray_result(a)
aR = tensor.as_tensor_variable(a)
self.failUnless(aR.data is a)
self.failUnless(_is_dense(a))
self.failUnless(_is_dense_result(aR))
self.failUnless(_is_dense_variable(aR))
b = mtype(numpy.asarray([[0, 2.], [0, 4], [5, 0]]))
bR = as_sparse_result(b)
bR = as_sparse_variable(b)
self.failUnless(bR.data is b)
self.failUnless(_is_sparse(b))
self.failUnless(_is_sparse_result(bR))
self.failUnless(_is_sparse_variable(bR))
apb = add(aR, bR)
self.failUnless(_is_dense_result(apb))
self.failUnless(_is_dense_variable(apb))
self.failUnless(apb.type.dtype == aR.type.dtype, apb.type.dtype)
self.failUnless(apb.type.dtype == bR.type.dtype, apb.type.dtype)
@@ -101,19 +101,19 @@ class T_Add(unittest.TestCase):
def testDS(self):
for mtype in _mtypes:
a = mtype(numpy.array([[1., 0], [3, 0], [0, 6]]))
aR = as_sparse_result(a)
aR = as_sparse_variable(a)
self.failUnless(aR.data is a)
self.failUnless(_is_sparse(a))
self.failUnless(_is_sparse_result(aR))
self.failUnless(_is_sparse_variable(aR))
b = numpy.asarray([[0, 2.], [0, 4], [5, 0]])
bR = tensor.as_ndarray_result(b)
bR = tensor.as_tensor_variable(b)
self.failUnless(bR.data is b)
self.failUnless(_is_dense(b))
self.failUnless(_is_dense_result(bR))
self.failUnless(_is_dense_variable(bR))
apb = add(aR, bR)
self.failUnless(_is_dense_result(apb))
self.failUnless(_is_dense_variable(apb))
self.failUnless(apb.type.dtype == aR.type.dtype, apb.type.dtype)
self.failUnless(apb.type.dtype == bR.type.dtype, apb.type.dtype)
@@ -128,14 +128,14 @@ class T_conversion(unittest.TestCase):
unittest_tools.seed_rng()
def test0(self):
a = tensor.as_ndarray_result(numpy.random.rand(5))
a = tensor.as_tensor_variable(numpy.random.rand(5))
s = csc_from_dense(a)
val = eval_outputs([s])
self.failUnless(str(val.dtype)=='float64')
self.failUnless(val.format == 'csc')
def test1(self):
a = tensor.as_ndarray_result(numpy.random.rand(5))
a = tensor.as_tensor_variable(numpy.random.rand(5))
s = csr_from_dense(a)
val = eval_outputs([s])
self.failUnless(str(val.dtype)=='float64')
@@ -11,7 +11,7 @@ import numpy
from copy import copy
from .. import gof
from ..gof import Result, Op, utils, Type, Constant, Apply, Value
from ..gof import Variable, Op, utils, Type, Constant, Apply, Value
from .. import gradient
@@ -59,26 +59,26 @@ def __oplist_tag(thing, tag):
thing.__oplist_tags = tags
def as_ndarray_result(x, name = None, ndim=None):
"""Return `x`, transformed into a `NDArrayType`
def as_tensor_variable(x, name = None, ndim=None):
"""Return `x`, transformed into a `TensorType`
This function is often used by `make_node` methods of `Op` subclasses to
turn ndarrays, numbers, `Scalar` instances, `Apply` instances and `NDArrayType`
turn ndarrays, numbers, `Scalar` instances, `Apply` instances and `TensorType`
    instances into valid input list elements.
:Parameters:
- `x`: Apply instance, Result instance, numpy.ndarray, or number
This thing will be transformed into a `Result` in a sensible way. An
- `x`: Apply instance, Variable instance, numpy.ndarray, or number
This thing will be transformed into a `Variable` in a sensible way. An
ndarray argument will not be copied, but a list of numbers will be copied
to make an ndarray.
- `name`: str or None
If a new `Result` instance is created, it will be named with this string.
If a new `Variable` instance is created, it will be named with this string.
- `ndim`: None or integer
Return a Result with this many dimensions. Raise TypeError if it's not possible.
Return a Variable with this many dimensions. Raise TypeError if it's not possible.
:Exceptions:
- `ValueError`: raised if an `Apply` with no default output is fetched
- `TypeError`: raised if `x` cannot be converted to a NDArrayType Result
- `TypeError`: raised if `x` cannot be converted to a TensorType Variable
"""
@@ -88,24 +88,24 @@ def as_ndarray_result(x, name = None, ndim=None):
raise ValueError("It is ambiguous which output of a multi-output Op has to be fetched.", x)
else:
x = x.outputs[0]
if isinstance(x, Result):
if isinstance(x, Variable):
if isinstance(x.type, scal.Scalar):
x = tensor_from_scalar(x)
if not isinstance(x.type, NDArrayType):
raise TypeError("Result type field must be a NDArrayType.", x, x.type)
if not isinstance(x.type, TensorType):
raise TypeError("Variable type field must be a TensorType.", x, x.type)
if ndim is None:
return x
else:
if (x.type.ndim > ndim):
#TODO: strip off leading broadcastable dimensions
raise ValueError('NDArrayType could not be cast to have %i dimensions' % ndim, x.type)
raise ValueError('TensorType could not be cast to have %i dimensions' % ndim, x.type)
elif (x.type.ndim < ndim):
return shape_padleft(x, n_ones=(ndim - x.type.ndim))
else:
return x
if isinstance(x, (tuple, list)) and any(isinstance(xi, Result) for xi in x):
if isinstance(x, (tuple, list)) and any(isinstance(xi, Variable) for xi in x):
try:
return stack(*x)
except (TypeError, ValueError):
@@ -118,13 +118,13 @@ def as_ndarray_result(x, name = None, ndim=None):
str_x = str(x)
except:
str_x = repr(x)
raise TypeError("Cannot convert %s to NDArrayType" % str_x, type(x))
raise TypeError("Cannot convert %s to TensorType" % str_x, type(x))
# this has a different name, because _as_ndarray_result is the function which ops use
# this has a different name, because _as_tensor_variable is the function which ops use
# to upcast their arguments... this internal-use function is a good place to put debugging stuff, better than the global astensor.
_as_ndarray_result = as_ndarray_result
_as_tensor_variable = as_tensor_variable
as_tensor = as_ndarray_result
as_tensor = as_tensor_variable
def constant_or_value(x, rtype, name=None, ndim=None):
@@ -150,19 +150,19 @@ def constant_or_value(x, rtype, name=None, ndim=None):
assert len(bcastable) == ndim
try:
return rtype(NDArrayType(dtype = x_.dtype, broadcastable = bcastable), x_, name=name)
return rtype(TensorType(dtype = x_.dtype, broadcastable = bcastable), x_, name=name)
except:
raise TypeError("Could not convert %s to NDArrayType" % x, type(x))
raise TypeError("Could not convert %s to TensorType" % x, type(x))
def constant(x, name=None, ndim=None):
return constant_or_value(x, rtype=NDArrayConstant, name=name, ndim=ndim)
return constant_or_value(x, rtype=TensorConstant, name=name, ndim=ndim)
def value(x, name=None, ndim=None):
return constant_or_value(x, rtype=NDArrayValue, name=name, ndim=ndim)
return constant_or_value(x, rtype=TensorValue, name=name, ndim=ndim)
class NDArrayType(Type):
class TensorType(Type):
"""Symbolic `Type` representing a numpy.ndarray value."""
def __init__(self, dtype, broadcastable, name = None):
@@ -170,7 +170,7 @@ class NDArrayType(Type):
:Parameters:
- `dtype`: str corresponding to numpy dtype (e.g., 'int64')
The value (ndarray) associated to a `Result` of this `Type` will have
The value (ndarray) associated to a `Variable` of this `Type` will have
this dtype.
- `broadcastable`: tuple, list, or array of boolean values
This argument serves two purposes. First, the True elements of this
@@ -187,7 +187,7 @@ class NDArrayType(Type):
self.name = name
def filter(self, data, strict = False):
"""Convert `data` to something which can be associated to a `NDArrayResult`.
"""Convert `data` to something which can be associated to a `TensorVariable`.
This function is not meant to be called in user code. It is for
`Linker` instances to use when running a compiled graph.
@@ -237,7 +237,7 @@ class NDArrayType(Type):
return scal.Scalar(dtype = self.dtype)
def __eq__(self, other):
"""Compare True iff other is the same kind of NDArrayType"""
"""Compare True iff other is the same kind of TensorType"""
return type(self) == type(other) and other.dtype == self.dtype and other.broadcastable == self.broadcastable
def values_eq_approx(self, a, b):
@@ -252,26 +252,26 @@ class NDArrayType(Type):
return False
def __hash__(self):
"""Hash equal for same kinds of NDArrayType"""
"""Hash equal for same kinds of TensorType"""
return hash(self.dtype) ^ hash(self.broadcastable)
ndim = property(lambda self: len(self.broadcastable), doc = "number of dimensions")
"""Number of dimensions
This read-only property is the preferred way to get the number of dimensions
of a `NDArrayType`.
of a `TensorType`.
"""
def make_result(self, name = None):
"""Return a `NDArrayResult` of this type
def make_variable(self, name = None):
"""Return a `TensorVariable` of this type
:Parameters:
- `name`: str
A pretty name to identify this `Result` when printing and debugging
A pretty name to identify this `Variable` when printing and debugging
"""
return NDArrayResult(self, name = name)
return TensorVariable(self, name = name)
def __str__(self):
if self.name:
@@ -284,11 +284,11 @@ class NDArrayType(Type):
(False, True): 'col',
(True, False): 'row',
(False, False): 'matrix'}.get(b, "%iD" % len(b) if not any(b) else str(b))
return "NDArrayType(%s, %s)" % (str(self.dtype), bcast)
return "TensorType(%s, %s)" % (str(self.dtype), bcast)
def __repr__(self):
return str(self)
#"NDArrayType{%s, %s}" % (str(self.dtype), str(self.broadcastable))
#"TensorType{%s, %s}" % (str(self.dtype), str(self.broadcastable))
def c_declare(self, name, sub):
"""Override `CLinkerOp.c_declare` """
@@ -402,7 +402,7 @@ class NDArrayType(Type):
# Easy constructors
def tensor(*args, **kwargs):
return NDArrayType(*args, **kwargs).make_result()
return TensorType(*args, **kwargs).make_variable()
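The 'row', 'col', and 'matrix' names that `TensorType.__str__` prints above correspond to which dimensions are broadcastable (a length-1 axis that may stretch). In plain numpy terms, a sketch of the semantics those patterns describe:

```python
import numpy as np

row = np.arange(3).reshape(1, 3)  # pattern (True, False): length-1 first axis
col = np.arange(2).reshape(2, 1)  # pattern (False, True): length-1 second axis

m = row + col  # the length-1 axes broadcast against each other
assert m.shape == (2, 3)
```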
def _multi(*fns):
def f2(f, *names):
@@ -423,16 +423,16 @@ def _multi(*fns):
else:
return [partial(f2, f) for f in fns]
cscalar = NDArrayType('complex64', ())
zscalar = NDArrayType('complex128', ())
fscalar = NDArrayType('float32', ())
dscalar = NDArrayType('float64', ())
bscalar = NDArrayType('int8', ())
wscalar = NDArrayType('int16', ())
iscalar = NDArrayType('int32', ())
lscalar = NDArrayType('int64', ())
cscalar = TensorType('complex64', ())
zscalar = TensorType('complex128', ())
fscalar = TensorType('float32', ())
dscalar = TensorType('float64', ())
bscalar = TensorType('int8', ())
wscalar = TensorType('int16', ())
iscalar = TensorType('int32', ())
lscalar = TensorType('int64', ())
def scalar(name = None, dtype = 'float64'):
type = NDArrayType(dtype, ())
type = TensorType(dtype, ())
return type(name)
scalars, fscalars, dscalars, iscalars, lscalars = _multi(scalar, fscalar, dscalar, iscalar, lscalar)
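The single-letter prefixes on the constructors above (and on the vector/matrix/row/col families below) follow a fixed dtype convention, collected here from the definitions themselves as a lookup table:

```python
# dtype convention for the c/z/f/d/b/w/i/l constructor prefixes
prefix_dtype = {
    'c': 'complex64', 'z': 'complex128',
    'f': 'float32', 'd': 'float64',
    'b': 'int8', 'w': 'int16',
    'i': 'int32', 'l': 'int64',
}
assert prefix_dtype['d'] == 'float64'
assert prefix_dtype['l'] == 'int64'
```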
@@ -443,16 +443,16 @@ int_scalar_types = int_types
float_scalar_types = float_types
complex_scalar_types = complex_types
cvector = NDArrayType('complex64', (False, ))
zvector = NDArrayType('complex128', (False, ))
fvector = NDArrayType('float32', (False, ))
dvector = NDArrayType('float64', (False, ))
bvector = NDArrayType('int8', (False,))
wvector = NDArrayType('int16', (False,))
ivector = NDArrayType('int32', (False, ))
lvector = NDArrayType('int64', (False, ))
cvector = TensorType('complex64', (False, ))
zvector = TensorType('complex128', (False, ))
fvector = TensorType('float32', (False, ))
dvector = TensorType('float64', (False, ))
bvector = TensorType('int8', (False,))
wvector = TensorType('int16', (False,))
ivector = TensorType('int32', (False, ))
lvector = TensorType('int64', (False, ))
def vector(name = None, dtype = 'float64'):
type = NDArrayType(dtype, (False, ))
type = TensorType(dtype, (False, ))
return type(name)
vectors, fvectors, dvectors, ivectors, lvectors = _multi(vector, fvector, dvector, ivector, lvector)
@@ -460,16 +460,16 @@ int_vector_types = bvector, wvector, ivector, lvector
float_vector_types = fvector, dvector
complex_vector_types = cvector, zvector
cmatrix = NDArrayType('complex64', (False, False))
zmatrix = NDArrayType('complex128', (False, False))
fmatrix = NDArrayType('float32', (False, False))
dmatrix = NDArrayType('float64', (False, False))
bmatrix = NDArrayType('int8', (False, False))
wmatrix = NDArrayType('int16', (False, False))
imatrix = NDArrayType('int32', (False, False))
lmatrix = NDArrayType('int64', (False, False))
cmatrix = TensorType('complex64', (False, False))
zmatrix = TensorType('complex128', (False, False))
fmatrix = TensorType('float32', (False, False))
dmatrix = TensorType('float64', (False, False))
bmatrix = TensorType('int8', (False, False))
wmatrix = TensorType('int16', (False, False))
imatrix = TensorType('int32', (False, False))
lmatrix = TensorType('int64', (False, False))
def matrix(name = None, dtype = 'float64'):
type = NDArrayType(dtype, (False, False))
type = TensorType(dtype, (False, False))
return type(name)
matrices, fmatrices, dmatrices, imatrices, lmatrices = _multi(matrix, fmatrix, dmatrix, imatrix, lmatrix)
@@ -477,29 +477,29 @@ int_matrix_types = bmatrix, wmatrix, imatrix, lmatrix
float_matrix_types = fmatrix, dmatrix
complex_matrix_types = cmatrix, zmatrix
crow = NDArrayType('complex64', (True, False))
zrow = NDArrayType('complex128', (True, False))
frow = NDArrayType('float32', (True, False))
drow = NDArrayType('float64', (True, False))
brow = NDArrayType('int8', (True, False))
wrow = NDArrayType('int16', (True, False))
irow = NDArrayType('int32', (True, False))
lrow = NDArrayType('int64', (True, False))
crow = TensorType('complex64', (True, False))
zrow = TensorType('complex128', (True, False))
frow = TensorType('float32', (True, False))
drow = TensorType('float64', (True, False))
brow = TensorType('int8', (True, False))
wrow = TensorType('int16', (True, False))
irow = TensorType('int32', (True, False))
lrow = TensorType('int64', (True, False))
def row(name = None, dtype = 'float64'):
type = NDArrayType(dtype, (True, False))
type = TensorType(dtype, (True, False))
return type(name)
rows, frows, drows, irows, lrows = _multi(row, frow, drow, irow, lrow)
ccol = NDArrayType('complex64', (False, True))
zcol = NDArrayType('complex128', (False, True))
fcol = NDArrayType('float32', (False, True))
dcol = NDArrayType('float64', (False, True))
bcol = NDArrayType('int8', (False, True))
wcol = NDArrayType('int16', (False, True))
icol = NDArrayType('int32', (False, True))
lcol = NDArrayType('int64', (False, True))
ccol = TensorType('complex64', (False, True))
zcol = TensorType('complex128', (False, True))
fcol = TensorType('float32', (False, True))
dcol = TensorType('float64', (False, True))
bcol = TensorType('int8', (False, True))
wcol = TensorType('int16', (False, True))
icol = TensorType('int32', (False, True))
lcol = TensorType('int64', (False, True))
def col(name = None, dtype = 'float64'):
type = NDArrayType(dtype, (False, True))
type = TensorType(dtype, (False, True))
return type(name)
cols, fcols, dcols, icols, lcols = _multi(col, fcol, dcol, icol, lcol)
@@ -594,10 +594,10 @@ class _tensor_py_operators:
def __getitem__(self, args):
if not isinstance(args, tuple):
args = args,
return Subtensor(args)(self, *Subtensor.collapse(args, lambda entry: isinstance(entry, Result)))
return Subtensor(args)(self, *Subtensor.collapse(args, lambda entry: isinstance(entry, Variable)))
def __getslice__(self, *args):
args = slice(*args),
return Subtensor(args)(self, *Subtensor.collapse(args, lambda entry: isinstance(entry, Result)))
return Subtensor(args)(self, *Subtensor.collapse(args, lambda entry: isinstance(entry, Variable)))
#COPYING
def copy(self): return tensor_copy(self)
@@ -608,7 +608,7 @@ class _tensor_py_operators:
yield self[i]
except:
# This prevents accidental iteration via builtin.sum(self)
raise TypeError('NDArrayType does not support iteration. '
raise TypeError('TensorType does not support iteration. '
'Maybe you are using builtin.sum instead of theano.tensor.sum? (Maybe .max?)')
@@ -641,10 +641,10 @@ class _tensor_py_operators:
return pow(pow(abs_(self), L).sum(axis=axis), 1.0/L)
class NDArrayResult(Result, _tensor_py_operators):
"""Subclass to add the tensor operators to the basic `Result` class."""
class TensorVariable(Variable, _tensor_py_operators):
"""Subclass to add the tensor operators to the basic `Variable` class."""
class NDArrayConstantSignature(tuple):
class TensorConstantSignature(tuple):
def __eq__(self, other):
(a, b), (x,y) = self, other
#N.B. compare shape to ensure no broadcasting in ==
@@ -653,33 +653,33 @@ class NDArrayConstantSignature(tuple):
a, b = self
return hash(type(self)) ^ hash(a) ^ hash(b.shape)
class NDArrayConstant(Constant, _tensor_py_operators):
class TensorConstant(Constant, _tensor_py_operators):
"""Subclass to add the tensor operators to the basic `Constant` class.
To create a NDArrayConstant, use the `constant` function in this module.
To create a TensorConstant, use the `constant` function in this module.
"""
def signature(self):
return NDArrayConstantSignature((self.type, self.data))
return TensorConstantSignature((self.type, self.data))
class NDArrayValue(Value, _tensor_py_operators):
class TensorValue(Value, _tensor_py_operators):
"""Subclass to add the tensor operators to the basic `Value` class.
To create a NDArrayValue, use the `value` function in this module.
To create a TensorValue, use the `value` function in this module.
"""
Tensor = NDArrayType
TensorResult = NDArrayResult
TensorConstant = NDArrayConstant
TensorValue = NDArrayValue
Tensor = TensorType
TensorVariable = TensorVariable
TensorConstant = TensorConstant
TensorValue = TensorValue
#QUESTION: why are we doing this!?
elemwise.as_ndarray_result = as_ndarray_result
elemwise.NDArrayType = NDArrayType
elemwise.NDArrayResult = NDArrayResult
elemwise.NDArrayConstant = NDArrayConstant
elemwise.NDArrayValue = NDArrayValue
elemwise.as_tensor_variable = as_tensor_variable
elemwise.TensorType = TensorType
elemwise.TensorVariable = TensorVariable
elemwise.TensorConstant = TensorConstant
elemwise.TensorValue = TensorValue
......@@ -751,7 +751,7 @@ def _scal_elemwise(symbol):
# Casting Operations
#########################
class NDArrayFromScalar(Op):
class TensorFromScalar(Op):
def make_node(self, s):
assert isinstance(s.type, scal.Scalar)
return Apply(self,
@@ -761,21 +761,21 @@ class NDArrayFromScalar(Op):
def perform(self, node, (s, ), (out, )):
out[0] = numpy.asarray(s)
def grad(self, (s,), (dt,)):
return [ScalarFromNDArray(dt)]
tensor_from_scalar = NDArrayFromScalar()
return [ScalarFromTensor(dt)]
tensor_from_scalar = TensorFromScalar()
class ScalarFromNDArray(Op):
class ScalarFromTensor(Op):
def make_node(self, t):
assert isinstance(t.type, NDArrayType)
assert isinstance(t.type, TensorType)
assert t.type.broadcastable == ()
return Apply(self,
[t],
[scal.Scalar(dtype = t.type.dtype).make_result()])
[scal.Scalar(dtype = t.type.dtype).make_variable()])
def perform(self, node, (s, ), (out, )):
out[0] = s.flatten()[0]
def grad(self, (s,), (dt,)):
return [NDArrayFromScalar(dt)]
scalar_from_tensor = ScalarFromNDArray()
return [TensorFromScalar(dt)]
scalar_from_tensor = ScalarFromTensor()
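In numpy terms, `TensorFromScalar.perform` wraps a scalar into a 0-d array and `ScalarFromTensor.perform` unwraps it again; a minimal sketch of that round trip (using the same numpy calls the `perform` methods use):

```python
import numpy as np

s = 1.5
t = np.asarray(s)        # what TensorFromScalar.perform does
assert t.ndim == 0

back = t.flatten()[0]    # what ScalarFromTensor.perform does
assert back == s
```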
@constructor
@@ -834,7 +834,7 @@ class Shape(Op):
@note: Non-differentiable.
"""
def make_node(self, x):
x = as_ndarray_result(x)
x = as_tensor_variable(x)
return Apply(self, [x], [lvector()])
def perform(self, node, (x, ), (out, )):
out[0] = numpy.asarray(x.shape, dtype = 'int64')
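`Shape.perform` simply materializes the runtime shape as an int64 vector, matching the `lvector()` output type declared in `make_node`; the numpy equivalent:

```python
import numpy as np

x = np.zeros((2, 3))
shp = np.asarray(x.shape, dtype='int64')  # what Shape.perform computes
assert shp.tolist() == [2, 3]
assert shp.dtype == np.int64
```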
@@ -854,10 +854,10 @@ class MaxAndArgmax(Op):
E_axis = 'invalid axis'
def make_node(self, x, axis=None):
x = _as_ndarray_result(x)
x = _as_tensor_variable(x)
if axis is None:
axis = x.type.ndim - 1
axis = _as_ndarray_result(axis)
axis = _as_tensor_variable(axis)
inputs = [x, axis]
broadcastable = [False] * (x.type.ndim - 1)
outputs = [tensor(x.type.dtype, broadcastable),
@@ -1002,7 +1002,7 @@ def invert(a):
def abs_(a):
"""|`a`|
NDArrayResult overloads the `NDArrayResult.__abs__` operator so that
TensorVariable overloads the `TensorVariable.__abs__` operator so that
this function is called when you type abs(a).
"""
@@ -1103,11 +1103,11 @@ class Filler(gof.Op):
self.value = value
self.ndim = ndim
self.dtype = dtype
self.type = NDArrayType(dtype = dtype,
self.type = TensorType(dtype = dtype,
broadcastable = (False,)*ndim)
def make_node(self, dims):
dims = as_ndarray_result(dims)
dims = as_tensor_variable(dims)
return gof.Apply(self, [dims], [self.type()])
def perform(self, node, (dims,), (out,)):
@@ -1192,10 +1192,10 @@ def mean(input, axis = None):
class Repeat(gof.Op):
def make_node(self, input, repeats, axis):
assert isinstance(input.type, NDArrayType)
assert isinstance(input.type, TensorType)
assert repeats.type == iscalar
assert axis.type == iscalar
type = NDArrayType(dtype = input.type.dtype,
type = TensorType(dtype = input.type.dtype,
broadcastable = [False if i==axis else x for i, x in enumerate(input.broadcastable)])
return gof.Apply(self, [input, repeats, axis], [type()])
@@ -1267,7 +1267,7 @@ class Subtensor(Op):
idxlist is a list whose elements are either integers, or slices. The
integers are indexes into the inputs array, and the start/stop/step members
of each slice are also integer indexes into the inputs array (or None). The
inputs array is the tensor x, followed by scalar integer results.
inputs array is the tensor x, followed by scalar integer variables.
@todo: add support for advanced tensor indexing (in Subtensor_dx too).
"""
@@ -1296,11 +1296,11 @@ class Subtensor(Op):
def convert(entry, slice_ok=True):
scal_types = [scal.int64, scal.int32, scal.int16, scal.int8]
tensor_types = [bscalar, iscalar, lscalar]
if isinstance(entry, gof.Result) and entry.type in scal_types:
if isinstance(entry, gof.Variable) and entry.type in scal_types:
return entry.type
elif isinstance(entry, gof.Type) and entry in scal_types:
return entry
if isinstance(entry, gof.Result) and entry.type in tensor_types:
if isinstance(entry, gof.Variable) and entry.type in tensor_types:
return scal.Scalar(entry.type.dtype)
elif isinstance(entry, gof.Type) and entry in tensor_types:
return scal.Scalar(entry.dtype)
@@ -1320,9 +1320,9 @@ class Subtensor(Op):
self.idx_list = map(self.convert, idx_list)
def make_node(self, x, *inputs):
x = as_ndarray_result(x)
x = as_tensor_variable(x)
def my_as_scalar(a):
if isinstance(a, gof.Result) and isinstance(a.type, NDArrayType):
if isinstance(a, gof.Variable) and isinstance(a.type, TensorType):
return scalar_from_tensor(a)
else:
return scal.as_scalar(a)
@@ -1424,7 +1424,7 @@ pprint.assign(lambda pstate, r: r.owner and isinstance(r.owner.op, Subtensor), S
class SetSubtensor(Op):
"""Set just some elements of a larger NDArrayType.
"""Set just some elements of a larger TensorType.
This is like numpy's
@@ -1461,7 +1461,7 @@ class SetSubtensor(Op):
self.__class__.__name__, ", ".join(indices))
def make_node(self, x, y, *inputs):
x, y = map(as_ndarray_result, [x, y])
x, y = map(as_tensor_variable, [x, y])
inputs = tuple(map(scal.as_scalar, inputs))
idx_list = list(self.idx_list)
@@ -1514,7 +1514,7 @@ def split(x, splits_size, n_splits, axis=0):
return the_split(x, axis, splits_size)
class Split(Op):
"""Partition a `NDArrayResult` along some axis.
"""Partition a `TensorVariable` along some axis.
.. python::
@@ -1550,9 +1550,9 @@ class Split(Op):
def make_node(self, x, axis, splits):
"""WRITEME"""
x = as_ndarray_result(x)
axis = as_ndarray_result(axis)
splits = as_ndarray_result(splits)
x = as_tensor_variable(x)
axis = as_tensor_variable(axis)
splits = as_tensor_variable(splits)
if splits.type not in int_vector_types:
raise TypeError('splits must have type tensor.lvector', splits.type)
@@ -1594,10 +1594,10 @@ class Split(Op):
class Join(Op):
"""
Concatenate two `NDArrayResult`s along some axis.
Concatenate two `TensorVariable`s along some axis.
These tensors must have the same shape along all dimensions other than this axis.
Of course, NDArrayResult instances don't have a shape, so this error can't be caught until
Of course, TensorVariable instances don't have a shape, so this error can't be caught until
runtime. See `perform()`.
For joins involving scalar values, see @stack.
@@ -1615,28 +1615,28 @@ class Join(Op):
def make_node(self, *axis_and_tensors):
"""
:param axis: an Int or integer-valued Result
:param axis: an Int or integer-valued Variable
:param tensors: a variable number (but not zero) of tensors to concatenate along the
specified axis. These tensors must have the same shape along all dimensions other than this axis.
:returns: a symbolic Result. It has the same ndim as the input tensors, and the most
:returns: a symbolic Variable. It has the same ndim as the input tensors, and the most
inclusive dtype.
"""
axis, tensors = axis_and_tensors[0], axis_and_tensors[1:]
if not tensors:
raise ValueError('Cannot join an empty list of tensors')
as_ndarray_result_args= [as_ndarray_result(x) for x in tensors]
dtypes = [x.type.dtype for x in as_ndarray_result_args]
as_tensor_variable_args= [as_tensor_variable(x) for x in tensors]
dtypes = [x.type.dtype for x in as_tensor_variable_args]
out_dtype = scal.upcast(*dtypes)
if not all(targs.type.ndim for targs in as_ndarray_result_args):
if not all(targs.type.ndim for targs in as_tensor_variable_args):
raise TypeError('Join cannot handle arguments of dimension 0. For joining scalar values, see @stack');
# When the axis may vary, no dimension can be guaranteed to be
# broadcastable.
bcastable = [False] * len(as_ndarray_result_args[0].type.broadcastable)
bcastable = [False] * len(as_tensor_variable_args[0].type.broadcastable)
# When the axis is fixed, the broadcastable dimensions remain, except
# for the axis dimension.
@@ -1644,17 +1644,17 @@ class Join(Op):
# dimensions.
if isinstance(axis, int):
bcasts = [x.type.broadcastable[0:axis] + \
x.type.broadcastable[axis + 1:] for x in as_ndarray_result_args]
x.type.broadcastable[axis + 1:] for x in as_tensor_variable_args]
if not all([bcasts[0] == bc for bc in bcasts[1:]]):
raise ValueError('Dimensions other than the given axis must'
' match', tensors)
bcastable[:] = as_ndarray_result_args[0].type.broadcastable
bcastable[:] = as_tensor_variable_args[0].type.broadcastable
try:
bcastable[axis] = False
except IndexError, e:
raise ValueError('Join argument "axis" is out of range (given input dimensions)')
inputs = [as_ndarray_result(axis)] + as_ndarray_result_args
inputs = [as_tensor_variable(axis)] + as_tensor_variable_args
if inputs[0].type not in int_types:
raise TypeError('Axis could not be cast to an integer type', axis, inputs[0].type, int_types)
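The fixed-axis branch above can be condensed into a small pure-Python helper (hypothetical name, not part of the module) that computes the output broadcastable pattern from the inputs' patterns:

```python
def join_broadcastable(axis, patterns):
    # Mirrors Join.make_node for an integer axis: all dimensions other than
    # the join axis must agree across inputs; the join axis itself can no
    # longer be assumed broadcastable.
    others = [p[:axis] + p[axis + 1:] for p in patterns]
    if any(o != others[0] for o in others[1:]):
        raise ValueError('Dimensions other than the given axis must match')
    out = list(patterns[0])
    out[axis] = False
    return tuple(out)
```

For example, joining two vectors along axis 0 keeps the off-axis flags of the first input and clears the joined dimension.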
@@ -1695,7 +1695,7 @@ class Join(Op):
for k in range(len(sizes_along_axis))]
def vec_length(self, node):
"""Guess the length of a Join Result"""
"""Guess the length of a Join Variable"""
assert isinstance(node.owner.op, Join)
if node.ndim != 1:
raise TypeError('argument must be symbolic vector')
@@ -1710,7 +1710,7 @@ class Join(Op):
@_redefine_asRoutine(Join())
def join(axis, *tensors):
"""
Convenience function to concatenate `NDArrayType`s along the given axis.
Convenience function to concatenate `TensorType`s along the given axis.
:Parameters:
- `tensors` : list of tensors (or list-like)
@@ -1738,7 +1738,7 @@ def shape_padleft(t, n_ones=1):
See also: `shape_padright` and `Dimshuffle`
"""
_t = as_ndarray_result(t)
_t = as_tensor_variable(t)
pattern = ['x']*n_ones + [i for i in range(_t.type.ndim)]
return DimShuffle(_t.broadcastable, pattern)(_t)
@@ -1749,7 +1749,7 @@ def shape_padright(t, n_ones=1):
See also: `shape_padleft` and `Dimshuffle`
"""
_t = as_ndarray_result(t)
_t = as_tensor_variable(t)
pattern = [i for i in range(_t.type.ndim)] + ['x']*n_ones
return DimShuffle(_t.broadcastable, pattern)(_t)
@@ -1786,7 +1786,7 @@ def get_vector_length(v):
"""Return the run-time length of a symbolic vector.
:Parameters:
- `v` : A rank-1 NDArrayType result.
- `v` : A rank-1 TensorType variable.
:Exceptions:
- `TypeError` : `v` hasn't the proper type.
@@ -1797,7 +1797,7 @@ def get_vector_length(v):
cases.
"""
v = as_ndarray_result(v)
v = as_tensor_variable(v)
if v.ndim != 1:
raise TypeError('argument must be symbolic vector')
if isinstance(v, gof.Constant) and v.type.ndim == 1:
@@ -1814,9 +1814,9 @@ def get_vector_length(v):
@constructor
def horizontal_stack(*args):
"""
Horizontally stack two L{NDArrayType}s.
Stack two L{NDArrayType}s along the second axis (column wise). These
L{NDArrayType}s must have the same shape along all dimensions but the
Horizontally stack two L{TensorType}s.
Stack two L{TensorType}s along the second axis (column wise). These
L{TensorType}s must have the same shape along all dimensions but the
second.
"""
assert len(args) >= 2
@@ -1832,17 +1832,17 @@ def vertical_stack(*args):
if 0: #vertical and horizontal stacking are deprecated. Better to use stack() and join().
class VerticalStack(Op):
"""
Vertically stack two L{NDArrayType}s.
Stack two L{NDArrayType}s along the first axis (row wise). These
L{NDArrayType}s must have the same shape along all dimensions but the
Vertically stack two L{TensorType}s.
Stack two L{TensorType}s along the first axis (row wise). These
L{TensorType}s must have the same shape along all dimensions but the
first.
@attention: Because we use vstack as the implementation, if the
inputs have 1-dimension, the output will have 2-dimensions.
"""
def make_node(self, x, y):
x = as_ndarray_result(x)
y = as_ndarray_result(y)
x = as_tensor_variable(x)
y = as_tensor_variable(y)
assert x.type.dtype == y.type.dtype
if x.type.broadcastable[1:] != y.type.broadcastable[1:]:
raise NotImplementedError
@@ -1879,9 +1879,9 @@ class MakeVector(Op):
def __init__(self, stype):
self.stype = stype
def make_node(self, *inputs):
inputs = map(as_ndarray_result, inputs)
inputs = map(as_tensor_variable, inputs)
assert all(a.type == self.stype for a in inputs)
return Apply(self, inputs, [NDArrayType(broadcastable = (False,),
return Apply(self, inputs, [TensorType(broadcastable = (False,),
dtype = self.stype.dtype)()])
def perform(self, node, inputs, (out,)):
out[0] = numpy.asarray(inputs)
@@ -1917,8 +1917,8 @@ class Reshape(Op):
def __hash__(self):
return hash(Reshape) ^ hash(self.ndim)
def make_node(self, x, shp):
x = as_ndarray_result(x)
shp = as_ndarray_result(shp)
x = as_tensor_variable(x)
shp = as_tensor_variable(shp)
return gof.Apply(self, [x, shp], [tensor(x.type.dtype, [False]*self.ndim)])
def perform(self, node, (x, shp), (out,)):
if (len(shp) != self.ndim):
@@ -1951,7 +1951,7 @@ class Flatten(Op):
def __hash__(self):
return hash(type(self))^hash(self.outdim)
def make_node(self, x):
t_x = as_ndarray_result(x)
t_x = as_tensor_variable(x)
if self.outdim < 1 or (x.ndim and self.outdim > x.ndim):
raise ValueError('invalid output ndimensions(%i) for tensor of rank %i' %(self.outdim, t_x.ndim))
return gof.Apply(self, [t_x], [tensor(x.type.dtype, (False,)*self.outdim)])
@@ -1997,8 +1997,8 @@ class Tile(Op):
return hash(Tile) ^ hash(self.ndim)
def make_node(self, x, reps):
x = as_ndarray_result(x)
reps = as_ndarray_result(reps)
x = as_tensor_variable(x)
reps = as_tensor_variable(reps)
return gof.Apply(self, [x, reps], [tensor(x.type.dtype, [False,] * self.ndim)])
def perform(self, node, (x, reps), (out,)):
out[0] = numpy.tile(x, reps)
@@ -2030,7 +2030,7 @@ class Dot(Op):
"""
def make_node(self, *inputs):
inputs = map(as_ndarray_result, inputs)
inputs = map(as_tensor_variable, inputs)
numpy_semantics = 0
if numpy_semantics:
@@ -2104,9 +2104,9 @@ class TensorDotGrad(Op):
return hash(type(self)) ^ hash(self.axes) ^ 89234
def make_node(self, x, y, gz):
assert isinstance(x, Result)
assert isinstance(y, Result)
assert isinstance(gz, Result)
assert isinstance(x, Variable)
assert isinstance(y, Variable)
assert isinstance(gz, Variable)
gx = x.type()
gy = y.type()
return Apply(self, [x,y,gz], [gx, gy])
@@ -2151,7 +2151,7 @@ class TensorDot(Op):
def make_node(self, x, y):
axesdim = numpy.size(self.axes)/2
x, y = map(as_ndarray_result, [x, y])
x, y = map(as_tensor_variable, [x, y])
if axesdim > x.type.ndim or axesdim > y.type.ndim:
raise TypeError('Cannot sum over more dimensions than input. %i > %i,%i' %
@@ -2182,7 +2182,7 @@ class Outer(Op):
""" Compute vector-vector outer product
"""
def make_node(self, *inputs):
inputs = map(as_ndarray_result, inputs)
inputs = map(as_tensor_variable, inputs)
x, y = inputs
nx = x.type.ndim
@@ -2211,23 +2211,23 @@ outer = Outer()
def grad(cost, wrt, g_cost=None, consider_constant=[]):
"""
@type cost: L{Result}
@type wrt: L{Result} or list of L{Result}s.
@type g_cost: L{Result} broadcastable to size of I{cost}, or None
@type cost: L{Variable}
@type wrt: L{Variable} or list of L{Variable}s.
@type g_cost: L{Variable} broadcastable to size of I{cost}, or None
@param g_cost: an expression for the gradient through cost. The default is
{{{ones_like(cost)}}}
@param consider_constant: a list of expressions not to backpropagate through
@rtype: L{Result} or list of L{Result}s (depending upon I{wrt})
@rtype: L{Variable} or list of L{Variable}s (depending upon I{wrt})
@return: symbolic expression of gradient of I{cost} with respect to I{wrt}.
If I{wrt} is a list, then return a list containing the gradient of I{cost} wrt
each element of the list. If an element of I{wrt} is not differentiable
with respect to the output, then a L{NDArrayConstant} with an appropriate
with respect to the output, then a L{TensorConstant} with an appropriate
kind of zero is returned.
"""
if not isinstance(cost, NDArrayResult):
raise TypeError('In tensor.grad(), cost argument should be a NDArrayResult.', cost)
if not isinstance(cost, TensorVariable):
raise TypeError('In tensor.grad(), cost argument should be a TensorVariable.', cost)
if g_cost is None:
g_cost = ones_like(cost)
@@ -2235,8 +2235,8 @@ def grad(cost, wrt, g_cost=None, consider_constant=[]):
gmap = gradient.grad_sources_inputs([(cost, g_cost)], inputs + consider_constant)
def zero(p):
return NDArrayConstant(
NDArrayType(dtype = p.type.dtype, broadcastable = []),
return TensorConstant(
TensorType(dtype = p.type.dtype, broadcastable = []),
numpy.asarray(0, dtype=p.type.dtype))
#try:
@@ -2365,7 +2365,7 @@ def verify_grad(testcase, op, pt, n_tests=1, rng=numpy.random, eps=1.0e-7, tol=0
o_fn = function(tensor_pt, o_output)
o_fn_out = o_fn(*[p.copy() for p in pt])
random_projection = rng.rand(*o_fn_out.shape)
t_r = as_ndarray_result(random_projection)
t_r = as_tensor_variable(random_projection)
#random projection of o onto t_r
cost = sum(t_r * o_output) #This sum() is defined above, it's not the builtin sum.
@@ -2373,7 +2373,7 @@ def verify_grad(testcase, op, pt, n_tests=1, rng=numpy.random, eps=1.0e-7, tol=0
num_grad = numeric_grad(cost_fn, [p.copy() for p in pt], eps)
symbolic_grad = grad(cost, tensor_pt,as_ndarray_result(1.0,name='g_cost'))
symbolic_grad = grad(cost, tensor_pt,as_tensor_variable(1.0,name='g_cost'))
grad_fn = function(tensor_pt, symbolic_grad)
@@ -262,9 +262,9 @@ class Gemm(GemmRelated):
The difference between the two is that the top form is destructive on z,
whereas the bottom form is not. Gemm works in-place on the storage
associated with z, and the L{Result} returned by Gemm has a storage that
associated with z, and the L{Variable} returned by Gemm has a storage that
will be aliased to the storage of the z argument. Because of this in-place
computation, an L{Apply} of this op will destroy the L{Result} z on
computation, an L{Apply} of this op will destroy the L{Variable} z on
which it operates. (See L{DestructiveOps} for an explanation of what
destroying means in the context of theano graphs. See L{BlasLapackSupport} for
more optimized linear algebra operations.)
@@ -275,7 +275,7 @@ class Gemm(GemmRelated):
E_z_uniq = 'argument z aliased to x or y'
destroy_map = {0: [0]}
def make_node(self, *inputs):
inputs = map(T.as_ndarray_result, inputs)
inputs = map(T.as_tensor_variable, inputs)
if len(inputs) != 5:
raise TypeError("Wrong number of inputs for %s (expected 5, got %s)" % (self, len(inputs)))
z, a, x, y, b = inputs
@@ -475,7 +475,7 @@ class GemmLocalOptimizer(LocalOptimizer):
@staticmethod
def _as_scalar(res):
"""Return None or a NDArrayResult whose type is in T.float_scalar_types"""
"""Return None or a TensorVariable whose type is in T.float_scalar_types"""
if res.owner and isinstance(res.owner.op, T.DimShuffle):
return GemmLocalOptimizer._as_scalar(res.owner.inputs[0])
elif res.type in T.float_scalar_types:
@@ -30,14 +30,14 @@ class KitComponent(Component):
Containers.
"""
for input in self.kit.sinputs:
r = input.result
r = input.variable
if r not in memo:
input = copy(input)
input.value = Container(r, storage = [None])
memo[r] = input
def build(self, mode, memo):
return [memo[i.result].value for i in self.kit.sinputs]
return [memo[i.variable].value for i in self.kit.sinputs]
class RandomKit(SymbolicInputKit):
@@ -47,9 +47,9 @@ class RandomKit(SymbolicInputKit):
self.value = value
def gen(self, op, *args, **kwargs):
random_state_result = raw_random.random_state_type()
new_r, out = op(random_state_result, *args, **kwargs)
self.add_input(SymbolicInput(random_state_result, update = new_r))
random_state_variable = raw_random.random_state_type()
new_r, out = op(random_state_variable, *args, **kwargs)
self.add_input(SymbolicInput(random_state_variable, update = new_r))
out.rng = new_r
out.auto = self
return out
@@ -13,18 +13,18 @@ from copy import copy, deepcopy
# tensor depends on elemwise to provide definitions for several ops
# but elemwise needs to make NDArrayType instances, so we have these as
# but elemwise needs to make TensorType instances, so we have these as
# placeholders and the tensor module fills them
def as_ndarray_result(data):
def as_tensor_variable(data):
raise Exception("Circular dependencies prevent using this here. import tensor before elemwise")
def NDArrayType(*inputs, **kwargs):
def TensorType(*inputs, **kwargs):
raise Exception("Circular dependencies prevent using this here. import tensor before elemwise")
def NDArrayResult(*inputs, **kwargs):
def TensorVariable(*inputs, **kwargs):
raise Exception("Circular dependencies prevent using this here. import tensor before elemwise")
def NDArrayConstant(*inputs, **kwargs):
def TensorConstant(*inputs, **kwargs):
raise Exception("Circular dependencies prevent using this here. import tensor before elemwise")
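These stubs are a standard circular-import workaround; a self-contained sketch of the pattern (names illustrative), where the real module overwrites the placeholder at import time:

```python
def as_tensor_variable(data):
    # Placeholder: calling it before the tensor module is imported is an
    # explicit, descriptive error rather than a NameError.
    raise Exception("Circular dependencies prevent using this here. "
                    "import tensor before elemwise")

def _tensor_module_version(data):
    # Stand-in for the real implementation the tensor module would install.
    return ('TensorVariable', data)

# What `import tensor` effectively does to the elemwise namespace:
as_tensor_variable = _tensor_module_version
```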
@@ -137,8 +137,8 @@ class DimShuffle(Op):
else:
ob.append(ib[value])
output = NDArrayType(dtype = input.type.dtype,
broadcastable = ob).make_result()
output = TensorType(dtype = input.type.dtype,
broadcastable = ob).make_variable()
return Apply(self, [input], [output])
def __eq__(self, other):
@@ -256,7 +256,7 @@ class DimShuffle(Op):
return full_code % dict(locals(), **sub)
def grad(self, (x, ), (gz, )):
gz = as_ndarray_result(gz)
gz = as_tensor_variable(gz)
grad_order = ['x'] * len(x.type.broadcastable)
for i, v in enumerate(self.new_order):
if v != 'x':
@@ -365,7 +365,7 @@ class Elemwise(Op):
using DimShuffle.
"""
inputs = map(as_ndarray_result, inputs)
inputs = map(as_tensor_variable, inputs)
shadow = self.scalar_op.make_node(*[Scalar(dtype = t.type.dtype)() for t in inputs])
target_length = max([input.type.ndim for input in inputs])
@@ -403,7 +403,7 @@ class Elemwise(Op):
if any(inputs[i].type.dtype != out_dtypes[o] for o, i in inplace_pattern.items()):
raise TypeError("Cannot do an inplace operation on incompatible data types.",
([i.type.dtype for i in inputs], out_dtypes, inplace_pattern))
outputs = [NDArrayType(dtype = dtype, broadcastable = broadcastable)() for dtype, broadcastable in zip(out_dtypes, out_broadcastables)]
outputs = [TensorType(dtype = dtype, broadcastable = broadcastable)() for dtype, broadcastable in zip(out_dtypes, out_broadcastables)]
return Apply(self, inputs, outputs)
def __eq__(self, other):
@@ -431,7 +431,7 @@ class Elemwise(Op):
return self.name
def grad(self, inputs, ograds):
ograds = map(as_ndarray_result, ograds) # this shouldn't be necessary...
ograds = map(as_tensor_variable, ograds) # this shouldn't be necessary...
scalar_inputs = [Scalar(dtype = t.type.dtype)() for t in inputs]
scalar_ograds = [Scalar(dtype = ograd.type.dtype)() for ograd in ograds]
scalar_igrads = self.scalar_op.grad(scalar_inputs, scalar_ograds)
@@ -445,8 +445,8 @@ class Elemwise(Op):
node = r.owner
if node is None:
# the gradient contains a constant, translate it as
# an equivalent NDArrayType of size 1 and proper number of dimensions
res = NDArrayConstant(NDArrayType(dtype = r.type.dtype,
# an equivalent TensorType of size 1 and proper number of dimensions
res = TensorConstant(TensorType(dtype = r.type.dtype,
broadcastable = ()),
numpy.asarray(r.data)) # .reshape(b)
return DimShuffle((), ['x']*nd, inplace = True)(res)
@@ -520,18 +520,18 @@ class Elemwise(Op):
ufunc = self.ufunc or numpy.frompyfunc(self.scalar_op.impl, len(inputs), self.scalar_op.nout)
try:
results = ufunc(*ufunc_args)
variables = ufunc(*ufunc_args)
except Exception, e:
errormsg = 'Failed calling ufunc for op', self.scalar_op,\
'for params of shape', [arg.shape for arg in ufunc_args]
e.args = e.args + errormsg
raise e
if ufunc.nout == 1: results = [results]
for result, storage in zip(results, output_storage):
if ufunc.nout == 1: variables = [variables]
for variable, storage in zip(variables, output_storage):
if storage[0].shape:
storage[0][:] = result
storage[0][:] = variable
else:
storage[0].itemset(result)
storage[0].itemset(variable)
# the following should be used instead of the previous loop, unfortunately it tends to segfault
# self.ufunc(*(ufunc_args+[s[0] for s in output_storage]))
@@ -640,7 +640,7 @@ class CAReduce(Op):
Reduces a scalar operation along the specified axis(es).
The output will have the same shape as the input minus the reduced
dimensions. It will contain the result of accumulating all values
over the reduced dimensions using the specified scalar op.
Examples:
@@ -652,7 +652,7 @@ class CAReduce(Op):
In order to (eventually) optimize memory usage patterns,
L{CAReduce} makes zero guarantees on the order in which it
iterates over the dimensions and the elements of the
array(s). Therefore, to ensure consistent results, the scalar
operation represented by the reduction must be both commutative
and associative (eg add, multiply, binary or/and/xor - but not
subtract, divide or power).
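The commutativity/associativity requirement is easy to demonstrate: with an unspecified iteration order, only such operations yield one well-defined reduction. A small sketch:

```python
from functools import reduce

add = lambda a, b: a + b   # commutative and associative: safe for CAReduce
sub = lambda a, b: a - b   # neither: the answer depends on iteration order

xs = [1, 2, 3, 4]
forward_sub = reduce(sub, xs)                   # ((1-2)-3)-4 = -8
backward_sub = reduce(sub, list(reversed(xs)))  # ((4-3)-2)-1 = -2
```

Addition gives 10 regardless of order, while subtraction gives different answers, which is exactly why CAReduce excludes it.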
@@ -678,11 +678,11 @@ class CAReduce(Op):
self.ufunc = numpy.frompyfunc(scalar_op.impl, 2, 1)
def make_node(self, input):
input = as_ndarray_result(input)
input = as_tensor_variable(input)
axis = self.axis
if axis is None:
axis = range(len(input.type.broadcastable))
output = NDArrayType(dtype = input.type.dtype,
output = TensorType(dtype = input.type.dtype,
broadcastable = [x for i, x in enumerate(input.type.broadcastable) if i not in axis])()
return Apply(self, [input], [output])
@@ -714,14 +714,14 @@ class CAReduce(Op):
axis = self.axis
if axis is None:
axis = range(input.ndim)
result = input
variable = input
to_reduce = reversed(sorted(axis))
if to_reduce:
for dimension in to_reduce:
result = self.ufunc.reduce(result, dimension)
output[0] = numpy.asarray(result, dtype = node.outputs[0].type.dtype)
variable = self.ufunc.reduce(variable, dimension)
output[0] = numpy.asarray(variable, dtype = node.outputs[0].type.dtype)
else:
output[0] = numpy.copy(result)
output[0] = numpy.copy(variable)
def _c_all(self, node, name, inames, onames, sub):
@@ -809,7 +809,7 @@ class Sum(CAReduce):
CAReduce.__init__(self, scalar.add, axis)
def grad(self, (x, ), (gz, )):
gz = as_ndarray_result(gz)
gz = as_tensor_variable(gz)
axis = self.axis
if axis is None:
axis = range(x.type.ndim)
......
@@ -94,8 +94,8 @@ class SoftmaxWithBias(gof.Op):
gof.Op.__init__(self, **kwargs)
def make_node(self, x, b):
x = tensor.as_ndarray_result(x)
b = tensor.as_ndarray_result(b)
x = tensor.as_tensor_variable(x)
b = tensor.as_tensor_variable(b)
if x.type.ndim != 2 \
or x.type.dtype not in ['float32', 'float64']:
raise ValueError('x must be 2-d tensor of floats')
@@ -103,7 +103,7 @@ class SoftmaxWithBias(gof.Op):
or x.type.dtype not in ['float32', 'float64']:
raise ValueError('b must be 1-d tensor of floats')
sm = x.type.make_result()
sm = x.type.make_variable()
return gof.Apply(self, [x, b], [sm])
def perform(self, node, input_storage, output_storage):
@@ -263,9 +263,9 @@ class SoftmaxWithBiasDx(gof.Op):
gof.Op.__init__(self, **kwargs)
def make_node(self, dy, sm, **kwargs):
dy = tensor.as_ndarray_result(dy)
sm = tensor.as_ndarray_result(sm)
return gof.Apply(self, [dy, sm], [sm.type.make_result()])
dy = tensor.as_tensor_variable(dy)
sm = tensor.as_tensor_variable(sm)
return gof.Apply(self, [dy, sm], [sm.type.make_variable()])
def perform(self, node, input_storage, output_storage):
dy, sm = input_storage
@@ -368,9 +368,9 @@ class CrossentropySoftmaxArgmax1HotWithBias(gof.Op):
gof.Op.__init__(self, **kwargs)
def make_node(self, x, b, y_idx):
x = tensor.as_ndarray_result(x)
b = tensor.as_ndarray_result(b)
y_idx = tensor.as_ndarray_result(y_idx)
x = tensor.as_tensor_variable(x)
b = tensor.as_tensor_variable(b)
y_idx = tensor.as_tensor_variable(y_idx)
if x.type.ndim != 2 \
or x.type.dtype not in ['float32', 'float64']:
raise ValueError('x must be 2-d tensor of floats')
@@ -382,11 +382,11 @@ class CrossentropySoftmaxArgmax1HotWithBias(gof.Op):
raise ValueError('y_idx must be 1-d tensor of ints')
# TODO: Is this correct? It used to be y, not y_idx
nll = tensor.NDArrayType(x.type.dtype,
y_idx.type.broadcastable).make_result()
# nll = NDArrayType(x.dtype, y.broadcastable)
sm = x.type.make_result()
am = y_idx.type.make_result()
nll = tensor.TensorType(x.type.dtype,
y_idx.type.broadcastable).make_variable()
# nll = TensorType(x.dtype, y.broadcastable)
sm = x.type.make_variable()
am = y_idx.type.make_variable()
return gof.Apply(self, [x, b, y_idx], [nll, sm, am])
def perform(self, node, input_storage, output_storage):
"""
@@ -532,10 +532,10 @@ class CrossentropySoftmax1HotWithBiasDx (gof.Op):
def __init__(self, **kwargs):
gof.Op.__init__(self,**kwargs)
def make_node(self, dy, sm, y_idx,**kwargs):
dy = tensor.as_ndarray_result(dy)
sm = tensor.as_ndarray_result(sm)
y_idx = tensor.as_ndarray_result(y_idx)
return gof.Apply(self, [dy, sm, y_idx],[sm.type.make_result()])
dy = tensor.as_tensor_variable(dy)
sm = tensor.as_tensor_variable(sm)
y_idx = tensor.as_tensor_variable(y_idx)
return gof.Apply(self, [dy, sm, y_idx],[sm.type.make_variable()])
def perform(self, node, input_storage, output_storage):
dy,sm,y_idx = input_storage
dx = numpy.zeros_like(sm)
@@ -670,10 +670,10 @@ class Prepend_scalar_constant_to_each_row(gof.Op):
def make_node(self, mat):
#check type of input
if not isinstance(mat,gof.Result) or not mat.type==tensor.matrix().type:
if not isinstance(mat,gof.Variable) or not mat.type==tensor.matrix().type:
raise TypeError("Expected a matrix as input")
x = tensor.as_ndarray_result(mat)
y = tensor.as_ndarray_result(self.val)
x = tensor.as_tensor_variable(mat)
y = tensor.as_tensor_variable(self.val)
if x.type.dtype != y.type.dtype:
raise TypeError("the value to prepend doesn't have the same type as the matrix")
@@ -704,10 +704,10 @@ class Prepend_scalar_to_each_row(gof.Op):
#check type of input
if isinstance(val, float):
val = scalar.constant(val)
if not isinstance(mat,gof.Result) or not mat.type==tensor.matrix().type:
if not isinstance(mat,gof.Variable) or not mat.type==tensor.matrix().type:
raise TypeError("Expected a matrix as input")
x = tensor.as_ndarray_result(mat)
y = tensor.as_ndarray_result(val)
x = tensor.as_tensor_variable(mat)
y = tensor.as_tensor_variable(val)
if x.type.dtype != y.type.dtype:
raise TypeError("the value to prepend doesn't have the same type as the matrix")
@@ -744,9 +744,9 @@ class solve(gof.Op):
"""
def make_node(self, A, b):
if not isinstance(A, gof.Result) or not A.type==tensor.matrix().type:
if not isinstance(A, gof.Variable) or not A.type==tensor.matrix().type:
raise TypeError("We expected that A had a matrix type")
if not isinstance(B, gof.Result) or not B.type==tensor.matrix().type:
if not isinstance(b, gof.Variable) or not b.type==tensor.matrix().type:
raise TypeError("We expected that b had a matrix type")
node = gof.Apply(op=self, inputs=[A, b], outputs=[tensor.matrix()])
@@ -397,7 +397,7 @@ class Canonizer(gof.LocalOptimizer):
the value is such that value = main(). In that case,
the return value should be an empty list.
The result is a local_optimizer. It is best used with a TopoOptimizer in
in_to_out order.
Examples:
@@ -534,7 +534,7 @@ class Canonizer(gof.LocalOptimizer):
ln, ld = len(num), len(denum)
if not ln and not ld:
return T.as_ndarray_result(self.calculate([], []))
return T.as_tensor_variable(self.calculate([], []))
if not ln:
if self.use_reciprocal:
return self.reciprocal(self.merge_num_denum(denum, []))
@@ -542,10 +542,10 @@ class Canonizer(gof.LocalOptimizer):
ln = [self.calculate([], [], aslist = False)]
if not ld:
if ln == 1:
if isinstance(num[0], gof.Result):
if isinstance(num[0], gof.Variable):
return num[0]
else:
return T.as_ndarray_result(num[0])
return T.as_tensor_variable(num[0])
else:
return self.main(*num)
return self.inverse(self.merge_num_denum(num, []),
@@ -556,7 +556,7 @@ class Canonizer(gof.LocalOptimizer):
"""
Returns a numeric constant if v is a gof.Constant or, well, a
numeric constant. If v is a plain Result, returns None.
numeric constant. If v is a plain Variable, returns None.
"""
if isinstance(v, N.generic):
@@ -591,7 +591,7 @@ class Canonizer(gof.LocalOptimizer):
def simplify_factors(self, num, denum):
"""
For any Result r which is both in num and denum, removes it
For any Variable r which is both in num and denum, removes it
from both lists. Modifies the lists inplace. Returns the
modified lists. For example:
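The example itself is elided by the diff; based on the description above, a sketch (a hypothetical standalone version of the method) of the cancellation:

```python
def simplify_factors(num, denum):
    # Cancels any value present in both the numerator and denominator lists,
    # modifying both in place, e.g. x*y / y*z -> x / z.
    for v in list(num):
        if v in denum:
            num.remove(v)
            denum.remove(v)
    return num, denum
```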
@@ -844,7 +844,7 @@ def local_mul_specialize(node):
if len(new_inputs) < len(node.inputs):
if len(new_inputs) == 0:
newval = -y.flatten()[0] if neg else y.flatten()[0]
return [T.NDArrayConstant(T.NDArrayType(dtype=node.outputs[0].type.dtype,
return [T.TensorConstant(T.TensorType(dtype=node.outputs[0].type.dtype,
broadcastable = [True] * node.outputs[0].ndim), N.asarray(newval))]
if len(new_inputs) == 1:
@@ -1190,11 +1190,11 @@ register_canonicalize(local_transposed_dot, name='local_transposed_dot')
# # aaaaaaaaaaaaaaa
# # i, o = [], []
# # for output in node.outputs:
# # results = grab_down(output, out)
# # # if results is None:
# # variables = grab_down(output, out)
# # # if variables is None:
# # # return [input], []
# # i += results[0]
# # o += results[1]
# # i += variables[0]
# # o += variables[1]
# # return i, o
@@ -39,7 +39,7 @@ class RandomStreamsInstance(object):
"""
seed = self.default_seed if seed is None else seed
seedgen = numpy.random.RandomState(seed)
for old_r, new_r in self.random_streams.random_state_results:
for old_r, new_r in self.random_streams.random_state_variables:
old_r_seed = seedgen.randint(2**30)
old_r_container = self.memo[old_r].value
if old_r_container.value is None:
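The loop above derives one sub-seed per registered stream from a single master seed, so runs are reproducible; sketched with the standard library (function name illustrative):

```python
import random

def stream_seeds(master_seed, n_streams):
    # One generator seeded from the master seed hands out an independent,
    # reproducible sub-seed for every random stream.
    seedgen = random.Random(master_seed)
    return [seedgen.randrange(2 ** 30) for _ in range(n_streams)]
```

The same master seed always yields the same sub-seed sequence, which is what makes reseeding all streams deterministic.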
@@ -52,12 +52,12 @@ class RandomStreamsInstance(object):
def __getitem__(self, item):
"""Retrieve the numpy RandomState instance associated with a particular stream
:param item: a result of type RandomStateType, associated with this RandomStream
:param item: a variable of type RandomStateType, associated with this RandomStream
:rtype: numpy RandomState (or None, before initialize)
"""
for old_r, new_r in self.random_streams.random_state_results:
for old_r, new_r in self.random_streams.random_state_variables:
if item is old_r:
container = self.memo[item].value
return container.value
@@ -66,7 +66,7 @@ class RandomStreamsInstance(object):
def __setitem__(self, item, val):
"""Set the numpy RandomState instance associated with a particular stream
:param item: a result of type RandomStateType, associated with this RandomStream
:param item: a variable of type RandomStateType, associated with this RandomStream
:param val: the new value
:type val: numpy RandomState
@@ -76,7 +76,7 @@ class RandomStreamsInstance(object):
"""
if type(val) is not numpy.random.RandomState:
raise TypeError('only values of type RandomState are permitted', val)
for old_r, new_r in self.random_streams.random_state_results:
for old_r, new_r in self.random_streams.random_state_variables:
if item is old_r:
container = self.memo[item].value
container.value = val
@@ -86,7 +86,7 @@ class RandomStreams(Component):
class RandomStreams(Component):
"""Module component with similar interface to numpy.random (numpy.random.RandomState)"""
random_state_results = []
random_state_variables = []
"""A list of pairs of the form (input_r, output_r). This will be over-ridden by the module
instance to contain stream generators.
"""
@@ -103,12 +103,12 @@ class RandomStreams(Component):
`RandomStreamsInstance.__init__` for more details.
"""
super(RandomStreams, self).__init__()
self.random_state_results = []
self.random_state_variables = []
self.default_instance_seed = seed
def allocate(self, memo):
"""override `Component.allocate` """
for old_r, new_r in self.random_state_results:
for old_r, new_r in self.random_state_variables:
assert old_r not in memo
memo[old_r] = In(old_r,
value=Container(old_r, storage=[None]),
@@ -129,14 +129,14 @@ class RandomStreams(Component):
:param kwargs: interpreted by `op`
:returns: The symbolic random draw part of op()'s return value. This function stores
the updated RandomStateType Result for use at `build` time.
the updated RandomStateType Variable for use at `build` time.
:rtype: NDArrayResult
:rtype: TensorVariable
"""
random_state_result = raw_random.random_state_type()
new_r, out = op(random_state_result, *args, **kwargs)
out.rng = random_state_result
self.random_state_results.append((random_state_result, new_r))
random_state_variable = raw_random.random_state_type()
new_r, out = op(random_state_variable, *args, **kwargs)
out.rng = random_state_variable
self.random_state_variables.append((random_state_variable, new_r))
return out
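Each distribution helper (binomial and friends) funnels through this generic method; at runtime a draw behaves like the matching `numpy.random.RandomState` call, which is also why reseeding reproduces it:

```python
import numpy

rng = numpy.random.RandomState(1231)
draw1 = rng.binomial(n=1, p=0.5, size=(4,))

rng = numpy.random.RandomState(1231)   # reset to the same state
draw2 = rng.binomial(n=1, p=0.5, size=(4,))
```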
def binomial(self, *args, **kwargs):
@@ -87,27 +87,27 @@ class RandomFunction(gof.Op):
fn, outtype, args, kwargs = state
self.fn = getattr(numpy.random.RandomState, fn) if isinstance(fn, str) else fn
self.outtype = outtype
self.args = tuple(tensor.as_ndarray_result(arg) for arg in args)
self.args = tuple(tensor.as_tensor_variable(arg) for arg in args)
self.inplace = kwargs.pop('inplace', False)
if self.inplace:
self.destroy_map = {0: [0]}
def make_node(self, r, shape, *args):
"""
:param r: a numpy.RandomState instance, or a Result of Type RandomStateType that will
:param r: a numpy.RandomState instance, or a Variable of Type RandomStateType that will
contain a RandomState instance.
:param shape: an lvector with the shape of the tensor output by this Op. At runtime,
the value associated with this lvector must have a length that matches the number of
dimensions promised by `self.outtype`.
:param args: the values associated with these results will be passed to the RandomState
:param args: the values associated with these variables will be passed to the RandomState
function during perform as extra "*args"-style arguments. These should be castable to
results of Type NDArrayType.
variables of Type TensorType.
:rtype: Apply
:return: Apply with two outputs. The first output is a gof.generic Result from which
:return: Apply with two outputs. The first output is a gof.generic Variable from which
to draw further random numbers. The second output is the outtype() instance holding
the random draw.
@@ -115,7 +115,7 @@ class RandomFunction(gof.Op):
if shape == () or shape == []:
shape = tensor.lvector()
else:
shape = tensor.as_ndarray_result(shape, ndim=1)
shape = tensor.as_tensor_variable(shape, ndim=1)
#print 'SHAPE TYPE', shape.type, tensor.lvector
assert shape.type.ndim == 1
assert (shape.type.dtype == 'int64') or (shape.type.dtype == 'int32')
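The assertions above reduce to a simple runtime contract on the shape argument; a hypothetical stand-alone checker (not Theano API) makes it explicit:

```python
import numpy

def check_shape_vector(shape, expected_ndim):
    # The shape must be a 1-d int32/int64 vector whose length matches
    # the number of dimensions promised by the output type.
    shape = numpy.asarray(shape)
    assert shape.ndim == 1
    assert shape.dtype in (numpy.dtype('int32'), numpy.dtype('int64'))
    assert len(shape) == expected_ndim
    return tuple(int(s) for s in shape)

shp = check_shape_vector(numpy.array([2, 3], dtype='int64'), expected_ndim=2)
```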
@@ -127,9 +127,9 @@ class RandomFunction(gof.Op):
# shape.type
# assert shape.type == tensor.lvector
# convert args to NDArrayType instances
# convert args to TensorType instances
# and append enough None's to match the length of self.args
args = map(tensor.as_ndarray_result, args)
args = map(tensor.as_tensor_variable, args)
if len(args) > len(self.args):
raise TypeError('Too many args for this kind of random generator')
args += (None,) * (len(self.args) - len(args))
@@ -202,14 +202,14 @@ def random_function(fn, dtype, *rfargs, **rfkwargs):
else:
r, shape, args = ndim, args[0], args[1:]
if shape == () or shape == []:
shape = tensor.NDArrayConstant(type = tensor.lvector, data = shape)
shape = tensor.TensorConstant(type = tensor.lvector, data = shape)
else:
shape = tensor.as_ndarray_result(shape)
shape = tensor.as_tensor_variable(shape)
ndim = tensor.get_vector_length(shape)
if ndim is None:
raise ValueError('Cannot infer the number of dimensions from the shape argument.')
# note: rf could be cached for future use
rf = RandomFunction(fn, tensor.NDArrayType(dtype = dtype, broadcastable = (False,)*ndim), *rfargs, **rfkwargs)
rf = RandomFunction(fn, tensor.TensorType(dtype = dtype, broadcastable = (False,)*ndim), *rfargs, **rfkwargs)
return rf(r, shape, *args, **kwargs)
return f
@@ -26,10 +26,10 @@ def inplace_func(inputs, outputs, mode=default_mode):
def eval_outputs(outputs):
results = inplace_func([], outputs)()
if len(results) == 1:
return results[0]
return results
variables = inplace_func([], outputs)()
if len(variables) == 1:
return variables[0]
return variables
_public_verify_grad = verify_grad
def verify_grad(*args, **kwargs):
@@ -98,7 +98,7 @@ def make_restet(name, op, expected, checks = {}, good = {}, bad_build = {}, bad_
expecteds = self.expected(*inputs)
try:
results = f(*inputs)
variables = f(*inputs)
except:
type, exc_value, traceback = sys.exc_info()
err_msg = "Test %s::%s: Error occurred while calling the Function on the inputs %s" \
@@ -108,16 +108,16 @@ def make_restet(name, op, expected, checks = {}, good = {}, bad_build = {}, bad_
if not isinstance(expecteds, (list, tuple)):
expecteds = (expecteds, )
for i, (result, expected) in enumerate(zip(results, expecteds)):
if result.dtype != expected.dtype or result.shape != expected.shape or \
numpy.any(numpy.abs(result - expected) > 1e-10):
for i, (variable, expected) in enumerate(zip(variables, expecteds)):
if variable.dtype != expected.dtype or variable.shape != expected.shape or \
numpy.any(numpy.abs(variable - expected) > 1e-10):
self.fail("Test %s::%s: Output %s gave the wrong value. With inputs %s, expected %s, got %s."
% (self.op, testname, i, inputs, expected, result))
% (self.op, testname, i, inputs, expected, variable))
for description, check in self.checks.items():
if not check(inputs, results):
if not check(inputs, variables):
self.fail("Test %s::%s: Failed check: %s (inputs were %s, outputs were %s)"
% (self.op, testname, description, inputs, results))
% (self.op, testname, description, inputs, variables))
def test_bad_build(self):
for testname, inputs in self.bad_build.items():
@@ -153,7 +153,7 @@ def make_restet(name, op, expected, checks = {}, good = {}, bad_build = {}, bad_
raise type, exc_value, traceback
try:
results = f(*inputs)
variables = f(*inputs)
except:
return
@@ -595,7 +595,7 @@ class T_Shape(unittest.TestCase):
class T_Cast(unittest.TestCase):
def test_basic(self):
for type1 in ['int8', 'int16', 'int32', 'int64', 'float32', 'float64']:
x = NDArrayType(dtype = type1, broadcastable = (False, )).make_result()
x = TensorType(dtype = type1, broadcastable = (False, )).make_variable()
for type2, converter in zip(['int8', 'int16', 'int32', 'int64', 'float32', 'float64'],
[convert_to_int8, convert_to_int16, convert_to_int32, convert_to_int64,
convert_to_float32, convert_to_float64]):
@@ -611,51 +611,51 @@ class T_max_and_argmax(unittest.TestCase):
MaxAndArgmax.debug = 0
def test0(self):
n = as_ndarray_result(5.0)
n = as_tensor_variable(5.0)
v,i = eval_outputs(max_and_argmax(n))
self.failUnless(v == 5.0)
self.failUnless(i == 0)
def test1(self):
n = as_ndarray_result([1,2,3,2,-6])
n = as_tensor_variable([1,2,3,2,-6])
v,i = eval_outputs(max_and_argmax(n))
self.failUnless(v == 3)
self.failUnless(i == 2)
def test2(self):
data = numpy.random.rand(2,3)
n = as_ndarray_result(data)
n = as_tensor_variable(data)
v,i = eval_outputs(max_and_argmax(n))
self.failUnless(numpy.all(v == numpy.max(data,-1)))
self.failUnless(numpy.all(i == numpy.argmax(data,-1)))
def test2b(self):
data = numpy.random.rand(2,3)
n = as_ndarray_result(data)
n = as_tensor_variable(data)
v,i = eval_outputs(max_and_argmax(n,0))
self.failUnless(numpy.all(v == numpy.max(data,0)))
self.failUnless(numpy.all(i == numpy.argmax(data,0)))
def test2_invalid(self):
n = as_ndarray_result(numpy.random.rand(2,3))
n = as_tensor_variable(numpy.random.rand(2,3))
try:
eval_outputs(max_and_argmax(n,3))
except ValueError, e:
return
self.fail()
def test2_invalid_neg(self):
n = as_ndarray_result(numpy.random.rand(2,3))
n = as_tensor_variable(numpy.random.rand(2,3))
try:
eval_outputs(max_and_argmax(n,-3))
except ValueError, e:
return
self.fail()
def test2_valid_neg(self):
n = as_ndarray_result(numpy.random.rand(2,3))
n = as_tensor_variable(numpy.random.rand(2,3))
v,i = eval_outputs(max_and_argmax(n,-1))
self.failUnless(v.shape == (2,))
v,i = eval_outputs(max_and_argmax(n,-2))
self.failUnless(v.shape == (3,))
def test3(self):
n = as_ndarray_result(numpy.random.rand(2,3,4))
n = as_tensor_variable(numpy.random.rand(2,3,4))
v,i = eval_outputs(max_and_argmax(n,0))
self.failUnless(v.shape == (3,4))
self.failUnless(i.shape == (3,4))
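The expected values in these tests follow numpy's reduction conventions, with the last axis (-1) as the default:

```python
import numpy

data = numpy.arange(6).reshape(2, 3)   # [[0, 1, 2], [3, 4, 5]]

v, i = data.max(-1), data.argmax(-1)   # reduce over the last axis
v0 = data.max(0)                       # reduce over rows instead

v3 = numpy.ones((2, 3, 4)).max(0)      # a 3-d input keeps the remaining dims
```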
@@ -674,7 +674,7 @@ class T_subtensor(unittest.TestCase):
def test0_err_invalid(self):
#it is impossible to retrieve a view of a 0-d tensor
n = as_ndarray_result(numpy.ones(()))
n = as_tensor_variable(numpy.ones(()))
try:
t = n[0]
except ValueError, e:
@@ -683,7 +683,7 @@ class T_subtensor(unittest.TestCase):
self.fail()
def test1_err_bounds(self):
n = as_ndarray_result(numpy.ones(3))
n = as_tensor_variable(numpy.ones(3))
t = n[7]
self.failUnless(isinstance(t.owner.op, Subtensor))
try:
@@ -694,7 +694,7 @@ class T_subtensor(unittest.TestCase):
return
self.fail()
def test1_err_subslice(self):
n = as_ndarray_result(numpy.ones(3))
n = as_tensor_variable(numpy.ones(3))
try:
t = n[slice(0,slice(1,2,None),None)]
except Exception, e:
@@ -704,21 +704,21 @@ class T_subtensor(unittest.TestCase):
self.fail()
def test1_ok_range_finite(self):
n = as_ndarray_result(numpy.ones(3)*5)
n = as_tensor_variable(numpy.ones(3)*5)
t = n[0:2]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
self.failUnless(tval.shape == (2,))
self.failUnless(tval[1] == 5.0)
def test2_ok_range_finite(self):
n = as_ndarray_result(numpy.ones((3,4))*5)
n = as_tensor_variable(numpy.ones((3,4))*5)
t = n[0:2,3]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
self.failUnless(tval.shape == (2,))
self.failUnless(tval[1] == 5.0)
def test1_err_invalid(self):
n = as_ndarray_result(numpy.ones(1))
n = as_tensor_variable(numpy.ones(1))
try:
t = n[0,0]
except ValueError, e:
@@ -726,7 +726,7 @@ class T_subtensor(unittest.TestCase):
return
self.fail()
def test1_ok_elem(self):
n = as_ndarray_result(numpy.ones(1)*5)
n = as_tensor_variable(numpy.ones(1)*5)
t = n[0]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
@@ -734,14 +734,14 @@ class T_subtensor(unittest.TestCase):
self.failUnless(tval == 5.0)
def test1_ok_range_infinite(self):
#Subtensor.debug = True
n = as_ndarray_result(numpy.ones(3)*5)
n = as_tensor_variable(numpy.ones(3)*5)
t = n[1:]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
self.failUnless(tval.shape == (2,))
self.failUnless(tval[1] == 5.0)
def test1_ok_strided(self):
n = as_ndarray_result(numpy.ones(5)*5)
n = as_tensor_variable(numpy.ones(5)*5)
t = n[1::2]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
@@ -753,7 +753,7 @@ class T_subtensor(unittest.TestCase):
self.failUnless(tval[1] == 5.0)
def test2_err_bounds0(self):
n = as_ndarray_result(numpy.ones((2,3))*5)
n = as_tensor_variable(numpy.ones((2,3))*5)
t = n[0,4]
self.failUnless(isinstance(t.owner.op, Subtensor))
try:
@@ -762,7 +762,7 @@ class T_subtensor(unittest.TestCase):
return
self.fail()
def test2_err_bounds1(self):
n = as_ndarray_result(numpy.ones((2,3))*5)
n = as_tensor_variable(numpy.ones((2,3))*5)
t = n[4:5,2]
self.failUnless(isinstance(t.owner.op, Subtensor))
try:
@@ -771,14 +771,14 @@ class T_subtensor(unittest.TestCase):
if e[0] != 'index out of bounds':
raise
def test2_ok_elem(self):
n = as_ndarray_result(numpy.asarray(range(6)).reshape((2,3)))
n = as_tensor_variable(numpy.asarray(range(6)).reshape((2,3)))
t = n[0,2]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
self.failUnless(tval.shape == ())
self.failUnless(numpy.all(tval == 2))
def test2_ok_row(self):
n = as_ndarray_result(numpy.asarray(range(6)).reshape((2,3)))
n = as_tensor_variable(numpy.asarray(range(6)).reshape((2,3)))
t = n[1]
self.failIf(any(n.type.broadcastable))
self.failUnless(isinstance(t.owner.op, Subtensor))
@@ -787,7 +787,7 @@ class T_subtensor(unittest.TestCase):
self.failUnless(numpy.all(tval == [3,4,5]))
def test2_ok_col(self):
n = as_ndarray_result(numpy.ones((2,3))*5)
n = as_tensor_variable(numpy.ones((2,3))*5)
t = n[:,0]
self.failUnless(isinstance(t.owner.op, Subtensor))
self.failIf(any(n.type.broadcastable))
@@ -796,7 +796,7 @@ class T_subtensor(unittest.TestCase):
self.failUnless(numpy.all(tval == 5.0))
def test2_ok_rows_finite(self):
n = as_ndarray_result(numpy.ones((4,3))*5)
n = as_tensor_variable(numpy.ones((4,3))*5)
t = n[1:3,0]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
@@ -804,7 +804,7 @@ class T_subtensor(unittest.TestCase):
self.failUnless(numpy.all(tval == 5.0))
def test2_ok_cols_infinite(self):
n = as_ndarray_result(numpy.asarray(range(12)).reshape((4,3)))
n = as_tensor_variable(numpy.asarray(range(12)).reshape((4,3)))
t = n[1,2:]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
@@ -812,7 +812,7 @@ class T_subtensor(unittest.TestCase):
self.failUnless(numpy.all(tval == 5))
def test2_ok_strided(self):
n = as_ndarray_result(numpy.asarray(range(20)).reshape((4,5)))
n = as_tensor_variable(numpy.asarray(range(20)).reshape((4,5)))
t = n[1:4:2,1:5:2]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
@@ -820,7 +820,7 @@ class T_subtensor(unittest.TestCase):
self.failUnless(numpy.all(tval == [[6, 8],[16, 18]]))
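The expected value in test2_ok_strided is plain numpy strided indexing:

```python
import numpy

n = numpy.arange(20).reshape(4, 5)
t = n[1:4:2, 1:5:2]   # rows 1 and 3, columns 1 and 3
```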
def test3_ok_mat(self):
n = as_ndarray_result(numpy.asarray(range(24)).reshape((2,3,4)))
n = as_tensor_variable(numpy.asarray(range(24)).reshape((2,3,4)))
t = n[0,0,0]
self.failUnless(isinstance(t.owner.op, Subtensor))
tval = eval_outputs([t])
@@ -830,7 +830,7 @@ class T_subtensor(unittest.TestCase):
def test_grad_1d(self):
subi = 0
data = numpy.random.rand(2,3)
n = as_ndarray_result(data)
n = as_tensor_variable(data)
z = scal.constant(subi)
t = n[z:,z]
gn = grad(sum(exp(t)), n)
@@ -841,7 +841,7 @@ class T_subtensor(unittest.TestCase):
def test_grad_0d(self):
data = numpy.random.rand(2,3)
n = as_ndarray_result(data)
n = as_tensor_variable(data)
t = n[1,0]
gn = grad(sum(exp(t)), n)
gval = eval_outputs([gn])
@@ -857,7 +857,7 @@ class T_Join_and_Split(unittest.TestCase):
class Join1(Op):
def make_node(self, *inputs):
inputs = [as_ndarray_result(t) for t in inputs]
inputs = [as_tensor_variable(t) for t in inputs]
outputs = [lscalar()] + [i.type() for i in inputs]
return Apply(self, inputs, outputs)
def perform(self, node, inputs, outputs):
@@ -871,8 +871,8 @@ class T_Join_and_Split(unittest.TestCase):
Join.debug = False
def test_join_scalar(self):
a = as_ndarray_result(1)
b = as_ndarray_result(2)
a = as_tensor_variable(1)
b = as_tensor_variable(2)
try:
s = join(0, a, b)
except:
@@ -880,42 +880,42 @@ class T_Join_and_Split(unittest.TestCase):
self.fail()
def test_stack_mixed_type_constants(self):
a = as_ndarray_result(1)
b = as_ndarray_result(2.0)
c = as_ndarray_result(3.0)
a = as_tensor_variable(1)
b = as_tensor_variable(2.0)
c = as_tensor_variable(3.0)
s = stack(a, b, c)
want = numpy.array([1, 2, 3])
self.failUnless((eval_outputs([s]) == want).all())
def test_stack_scalar(self):
a = as_ndarray_result(1)
b = as_ndarray_result(2)
c = as_ndarray_result(3)
a = as_tensor_variable(1)
b = as_tensor_variable(2)
c = as_tensor_variable(3)
s = stack(a, b, c)
want = numpy.array([1, 2, 3])
self.failUnless((eval_outputs([s]) == want).all())
def test_join_vector(self):
a = as_ndarray_result(numpy.array([1, 2, 3]))
b = as_ndarray_result(numpy.array([7, 8, 9]))
a = as_tensor_variable(numpy.array([1, 2, 3]))
b = as_tensor_variable(numpy.array([7, 8, 9]))
s = join(0, a, b)
want = numpy.array([1, 2, 3, 7, 8, 9])
self.failUnless((eval_outputs([s]) == want).all())
def test_stack_vector(self):
a = as_ndarray_result(numpy.array([1, 2, 3]))
b = as_ndarray_result(numpy.array([7, 8, 9]))
a = as_tensor_variable(numpy.array([1, 2, 3]))
b = as_tensor_variable(numpy.array([7, 8, 9]))
s = stack(a, b)
want = numpy.array([[1, 2, 3],[ 7, 8, 9]])
self.failUnless((eval_outputs([s]) == want).all())
def test_join_matrix0(self):
a = as_ndarray_result(numpy.array([[1, 2, 3], [4, 5, 6]]))
b = as_ndarray_result(numpy.array([[7, 8, 9]]))
a = as_tensor_variable(numpy.array([[1, 2, 3], [4, 5, 6]]))
b = as_tensor_variable(numpy.array([[7, 8, 9]]))
s = join(0, a, b)
want = numpy.array([[1, 2, 3],[4,5,6],[7, 8, 9]])
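join(0, ...) and vertical_stack correspond to numpy's concatenation primitives, which is what the `want` arrays in these tests encode:

```python
import numpy

a = numpy.array([[1, 2, 3], [4, 5, 6]])
b = numpy.array([[7, 8, 9]])

joined  = numpy.concatenate([a, b], axis=0)   # join(0, a, b)
stacked = numpy.vstack([a, b])                # vertical_stack(a, b)
```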
@@ -924,8 +924,8 @@ class T_Join_and_Split(unittest.TestCase):
def test_join_matrix1(self):
av=numpy.array([[1, 2, 3], [4, 5, 6]], dtype='float32')
bv= numpy.array([[7], [8]],dtype='float32')
a = as_ndarray_result(av)
b = as_ndarray_result(bv)
a = as_tensor_variable(av)
b = as_tensor_variable(bv)
s = join(1, a, b)
want = numpy.array([[1, 2, 3, 7], [4, 5, 6, 8]], dtype='float32')
self.failUnless((eval_outputs([s]) == want).all())
@@ -933,9 +933,9 @@ class T_Join_and_Split(unittest.TestCase):
verify_grad(self, lambda a, b: join(1,a,b), [av, bv], eps=1.0e-4, tol=1.0e-3)
def test_join_matrix1_using_vertical_stack(self):
a = as_ndarray_result(numpy.array([[1, 2, 3], [4, 5, 6]]))
b = as_ndarray_result(numpy.array([[7, 8, 9]]))
c = as_ndarray_result(numpy.array([[9, 8, 7]]))
a = as_tensor_variable(numpy.array([[1, 2, 3], [4, 5, 6]]))
b = as_tensor_variable(numpy.array([[7, 8, 9]]))
c = as_tensor_variable(numpy.array([[9, 8, 7]]))
s = vertical_stack(a, b, c)
want = numpy.array([[1, 2, 3],[4,5,6],[7, 8, 9], [9, 8, 7]])
@@ -945,9 +945,9 @@ class T_Join_and_Split(unittest.TestCase):
av=numpy.array([[1, 2, 3], [4, 5, 6]], dtype='float32')
bv=numpy.array([[7], [8]],dtype='float32')
cv=numpy.array([[3, 2, 1], [6, 5, 4]], dtype='float32')
a = as_ndarray_result(av)
b = as_ndarray_result(bv)
c = as_ndarray_result(cv)
a = as_tensor_variable(av)
b = as_tensor_variable(bv)
c = as_tensor_variable(cv)
s = horizontal_stack(a, b, c)
want = numpy.array([[1, 2, 3, 7, 3, 2, 1], [4, 5, 6, 8, 6, 5, 4]], dtype='float32')
self.failUnless((eval_outputs([s]) == want).all())
@@ -957,8 +957,8 @@ class T_Join_and_Split(unittest.TestCase):
def test_join_matrixV(self):
"""variable join axis"""
v = numpy.array([[1., 2., 3.], [4., 5., 6.]])
a = as_ndarray_result(v.copy())
b = as_ndarray_result(v.copy())
a = as_tensor_variable(v.copy())
b = as_tensor_variable(v.copy())
ax = lscalar()
s = join(ax, a, b)
@@ -979,7 +979,7 @@ class T_Join_and_Split(unittest.TestCase):
x = lscalar('x')
y = dscalar('y')
triple = as_ndarray_result((x, y, 9.0))
triple = as_tensor_variable((x, y, 9.0))
assert 3 == get_vector_length(triple)
a,b,c = triple
@@ -1117,12 +1117,12 @@ class T_exp(unittest.TestCase):
# class T_abs(unittest.TestCase):
# def test_impl(self):
# t = as_ndarray_result(1.0)
# t = as_tensor_variable(1.0)
# check_eq(self, t, abs(t), 1.0, 1.0)
# check_eq(self, t, abs(t), -1.0, 1.0)
# for shape in (2,), (3,4):
# t = as_ndarray_result(numpy.ones(shape))
# t = as_tensor_variable(numpy.ones(shape))
# d = numpy.random.rand(*shape)*2-1.0
# check_eq(self, t, abs(t), d, abs(d))
# check_eq(self, t, abs(t), -d, abs(-d))
@@ -1157,7 +1157,7 @@ class T_exp(unittest.TestCase):
# self.failUnless(numpy.all(eval_outputs([t]) == [9,9,9]))
# def test1(self):
# x = as_ndarray_result(numpy.ones((4,5)))
# x = as_tensor_variable(numpy.ones((4,5)))
# l = ones_like(x[:,0:1])
# r = ones_like(x[0:1,:])
# xx = x + dot(l,r)
@@ -1165,11 +1165,11 @@ class T_exp(unittest.TestCase):
# class T_sum(unittest.TestCase):
# def test_impl(self):
# t = as_ndarray_result(0.0)
# t = as_tensor_variable(0.0)
# check_eq(self, t, Sum(t).out, 1.0, 1.0)
# check_eq(self, t, Sum(t).out, -1.0, -1.0)
# t = as_ndarray_result([0.0, 0.0])
# t = as_tensor_variable([0.0, 0.0])
# d = numpy.asarray([-0.4, 1.2])
# check_eq(self, t, Sum(t).out, d, numpy.sum(d))
# check_eq(self, t, Sum(t).out, -d, -numpy.sum(d))
@@ -1179,13 +1179,13 @@ class T_exp(unittest.TestCase):
# unittest_tools.seed_rng()
# def test_elemwise(self):
# a = as_ndarray_result(0.0)
# b = as_ndarray_result(0.0)
# a = as_tensor_variable(0.0)
# b = as_tensor_variable(0.0)
# check_eq2_both(self, [a,b], mul(a,b), [3.0, 4.0], 12.0)
# check_eq2_both(self, [a,b], mul(b,a), [-1.0,2.0], -2.0)
# a = as_ndarray_result(numpy.ones(2))
# b = as_ndarray_result(numpy.ones(2))
# a = as_tensor_variable(numpy.ones(2))
# b = as_tensor_variable(numpy.ones(2))
# aa = numpy.asarray([-0.5, 4.0])
# bb = numpy.asarray([-0.5, 2.0])
# check_eq2_both(self, [a,b], mul(a,b), [aa,bb], numpy.asarray([0.25, 8.0]))
@@ -1193,8 +1193,8 @@ class T_exp(unittest.TestCase):
# def test_scalar(self):
# r = numpy.random.rand(2,3)
# a = as_ndarray_result(r)
# b = as_ndarray_result(2.0)
# a = as_tensor_variable(r)
# b = as_tensor_variable(2.0)
# check_eq2_both(self, [a,b], mul(a,b), [r, 2.0], r*2.0)
# check_eq2_both(self, [a,b], mul(a,b), [r, 4.0], r*4.0)
# self.failUnless(b.data == 2.0)
@@ -1203,7 +1203,7 @@ class T_exp(unittest.TestCase):
# r1 = numpy.random.rand(3,5)
# r2 = numpy.random.rand(1,5)
# r3 = numpy.random.rand(3,1)
# a1, a2, a3 = as_ndarray_result(r1), as_ndarray_result(r2), as_ndarray_result(r3)
# a1, a2, a3 = as_tensor_variable(r1), as_tensor_variable(r2), as_tensor_variable(r3)
# check_eq2_both(self, [a1,a2], mul(a1,a2), [r1, r2], r1*r2)
# check_eq2_both(self, [a1,a3], mul(a1,a3), [r1, r3], r1*r3)
@@ -1222,8 +1222,8 @@ class T_exp(unittest.TestCase):
# verify_grad(self, Mul, [numpy.random.rand(3, 5), numpy.random.rand(3, 1)])
# def test_wrong_shapes(self):
# a = as_ndarray_result(numpy.ones(3))
# b = as_ndarray_result(numpy.ones(4))
# a = as_tensor_variable(numpy.ones(3))
# b = as_tensor_variable(numpy.ones(4))
# try:
# check_eq2(self, [a,b], Mul(a,b).out,
# [numpy.ones(3), numpy.ones(4)], 1.0)
@@ -1262,8 +1262,8 @@ class T_exp(unittest.TestCase):
# def test0(self):
# verify_grad(self, Log, [numpy.random.rand(3,1)+0.0001])
# def test1(self):
# a = as_ndarray_result(numpy.ones(2))
# b = as_ndarray_result(numpy.ones(2))
# a = as_tensor_variable(numpy.ones(2))
# b = as_tensor_variable(numpy.ones(2))
# aa = numpy.asarray([0.5, 4.0])
# bb = numpy.asarray([0.5, 2.0])
# check_eq2(self, [a], log(a), [aa], numpy.log(numpy.asarray(aa)))
@@ -1292,12 +1292,12 @@ class test_matinv(unittest.TestCase):
# symbolic program
# broadcastable=[False,False] means that the shape of matrix is two dimensional,
# and none of the dimensions are constrained to have length 1.
# Note that NDArrayType's constructor does not actually allocate any memory.
# TODO: Make NDArrayType syntax more explicit, and maybe give shape or number of dimensions.
# Note that TensorType's constructor does not actually allocate any memory.
# TODO: Make TensorType syntax more explicit, and maybe give shape or number of dimensions.
a, b = matrices('ab')
ab = a*b
# Here, as_ndarray_result actually uses the data allocated by numpy.
diff = ab - as_ndarray_result(numpy.ones((dim,dim)))
# Here, as_tensor_variable actually uses the data allocated by numpy.
diff = ab - as_tensor_variable(numpy.ones((dim,dim)))
# Sum of squared errors
ssdiff = sum((diff**2.0))
@@ -1348,7 +1348,7 @@ class t_dot(unittest.TestCase):
x = numpy.asarray(x)
return type(x), x.dtype, x.shape
nz = numpy.dot(x,y)
tz = eval_outputs([dot(as_ndarray_result(x), as_ndarray_result(y))])
tz = eval_outputs([dot(as_tensor_variable(x), as_tensor_variable(y))])
self.failUnless(tz.dtype == nz.dtype)
self.failUnless(tz.shape == nz.shape)
self.failUnless(_approx_eq(nz, tz))
@@ -1415,7 +1415,7 @@ class T_tensorfromscalar(unittest.TestCase):
def test1(self):
s = scal.constant(56)
t = as_ndarray_result(s)
t = as_tensor_variable(s)
self.failUnless(t.owner.op is tensor_from_scalar)
self.failUnless(t.type.broadcastable == (), t.type.broadcastable)
self.failUnless(t.type.ndim == 0, t.type.ndim)
@@ -1429,13 +1429,13 @@ class T_tensorfromscalar(unittest.TestCase):
# def _tensor(data, broadcastable=None, name=None):
# """Return a NDArrayType containing given data"""
# """Return a TensorType containing given data"""
# data = numpy.asarray(data)
# if broadcastable is None:
# broadcastable = [s==1 for s in data.shape]
# elif broadcastable in [0, 1]:
# broadcastable = [broadcastable] * len(data.shape)
# rval = NDArrayType(data.dtype, broadcastable, name)
# rval = TensorType(data.dtype, broadcastable, name)
# rval.data = data # will raise if broadcastable was mis-specified
# return rval
@@ -1446,7 +1446,7 @@ class T_tensorfromscalar(unittest.TestCase):
# unittest_tools.seed_rng()
# def test0(self): # allocate from a scalar float
# t = _tensor(1.0)
# self.failUnless(isinstance(t, NDArrayType))
# self.failUnless(isinstance(t, TensorType))
# self.failUnless(t.dtype == 'float64')
# self.failUnless(t.broadcastable == ())
# self.failUnless(t.role == None)
@@ -1455,25 +1455,25 @@ class T_tensorfromscalar(unittest.TestCase):
# self.failUnless(t.data == 1.0)
# def test0_int(self): # allocate from a scalar float
# t = _tensor(1)
# self.failUnless(isinstance(t, NDArrayType))
# self.failUnless(isinstance(t, TensorType))
# self.failUnless(t.dtype == 'int64' or t.dtype == 'int32')
# def test1(self): # allocate from a vector of ints, not broadcastable
# t = _tensor(numpy.ones(5,dtype='int32'))
# self.failUnless(isinstance(t, NDArrayType))
# self.failUnless(isinstance(t, TensorType))
# self.failUnless(t.dtype == 'int32')
# self.failUnless(t.broadcastable == (0,))
# self.failUnless(isinstance(t.data, numpy.ndarray))
# self.failUnless(str(t.data.dtype) == 'int32')
# def test2(self): # allocate from a column matrix of complex with name
# t = _tensor(numpy.ones((5,1),dtype='complex64'),name='bart')
# self.failUnless(isinstance(t, NDArrayType))
# self.failUnless(isinstance(t, TensorType))
# self.failUnless(t.dtype == 'complex64')
# self.failUnless(t.broadcastable == (0,1))
# self.failUnless(isinstance(t.data, numpy.ndarray))
# self.failUnless(t.name == 'bart')
# def test2b(self): # allocate from a column matrix, not broadcastable
# t = _tensor(numpy.ones((5,1),dtype='complex64'),broadcastable=0)
# self.failUnless(isinstance(t, NDArrayType))
# self.failUnless(isinstance(t, TensorType))
# self.failUnless(t.dtype == 'complex64')
# self.failUnless(t.broadcastable == (0,0))
# self.failUnless(isinstance(t.data, numpy.ndarray))
@@ -1493,39 +1493,39 @@ class T_tensorfromscalar(unittest.TestCase):
# t.data = numpy.ones((2,7,1))
# self.fail()
# except ValueError, e:
# self.failUnless(e[0] is NDArrayType.filter.E_rank)
# self.failUnless(e[0] is TensorType.filter.E_rank)
# try:
# t.data = numpy.ones(1)
# self.fail()
# except ValueError, e:
# self.failUnless(e[0] is NDArrayType.filter.E_rank)
# self.failUnless(e[0] is TensorType.filter.E_rank)
# def test_data_badrank1(self):
# t = _tensor(numpy.ones((1,1),dtype='complex64'), broadcastable=1)
# try:
# t.data = numpy.ones((1,1,1))
# self.fail()
# except ValueError, e:
# self.failUnless(e[0] is NDArrayType.filter.E_rank)
# self.failUnless(e[0] is TensorType.filter.E_rank)
# try:
# t.data = numpy.ones(1)
# self.fail()
# except ValueError, e:
# self.failUnless(e[0] is NDArrayType.filter.E_rank)
# self.failUnless(e[0] is TensorType.filter.E_rank)
# def test_data_badshape0(self):
# t = _tensor(numpy.ones((1,1),dtype='complex64'), broadcastable=1)
# try:
# t.data = numpy.ones((1,2))
# self.fail()
# except ValueError, e:
# self.failUnless(e[0] is NDArrayType.filter.E_shape)
# self.failUnless(e[0] is TensorType.filter.E_shape)
# try:
# t.data = numpy.ones((0,1))
# self.fail()
# except ValueError, e:
# self.failUnless(e[0] is NDArrayType.filter.E_shape)
# self.failUnless(e[0] is TensorType.filter.E_shape)
# def test_cast0(self):
# t = NDArrayType('float32', [0])
# t = TensorType('float32', [0])
# t.data = numpy.random.rand(4) > 0.5
# self.failUnless(str(t.data.dtype) == t.dtype)
@@ -1576,13 +1576,13 @@ class test_grad(unittest.TestCase):
return self.gval0, self.gval1
def test_1param(self):
"""grad: Test passing a single result param"""
"""grad: Test passing a single variable param"""
o = test_grad.O()
a1 = o.make_node()
self.failUnless(o.gval0 is grad(a1.outputs[0], a1.inputs[0]))
def test_Nparam(self):
"""grad: Test passing multiple result params"""
"""grad: Test passing multiple variable params"""
o = test_grad.O()
a1 = o.make_node()
g0,g1 = grad(a1.outputs[0], a1.inputs)
@@ -1594,7 +1594,7 @@ class test_grad(unittest.TestCase):
o = test_grad.O()
a1 = o.make_node()
g = grad(a1.outputs[0], a1.outputs[1])
self.failUnless(isinstance(g, NDArrayConstant))
self.failUnless(isinstance(g, TensorConstant))
self.failUnless(g.data == 0)
try:
grad(a1.outputs[0], 'wtf')
@@ -1609,7 +1609,7 @@ class test_grad(unittest.TestCase):
g0,g1,g2 = grad(a1.outputs[0], a1.inputs + [scalar('z')])
self.failUnless(o.gval0 is g0)
self.failUnless(o.gval1 is g1)
self.failUnless(isinstance(g2, NDArrayConstant))
self.failUnless(isinstance(g2, TensorConstant))
self.failUnless(g2.data == 0)
class T_op_cache(unittest.TestCase):
@@ -1712,7 +1712,7 @@ def test_flatten_outdim2():
tensor.verify_grad(None, Flatten(2), [a_val])
def test_flatten_outdim2_of_3():
a = NDArrayType('float64', (False, False, False))()
a = TensorType('float64', (False, False, False))()
c = flatten(a, 2)
f = inplace_func([a], c)
a_val = numpy.asarray([[[0,1],[2,3]], [[4,5],[6,7]]], dtype='float64')
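flatten(a, 2) keeps the first outdim-1 dimensions and collapses the rest, i.e. the reshape that the test's expected output implies:

```python
import numpy

a_val = numpy.asarray([[[0, 1], [2, 3]], [[4, 5], [6, 7]]], dtype='float64')
flat2 = a_val.reshape(a_val.shape[0], -1)   # outdim=2: (2, 2, 2) -> (2, 4)
```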
@@ -1783,7 +1783,7 @@ class test_tensordot(unittest.TestCase):
tensor.verify_grad(None, TensorDot(axes), [aval,bval])
# test ndarray-matrix, sum over one dim of matrix
atens = NDArrayType('float64', broadcastable=(False,)*4)()
atens = TensorType('float64', broadcastable=(False,)*4)()
axes = ((2,),(1,))
c = tensordot(axes)(atens, bmat)
f4 = inplace_func([atens,bmat],c)
@@ -1794,8 +1794,8 @@ class test_tensordot(unittest.TestCase):
tensor.verify_grad(None, TensorDot(axes), [aval,bval])
# test ndarray-ndarray
atens = NDArrayType('float64', broadcastable=(False,)*4)()
btens = NDArrayType('float64', broadcastable=(False,)*3)()
atens = TensorType('float64', broadcastable=(False,)*4)()
btens = TensorType('float64', broadcastable=(False,)*3)()
axes = ((1,3),(0,2))
c = tensordot(axes)(atens, btens)
f5 = inplace_func([atens,btens],c)
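axes=((1,3),(0,2)) means a's dimensions 1 and 3 are summed against b's dimensions 0 and 2; numpy.tensordot shows the resulting shape (the array shapes here are illustrative, not the test's):

```python
import numpy

a = numpy.ones((2, 3, 4, 5))
b = numpy.ones((3, 7, 5))

# Contract a's axes (1, 3) with b's axes (0, 2); the summed extents
# (3 and 5) must match pairwise, leaving shape (2, 4) + (7,).
c = numpy.tensordot(a, b, axes=((1, 3), (0, 2)))
```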
@@ -12,7 +12,7 @@ _as_scalar = GemmLocalOptimizer._as_scalar
_is_real_matrix = GemmLocalOptimizer._is_real_matrix
from theano import In, Out
-from .test_basic import (_approx_eq, as_ndarray_result, inplace_func,
+from .test_basic import (_approx_eq, as_tensor_variable, inplace_func,
compile, value, constant, inplace, eval_outputs)
class t_gemm(TestCase):
@@ -35,7 +35,7 @@ class t_gemm(TestCase):
def cmp_linker(z, a, x, y, b, l):
z,a,x,y,b = [numpy.asarray(p) for p in z,a,x,y,b]
z_orig = z.copy()
-tz,ta,tx,ty,tb = [as_ndarray_result(p).type() for p in z,a,x,y,b]
+tz,ta,tx,ty,tb = [as_tensor_variable(p).type() for p in z,a,x,y,b]
f = inplace_func([tz,ta,tx,ty,tb], gemm(tz,ta,tx,ty,tb), mode=compile.Mode(optimizer = None, linker = l))
new_z = f(z,a,x,y,b)
@@ -100,7 +100,7 @@ class t_gemm(TestCase):
def test_destroy_map0(self):
"""test that only first input can be overwritten"""
-Z = as_ndarray_result(self.rand(2,2))
+Z = as_tensor_variable(self.rand(2,2))
try:
gemm(Z, 1.0, Z, Z, 1.0)
except ValueError, e:
@@ -109,8 +109,8 @@ class t_gemm(TestCase):
self.fail()
def test_destroy_map1(self):
"""test that only first input can be overwritten"""
-Z = as_ndarray_result(self.rand(2,2))
-A = as_ndarray_result(self.rand(2,2))
+Z = as_tensor_variable(self.rand(2,2))
+A = as_tensor_variable(self.rand(2,2))
try:
gemm(Z, 1.0, A, inplace.transpose_inplace(Z), 1.0)
except ValueError, e:
@@ -119,8 +119,8 @@ class t_gemm(TestCase):
self.fail()
def test_destroy_map2(self):
"""test that only first input can be overwritten"""
-Z = as_ndarray_result(self.rand(2,2))
-A = as_ndarray_result(self.rand(2,2))
+Z = as_tensor_variable(self.rand(2,2))
+A = as_tensor_variable(self.rand(2,2))
try:
gemm(Z, 1.0, inplace.transpose_inplace(Z), A, 1.0)
except ValueError, e:
@@ -129,8 +129,8 @@ class t_gemm(TestCase):
self.fail()
def test_destroy_map3(self):
"""test that only first input can be overwritten"""
-Z = as_ndarray_result(self.rand(2,2))
-A = as_ndarray_result(self.rand(2,2))
+Z = as_tensor_variable(self.rand(2,2))
+A = as_tensor_variable(self.rand(2,2))
try:
gemm(Z, 1.0, Z, A, 1.0)
except ValueError, e:
@@ -2,7 +2,7 @@
import time
import unittest
-from theano.gof import Result, Op
+from theano.gof import Variable, Op
from theano import gof
from theano.scalar import *
@@ -27,7 +27,7 @@ class test_DimShuffle(unittest.TestCase):
((1, 4, 3, 2, 1), (3, 2, 1), (2, 3, 4)),
((1, 1, 4), (1, 2), (1, 4))]:
ib = [(entry == 1) for entry in xsh]
-x = NDArrayType('float64', ib)('x')
+x = TensorType('float64', ib)('x')
e = DimShuffle(ib, shuffle)(x)
f = copy(linker).accept(Env([x], [e])).make_function()
assert f(numpy.ones(xsh)).shape == zsh
@@ -50,8 +50,8 @@ class test_Broadcast(unittest.TestCase):
((2, 3, 4, 5), (1, 3, 1, 5)),
((2, 3, 4, 5), (1, 1, 1, 1)),
((), ())]:
-x = NDArrayType('float64', [(entry == 1) for entry in xsh])('x')
-y = NDArrayType('float64', [(entry == 1) for entry in ysh])('y')
+x = TensorType('float64', [(entry == 1) for entry in xsh])('x')
+y = TensorType('float64', [(entry == 1) for entry in ysh])('y')
e = Elemwise(add)(x, y)
f = copy(linker).accept(Env([x, y], [e])).make_function()
xv = numpy.asarray(numpy.random.rand(*xsh))
@@ -69,8 +69,8 @@ class test_Broadcast(unittest.TestCase):
((2, 3, 4, 5), (1, 3, 1, 5)),
((2, 3, 4, 5), (1, 1, 1, 1)),
((), ())]:
-x = NDArrayType('float64', [(entry == 1) for entry in xsh])('x')
-y = NDArrayType('float64', [(entry == 1) for entry in ysh])('y')
+x = TensorType('float64', [(entry == 1) for entry in xsh])('x')
+y = TensorType('float64', [(entry == 1) for entry in ysh])('y')
e = Elemwise(Add(transfer_type(0)), {0:0})(x, y)
f = copy(linker).accept(Env([x, y], [e])).make_function()
xv = numpy.asarray(numpy.random.rand(*xsh))
@@ -94,8 +94,8 @@ class test_Broadcast(unittest.TestCase):
self.with_linker_inplace(gof.CLinker())
def test_fill(self):
-x = NDArrayType('float64', [0, 0])('x')
-y = NDArrayType('float64', [1, 1])('y')
+x = TensorType('float64', [0, 0])('x')
+y = TensorType('float64', [1, 1])('y')
e = Elemwise(Second(transfer_type(0)), {0:0})(x, y)
f = gof.CLinker().accept(Env([x, y], [e])).make_function()
xv = numpy.ones((5, 5))
@@ -104,8 +104,8 @@ class test_Broadcast(unittest.TestCase):
assert (xv == yv).all()
def test_weird_strides(self):
-x = NDArrayType('float64', [0, 0, 0, 0, 0])('x')
-y = NDArrayType('float64', [0, 0, 0, 0, 0])('y')
+x = TensorType('float64', [0, 0, 0, 0, 0])('x')
+y = TensorType('float64', [0, 0, 0, 0, 0])('y')
e = Elemwise(add)(x, y)
f = gof.CLinker().accept(Env([x, y], [e])).make_function()
xv = numpy.random.rand(2, 2, 2, 2, 2)
@@ -114,7 +114,7 @@ class test_Broadcast(unittest.TestCase):
assert (f(xv, yv) == zv).all()
def test_same_inputs(self):
-x = NDArrayType('float64', [0, 0])('x')
+x = TensorType('float64', [0, 0])('x')
e = Elemwise(add)(x, x)
f = gof.CLinker().accept(Env([x], [e])).make_function()
xv = numpy.random.rand(2, 2)
@@ -134,7 +134,7 @@ class test_CAReduce(unittest.TestCase):
((5, 6), ()),
((2, 3, 4, 5), (0, 1, 3)),
((), ())]:
-x = NDArrayType('float64', [(entry == 1) for entry in xsh])('x')
+x = TensorType('float64', [(entry == 1) for entry in xsh])('x')
e = CAReduce(add, axis = tosum)(x)
if tosum is None: tosum = range(len(xsh))
f = copy(linker).accept(Env([x], [e])).make_function()
import numpy
from theano.gof.type import Type
-from theano.gof.graph import Result, Apply, Constant
+from theano.gof.graph import Variable, Apply, Constant
from theano.gof.op import Op
from theano.gof.opt import *
from theano.gof.env import Env
from theano.gof.toolbox import *
import theano.tensor.basic as T
-def as_result(x):
-    if not isinstance(x, Result):
-        raise TypeError("not a Result", x)
+def as_variable(x):
+    if not isinstance(x, Variable):
+        raise TypeError("not a Variable", x)
return x
class MyType(Type):
@@ -27,7 +27,7 @@ class MyOp(Op):
self.x = x
def make_node(self, *inputs):
-inputs = map(as_result, inputs)
+inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
raise Exception("Error 1")
@@ -63,7 +63,7 @@ def test_merge_with_weird_eq():
assert node.inputs[0] is node.inputs[1]
#NONSCALAR CASE
-# This was created to test NDArrayConstantSignature
+# This was created to test TensorConstantSignature
x = T.constant(numpy.ones(5), name='x')
y = T.constant(numpy.ones(5), name='y')
g = Env([x, y], [x+y])
@@ -6,7 +6,7 @@ import unittest
from theano import gof
from theano.tensor.opt import *
from theano import tensor
-from theano.tensor import NDArrayType
+from theano.tensor import TensorType
from theano.gof import Env
from theano.tensor.elemwise import DimShuffle
from theano import pprint
@@ -18,9 +18,9 @@ from theano import function
def inputs(xbc = (0, 0), ybc = (0, 0), zbc = (0, 0)):
-x = NDArrayType(broadcastable = xbc, dtype = 'float64')('x')
-y = NDArrayType(broadcastable = ybc, dtype = 'float64')('y')
-z = NDArrayType(broadcastable = zbc, dtype = 'float64')('z')
+x = TensorType(broadcastable = xbc, dtype = 'float64')('x')
+y = TensorType(broadcastable = ybc, dtype = 'float64')('y')
+z = TensorType(broadcastable = zbc, dtype = 'float64')('z')
return x, y, z
@@ -3,7 +3,7 @@ from theano.tensor.xlogx import xlogx
import unittest
import theano
-from theano.tensor import as_ndarray_result
+from theano.tensor import as_tensor_variable
import test_basic as TT
import random
@@ -15,7 +15,7 @@ class T_XlogX(unittest.TestCase):
unittest_tools.seed_rng()
def test0(self):
-x = as_ndarray_result([1, 0])
+x = as_tensor_variable([1, 0])
y = xlogx(x)
f = theano.function([], [y])
self.failUnless(numpy.all(f() == numpy.asarray([0, 0.])))