Commit 3d91144f authored by Olivier Breuleux

Result -> Variable, NDArray* -> Tensor*

Parent 99647a3b
@@ -117,12 +117,12 @@ Example:
>>> cmp = f(self.avals,self.bvals) == numpy.dot(self.avals,self.bvals)
>>> self.failUnless(numpy.all(cmp))
Avoid hard-coding results, as in the following case:
>>> self.failUnless(numpy.all(f(self.avals,self.bvals)==numpy.array([[25,25,30,28],[21,18,14,25]])))
This makes the test case less manageable and forces the user to update the
results each time the input is changed or possibly when the module being
tested changes (after a bug fix for example). It also constrains the test case
to specific input/output data pairs. The section on random values covers why this
might not be such a good idea.
@@ -148,7 +148,7 @@ Example:
>>> ...
>>> def test_3D_dot_fail(self):
>>> def func():
>>> a = T.TensorType('float64', (False,False,False)) # create 3d tensor
>>> b = T.dmatrix()
>>> c = T.dot(a,b) # we expect this to fail
>>> # above should fail as dot operates on 2D tensors only
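As an aside, the expect-failure pattern sketched above can be written with plain unittest; the function below is a hypothetical stand-in for the failing ``T.dot`` call, not the real Theano code:

```python
import unittest

class TestDotFail(unittest.TestCase):
    def test_3D_dot_fail(self):
        def func():
            # Hypothetical stand-in for T.dot(a, b) on a 3d tensor,
            # which is expected to raise rather than succeed.
            raise TypeError("dot operates on 2D tensors only")
        # The test passes only if func() raises; assertRaises is the
        # modern spelling of the classic failUnlessRaises.
        self.assertRaises(TypeError, func)
```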
@@ -209,7 +209,7 @@ The main advantage of using unittest_tools.seed_rng is that it allows us to
change the seed used in the unitests, without having to manually edit all the
files. For example, this allows the nightly build to run nosetests repeatedly,
changing the seed on every run (hence achieving a higher confidence that the
results are correct), while still making sure unittests are deterministic.
Users who prefer their unittests to be random (when run on their local machine)
can simply undefine THEANO_UNITTEST_SEED.
@@ -220,4 +220,4 @@ Similarly, to provide a seed to numpy.random.RandomState, simply use:
>>> # OR providing an explicit seed
>>> rng = numpy.random.RandomState(unittest_tools.fetch_seed(1231))
Note that the ability to change the seed from one nosetest to another is incompatible with the method of hard-coding the baseline results (against which we compare the theano outputs). These must then be determined "algorithmically". Although this represents more work, the test suite will be better because of it.
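The environment-driven seeding described here can be sketched as follows; ``fetch_seed`` below is a simplified stand-in for ``unittest_tools.fetch_seed``, not the real implementation:

```python
import os
import random

def fetch_seed(default=None):
    """Simplified stand-in for unittest_tools.fetch_seed: prefer a seed
    taken from the environment (so a nightly build can vary it between
    runs), falling back to the caller-supplied default."""
    env = os.environ.get("THEANO_UNITTEST_SEED")
    return int(env) if env is not None else default

# With THEANO_UNITTEST_SEED unset, the explicit default keeps the test
# deterministic on a developer's machine.
rng = random.Random(fetch_seed(1231))
```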
@@ -18,7 +18,7 @@ have been calculated by another operation. For each of the outputs,
the variables associated to them will be declared and initialized.
The operation then has to compute what it needs to using the
input variables and place the results in the output variables.
What needs to be defined
@@ -54,13 +54,13 @@ application of the current Op on a list of inputs, producing a list of
outputs. ``input_names`` and ``output_names`` arguments contain as
many strings as there are inputs and outputs to the application of the
Op and they correspond to the ``name`` that is passed to the type of
each Variable in these lists. For example, if ``node.inputs[0].type ==
double``, then ``input_names[0]`` is the ``name`` argument passed to
``double.c_declare`` etc. when the first input is processed by Theano.
In a nutshell, ``input_names`` and ``output_names`` parameterize the
names of the inputs your operation needs to use and the outputs it
needs to put results into. But this will be clear with the examples.
Defining the methods
@@ -92,7 +92,7 @@ had more than one output, you would just set the variable(s) for
each output to what they should be.
.. warning::
Do *NOT* use C's ``return`` statement to return the result(s) of
the computations. Set the output variables directly as shown
above. Theano will pick them up for you.
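The same convention holds on the Python side: in a ``perform``-style method the result is written into the output storage rather than returned. A minimal pure-Python sketch (names hypothetical):

```python
def perform(node, inputs, output_storage):
    """Compute x * y and place the result in the first output cell;
    nothing is returned, mirroring the no-return rule above."""
    x, y = inputs
    output_storage[0][0] = x * y

storage = [[None]]          # one storage cell per output
perform(None, (3.0, 4.0), storage)
```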
@@ -18,7 +18,7 @@ Python data that satisfy the constraints it puts forward. In other
words, it must define C code that can convert a Python reference into
some type suitable for manipulation in C and it must define C code
that can convert some C structure in which the C implementation of an
operation stores its results into a reference to an object that can be
used from Python and is a valid value for the Type.
For example, in the current example, we have a Type which represents a
@@ -65,7 +65,7 @@ the most important ones:
- **c_sync(name, sub)**
- When the computations are done, transfer the results from the C
structure we put them in to the destination Python object. This
will only be called for the outputs.
@@ -82,12 +82,12 @@ the most important ones:
Each of these functions take two arguments, ``name`` and ``sub`` which
must be used to parameterize the C code they return. ``name`` is a
string which is chosen by the compiler to represent a :ref:`variable` of
the Type in such a way that there are no name conflicts between
different pieces of data. Therefore, all variables declared in
``c_declare`` should have a name which includes ``name``. Furthermore,
the name of the variable containing a pointer to the Python object
associated to the Variable is ``py_<name>``.
``sub``, on the other hand, is a dictionary containing bits of C code
suitable for use in certain situations. For instance, ``sub['fail']``
@@ -129,7 +129,7 @@ double. That double will be named whatever is passed to our function
in the "name" argument. That will usually be some mangled name like
"V0", "V2" or "V92" depending on how many nodes there are in the
computation graph and what rank the current node has. This function
will be called for all Variables whose type is ``double``.
You can declare as many variables as you want there and you can also
do typedefs. Make sure that the name of each variable contains the
@@ -157,15 +157,15 @@ it, it's best to publish it somewhere.
This function has to initialize the
double we declared previously to a suitable value. This is useful if
we want to avoid dealing with garbage values, especially if our data
type is a pointer. This is not going to be called for all Variables with
the ``double`` type. Indeed, if a Variable is an input which we pass
from Python we will want to extract that input from a Python object,
therefore it is the c_extract method that will be called instead of
c_init. You can therefore not assume, when writing c_extract, that the
initialization has been done (in fact you can assume that it *hasn't*
been done).
``c_init`` will typically be called on output Variables, but in general
you should only assume that either c_init or c_extract has been
called, without knowing for sure which of the two.
@@ -190,7 +190,7 @@ we have a reference to a Python object which Theano has placed in
given in the inputs. This special variable is declared by Theano as
``PyObject* py_%(name)s`` where ``PyObject*`` is a pointer to a Python
object as defined by CPython's C API. This is the reference that
corresponds, on the Python side of things, to a Variable with the
``double`` type. It is what the end user will give and what he or she
expects to get back.
@@ -223,7 +223,7 @@ API) and we put it in our double variable that we declared previously.
double.c_sync = c_sync
This function is probably the trickiest. What happens here is that we
have computed some operation on doubles and we have put the result
into the double variable ``%(name)s``. Now, we need to put this data
into a Python object that we can manipulate on the Python side of
things. This Python object must be put into the ``py_%(name)s``
@@ -306,9 +306,9 @@ object on which we want to apply computations using C
code. Conversely, ``c_sync`` will only be called if we want to
communicate the values we have computed to Python and ``c_cleanup``
will only be called when we don't need to process the data with C
anymore. In other words, the use of these functions for a given Variable
depends on the relationship between Python and C with respect to
that Variable. For instance, imagine you define the following function
and call it:
.. code-block:: python
@@ -14,7 +14,7 @@ An Op is any object which defines the following methods:
- **make_node(*inputs)**
- This method is responsible for creating output Variables of a suitable Type
to serve as the outputs of this Op's application. This method should put these
outputs into an Apply instance, and return the Apply instance.
@@ -38,7 +38,7 @@ An Op is any object which defines the following methods:
- **__call__(*inputs)**
- Syntactic shortcut to make_node which returns the output Variables
of the Op.
- *Default*: this is done for you by Op.
@@ -48,7 +48,7 @@ An Op is any object which defines the following methods:
- This method computes the function associated to this Op. The
``node`` is an Apply node created by the Op's ``make_node``
method, ``inputs`` is a list of references to data to operate on,
and ``output_storage`` is a list of storage cells where the results of
the computation must be put. More specifically:
- ``node``: This is a reference to an Apply node which was previously
@@ -112,9 +112,9 @@ An Op is any object which defines the following methods:
- If the Op you are defining is differentiable, you can define its
gradient symbolically in this method.
- Both the ``inputs`` and ``output_gradients`` will be Variables. This
method must return a list containing one Variable (or None) for each
input. Each returned Variable represents the gradient with respect to
that input given the symbolic gradients with respect to each output.
- If the output is not differentiable with respect to any inputs, then this
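For a multiplication Op, the contract just described could be sketched like this; plain numbers stand in for symbolic Variables, so this is a behavioural sketch only:

```python
def mul_grad(inputs, output_gradients):
    """grad() body for mul: by the chain rule, d(x*y)/dx = y and
    d(x*y)/dy = x, each scaled by the gradient flowing in from the
    output. Returns one entry per input, as the contract requires."""
    x, y = inputs
    (gz,) = output_gradients
    return [gz * y, gz * x]

grads = mul_grad((3.0, 4.0), (1.0,))   # -> [4.0, 3.0]
```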
@@ -193,7 +193,7 @@ two.
This function ensures that both inputs have the ``double``
type.
Since multiplying two doubles yields a double,
this function makes an Apply node with an output Variable of type
``double``.
.. code-block:: python
@@ -205,14 +205,14 @@ this function makes an Apply node with an output Variable of type
mul.make_node = make_node
The first two lines make sure that both inputs are Variables of the
``double`` type that we created in the previous section. We would not
want to multiply two arbitrary types, it would not make much sense
(and we'd be screwed when we implement this in C!)
The last line is the meat of the definition. There we create an Apply
node representing the application of Op ``mul`` to inputs ``x`` and
``y``, giving a Variable instance of type ``double`` as the output.
.. note::
Theano relies on the fact that if you call the ``make_node`` method
@@ -228,7 +228,7 @@ This code actually computes the function.
In our example, the data in ``inputs`` will be instances of Python's
built-in type ``float`` because this is the type that ``double.filter()``
will always return, per our own definition. ``output_storage`` will
contain a single storage cell for the multiplication's result.
.. code-block:: python
@@ -296,9 +296,9 @@ by modifying ``make_node`` to accept Python ``int`` or ``float`` as
return gof.Apply(mul, [x, y], [double()])
mul.make_node = make_node
Whenever we pass a Python int or float instead of a Variable as ``x`` or
``y``, ``make_node`` will convert it to :ref:`constant` for us. ``gof.Constant``
is a :ref:`variable` we statically know the value of.
>>> x = double('x')
>>> z = mul(x, 2)
@@ -365,7 +365,7 @@ arithmetic operators:
Instead of working directly on an instance of Op, we create a subclass of
Op that we can parametrize. All the operations we define are binary. They
all work on two inputs with type ``double``. They all return a single
Variable of type ``double``. Therefore, ``make_node`` does the same thing
for all these operations, except for the Op reference ``self`` passed
as first argument to Apply. We define ``perform`` using the function
``fn`` passed in the constructor.
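The parametrization described here can be sketched in a few lines; again, a stand-in rather than the real Op class:

```python
class BinaryOp:
    """One class, many operations: each instance stores the Python
    function fn that its perform() applies to the two inputs."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def perform(self, inputs, output_storage):
        x, y = inputs
        output_storage[0][0] = self.fn(x, y)

mul = BinaryOp("mul", lambda x, y: x * y)
sub = BinaryOp("sub", lambda x, y: x - y)

out = [[None]]
mul.perform((3.0, 4.0), out)   # out[0][0] becomes 12.0
```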
@@ -53,22 +53,22 @@ default values.
- *Default*: ``values_eq(a, b)``
- **make_variable(name=None)**
- Makes a :term:`Variable` of this Type with the specified name, if
``name is not None``. If ``name`` is ``None``, then the Variable does
not have a name. The Variable will have its ``type`` field set to the
Type object.
- *Default*: there is a generic definition of this in Type. The Variable's
``type`` will be the object that defines this method (in other words,
``self``).
- **__call__(name=None)**:
- Syntactic shortcut to ``make_variable``.
- *Default*: ``make_variable``
For each method, the *default* is what :api:`theano.gof.Type` defines
@@ -120,7 +120,7 @@ so if ``x`` is an ``int`` we will return an equivalent ``float``.
The second method we define is ``values_eq_approx``. This method
allows approximate comparison between two values respecting our Type's
constraints. It might happen that an optimization changes the computation
graph in such a way that it produces slightly different results, for
example because of numerical instability like rounding errors at the
end of the mantissa. For instance, ``a + a + a + a + a + a`` might not
actually produce the exact same output as ``6 * a`` (try with a=0.1),
@@ -209,7 +209,7 @@ Untangling some concepts
========================
Initially, confusion is common on what an instance of Type is versus
a subclass of Type or an instance of Variable. Some of this confusion is
syntactic. A Type is any object which has fields corresponding to the
functions defined above. The Type class provides sensible defaults for
all of them except ``filter``, so when defining new Types it is natural
@@ -222,17 +222,17 @@ attempt to clear up the confusion:
akin to a primitive type or class in C. It is a *static*
annotation.
* An **instance of Variable** symbolizes data nodes in a data flow
graph. If you were to parse the C expression ``int x;``, ``int``
would be a Type instance and ``x`` would be a Variable instance of
that Type instance. If you were to parse the C expression ``c = a +
b;``, ``a``, ``b`` and ``c`` would all be Variable instances.
* A **subclass of Type** represents a set of Type instances that share
structural similarities. In the ``double`` example that we are doing,
there is actually only one Type in that set, therefore the subclass
doesn't represent anything that one of its instances doesn't. In this
case it is a singleton, a set with one element. However, the TensorType
class which is a subclass of Type represents a set of types of tensors
parametrized by their data type or number of dimensions. We could say
that subclassing Type builds a hierarchy of Types which is based upon
@@ -90,8 +90,8 @@ operation on ``x``.
Inplace operations in theano still work in a functional setting:
they need to return the modified input. Symbolically, Theano
requires one Variable standing for the input *before* being modified
and *another* Variable representing the input *after* being
modified. Therefore, code using inplace operations would look like
this:
@@ -129,7 +129,7 @@ operation on ``x``.
Take the previous definitions of x, y and z and suppose an Op which
adds one to every byte of its input. If we give ``x`` as an input to
that Op, it can either allocate a new buffer of the same size as ``x``
(that could be ``z``) and set that new buffer's bytes to the result of
the addition. That would be a normal, :term:`pure` Op. Alternatively,
it could add one to each byte *in* the buffer ``x``, therefore
changing it. That would be an inplace Op.
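The byte-buffer example lends itself to a direct sketch of the pure/inplace distinction:

```python
def add_one_pure(buf):
    """Pure Op behaviour: allocate a fresh buffer, leave the input alone."""
    return bytearray((b + 1) % 256 for b in buf)

def add_one_inplace(buf):
    """Inplace Op behaviour: mutate the input buffer itself."""
    for i in range(len(buf)):
        buf[i] = (buf[i] + 1) % 256
    return buf

x = bytearray(b"\x00\x01\x02")
z = add_one_pure(x)        # x is untouched; z holds the incremented bytes
add_one_inplace(x)         # now x itself has been incremented
```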
@@ -15,10 +15,10 @@ two types of optimizations: *global* optimizations and *local*
optimizations. A global optimization takes an :ref:`env` object (an
Env is a wrapper around a whole computation graph, you can see its
:ref:`documentation <env>` for more details) and navigates through it
in a suitable way, replacing some Variables by others in the process. A
local optimization, on the other hand, is defined as a function on a
*single* :ref:`apply` node and must return either False (to mean that
nothing is to be done) or a list of new Variables that we would like to
replace the node's outputs with. A :ref:`navigator` is a special kind
of global optimization which navigates the computation graph in some
fashion (in topological order, reverse-topological order, random
@@ -68,7 +68,7 @@ A local optimization is an object which defines the following methods:
- **transform(node)**
- This method takes an :ref:`apply` node and returns either False to
signify that no changes are to be done or a list of Variables which
matches the length of the node's ``outputs`` list. When the
LocalOptimizer is applied by a Navigator, the outputs of the node
passed as argument to the LocalOptimizer will be replaced by the
@@ -125,7 +125,7 @@ does additional checks to ensure that we are not messing up the
computation graph (note: if ReplaceValidate was already added by
another optimizer, ``extend`` will do nothing). In a nutshell,
``toolbox.ReplaceValidate`` grants access to ``env.replace_validate``
and ``env.replace_validate`` allows us to replace a Variable with
another while respecting certain validation constraints. You can
browse the list of :ref:`features <envfeaturelist>` and see if some of
them might be useful to write optimizations with. For example, as an
@@ -142,7 +142,7 @@ numerator is a multiplication we put the two operands in a and b, so
we can now say that ``z == (a*b)/y``. If ``y==a`` then ``z==b`` and if
``y==b`` then ``z==a``. When either case happens then we can replace z
by either a or b using ``env.replace_validate`` - else we do
nothing. You might want to check the documentation about :ref:`variable`
and :ref:`apply` to get a better understanding of the
pointer-following game you need to get ahold of the nodes of interest
for the simplification (x, y, z, a, b, etc.)
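On a toy expression representation, the pointer-following just described might look like this; tuples stand in for Apply nodes, and this is not the real Env API:

```python
def transform(node):
    """LocalOptimizer-style transform for z = (a*b)/y: return the
    replacement outputs, or False when nothing is to be done."""
    if not (isinstance(node, tuple) and node[0] == "div"):
        return False
    numerator, y = node[1], node[2]
    if isinstance(numerator, tuple) and numerator[0] == "mul":
        a, b = numerator[1], numerator[2]
        if y == a:          # (a*b)/a -> b
            return [b]
        if y == b:          # (a*b)/b -> a
            return [a]
    return False

transform(("div", ("mul", "a", "b"), "a"))   # -> ["b"]
```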
@@ -12,7 +12,7 @@ analogue in Python:
Theano Python
=============== ===========================================================
Apply function application / function call
Variable function data / variable
Op operations carried out in computation / function definition
Type data types
Module ??? class?
@@ -40,7 +40,7 @@ Theano provides some generic Op classes which allow you to generate a
lot of ops at a lesser effort. For instance, Elemwise can be used to
make :term:`elementwise` operations easily whereas DimShuffle can be
used to make transpose-like transformations. These higher order Ops
are mostly Tensor-related, as this is Theano's specialty. An exposé of
them can therefore be found in :ref:`tensoroptools`.
@@ -25,9 +25,9 @@ array(28.4)
Let's break this down into several steps. The first step is to define
two symbols, or Variables, representing the quantities that you want
to add. Note that from now on, we will use the term :term:`Variable`
to mean "symbol" (in other words, ``x``, ``y``, ``z`` are all Variable
objects). The output of the function ``f`` is a :api:`numpy.ndarray`
with zero dimensions.
@@ -50,16 +50,16 @@ is the type we assign to "0-dimensional arrays (`scalar`) of doubles
``dscalar`` is not a class. Therefore, neither ``x`` nor ``y``
are actually instances of ``dscalar``. They are instances of
:api:`TensorVariable <theano.tensor.basic.TensorVariable>`. ``x`` and ``y``
are, however, assigned the theano Type ``dscalar`` in their ``type``
field, as you can see here:
>>> type(x)
<class 'theano.tensor.basic.TensorVariable'>
>>> x.type
TensorType(float64, scalar)
>>> T.dscalar
TensorType(float64, scalar)
>>> x.type == T.dscalar
True
@@ -67,7 +67,7 @@ You can learn more about the structures in Theano in
the :ref:`advtutorial` and in :ref:`graphstructures`.
By calling ``T.dscalar`` with a string argument, you create a
:term:`Variable` representing a floating-point scalar quantity with the
given name. If you provide no argument, the symbol will be unnamed. Names
are not required, but they can aid debugging.
@@ -79,7 +79,7 @@ The second step is to combine ``x`` and ``y`` into their sum ``z``:
>>> z = x + y
``z`` is yet another :term:`Variable` which represents the addition of
``x`` and ``y``. You can use the :api:`pp <theano.printing.pp>`
function to pretty-print out the computation associated to ``z``.
@@ -95,9 +95,9 @@ and giving ``z`` as output:
>>> f = function([x, y], z)
The first argument to ``function`` is a list of :term:`Variables <Variable>`
that will be provided as inputs to the function. The second argument
is a single Variable *or* a list of Variables. For either case, the second
argument is what we want to see as output when we apply the function.
``f`` may then be used like a normal Python function.
@@ -122,7 +122,7 @@ our new function on 2D arrays:
array([[ 11., 22.],
[ 33., 44.]])
The result is a numpy array. We can also use numpy arrays directly as
inputs:
>>> import numpy
@@ -67,7 +67,7 @@ squared difference between two matrices ``x`` and ``y`` at the same time:
>>> diff_squared = diff**2
>>> f = function([x, y], [diff, abs_diff, diff_squared])
When we use the function, it will return the three results (the printing
was reformatted for readability):
>>> f([[1, 1], [1, 1]], [[0, 1], [2, 3]])
@@ -136,7 +136,7 @@ with respect to the second. In this way, Theano can be used for
.. note::
The result of ``T.grad`` has the same dimensions as the
second argument. This is exactly like the first derivative if the
first argument is a scalar or a tensor of size 1 but not if it is
larger. For more information on the semantics when the first
@@ -205,11 +205,11 @@ First let's define the accumulator function:
The first argument is a pair. As we saw in the previous section, this
means that ``inc`` is an input with a default value of 1. The second
argument has syntax that creates an internal state. The syntax is
``((state_variable, new_state_variable), initial_value)``.
The internal storage associated with ``state_variable`` is initialized to
``initial_value``. Every time ``accumulator`` is called, the value
of the internal ``state`` will be replaced by the value computed as
``new_state``. In this case, the state will be replaced by the result
of incrementing it by ``inc``.
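The state-replacement semantics can be mimicked with an ordinary Python closure; this is a behavioural sketch only, whereas Theano compiles the update into the function itself:

```python
def make_accumulator(initial_value=0):
    state = [initial_value]          # plays the role of the internal storage

    def accumulator(inc=1):
        state[0] = state[0] + inc    # the "new_state" replaces the state
        return state[0]

    return accumulator

acc = make_accumulator()
acc(5)   # state is now 5
acc()    # inc defaults to 1; state is now 6
```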
We recommend (insist?) that internal state arguments occur after any
@@ -223,8 +223,8 @@ of other inputs.
Anyway, let's try it out! The state can be accessed using the square
brackets notation ``[]``. You may access the state either by using
the :ref:`variable` representing it or the name of that
:ref:`variable`. In our example we can access the state either with the
``state`` object or the string 'state'.
>>> accumulator[state]
@@ -26,17 +26,17 @@ The idea here is that we've compiled the symbolic graph (``2*x``) into a function
Inputs
======
The ``inputs`` argument to ``theano.function`` is a list, containing the ``Variable`` instances for which values will be specified at the time of the function call. But inputs can be more than just Variables.
``In`` instances let us attach properties to ``Variables`` to tell function more about how to use them.
**In(variable, name=None, value=None, update=None, mutable=False)** returns an ``In`` instance:
- ``variable``: a Variable instance.
This will be assigned a value before running the function,
not computed from its owner.
- ``name``: Any type. (If autoname_input=True, defaults to variable.name).
If name is a valid Python identifier, this input can be set by
``kwarg``, and its value can be accessed by ``self.<name>``.
@@ -49,9 +49,9 @@ The ``inputs`` argument to ``theano.function`` is a list, containing the ``Resul
Default: ``None``
- ``update``: Variable instance
This expression Variable will replace ``value`` after each function call.
Default: ``None``
@@ -63,7 +63,7 @@ The ``inputs`` argument to ``theano.function`` is a list, containing the ``Resul
- ``autoname``: Bool
``True``: if ``name`` is None and the Variable has a name, it will be taken
as the input's name.
``False``: the name is the exact value passed as the name parameter
@@ -121,7 +121,7 @@ Advanced: Sharing Storage Between Functions
-------------------------------------------
``value`` can be a :api:`theano.gof.Container` as well as a literal.
This permits linking a value of a Variable in one function to the value of a Variable in another function.
By using a ``Container`` as a value we can implement shared variables between functions.
For example, consider the following program.
@@ -141,7 +141,7 @@ For example, consider the following program.
The functions ``inc`` and ``dec`` operate on a shared internal value for ``s``.
Theano's Module system uses this mechanism to share storage between Methods.
The container being shared doesn't have to correspond to the same Variable in both functions,
but that's usually how this mechanism is used.
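A behavioural sketch of two functions sharing one storage cell; the ``Container`` class here is a stand-in, not ``theano.gof.Container``:

```python
class Container:
    """Minimal shared storage cell."""
    def __init__(self, value):
        self.value = value

s = Container(0)

def inc(amount=1):
    s.value += amount       # both functions read and write the same cell
    return s.value

def dec(amount=1):
    s.value -= amount
    return s.value

inc(3)   # s.value == 3
dec(1)   # s.value == 2
```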
Input Argument Restrictions
@@ -161,11 +161,11 @@ The following restrictions apply to the inputs to ``theano.function``:
have the same name, then the function will raise an exception. [**Which
exception?**]
- Two ``In`` instances may not name the same Variable. I.e. you cannot
give the same parameter multiple times.
If no name is specified explicitly for an In instance, then its name
will be taken from the Variable's name. Note that this feature can cause
harmless-looking input lists to not satisfy the two conditions above.
In such cases, Inputs should be named explicitly to avoid problems
such as duplicate names, and named arguments preceding unnamed ones.
@@ -198,7 +198,7 @@ Both ``value`` and ``container`` properties provide dictionary-like access based
- integer keys: you can look up a value/container by its position in the input list;
- name keys: you can look up a value/container by its name;
- Variable keys: you can look up a value/container by the Variable it corresponds to.
In addition to these access mechanisms, there is an even more convenient
method to access values by indexing a Function directly by typing
@@ -234,7 +234,7 @@ Input Shortcuts
Every element of the inputs list will be upgraded to an In instance if necessary.
- a Variable instance ``r`` will be upgraded like ``In(r)``
- a tuple ``(name, r)`` will be ``In(r, name=name)``
......@@ -285,13 +285,13 @@ Outputs
The ``outputs`` argument to function can be one of
- ``None``, or
- a Result or ``Out`` instance, or
- a list of Results or ``Out`` instances.
- a Variable or ``Out`` instance, or
- a list of Variables or ``Out`` instances.
An ``Out`` instance is a structure that lets us attach options to individual output ``Result`` instances,
similarly to how ``In`` lets us attach options to individual input ``Result`` instances.
An ``Out`` instance is a structure that lets us attach options to individual output ``Variable`` instances,
similarly to how ``In`` lets us attach options to individual input ``Variable`` instances.
**Out(result, borrow=False)** returns an ``Out`` instance:
**Out(variable, borrow=False)** returns an ``Out`` instance:
* ``borrow``
......@@ -304,9 +304,9 @@ similarly to how ``In`` lets us attach options to individual input ``Result`` in
If a single ``Result`` or ``Out`` instance is given as argument, then the compiled function will return a single value.
If a single ``Variable`` or ``Out`` instance is given as argument, then the compiled function will return a single value.
If a list of ``Result`` or ``Out`` instances is given as argument, then the compiled function will return a list of their values.
If a list of ``Variable`` or ``Out`` instances is given as argument, then the compiled function will return a list of their values.
.. code-block:: python
......
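The single-value versus list return rule can be sketched with a hypothetical helper (not ``theano.function`` itself, whose evaluation step is stood in for by a callback):

```python
def run_function(outputs, compute):
    """Mimic how ``function`` shapes its return value.

    outputs: a single output spec, or a list of them.
    compute: maps one output spec to its value (a stand-in for
             evaluating the compiled graph).
    """
    if isinstance(outputs, list):
        return [compute(o) for o in outputs]  # list in, list out
    return compute(outputs)                   # single spec, single value

double = lambda x: 2 * x
single = run_function(3, double)        # a single value, not a list
several = run_function([1, 2], double)  # a list of values
```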
......@@ -44,7 +44,7 @@ Here we instantiate an empty Module.
>>> m.state = Member(T.dscalar())
Then we declare a Result for use with our Module. That Result will
Then we declare a Variable for use with our Module. That Variable will
be a :ref:`member` of the Module, which means that it will be
accessible as a field of the object we will create later (for reading
and writing). It will also be accessible from any :ref:`method`
......@@ -52,7 +52,7 @@ defined in our Module.
.. note::
There is no need to name the Result explicitly here. ``m.state`` will
There is no need to name the Variable explicitly here. ``m.state`` will
be given the name 'state' automatically.
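The automatic naming can be sketched with a hypothetical minimal ``Module``: assignment to a field names the variable after that field, while an explicit name is preserved.

```python
class Var(object):
    """Stand-in for a Theano Variable: just carries a name."""
    def __init__(self, name=None):
        self.name = name

class Module(object):
    """Minimal sketch of Module's field auto-naming."""
    def __setattr__(self, field, value):
        if isinstance(value, Var) and value.name is None:
            value.name = field          # unnamed variable: name it after the field
        object.__setattr__(self, field, value)

m = Module()
m.state = Var()        # picks up the name 'state' automatically
m.other = Var('kept')  # an explicit name is left alone
```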
......@@ -82,7 +82,7 @@ This line describes how to compute the new state.
.. note::
Here new_state is implicitly declared as External since it is
illegal to declare a Result as a Member if it is the result of
illegal to declare a Variable as a Member if it is the result of
previous computations.
......@@ -90,10 +90,10 @@ This line describes how to compute the new state.
Here we declare a Method. The three arguments are as follow:
* **inputs**: a list of input Results
* **outputs**: a list of output Results
* **updates**: a dictionary mapping a Result declared as a Member to a
Result representing the computation of the next state of the member.
* **inputs**: a list of input Variables
* **outputs**: a list of output Variables
* **updates**: a dictionary mapping a Variable declared as a Member to a
Variable representing the computation of the next state of the member.
If possible, you may also give the updates as keyword arguments, as
in: ``Method(m.inc, m.new_state, state = m.new_state)``. This implies
......@@ -206,7 +206,7 @@ give a method called ``_instance_print_state`` to our Module.
acc.print_state() # --> prints "state is: 0.0"
Any method called like ``_instance_XXX`` will result in the object
Any method called like ``_instance_XXX`` will result in the object
obtained through a call to ``make`` to gain an ``XXX`` method. Note
that when we define ``_instance_print_state`` there are two "self"
arguments: ``self`` which is *symbolic* and ``obj`` which contains
......
......@@ -60,23 +60,23 @@ as ``theano.tensor.frow``. If you want a matrix of unsigned
Each of the types described above can be constructed by two methods:
a singular version (e.g., ``dmatrix``) and a plural version
(``dmatrices``). When called, the singular version takes a single
argument which is the name of the :term:`Result` we want to make and it
makes a single Result of that type. The plural version can either take
argument which is the name of the :term:`Variable` we want to make and it
makes a single Variable of that type. The plural version can either take
an integer or several strings. If an integer is provided, the method
will return that many Results and if strings are provided, it will
create one Result for each string, using the string as the Result's
will return that many Variables and if strings are provided, it will
create one Variable for each string, using the string as the Variable's
name. For example:
.. code-block:: python
from theano.tensor import *
x = dmatrix() # creates one Result with no name
x = dmatrix('x') # creates one Result with name 'x'
xyz = dmatrix('xyz') # creates one Result with name 'xyz'
x = dmatrix() # creates one Variable with no name
x = dmatrix('x') # creates one Variable with name 'x'
xyz = dmatrix('xyz') # creates one Variable with name 'xyz'
x, y, z = dmatrices(3) # creates three Results with no names
x, y, z = dmatrices('x', 'y', 'z') # creates three Results named 'x', 'y' and 'z'
x, y, z = dmatrices(3) # creates three Variables with no names
x, y, z = dmatrices('x', 'y', 'z') # creates three Variables named 'x', 'y' and 'z'
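The singular/plural dispatch can be sketched in plain Python (an illustrative stand-in, not Theano's implementation): the plural form takes either a count or one name per variable.

```python
class Var(object):
    """Stand-in for a Variable: just carries a name."""
    def __init__(self, name=None):
        self.name = name

def dmatrix(name=None):
    return Var(name)

def dmatrices(*args):
    """Plural constructor: an int gives that many unnamed Variables,
    strings give one named Variable each."""
    if len(args) == 1 and isinstance(args[0], int):
        return [dmatrix() for _ in range(args[0])]
    return [dmatrix(name) for name in args]

a, b, c = dmatrices(3)              # three unnamed variables
x, y, z = dmatrices('x', 'y', 'z')  # three named variables
```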
Custom tensor types
......@@ -84,7 +84,7 @@ Custom tensor types
If you wish to use a type of tensor which is not already available here
(for example, a 3D tensor) you can build an appropriate type using
``theano.tensor.NDArrayType``. The first argument you pass is the ``dtype``
``theano.tensor.TensorType``. The first argument you pass is the ``dtype``
and the second is the ``broadcastable pattern``.
Where ``dtype`` is one of:
......@@ -110,7 +110,7 @@ complex128 complex 128 (two float64)
Even though ``theano.tensor`` does not define any type
using ``complex`` dtypes (``complex64`` or ``complex128``),
you can define them explicitly with ``NDArrayType`` (see example
you can define them explicitly with ``TensorType`` (see example
below). However, few operations are fully supported for complex
types: as of version 0.1, only elementary operations (``+-*/``)
have C implementations. Additionally, complex types have received
......@@ -154,11 +154,11 @@ bytes, we would do:
.. code-block:: python
# 3D tensor of unsigned bytes
mytype = theano.tensor.NDArrayType('uint8', [False]*3)
mytype = theano.tensor.TensorType('uint8', [False]*3)
# complex types (based on complex64)
my_cscalar = theano.tensor.NDArrayType('complex64', [])
my_cmatrix = theano.tensor.NDArrayType('complex64', [False, False])
my_cscalar = theano.tensor.TensorType('complex64', [])
my_cmatrix = theano.tensor.TensorType('complex64', [False, False])
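The two constructor arguments can be mirrored by a minimal stand-in class (hypothetical, not Theano's ``TensorType``): the broadcastable pattern carries one boolean flag per dimension, so its length fixes the number of dimensions.

```python
class TensorType(object):
    """Minimal stand-in: a dtype plus a broadcastable pattern.

    The pattern has one boolean per dimension (True where the
    dimension is guaranteed to have size 1, hence broadcastable),
    so its length is the number of dimensions."""
    def __init__(self, dtype, broadcastable):
        self.dtype = dtype
        self.broadcastable = tuple(broadcastable)
        self.ndim = len(self.broadcastable)

mytype = TensorType('uint8', [False] * 3)  # 3D tensor of unsigned bytes
my_cscalar = TensorType('complex64', [])   # 0-dimensional (a scalar)
```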
Ops
......
(diff collapsed)
......@@ -43,7 +43,7 @@ Extending Theano
================
- Read about `How Theano Works <UserAdvanced.html>`__. This introduces the
major interface data structures: Op, Type, Result, Apply.
major interface data structures: Op, Type, Variable, Apply.
- Read about `Extending theano <extending.html>`__.
......
......@@ -18,7 +18,7 @@ Examples of parameterized Ops in theano:
``Reduce(<scalar op>, <axes>)``
reduces the specified axes using the provided scalar op.
``Add(<output type inferrer>)``
adds scalars and puts the result in a scalar whose type is inferred from the input types using ``output_type_inferrer(*inputs)``
adds scalars and puts the result in a scalar whose type is inferred from the input types using ``output_type_inferrer(*inputs)``
``Composite(<graph>)``
makes a single Op out of a graph of scalar operations.
......@@ -46,14 +46,14 @@ The ``make_node`` method is expected to have the following signature:
make_node(self, *inputs)
``inputs`` may be a list of anything that the user wants to provide as symbolic input (symbolic: standing for the actual values that will be passed when the graph is compiled into an executable function). [*The Theano intro should describe symbolic in greater depth, and we should link to that from here.*] This may or may not include Result instances (but if you want the inputs of this Op to sometimes be outputs of another Op, then the inputs should be Result instances). [*What else could they be? Constant, Values, ...*] The return value should be an instance of [GraphStructures Apply] (see the example below). Here are the tasks typically handled in ``make_node``.
``inputs`` may be a list of anything that the user wants to provide as symbolic input (symbolic: standing for the actual values that will be passed when the graph is compiled into an executable function). [*The Theano intro should describe symbolic in greater depth, and we should link to that from here.*] This may or may not include Variable instances (but if you want the inputs of this Op to sometimes be outputs of another Op, then the inputs should be Variable instances). [*What else could they be? Constant, Values, ...*] The return value should be an instance of [GraphStructures Apply] (see the example below). Here are the tasks typically handled in ``make_node``.
* Check that the inputs are valid (type checking, etc.). [*Since we don't actually have values, what can we do besides type checking?*]
* If needed, wrap the inputs in Result instances with the proper type.
* Make the Result instances that will serve as the outputs of the node.
* If needed, wrap the inputs in Variable instances with the proper type.
* Make the Variable instances that will serve as the outputs of the node.
* ``return Apply(self, <wrapped inputs>, <outputs>)``
The ``inputs`` and ``outputs`` arguments to ``Apply`` must be lists of ``Result`` instances (or instances of subclasses of ``Result``). The inputs given to ``Apply`` do not have to be the same as the inputs passed to ``make_node``, but it is recommended that the order corresponds. [*why?*] The behavior of ``make_node`` should not depend on the structure of the graph of [*or?*] its inputs: it may look at the type and type fields of its inputs, but not at their owner field, because modifications to the graph structure do not use ``make_node``. [*???*]
The ``inputs`` and ``outputs`` arguments to ``Apply`` must be lists of ``Variable`` instances (or instances of subclasses of ``Variable``). The inputs given to ``Apply`` do not have to be the same as the inputs passed to ``make_node``, but it is recommended that the order corresponds. [*why?*] The behavior of ``make_node`` should not depend on the structure of the graph of [*or?*] its inputs: it may look at the type and type fields of its inputs, but not at their owner field, because modifications to the graph structure do not use ``make_node``. [*???*]
Example:
......@@ -66,14 +66,14 @@ Example:
def make_node(self, x, y):
# note 1: constant, int64 and Scalar are defined in theano.scalar
# note 2: constant(x) is equivalent to Constant(type = int64, data = x)
# note 3: the call int64() is equivalent to Result(type = int64) or Result(type = Scalar(dtype = 'int64'))
# note 3: the call int64() is equivalent to Variable(type = int64) or Variable(type = Scalar(dtype = 'int64'))
if isinstance(x, int):
x = constant(x)
elif not isinstance(x, Result) or not x.type == int64:
elif not isinstance(x, Variable) or not x.type == int64:
raise TypeError("expected an int64 Scalar")
if isinstance(y, int):
y = constant(y)
elif not isinstance(y, Result) or not x.type == int64:
elif not isinstance(y, Variable) or not x.type == int64:
raise TypeError("expected an int64 Scalar")
inputs = [x, y]
outputs = [int64()]
......@@ -82,12 +82,12 @@ Example:
#...
add = Add() # I make an instance of Add
node1 = add.make_node(int64(), int64()) # I make a node with two Result inputs
node1 = add.make_node(int64(), int64()) # I make a node with two Variable inputs
node2 = add.make_node(1, 2) # this works too
node3 = add.make_node(int64(), 79) # this works three
node4 = add.make_node(float64(), int64()) # this raises a TypeError
[*What type is an instance of Add? It's an Apply? But that's not a Result, and cannot be used as input for another Op.*]
[*What type is an instance of Add? It's an Apply? But that's not a Variable, and cannot be used as input for another Op.*]
Two Apply nodes ``node1`` and ``node2`` are *assumed* by the compiler to represent the same behavior if:
1. ``node1.op == node2.op``
......@@ -99,7 +99,7 @@ It is considered an *error* to have conditions 1 and 2 but not condition 3. A co
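The op-equality contract behind these conditions can be sketched in plain Python (illustrative names; a real Op with parameters must also fold those parameters into ``__eq__`` and ``__hash__``):

```python
class Add(object):
    """Ops that compare equal must hash equal and behave identically.
    This parameterless sketch compares by type alone."""
    def __eq__(self, other):
        return type(self) == type(other)
    def __hash__(self):
        return hash(type(self))

# Two separately constructed instances are interchangeable:
node_ops_equal = Add() == Add()
hashes_agree = hash(Add()) == hash(Add())
```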
``__call__``
----------------
In ``Op``, ``__call__`` is defined in terms of ``make_node``. Instead of returning a node, it returns the output Results directly, which is practical from a UI standpoint. Here is pseudocode:
In ``Op``, ``__call__`` is defined in terms of ``make_node``. Instead of returning a node, it returns the output Variables directly, which is practical from a UI standpoint. Here is pseudocode:
.. code-block:: python
......@@ -122,7 +122,7 @@ perform(self, node, inputs, output_storage)
Where:
* *node*: a pointer to an Apply instance - ``node`` is assumed to be produced by a previous call to ``self.make_node``.
* *inputs*: *not* the same as ``node.inputs`` - it is a list of values. [*i.e. actually data, not just symbolic stuff?*]
* *output_storage*: *not* the same as ``node.outputs`` - it is a list of lists of length 1 where the results of the computation must be put.
* *output_storage*: *not* the same as ``node.outputs`` - it is a list of lists of length 1 where the results of the computation must be put.
[*Can you explain better how inputs is not node.inputs and output_storage is not node.outputs?*]
......@@ -138,7 +138,7 @@ Here is an example of a properly defined ``perform``:
# this does z = x + y
x, y = inputs # extract the two inputs
z, = output_storage # extract the one storage (the comma after z is not optional)
z[0] = x + y # we must put the result in z[0]
z[0] = x + y # we must put the result in z[0]
...
add = Add() # I make an instance of Add
......@@ -175,8 +175,8 @@ grad
where:
* ``inputs`` is a list of Result instances. It is assumed to be the ``inputs`` field of a node produced by ``make_node``.
* ``output_gradients`` is a list of Result instances. They have the same properties as the outputs of the node, but are filled with gradient values.
* ``inputs`` is a list of Variable instances. It is assumed to be the ``inputs`` field of a node produced by ``make_node``.
* ``output_gradients`` is a list of Variable instances. They have the same properties as the outputs of the node, but are filled with gradient values.
Essentially, the semantics are:
......@@ -192,7 +192,7 @@ Essentially, the semantics are:
return gz*dz/dx + gw*dw/dx, gz*dz/dy + gw*dw/dy
More specifically,
``grad`` must return a list or tuple of input gradients, as many as there are inputs. Let C be a Result (currently assumed to be a scalar) that depends through a theano symbolic expression on the node outputs. Then each output_gradients[i] represents symbolically dC/doutputs[i]. The returned input gradients should represent symbolically dC/dinputs[i].
``grad`` must return a list or tuple of input gradients, as many as there are inputs. Let C be a Variable (currently assumed to be a scalar) that depends through a theano symbolic expression on the node outputs. Then each output_gradients[i] represents symbolically dC/doutputs[i]. The returned input gradients should represent symbolically dC/dinputs[i].
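That contract can be exercised numerically; a runnable sketch (hypothetical values, plain floats in place of symbolic Variables) for an op computing z = x*y and w = x + y:

```python
def op_outputs(x, y):
    return x * y, x + y          # z = x*y, w = x+y

def op_grad(inputs, output_gradients):
    """grad for the two-output op above: returns (dC/dx, dC/dy)."""
    x, y = inputs
    gz, gw = output_gradients    # dC/dz, dC/dw
    # dz/dx = y, dw/dx = 1 ; dz/dy = x, dw/dy = 1
    return gz * y + gw * 1, gz * x + gw * 1

# With C = z + 2*w we have dC/dz = 1 and dC/dw = 2:
gx, gy = op_grad((3.0, 4.0), (1.0, 2.0))
# dC/dx = y + 2 = 6.0 and dC/dy = x + 2 = 5.0
```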
Example:
......@@ -253,7 +253,7 @@ Example: if we expect to call the op repeatedly on incrementally bigger inputs,
"""
default_output = 0
def make_node(self, x, y):
return Apply(self, [x,y], [x.type.make_result(), x.type.make_result()])
return Apply(self, [x,y], [x.type.make_variable(), x.type.make_variable()])
def perform(self, node, (x, y), (z, stor)):
if z[0] is None or stor[0] is None:
......
......@@ -50,12 +50,12 @@ Usage:
.. code-block:: python
#module.state = result
#module.state = variable
module.state = T.scalar()
A ``Member`` represents a state variable (i.e., whose value remains after a ``Method`` is called). It will be named automatically after that field and it will be an implicit input of all ``Methods`` of the ``Module``. Its storage (i.e. where the value is stored) will be shared by all ``Methods`` of the ``Module``.
A ``Result`` which is the result of a previous computation (by opposition to being ``updated``) is not a ``Member``. Internally this is called an External. You should not need to care about this.
A ``Variable`` which is the result of a previous computation (as opposed to being ``updated``) is not a ``Member``. Internally this is called an External. You should not need to care about this.
For sharing state between modules, see ``Inner Module`` section.
......@@ -100,7 +100,7 @@ Module Interface
def resolve(self, symbol, filter = None)
Resolves a symbol in this module. The symbol can be a string or a ``Result``. If the string contains dots (eg ``"x.y"``), the module will resolve the symbol hierarchically in its inner modules. The filter argument is None or a class and it can be used to restrict the search to ``Member`` or ``Method`` instances for example.
Resolves a symbol in this module. The symbol can be a string or a ``Variable``. If the string contains dots (eg ``"x.y"``), the module will resolve the symbol hierarchically in its inner modules. The filter argument is None or a class and it can be used to restrict the search to ``Member`` or ``Method`` instances for example.
.. code-block:: python
......
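The hierarchical lookup can be sketched with a hypothetical minimal module (the real ``resolve`` also accepts ``Variable`` keys and a filter class):

```python
class MiniModule(object):
    """Sketch of dotted-symbol resolution across inner modules."""
    def __init__(self, **fields):
        self.fields = fields

    def resolve(self, symbol):
        head, dot, rest = symbol.partition('.')
        target = self.fields[head]
        if dot:                      # "x.y": recurse into the inner module
            return target.resolve(rest)
        return target

inner = MiniModule(y='inner-y')
outer = MiniModule(x=inner, z='outer-z')
found = outer.resolve('x.y')  # hierarchical lookup through inner module
```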
......@@ -30,14 +30,14 @@ you compute the gradient, then there is no problem.
If an Op does not define ``grad``, and this Op *does* appear in the path when
you compute the gradient, **WRITEME**.
Gradients for a particular result can be one of four kinds:
Gradients for a particular variable can be one of four kinds:
1) forgot to implement it
You will get an exception of the following form.
theano.gof.utils.MethodNotDefined: ('grad', <class 'pylearn.algorithms.sandbox.cost.LogFactorial'>, 'LogFactorial')
2) a symbolic result
2) a symbolic variable
3) None / zero
4) undefined mathematically
currently, there is no way for a grad() method to distinguish between cases 3
......@@ -123,7 +123,7 @@ Guillaume can you make sure to hit these points:
* There are a lot of tests that define their own epsilon, but this should be standardized. e.g. in test_elemwise.py ``self.failUnless((numpy.abs(f(xv) - zv) < 1e-10).all())``
* If the expected result of a test is that an Exception is thrown, how do we correctly detect and handle that?
* If the expected result of a test is that an Exception is thrown, how do we correctly detect and handle that?
nosetests has ``failUnlessRaises``
......
......@@ -2,7 +2,7 @@
.. _tensoroptools:
================
NDArray Op Tools
Tensor Op Tools
================
WRITEME - describe how to use Elemwise here
......
......@@ -2,7 +2,7 @@
Theano is an optimizing compiler in Python, built to evaluate complicated expressions
(especially matrix-valued ones) as quickly as possible.
Theano compiles expression graphs (see :doc:`graph` ) that are built by Python code.
The expressions in these graphs are called `Apply` nodes and the variables in these graphs are called `Result` nodes.
The expressions in these graphs are called `Apply` nodes and the variables in these graphs are called `Variable` nodes.
You compile a graph by calling `function`, which takes a graph, and returns a callable object.
One of theano's most important features is that `function` can transform your graph before
......@@ -29,7 +29,7 @@ from gof import \
CLinker, OpWiseCLinker, DualLinker, Linker, LocalLinker, PerformLinker, \
Container, \
InconsistencyError, Env, \
Apply, Result, Constant, Value, \
Apply, Variable, Constant, Value, \
Op, \
opt, \
toolbox, \
......
......@@ -6,8 +6,8 @@ from function_module import function
class OpFromGraph(gof.Op):
"""
This create an L{Op} from a list of input results and a list of output
results.
This creates an L{Op} from a list of input variables and a list of output
variables.
The signature is the same as the signature of L{FunctionFactory}
and/or function and the resulting L{Op}'s perform will do the same
......@@ -62,9 +62,9 @@ class OpFromGraph(gof.Op):
[type() for type in self.output_types])
def perform(self, node, inputs, outputs):
results = self.fn(*inputs)
for output, result in zip(outputs, results):
output[0] = result
variables = self.fn(*inputs)
for output, variable in zip(outputs, variables):
output[0] = variable
def grad(self, inputs, output_grads):
if hasattr(self, 'grad_ops'):
......
......@@ -5,16 +5,16 @@ class SymbolicInput(object):
"""
Represents a symbolic input for use with function or FunctionMaker.
result: a Result instance.
variable: a Variable instance.
This will be assigned a value before running the function,
not computed from its owner.
name: Any type. (If autoname=True, defaults to result.name).
name: Any type. (If autoname=True, defaults to variable.name).
If name is a valid Python identifier, this input can be set by kwarg, and its value
can be accessed by self.<name>.
update: Result instance (default: None)
value (see previous) will be replaced with this expression result after each function call.
update: Variable instance (default: None)
value (see previous) will be replaced with this expression variable after each function call.
If update is None, the update will be the default value of the input.
mutable: Bool (default: False if update is None, True if update is not None)
......@@ -29,9 +29,9 @@ class SymbolicInput(object):
See the name option.
"""
def __init__(self, result, name=None, update=None, mutable=None, strict=False, autoname=True):
self.result = result
self.name = result.name if (autoname and name is None) else name
def __init__(self, variable, name=None, update=None, mutable=None, strict=False, autoname=True):
self.variable = variable
self.name = variable.name if (autoname and name is None) else name
if self.name is not None and not isinstance(self.name, str):
raise TypeError("name must be a string! (got: %s)" % self.name)
self.update = update
......@@ -40,9 +40,9 @@ class SymbolicInput(object):
def __str__(self):
if self.update:
return "In(%s -> %s)" % (self.result, self.update)
return "In(%s -> %s)" % (self.variable, self.update)
else:
return "In(%s)" % self.result
return "In(%s)" % self.variable
def __repr__(self):
return str(self)
......@@ -64,7 +64,7 @@ class SymbolicInputKit(object):
raise TypeError('name must be a string (got: %s)' % name)
self.name = name
self.sinputs = []
self.results = []
self.variables = []
def add_input(self, sinput):
"""
......@@ -72,7 +72,7 @@ class SymbolicInputKit(object):
next available index.
"""
self.sinputs.append(sinput)
self.results.append(sinput.result)
self.variables.append(sinput.variable)
def distribute(self, value, indices, containers):
"""
......@@ -84,10 +84,10 @@ class SymbolicInputKit(object):
def complete(self, inputs):
"""
Given inputs (a list of Result instances), checks through all
Given inputs (a list of Variable instances), checks through all
the SymbolicInputs in the kit and returns a sorted list of
indices and a list of their corresponding SymbolicInputs such
that each of them represents some result in the inputs list.
that each of them represents some variable in the inputs list.
Not all the provided inputs will have a corresponding
SymbolicInput in the kit.
......@@ -95,7 +95,7 @@ class SymbolicInputKit(object):
ret = []
for input in inputs:
try:
i = self.results.index(input)
i = self.variables.index(input)
ret.append((i, self.sinputs[i]))
except ValueError:
pass
......@@ -109,11 +109,11 @@ class In(SymbolicInput):
"""
Represents a symbolic input for use with function or FunctionMaker.
result: a Result instance.
variable: a Variable instance.
This will be assigned a value before running the function,
not computed from its owner.
name: Any type. (If autoname=True, defaults to result.name).
name: Any type. (If autoname=True, defaults to variable.name).
If name is a valid Python identifier, this input can be set by kwarg, and its value
can be accessed by self.<name>.
......@@ -122,8 +122,8 @@ class In(SymbolicInput):
an argument with a default value in Python. If update is not None, changes to this
value will "stick around", whether due to an update or a user's explicit action.
update: Result instance (default: None)
value (see previous) will be replaced with this expression result after each function call.
update: Variable instance (default: None)
value (see previous) will be replaced with this expression variable after each function call.
If update is None, the update will be the default value of the input.
mutable: Bool (default: False if update is None, True if update is not None)
......@@ -137,8 +137,8 @@ class In(SymbolicInput):
autoname: Bool (default: True)
See the name option.
"""
def __init__(self, result, name=None, value=None, update=None, mutable=None, strict=False, autoname=True):
super(In, self).__init__(result, name, update, mutable, strict, autoname)
def __init__(self, variable, name=None, value=None, update=None, mutable=None, strict=False, autoname=True):
super(In, self).__init__(variable, name, update, mutable, strict, autoname)
self.value = value
......@@ -152,12 +152,12 @@ class SymbolicOutput(object):
the function again, but the function might be faster.
"""
def __init__(self, result, borrow=False):
self.result = result
def __init__(self, variable, borrow=False):
self.variable = variable
self.borrow = borrow
def __str__(self):
return "Out(%s)" % self.result
return "Out(%s)" % self.variable
Out = SymbolicOutput
......
(diff collapsed)
......@@ -22,7 +22,7 @@ class BROKEN_ON_PURPOSE_StructuredDotCSC(gof.Op):
def __hash__(self):
return 29834 ^ hash(type(self)) ^ hash(self.py_offset)
def make_node(self, a_val, a_ind, a_ptr, a_nrows, b):
a_nrows = theano.tensor.as_ndarray_result(a_nrows)
a_nrows = theano.tensor.as_tensor_variable(a_nrows)
assert a_val.type.dtype == b.type.dtype
r = gof.Apply(self, [a_val, a_ind, a_ptr, a_nrows, b],
[theano.tensor.tensor(a_val.type.dtype, (False, False))])
......
......@@ -18,7 +18,7 @@ class StochasticGradientDescent(module.FancyModule):
def __init__(self, args, cost, params, gradients=None, stepsize=None, WEIRD_STUFF=True):
"""
:param stepsize: the step to take in (negative) gradient direction
:type stepsize: None, scalar value, or scalar NDArrayResult
:type stepsize: None, scalar value, or scalar TensorVariable
"""
super(StochasticGradientDescent, self).__init__()
self.WEIRD_STUFF = WEIRD_STUFF
......@@ -26,7 +26,7 @@ class StochasticGradientDescent(module.FancyModule):
if stepsize is None:
self.stepsize = (T.dscalar())
elif isinstance(stepsize, T.NDArrayResult):
elif isinstance(stepsize, T.TensorVariable):
self.stepsize = stepsize
else:
if self.WEIRD_STUFF:
......@@ -89,10 +89,10 @@ class TanhRnn(Op):
:type A: matrix (M by M)
"""
x = T.as_ndarray_result(x)
z0 = T.as_ndarray_result(z0)
A = T.as_ndarray_result(A)
z = x.type() #make a new symbolic result with the same type as x
x = T.as_tensor_variable(x)
z0 = T.as_tensor_variable(z0)
A = T.as_tensor_variable(A)
z = x.type() #make a new symbolic variable with the same type as x
return Apply(self, [x, z0, A], [z])
def perform(self, node, (x,z0,A), out):
......
......@@ -43,9 +43,9 @@ class T_module(unittest.TestCase):
m1.x=x()
m1.y=y()
m1.emtpylist = []
m1.lx=[x()]#cast Result]
m1.lx=[x()]#cast Variable]
m1.ly=[y()]
m1.llx=[[x()]]#cast Result]
m1.llx=[[x()]]#cast Variable]
m1.lly=[[y()]]
m1.ltx=[(x(),)]
m1.lty=[(y(),)]
......@@ -68,8 +68,8 @@ class T_module(unittest.TestCase):
m1.ddx={"x":{"x":x()}}
m1.ddy={"y":{"y":y()}}
assert isinstance(m1.x,(gof.Result))
assert isinstance(m1.y,(gof.Result))
assert isinstance(m1.x,(gof.Variable))
assert isinstance(m1.y,(gof.Variable))
for i, obj in enumerate([
m1.lx[0], #0
m1.llx[0][0],
......@@ -86,7 +86,7 @@ class T_module(unittest.TestCase):
m1.dy['y'], m1.dlx['x'][0], m1.dly['y'][0],
m1.dtx['x'][0], m1.dty['y'][0], m1.ddx['x']['x'],
m1.ddy['y']['y']]):
assert isinstance(obj,(gof.Result))
assert isinstance(obj,(gof.Variable))
inst=m1.make()
......@@ -136,7 +136,7 @@ class T_module(unittest.TestCase):
def local_test(x,y):
m1=Module()
#create a list with some results in it
#create a list with some variables in it
m1.l=[x(), y()]
# create a Method that makes the second list element a shared Member
......@@ -144,7 +144,7 @@ class T_module(unittest.TestCase):
m1.g=Method([], m1.l[0])
m = m1.make()
#assign 4 and 5 to the two results' containers in m
#assign 4 and 5 to the two variables' containers in m
m.l = [4, 5]
print 'm.f', m.f()
assert numpy.all(5 == m.f())
......@@ -164,7 +164,7 @@ class T_module(unittest.TestCase):
m1.f=Method([], m1.l[1])
m = m1.make()
#assign 4 and 5 to the two results' containers in m
#assign 4 and 5 to the two variables' containers in m
m.l = (4, 5)
assert 5 == m.f()
assert 4 == m.g()
......@@ -184,7 +184,7 @@ class T_module(unittest.TestCase):
m1.g=Method([], m1.l['x'])
m = m1.make()
#assign 4 and 5 to the two results' containers in m
#assign 4 and 5 to the two variables' containers in m
m.l = dict(x=4, y=5)
assert 5 == m.f()
assert 4 == m.g()
......@@ -198,7 +198,7 @@ class T_module(unittest.TestCase):
def test_method_in_list_or_dict(self):
"""Test that a Method which is only included via a list or dictionary is still treated as if it
were a toplevel attribute
Fred: why we don't do this of direct fct of results?
Fred: why don't we do this for direct functions of variables?
"""
m1=Module()
x=T.dscalar()
......@@ -255,7 +255,7 @@ class T_module(unittest.TestCase):
assert isinstance(f,theano.compile.function_module.Function)
def test_shared_members(self):
"""Test that under a variety of tricky conditions, the shared-ness of Results and Members
"""Test that under a variety of tricky conditions, the shared-ness of Variables and Members
is respected."""
def populate_module(m,x):
......@@ -352,7 +352,7 @@ class T_module(unittest.TestCase):
assert f==4
def test_shared_method(self):
"""Test that under a variety of tricky conditions, the shared-ness of Results and Methods
"""Test that under a variety of tricky conditions, the shared-ness of Variables and Methods
is respected.
Fred: the test create different method event if they are shared. What do we want?
"""
......@@ -463,7 +463,7 @@ class T_module(unittest.TestCase):
assert numpy.all(v0 != v0_copy)
def test_member_value(self):
"""Test that module Members of Value work correctly. As Result?"""
"""Test that module Members of Value work correctly. As Variable?"""
M = Module()
x = T.dscalar()
M.y = T.value(40)
......@@ -474,7 +474,7 @@ class T_module(unittest.TestCase):
def test_member_constant(self):
"""Test that module Members of Constant work correctly.
As Result with more optimization?"""
As Variable with more optimization?"""
M = Module()
x = T.dscalar()
M.y = T.constant(40)
......@@ -601,7 +601,7 @@ def test_method_updates():
assert numpy.all(xval == [0, 1])
# when a result is listed explicitly and in an update, then there's a problem.
# when a variable is listed explicitly and in an update, then there's a problem.
M = Module()
M.x = T.dvector()
x = T.dvector()
......@@ -611,7 +611,7 @@ def test_method_updates():
m = M.make()
assert False
except ValueError, e:
if str(e[0]).startswith('Result listed in both inputs and up'):
if str(e[0]).startswith('Variable listed in both inputs and up'):
pass
else:
raise
......
......@@ -12,7 +12,7 @@ from destroyhandler import \
DestroyHandler
from graph import \
Apply, Result, Constant, Value, view_roots
Apply, Variable, Constant, Value, view_roots
from link import \
Container, Linker, LocalLinker, PerformLinker, WrapLinker, WrapLinkerMany
......
(diff collapsed)
......@@ -14,7 +14,7 @@ except ImportError:
# The following function takes a PyCObject instance that contains
# a void*->int function in its VoidPtr field. It then calls that
# function on the object's Desc field and returns the int result.
# function on the object's Desc field and returns the int result.
single_runner = """
if (!PyCObject_Check(py_cthunk)) {
PyErr_SetString(PyExc_ValueError,
......
......@@ -42,7 +42,7 @@ class DestroyHandler(object):
def getroot(r, view_i):
"""
For views: Return non-view result which is ultimatly viewed by r.
For views: Return the non-view variable which is ultimately viewed by r.
For non-views: return self.
"""
try:
......@@ -52,10 +52,10 @@ def getroot(r, view_i):
def add_impact(r, view_o, impact):
"""
In opposition to getroot, which finds the result that is viewed *by* r, this function
returns all the results that are views of r.
In contrast to getroot, which finds the variable that is viewed *by* r, this function
returns all the variables that are views of r.
:param impact: is a set of results that are views of r
:param impact: is a set of variables that are views of r
:param droot: a dictionary mapping views -> r
"""
for v in view_o.get(r,[]):
......@@ -94,10 +94,10 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
self.env = env
self.destroyers = set() #set of Apply instances with non-null destroy_map
self.view_i = {} # result -> result
self.view_o = {} # result -> set of results
#clients: how many times does an apply use a given result
self.clients = {} # result -> apply -> ninputs
self.view_i = {} # variable -> variable
self.view_o = {} # variable -> set of variables
#clients: how many times does an apply use a given variable
self.clients = {} # variable -> apply -> ninputs
self.stale_droot = True
self.debug_all_apps = set()
......@@ -111,8 +111,8 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
return self.droot, self.impact, self.root_destroyer
def _build_droot_impact(self):
droot = {} # destroyed view + nonview results -> foundation
impact = {} # destroyed nonview result -> it + all views of it
droot = {} # destroyed view + nonview variables -> foundation
impact = {} # destroyed nonview variable -> it + all views of it
root_destroyer = {} # root -> destroyer apply
for app in self.destroyers:
......@@ -286,7 +286,7 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
"""Return orderings induced by destructive operations.
Raise InconsistencyError when
a) attempting to destroy indestructable result, or
a) attempting to destroy an indestructible variable, or
b) attempting to destroy a value multiple times, or
c) an Apply destroys (illegally) one of its own inputs by aliasing
......@@ -309,23 +309,23 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
isinstance(r, graph.Constant)]
if illegal_destroy:
#print 'destroying illegally'
raise InconsistencyError("Attempting to destroy indestructible results: %s" %
raise InconsistencyError("Attempting to destroy indestructible variables: %s" %
illegal_destroy)
# add destroyed result clients as computational dependencies
# add destroyed variable clients as computational dependencies
for app in self.destroyers:
# for each destroyed input...
for output_idx, input_idx_list in app.op.destroy_map.items():
destroyed_idx = input_idx_list[0]
destroyed_result = app.inputs[destroyed_idx]
root = droot[destroyed_result]
destroyed_variable = app.inputs[destroyed_idx]
root = droot[destroyed_variable]
root_impact = impact[root]
# we generally want to put all clients of things which depend on root
# as pre-requisites of app.
# But, app is itself one such client!
# App will always be a client of the node we're destroying
# (destroyed_result, but the tricky thing is when it is also a client of
# *another result* viewing on the root. Generally this is illegal, (e.g.,
# (destroyed_variable, but the tricky thing is when it is also a client of
# *another variable* viewing on the root. Generally this is illegal, (e.g.,
# add_inplace(x, x.T). In some special cases though, the in-place op will
# actually be able to work properly with multiple destroyed inputs (e.g,
# add_inplace(x, x). An Op that can still work in this case should declare
......@@ -349,7 +349,7 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
#print 'tolerated', tolerated
for i, input in enumerate(app.inputs):
if input in root_impact \
and (i not in tolerated or input is not destroyed_result):
and (i not in tolerated or input is not destroyed_variable):
raise InconsistencyError("Input aliasing: %s (%i, %i)"
% (app, destroyed_idx, i))
......
Diff collapsed.
Diff collapsed.
......@@ -52,14 +52,14 @@ class Linker(object):
def make_thunk(self):
"""
This function must return a triplet (function, input_results, output_results)
where function is a thunk that operates on the returned results. If inplace
is True, the input_results and output_results lists will be the same as the
This function must return a triplet (function, input_variables, output_variables)
where function is a thunk that operates on the returned variables. If inplace
is True, the input_variables and output_variables lists will be the same as the
inputs and outputs of the graph provided to the L{Linker}. Else, independent
results will be returned.
variables will be returned.
Example::
x, y = Result(Double), Result(Double)
x, y = Variable(Double), Variable(Double)
e = x + y
env = Env([x, y], [e])
fn, (new_x, new_y), (new_e, ) = MyLinker(env).make_thunk(inplace)
......@@ -98,13 +98,13 @@ class Linker(object):
% (takes, ['argument','arguments'][takes>1], got)
if (len(args) != len(inputs)):
raise TypeError(e_arity(len(inputs), len(args)))
for arg, result in zip(args, inputs):
result.data = arg
for arg, variable in zip(args, inputs):
variable.data = arg
thunk()
if unpack_single:
return utils.to_return_values([result.data for result in outputs])
return utils.to_return_values([variable.data for variable in outputs])
else:
return [result.data for result in outputs]
return [variable.data for variable in outputs]
execute.thunk = thunk
execute.inputs = inputs
execute.outputs = outputs
......@@ -114,14 +114,14 @@ class Linker(object):
#TODO: Move this class to the compile module, where it is used (and for which it exists).
class Container(object):
"""This class joins a result with its computed value.
"""This class joins a variable with its computed value.
It is used in linkers, especially for the inputs and outputs of a Function.
"""
def __init__(self, r, storage, readonly = False, strict = False, name = None):
"""WRITEME
:Parameters:
`r`: a result
`r`: a variable
`storage`: a list of length 1, whose element is the value for `r`
`readonly`: True indicates that this should not be settable by Function[r] = val
`strict`: if True, we don't allow type casting.
......@@ -176,8 +176,8 @@ def map_storage(env, order, input_storage, output_storage):
This function iterates over the nodes in `order` and ensures that for every
input and output `Result`, there is a unique storage container. This is
returned as a dictionary Result->storage called the `storage_map`.
input and output `Variable`, there is a unique storage container. This is
returned as a dictionary Variable->storage called the `storage_map`.
This function also returns `input_storage` which is a list of storages corresponding to env.inputs.
This function also returns `output_storage` which is a list of storages corresponding to env.outputs.
......@@ -313,8 +313,8 @@ def gc_helper(node_list):
:param node_list: list of Apply instances in program execution order
:rtype: a 2-tuple
:returns: FIRST, the set of Result instances which are computed by node_list, and SECOND a
dictionary that maps each Result instance to a the last node to use Result as an input.
:returns: FIRST, the set of Variable instances which are computed by node_list, and SECOND a
dictionary that maps each Variable instance to the last node that uses that Variable as an input.
This is used to allow garbage collection within graphs.
"""
......@@ -434,7 +434,7 @@ class WrapLinker(Linker):
@note:
This linker ensures that each linker has its own storage for
inputs and outputs and intermediate results. There is no interference
inputs and outputs and intermediate variables. There is no interference
between linkers.
"""
......@@ -467,9 +467,9 @@ class WrapLinker(Linker):
@type env: gof.Env
@param env: the env which we will link
@type no_recycling: a list of Results that belong to env.
@type no_recycling: a list of Variables that belong to env.
@param no_recycling: If a Result is in no_recycling, L{WrapLinker} will clear
@param no_recycling: If a Variable is in no_recycling, L{WrapLinker} will clear
the output storage associated to it (for each linker in linkers) during
the computation to avoid reusing it.
......
......@@ -38,7 +38,7 @@ class CLinkerOp(object):
`PyObject` variable pointing to that input.
`outputs` : list of strings
Each string is the name of a `PyObject` pointer where the Op should store its
results. The `CLinker` guarantees that on entry to this code block, each pointer
results. The `CLinker` guarantees that on entry to this code block, each pointer
is either NULL or is unchanged from the end of the previous execution.
`sub` : dict of strings
extra symbols defined in `CLinker` sub symbols (such as 'fail').
......@@ -68,7 +68,7 @@ class CLinkerOp(object):
`PyObject` variable pointing to that input.
`outputs` : list of strings
Each string is the name of a `PyObject` pointer where the Op should store its
results. The `CLinker` guarantees that on entry to this code block, each pointer
results. The `CLinker` guarantees that on entry to this code block, each pointer
is either NULL or is unchanged from the end of the previous execution.
`sub` : dict of strings
extra symbols defined in `CLinker` sub symbols (such as 'fail').
......@@ -162,7 +162,7 @@ class PureOp(object):
- [optionally] building gradient-calculating graphs (via `grad`).
To see how `Op`, `Type`, `Result`, and `Apply` fit together see the page on :doc:`graph`.
To see how `Op`, `Type`, `Variable`, and `Apply` fit together see the page on :doc:`graph`.
For more specifications on how these methods should behave: see the `Op Contract` in the
sphinx docs (advanced tutorial on Op-making).
......@@ -229,7 +229,7 @@ class PureOp(object):
def perform(self, node, inputs, output_storage):
"""
Required: Calculate the function on the inputs and put the results in the
Required: Calculate the function on the inputs and put the results in the
output storage. Return None.
:Parameters:
......
......@@ -179,7 +179,7 @@ class MergeOptimizer(Optimizer):
"""WRITEME
Merges parts of the graph that are identical, i.e. parts that
take the same inputs and carry out the same computations so we
can avoid doing them more than once. Also merges results that
can avoid doing them more than once. Also merges variables that
are constant.
"""
......@@ -188,8 +188,8 @@ class MergeOptimizer(Optimizer):
def apply_constant_merge(self, env):
seen_constants = set()
const_sig = _metadict() # result -> result.signature() (for constants)
const_sig_inv = _metadict() # signature -> result (for constants)
const_sig = _metadict() # variable -> variable.signature() (for constants)
const_sig_inv = _metadict() # signature -> variable (for constants)
for node in _list_of_nodes(env):
for i, c in enumerate([r for r in node.inputs if isinstance(r, graph.Constant)]):
if id(c) in seen_constants:
......@@ -211,13 +211,13 @@ class MergeOptimizer(Optimizer):
def exptime_apply_node_merge(self, env):
# we clear the dicts because the Constants signatures are not necessarily hashable
# and it's more efficient to give them an integer like the other Results
# and it's more efficient to give them an integer like the other Variables
symbol_idx = {} #result -> int
symbol_idx_inv = {} #int -> result (inverse of symbol_idx)
symbol_idx = {} #variable -> int
symbol_idx_inv = {} #int -> variable (inverse of symbol_idx)
#add all graph sources to the symbol_idx dictionaries (arbitrary order)
for i, r in enumerate(r for r in env.results if r.owner is None):
for i, r in enumerate(r for r in env.variables if r.owner is None):
symbol_idx[r] = i
symbol_idx_inv[i] = r
......@@ -246,7 +246,7 @@ class MergeOptimizer(Optimizer):
def apply_node_merge(self, env):
# we clear the dicts because the Constants signatures are not necessarily hashable
# and it's more efficient to give them an integer like the other Results
# and it's more efficient to give them an integer like the other Variables
nodes_seen = {}
......@@ -336,7 +336,7 @@ class LocalOptimizer(object):
- False to indicate that no optimization can be applied to this `node`; or
- <list of results> to use in place of `node`'s outputs in the greater graph.
- <list of variables> to use in place of `node`'s outputs in the greater graph.
:type node: an Apply instance
......@@ -487,13 +487,13 @@ class PatternSub(LocalOptimizer):
place. The input pattern cannot just be a string but the output
pattern can.
If you put a constant result in the input pattern, there will be a
match iff a constant result with the same value and the same type
If you put a constant variable in the input pattern, there will be a
match iff a constant variable with the same value and the same type
is found in its place.
You can add a constraint to the match by using the dict(...) form
described above with a 'constraint' key. The constraint must be a
function that takes the env and the current Result that we are
function that takes the env and the current Variable that we are
trying to match and returns True or False according to an
arbitrary criterion.
......@@ -718,7 +718,7 @@ class NavigatorOptimizer(Optimizer):
def process_node(self, env, node, lopt = None):
"""
This function will use `lopt` to `transform` the `node`. The `transform` method will
return either False or a list of Results that are intended to replace `node.outputs`.
return either False or a list of Variables that are intended to replace `node.outputs`.
If the env accepts the replacement, then the optimization is successful, and this
function returns True.
......
......@@ -38,16 +38,16 @@ class DB(object):
def __query__(self, q):
if not isinstance(q, Query):
raise TypeError('Expected a Query.', q)
results = set()
variables = set()
for tag in q.include:
results.update(self.__db__[tag])
variables.update(self.__db__[tag])
for tag in q.require:
results.intersection_update(self.__db__[tag])
variables.intersection_update(self.__db__[tag])
for tag in q.exclude:
results.difference_update(self.__db__[tag])
variables.difference_update(self.__db__[tag])
remove = set()
add = set()
for obj in results:
for obj in variables:
if isinstance(obj, DB):
sq = q.subquery.get(obj.name, q)
if sq:
......@@ -55,9 +55,9 @@ class DB(object):
replacement.name = obj.name
remove.add(obj)
add.add(replacement)
results.difference_update(remove)
results.update(add)
return results
variables.difference_update(remove)
variables.update(add)
return variables
def query(self, *tags, **kwtags):
if len(tags) >= 1 and isinstance(tags[0], Query):
......@@ -75,13 +75,13 @@ class DB(object):
subquery = kwtags))
def __getitem__(self, name):
results = self.__db__[name]
if not results:
variables = self.__db__[name]
if not variables:
raise KeyError("Nothing registered for '%s'" % name)
elif len(results) > 1:
elif len(variables) > 1:
raise ValueError('More than one match for %s (please use query)' % name)
for result in results:
return result
for variable in variables:
return variable
class Query(object):
......
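The tag-query semantics shown in `DB.__query__` above reduce to set algebra; this is an assumed simplification of optdb's behavior (union of `include`, intersection with `require`, minus `exclude`):

```python
# db maps tag -> set of registered objects (strings stand in here).
def query_sketch(db, include, require=(), exclude=()):
    found = set()
    for tag in include:
        found.update(db.get(tag, set()))
    for tag in require:
        found.intersection_update(db.get(tag, set()))
    for tag in exclude:
        found.difference_update(db.get(tag, set()))
    return found

db = {"fast": {"a", "b", "c"}, "stable": {"b", "c"}, "inplace": {"c"}}
assert query_sketch(db, include=["fast"],
                    require=["stable"], exclude=["inplace"]) == {"b"}
```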
......@@ -4,13 +4,13 @@ import unittest
from theano.gof.link import PerformLinker
from theano.gof.cc import *
from theano.gof.type import Type
from theano.gof.graph import Result, Apply, Constant
from theano.gof.graph import Variable, Apply, Constant
from theano.gof.op import Op
from theano.gof import env
from theano.gof import toolbox
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
class TDouble(Type):
......@@ -60,7 +60,7 @@ class TDouble(Type):
tdouble = TDouble()
def double(name):
return Result(tdouble, None, None, name = name)
return Variable(tdouble, None, None, name = name)
class MyOp(Op):
......@@ -71,7 +71,7 @@ class MyOp(Op):
def make_node(self, *inputs):
assert len(inputs) == self.nin
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if input.type is not tdouble:
raise Exception("Error 1")
......@@ -239,7 +239,7 @@ def test_duallinker_mismatch():
try:
# this runs OpWiseCLinker and PerformLinker in parallel and feeds
# results of matching operations to _my_checker to verify that they
# results of matching operations to _my_checker to verify that they
# are the same.
res = fn(1.0, 2.0, 3.0)
raise Exception("An exception should have been raised here!")
......
......@@ -3,7 +3,7 @@ import unittest
from theano.gof.type import Type
from theano.gof import graph
from theano.gof.graph import Result, Apply
from theano.gof.graph import Variable, Apply
from theano.gof.op import Op
from theano.gof.opt import *
......@@ -17,8 +17,8 @@ PatternOptimizer = lambda p1, p2, ign=True: OpKeyOptimizer(PatternSub(p1, p2), i
OpSubOptimizer = lambda op1, op2, fail=NavigatorOptimizer.warn_ignore, ign=True: TopoOptimizer(OpSub(op1, op2), ignore_newtrees=ign, failure_callback = fail)
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
......@@ -31,8 +31,8 @@ class MyType(Type):
return isinstance(other, MyType)
def MyResult(name):
return Result(MyType(), None, None, name = name)
def MyVariable(name):
return Variable(MyType(), None, None, name = name)
def MyValue(data):
return graph.Value(MyType(), data = data)
......@@ -50,11 +50,11 @@ class MyOp(Op):
def make_node(self, *inputs):
assert len(inputs) == self.nin
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
raise Exception("Error 1")
outputs = [MyResult(self.name + "_R") for i in xrange(self.nout)]
outputs = [MyVariable(self.name + "_R") for i in xrange(self.nout)]
return Apply(self, inputs, outputs)
def __str__(self):
......@@ -70,9 +70,9 @@ dot = MyOp(2, 'Dot')
def inputs():
x = MyResult('x')
y = MyResult('y')
z = MyResult('z')
x = MyVariable('x')
y = MyVariable('y')
z = MyVariable('z')
return x, y, z
_Env = Env
......
......@@ -4,11 +4,11 @@ from theano.gof.graph import *
from theano.gof.op import Op
from theano.gof.type import Type
from theano.gof.graph import Result
from theano.gof.graph import Variable
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
......@@ -26,19 +26,19 @@ class MyType(Type):
def __repr__(self):
return 'R%s' % str(self.thingy)
def MyResult(thingy):
return Result(MyType(thingy), None, None)
def MyVariable(thingy):
return Variable(MyType(thingy), None, None)
class MyOp(Op):
def make_node(self, *inputs):
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
print input, input.type, type(input), type(input.type)
raise Exception("Error 1")
outputs = [MyResult(sum([input.type.thingy for input in inputs]))]
outputs = [MyVariable(sum([input.type.thingy for input in inputs]))]
return Apply(self, inputs, outputs)
def __str__(self):
......@@ -54,12 +54,12 @@ MyOp = MyOp()
class TestInputs:
def test_inputs(self):
r1, r2 = MyResult(1), MyResult(2)
r1, r2 = MyVariable(1), MyVariable(2)
node = MyOp.make_node(r1, r2)
assert inputs(node.outputs) == [r1, r2]
def test_inputs_deep(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], r5)
i = inputs(node2.outputs)
......@@ -86,26 +86,26 @@ class X:
class TestStr(X):
def test_as_string(self):
r1, r2 = MyResult(1), MyResult(2)
r1, r2 = MyVariable(1), MyVariable(2)
node = MyOp.make_node(r1, r2)
s = self.str([r1, r2], node.outputs)
assert s == ["MyOp(R1, R2)"]
def test_as_string_deep(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], r5)
s = self.str([r1, r2, r5], node2.outputs)
assert s == ["MyOp(MyOp(R1, R2), R5)"]
def test_multiple_references(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], node.outputs[0])
assert self.str([r1, r2, r5], node2.outputs) == ["MyOp(*1 -> MyOp(R1, R2), *1)"]
def test_cutoff(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], node.outputs[0])
assert self.str(node.outputs, node2.outputs) == ["MyOp(R3, R3)"]
......@@ -119,13 +119,13 @@ class TestStr(X):
class TestClone(X):
def test_accurate(self):
r1, r2 = MyResult(1), MyResult(2)
r1, r2 = MyVariable(1), MyVariable(2)
node = MyOp.make_node(r1, r2)
_, new = clone([r1, r2], node.outputs, False)
assert self.str([r1, r2], new) == ["MyOp(R1, R2)"]
def test_copy(self):
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(r1, r2)
node2 = MyOp.make_node(node.outputs[0], r5)
_, new = clone([r1, r2, r5], node2.outputs, False)
......@@ -136,11 +136,11 @@ class TestClone(X):
def test_not_destructive(self):
# Checks that manipulating a cloned graph leaves the original unchanged.
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
node = MyOp.make_node(MyOp.make_node(r1, r2).outputs[0], r5)
_, new = clone([r1, r2, r5], node.outputs, False)
new_node = new[0].owner
new_node.inputs = MyResult(7), MyResult(8)
new_node.inputs = MyVariable(7), MyVariable(8)
assert self.str(inputs(new_node.outputs), new_node.outputs) == ["MyOp(R7, R8)"]
assert self.str(inputs(node.outputs), node.outputs) == ["MyOp(MyOp(R1, R2), R5)"]
......@@ -150,7 +150,7 @@ class TestClone(X):
############
def prenode(obj):
if isinstance(obj, Result):
if isinstance(obj, Variable):
if obj.owner:
return [obj.owner]
if isinstance(obj, Apply):
......@@ -160,7 +160,7 @@ class TestToposort:
def test_0(self):
"""Test a simple graph"""
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
o = MyOp.make_node(r1, r2)
o2 = MyOp.make_node(o.outputs[0], r5)
......@@ -172,7 +172,7 @@ class TestToposort:
def test_1(self):
"""Test a graph with double dependencies"""
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
o = MyOp.make_node(r1, r1)
o2 = MyOp.make_node(o.outputs[0], r5)
all = general_toposort(o2.outputs, prenode)
......@@ -180,7 +180,7 @@ class TestToposort:
def test_2(self):
"""Test a graph where the inputs have owners"""
r1, r2, r5 = MyResult(1), MyResult(2), MyResult(5)
r1, r2, r5 = MyVariable(1), MyVariable(2), MyVariable(5)
o = MyOp.make_node(r1, r1)
r2b = o.outputs[0]
o2 = MyOp.make_node(r2b, r2b)
......@@ -193,7 +193,7 @@ class TestToposort:
def test_3(self):
"""Test a graph which is not connected"""
r1, r2, r3, r4 = MyResult(1), MyResult(2), MyResult(3), MyResult(4)
r1, r2, r3, r4 = MyVariable(1), MyVariable(2), MyVariable(3), MyVariable(4)
o0 = MyOp.make_node(r1, r2)
o1 = MyOp.make_node(r3, r4)
all = io_toposort([r1, r2, r3, r4], o0.outputs + o1.outputs)
......@@ -201,7 +201,7 @@ class TestToposort:
def test_4(self):
"""Test inputs and outputs mixed together in a chain graph"""
r1, r2, r3, r4 = MyResult(1), MyResult(2), MyResult(3), MyResult(4)
r1, r2, r3, r4 = MyVariable(1), MyVariable(2), MyVariable(3), MyVariable(4)
o0 = MyOp.make_node(r1, r2)
o1 = MyOp.make_node(o0.outputs[0], r1)
all = io_toposort([r1, o0.outputs[0]], [o0.outputs[0], o1.outputs[0]])
......@@ -209,7 +209,7 @@ class TestToposort:
def test_5(self):
"""Test when outputs have clients"""
r1, r2, r3, r4 = MyResult(1), MyResult(2), MyResult(3), MyResult(4)
r1, r2, r3, r4 = MyVariable(1), MyVariable(2), MyVariable(3), MyVariable(4)
o0 = MyOp.make_node(r1, r2)
o1 = MyOp.make_node(o0.outputs[0], r4)
all = io_toposort([], o0.outputs)
......
from theano.gof import graph
from theano.gof.graph import Result, Apply, Constant
from theano.gof.graph import Variable, Apply, Constant
from theano.gof.type import Type
from theano.gof.op import Op
from theano.gof import env
......@@ -8,11 +8,11 @@ from theano.gof import toolbox
from theano.gof.link import *
#from _test_result import Double
#from _test_variable import Double
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
class TDouble(Type):
......@@ -22,7 +22,7 @@ class TDouble(Type):
tdouble = TDouble()
def double(name):
return Result(tdouble, None, None, name = name)
return Variable(tdouble, None, None, name = name)
class MyOp(Op):
......@@ -35,7 +35,7 @@ class MyOp(Op):
def make_node(self, *inputs):
assert len(inputs) == self.nin
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if input.type is not tdouble:
raise Exception("Error 1")
......
......@@ -2,10 +2,10 @@
from copy import copy
from theano.gof.op import *
from theano.gof.type import Type, Generic
from theano.gof.graph import Apply, Result
from theano.gof.graph import Apply, Variable
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
......@@ -27,7 +27,7 @@ class MyType(Type):
class MyOp(Op):
def make_node(self, *inputs):
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
raise Exception("Error 1")
......
from theano.gof.type import Type
from theano.gof.graph import Result, Apply, Constant
from theano.gof.graph import Variable, Apply, Constant
from theano.gof.op import Op
from theano.gof.opt import *
from theano.gof.env import Env
from theano.gof.toolbox import *
def as_result(x):
if not isinstance(x, Result):
raise TypeError("not a Result", x)
def as_variable(x):
if not isinstance(x, Variable):
raise TypeError("not a Variable", x)
return x
......@@ -25,8 +25,8 @@ class MyType(Type):
return hash(MyType)
def MyResult(name):
return Result(MyType(), None, None, name = name)
def MyVariable(name):
return Variable(MyType(), None, None, name = name)
class MyOp(Op):
......@@ -37,7 +37,7 @@ class MyOp(Op):
self.x = x
def make_node(self, *inputs):
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
raise Exception("Error 1")
......@@ -74,9 +74,9 @@ op_z = MyOp('OpZ', x = 1)
def inputs():
x = MyResult('x')
y = MyResult('y')
z = MyResult('z')
x = MyVariable('x')
y = MyVariable('y')
z = MyVariable('z')
return x, y, z
......@@ -188,7 +188,7 @@ class TestPatternOptimizer:
def test_constant_unification(self):
x = Constant(MyType(), 2, name = 'x')
y = MyResult('y')
y = MyVariable('y')
z = Constant(MyType(), 2, name = 'z')
e = op1(op1(x, y), y)
g = Env([y], [e])
......@@ -288,7 +288,7 @@ class TestMergeOptimizer:
assert str(g) == "[Op1(*1 -> Op2(x, y), *1, Op2(x, z))]"
def test_constant_merging(self):
x = MyResult('x')
x = MyVariable('x')
y = Constant(MyType(), 2, name = 'y')
z = Constant(MyType(), 2, name = 'z')
e = op1(op2(x, y), op2(x, y), op2(x, z))
......@@ -334,7 +334,7 @@ class TestMergeOptimizer:
or strg == "[Op1(*2 -> Op1(x, y), Op4(*1 -> Op2(Op3(x), y, z), *2), Op1(*1))]"
def test_identical_constant_args(self):
x = MyResult('x')
x = MyVariable('x')
y = Constant(MyType(), 2, name = 'y')
z = Constant(MyType(), 2, name = 'z')
e1 = op1(y, z)
......@@ -347,7 +347,7 @@ class TestMergeOptimizer:
class TestEquilibrium(object):
def test_1(self):
x, y, z = map(MyResult, 'xyz')
x, y, z = map(MyVariable, 'xyz')
e = op3(op4(x, y))
g = Env([x, y, z], [e])
print g
......@@ -362,7 +362,7 @@ class TestEquilibrium(object):
assert str(g) == '[Op2(x, y)]'
def test_2(self):
x, y, z = map(MyResult, 'xyz')
x, y, z = map(MyVariable, 'xyz')
e = op1(op1(op3(x, y)))
g = Env([x, y, z], [e])
print g
......@@ -378,7 +378,7 @@ class TestEquilibrium(object):
assert str(g) == '[Op2(x, y)]'
def test_low_use_ratio(self):
x, y, z = map(MyResult, 'xyz')
x, y, z = map(MyVariable, 'xyz')
e = op3(op4(x, y))
g = Env([x, y, z], [e])
print 'before', g
......
from theano.gof.graph import Result, Apply
from theano.gof.graph import Variable, Apply
from theano.gof.type import Type
from theano.gof.op import Op
......@@ -7,8 +7,8 @@ from theano.gof.env import Env, InconsistencyError
from theano.gof.toolbox import *
def as_result(x):
assert isinstance(x, Result)
def as_variable(x):
assert isinstance(x, Variable)
return x
......@@ -27,8 +27,8 @@ class MyType(Type):
return isinstance(other, MyType)
def MyResult(name):
return Result(MyType(name), None, None)
def MyVariable(name):
return Variable(MyType(name), None, None)
class MyOp(Op):
......@@ -39,7 +39,7 @@ class MyOp(Op):
def make_node(self, *inputs):
assert len(inputs) == self.nin
inputs = map(as_result, inputs)
inputs = map(as_variable, inputs)
for input in inputs:
if not isinstance(input.type, MyType):
raise Exception("Error 1")
......@@ -55,9 +55,9 @@ dot = MyOp(2, 'Dot')
def inputs():
x = MyResult('x')
y = MyResult('y')
z = MyResult('z')
x = MyVariable('x')
y = MyVariable('y')
z = MyVariable('z')
return x, y, z
......
......@@ -5,7 +5,7 @@ __docformat__ = "restructuredtext en"
import copy
import utils
from utils import MethodNotDefined, object2
from graph import Result
from graph import Variable
import traceback
......@@ -71,7 +71,7 @@ class CLinkerType(object):
The code returned from this function must be templated using
"%(name)s", representing the name that the caller wants to
call this `Result`. The Python object self.data is in a
call this `Variable`. The Python object self.data is in a
variable called "py_%(name)s" and this code must set the
variables declared by c_declare to something representative
of py_%(name)s. If the data is improper, set an appropriate
......@@ -119,9 +119,9 @@ class CLinkerType(object):
"""Required: Return c code to pack C types back into a PyObject.
The code returned from this function must be templated using "%(name)s",
representing the name that the caller wants to call this Result. The
representing the name that the caller wants to call this Variable. The
returned code may set "py_%(name)s" to a PyObject* and that PyObject*
will be accessible from Python via result.data. Do not forget to adjust
will be accessible from Python via variable.data. Do not forget to adjust
reference counts if "py_%(name)s" is changed from its original value.
:Parameters:
......@@ -180,7 +180,7 @@ class CLinkerType(object):
raise MethodNotDefined("c_libraries", type(self), self.__class__.__name__)
def c_support_code(self):
"""Optional: Return utility code for use by a `Result` or `Op` to be
"""Optional: Return utility code for use by a `Variable` or `Op` to be
included at global scope prior to the rest of the code for this class.
QUESTION: How many times will this support code be emitted for a graph
......@@ -193,13 +193,13 @@ class CLinkerType(object):
raise MethodNotDefined("c_support_code", type(self), self.__class__.__name__)
class PureType(object):
"""Interface specification for result type instances.
"""Interface specification for variable type instances.
A :term:`Type` instance is mainly responsible for two things:
- creating `Result` instances (conventionally, `__call__` does this), and
- creating `Variable` instances (conventionally, `__call__` does this), and
- filtering a value assigned to a `Result` so that the value conforms to restrictions
- filtering a value assigned to a `Variable` so that the value conforms to restrictions
imposed by the type (also known as casting, this is done by `filter`),
"""
......@@ -220,33 +220,33 @@ class PureType(object):
raise MethodNotDefined("filter", type(self), self.__class__.__name__)
def is_valid_value(self, a):
"""Required: Return True for any python object `a` that would be a legal value for a Result of this Type"""
"""Required: Return True for any python object `a` that would be a legal value for a Variable of this Type"""
try:
self.filter(a, True)
return True
except TypeError:
return False
def make_result(self, name = None):
"""Return a new `Result` instance of Type `self`.
def make_variable(self, name = None):
"""Return a new `Variable` instance of Type `self`.
:Parameters:
- `name`: None or str
A pretty string for printing and debugging.
"""
r = Result(self, name = name)
r = Variable(self, name = name)
return r
def __call__(self, name = None):
"""Return a new `Result` instance of Type `self`.
"""Return a new `Variable` instance of Type `self`.
:Parameters:
- `name`: None or str
A pretty string for printing and debugging.
"""
r = self.make_result(name)
r = self.make_variable(name)
r.tag.trace = traceback.extract_stack()[:-1]
return r
@@ -262,9 +262,9 @@ class PureType(object):
"""
Return True if a and b can be considered approximately equal.
:param a: a potential value for a Result of this Type.
:param a: a potential value for a Variable of this Type.
:param b: a potential value for a Result of this Type.
:param b: a potential value for a Variable of this Type.
:rtype: Bool
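A minimal sketch of what an approximate comparison like `values_eq_approx` might compute, here for plain Python floats rather than the ndarrays Theano's real version handles (the tolerance value is an assumption for illustration):

```python
def values_eq_approx(a, b, tol=1e-8):
    # relative tolerance, falling back to an absolute one near zero
    return abs(a - b) <= tol * max(1.0, abs(a), abs(b))
```

This tolerates the rounding error in expressions like `0.1 + 0.2` while still distinguishing genuinely different values.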
@@ -289,7 +289,7 @@ class Type(object2, PureType, CLinkerType):
- `Generic`: for any python type
- `NDArrayType`: for numpy.ndarray
- `TensorType`: for numpy.ndarray
- `SparseType`: for scipy.sparse
@@ -301,15 +301,15 @@ class Type(object2, PureType, CLinkerType):
# Declare a symbolic floating-point vector using __call__
b = tensor.fvector()
# Create a second Result with the same Type instance
# Create a second Variable with the same Type instance
c = tensor.fvector()
Whenever you create a symbolic variable in theano (technically, `Result`) it will contain a
Whenever you create a symbolic variable in theano (technically, `Variable`) it will contain a
reference to a Type instance. That reference is typically constant during the lifetime of
the Result. Many variables can refer to a single Type instance, as do b and c above. The
the Variable. Many variables can refer to a single Type instance, as do b and c above. The
Type instance defines the kind of value which might end up in that variable when executing
a `Function`. In this sense, theano is like a strongly-typed language because the types
are included in the graph before the values. In our example above, b is a Result which is
are included in the graph before the values. In our example above, b is a Variable which is
guaranteed to corresond to a numpy.ndarray of rank 1 when we try to do some computations
with it.
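The sharing described above, many Variables referring to one constant Type instance, can be sketched without Theano itself (these toy classes are illustrative assumptions, not the real API):

```python
class Type:
    """Toy Type carrying the dtype/rank information shared by its Variables."""
    def __init__(self, dtype, ndim):
        self.dtype = dtype
        self.ndim = ndim
    def __call__(self, name=None):
        return Variable(self, name)

class Variable:
    def __init__(self, type, name=None):
        self.type = type  # reference held for the Variable's lifetime
        self.name = name

fvector = Type("float32", 1)  # one Type instance ...
b = fvector("b")
c = fvector("c")              # ... referenced by many Variables, as b and c above
```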
......
# import op
# import result
# import variable
import re
@@ -316,16 +316,16 @@ def comm_guard(type1, type2):
raise
try:
result = f(arg1, arg2, *rest)
variable = f(arg1, arg2, *rest)
except:
raise
if result is FALL_THROUGH:
if variable is FALL_THROUGH:
try:
return old_f(arg1, arg2, *rest)
except:
raise
else:
return result
return variable
new_f.__name__ = f.__name__
def typename(type):
@@ -345,11 +345,11 @@ def type_guard(type1):
old_f = f.func_globals[f.__name__]
def new_f(arg1, *rest):
if (type1 is ANY_TYPE or isinstance(arg1, type1)):
result = f(arg1, *rest)
if result is FALL_THROUGH:
variable = f(arg1, *rest)
if variable is FALL_THROUGH:
return old_f(arg1, *rest)
else:
return result
return variable
else:
return old_f(arg1, *rest)
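The fall-through dispatch pattern used by `type_guard` above can be sketched in isolation. Note one simplification: the real decorator recovers the previous handler from `f.func_globals[f.__name__]`, whereas this sketch passes `old_f` explicitly; the names below are illustrative.

```python
FALL_THROUGH = object()  # sentinel: "I decline, try the previous handler"

def type_guard(type1, old_f):
    """Wrap a handler so it only fires for type1, deferring otherwise."""
    def guard(f):
        def new_f(arg1, *rest):
            if isinstance(arg1, type1):
                variable = f(arg1, *rest)
                if variable is not FALL_THROUGH:
                    return variable
            # wrong type, or the handler declined: fall through
            return old_f(arg1, *rest)
        return new_f
    return guard

def base_handler(x):
    return "generic"

@type_guard(int, base_handler)
def handler(x):
    if x < 0:
        return FALL_THROUGH  # decline negatives, defer to base_handler
    return "non-negative int"
```

Handlers can thus be layered: each new guard shadows the previous one but keeps it reachable via the sentinel.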
......
import gof #, gof.result
import gof #, gof.variable
import numpy #for numeric_grad
from gof.python25 import all
@@ -7,22 +7,10 @@ import gof.utils
_msg_retType = 'op.grad(...) returned a non-list'
_msg_badlen = 'op.grad(...) returned wrong number of gradients'
def _unpack_result(lst):
if len(lst) > 1:
return lst
else:
return lst[0]
def _pack_result(arg):
if isinstance(arg, gof.result.Result):
return [arg]
else:
return arg
def grad_sources_inputs(sources, graph_inputs):
"""
A gradient source is a pair (r, g_r), in which r is a result, and g_r is a
result that is a gradient wrt r.
A gradient source is a pair (r, g_r), in which r is a variable, and g_r is a
variable that is a gradient wrt r.
This function traverses the graph backward from the 'r' sources,
calling L{Op.grad}(...) when it is provided by an L{Op}, and at least one of the
@@ -32,21 +20,21 @@ def grad_sources_inputs(sources, graph_inputs):
op.grad( op.inputs[0], grad(op.outputs[0]))
This function expects the L{Op.grad}(...) function to return the gradient
expression [results] associated with the inputs of the L{Op}. The L{Op} should
return a list of results corresponding to the gradients in the same order
expression [variables] associated with the inputs of the L{Op}. The L{Op} should
return a list of variables corresponding to the gradients in the same order
as the inputs. If it has a single output it should return a list or tuple
of length 1.
For each input wrt to which an L{Op} is not differentiable, it should return
None instead of a result instance.
None instead of a variable instance.
@type sources: list
@param sources: gradient sources (explained below)
@type graph_inputs: list
@param graph_inputs: results considered to be constant
@param graph_inputs: variables considered to be constant
@rtype: dictionary
@return: dictionary mapping each result necessary for a source to its gradient.
@return: dictionary mapping each variable necessary for a source to its gradient.
"""
gmap = {}
for (r, g_r) in sources:
@@ -94,7 +82,7 @@ def grad_sources_inputs(sources, graph_inputs):
op_grad = node.op.grad(input_arg, output_arg)
if not isinstance(op_grad, (list,tuple)):
raise ValueError(_msg_retType, node.op)
g_inputs = op_grad #_pack_result(op_grad)
g_inputs = op_grad
assert isinstance(g_inputs, (list, tuple))
if len(g_inputs) != len(node.inputs):
raise ValueError(_msg_badlen,
......
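The backward traversal that `grad_sources_inputs` documents can be sketched with a toy graph (the `Node`/`AddOp` classes are illustrative assumptions, and this sketch omits the `graph_inputs` handling of the real function): walk the graph backward from the `(r, g_r)` sources, ask each node's op for input gradients, and accumulate them per variable, with `None` marking a non-differentiable input.

```python
class Node:
    """Toy graph node: an op applied to inputs, producing one output."""
    def __init__(self, op, inputs, output):
        self.op, self.inputs, self.output = op, inputs, output

class AddOp:
    def grad(self, inputs, g_out):
        # d(a+b)/da = 1 and d(a+b)/db = 1, so g_out flows to both inputs;
        # the list matches the inputs' order, as the convention above requires
        return [g_out, g_out]

def grad_sources_inputs(sources, nodes):
    gmap = {}
    for r, g_r in sources:
        gmap[r] = gmap.get(r, 0) + g_r
    for node in reversed(nodes):  # backward over a toposorted graph
        if node.output not in gmap:
            continue
        g_inputs = node.op.grad(node.inputs, gmap[node.output])
        assert len(g_inputs) == len(node.inputs)
        for i, g in zip(node.inputs, g_inputs):
            if g is not None:  # None marks a non-differentiable input
                gmap[i] = gmap.get(i, 0) + g
    return gmap

n = Node(AddOp(), ["a", "b"], "c")           # c = a + b
g = grad_sources_inputs([("c", 1.0)], [n])   # seed d/dc = 1.0
```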