Note that the ability to change the seed from one nosetest to another is incompatible with hard-coding the baseline results (against which we compare the theano outputs). These must then be determined "algorithmically". Although this represents more work, the test suite will be better because of it.
Inputs
======
The ``inputs`` argument to ``theano.function`` is a list, containing the ``Variable`` instances for which values will be specified at the time of the function call. But inputs can be more than just Variables.
``In`` instances let us attach properties to ``Variables`` to tell function more about how to use them.
**In(variable, name=None, value=None, update=None, mutable=False)** returns an ``In`` instance:
- ``variable``: a Variable instance.
This will be assigned a value before running the function,
not computed from its owner.
- ``name``: Any type. (If autoname_input=True, defaults to variable.name).
If name is a valid Python identifier, this input can be set by
``kwarg``, and its value can be accessed by ``self.<name>``.
...
Default: ``None``
- ``update``: Variable instance
The value computed for this Variable will replace ``value`` after each function call.
Default: ``None``
...
- ``autoname``: Bool
``True``: if ``name`` is None and the Variable has a name, it will be taken
as the input's name.
``False``: the name is the exact value passed as the name parameter
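As a mental model, an ``In`` instance is essentially a record of the options above. Here is an illustrative, Theano-free sketch of the fields and the autoname behaviour (``FakeIn``/``FakeVariable`` are hypothetical stand-ins, not theano classes):

```python
class FakeVariable:
    """Stand-in for theano's Variable; only carries a name."""
    def __init__(self, name=None):
        self.name = name

class FakeIn:
    """Illustrative record mirroring In(variable, name, value, update, mutable)."""
    def __init__(self, variable, name=None, value=None, update=None,
                 mutable=False, autoname=True):
        self.variable = variable
        self.value = value
        self.update = update
        self.mutable = mutable
        # autoname: fall back to the variable's own name when none is given
        if name is None and autoname:
            name = variable.name
        self.name = name

x = FakeVariable(name='x')
i = FakeIn(x)            # name taken from the variable
j = FakeIn(x, name='y')  # explicit name wins
print(i.name, j.name)    # -> x y
```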
...

Advanced: Sharing Storage Between Functions
-------------------------------------------
``value`` can be a :api:`theano.gof.Container` as well as a literal.
This permits linking a value of a Variable in one function to the value of a Variable in another function.
By using a ``Container`` as a value we can implement shared variables between functions.
For example, consider the following program.
...
The functions ``inc`` and ``dec`` operate on a shared internal value for ``s``.
Theano's Module system uses this mechanism to share storage between Methods.
The container being shared doesn't have to correspond to the same Variable in both functions,
but that's usually how this mechanism is used.
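The sharing mechanism can be imitated in plain Python: a single mutable cell plays the role of the ``Container``, and two functions close over the same storage (``Cell`` here is a hypothetical stand-in, not theano's actual Container API):

```python
class Cell:
    """One-slot mutable container, standing in for theano.gof.Container."""
    def __init__(self, value):
        self.value = value

def make_inc_dec(shared):
    # both closures read and write the same underlying storage
    def inc(n):
        shared.value += n
        return shared.value
    def dec(n):
        shared.value -= n
        return shared.value
    return inc, dec

s = Cell(0)
inc, dec = make_inc_dec(s)
inc(3)
dec(1)
print(s.value)  # -> 2, both functions mutated the shared state
```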
Input Argument Restrictions
...

The following restrictions apply to the inputs to ``theano.function``:
have the same name, then the function will raise an exception. [*Which exception?*]
- Two ``In`` instances may not name the same Variable. I.e. you cannot
give the same parameter multiple times.
If no name is specified explicitly for an In instance, then its name
will be taken from the Variable's name. Note that this feature can cause
harmless-looking input lists to not satisfy the two conditions above.
In such cases, Inputs should be named explicitly to avoid problems
such as duplicate names, and named arguments preceding unnamed ones.
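The two naming restrictions can be sketched as a small validation pass over the input names (a hypothetical helper, not part of theano's API):

```python
def check_input_names(names):
    """Sketch of the restrictions: no duplicate names, and no named input
    may precede an unnamed one (names is a list of str or None)."""
    seen = set()
    seen_named = False
    for name in names:
        if name is None:
            if seen_named:
                raise ValueError("unnamed input follows a named input")
        else:
            if name in seen:
                raise ValueError("duplicate input name: %s" % name)
            seen.add(name)
            seen_named = True

check_input_names([None, 'a', 'b'])   # fine: unnamed inputs come first
try:
    check_input_names(['a', 'a'])
except ValueError as e:
    print(e)  # -> duplicate input name: a
```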
...

Both ``value`` and ``container`` properties provide dictionary-like access based on:
- integer keys: you can look up a value/container by its position in the input list;
- name keys: you can look up a value/container by its name;
- Variable keys: you can look up a value/container by the Variable it corresponds to.
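The three kinds of keys can be emulated with a small lookup helper (illustrative only; the real ``value``/``container`` properties live on the compiled ``Function``, and ``FakeVariable``/``ValueView`` are hypothetical names):

```python
class FakeVariable:
    """Stand-in for theano's Variable; only carries a name."""
    def __init__(self, name):
        self.name = name

class ValueView:
    """Look up an input's value by position, by name, or by the Variable itself."""
    def __init__(self, variables, values):
        self._variables = list(variables)
        self._values = list(values)
    def __getitem__(self, key):
        if isinstance(key, int):            # integer key: position in the input list
            return self._values[key]
        if isinstance(key, str):            # name key
            for var, val in zip(self._variables, self._values):
                if var.name == key:
                    return val
            raise KeyError(key)
        # otherwise: Variable key
        return self._values[self._variables.index(key)]

x, y = FakeVariable('x'), FakeVariable('y')
view = ValueView([x, y], [1.0, 2.0])
print(view[0], view['y'], view[x])  # -> 1.0 2.0 1.0
```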
In addition to these access mechanisms, there is an even more convenient
method to access values by indexing a Function directly by typing
...

Input Shortcuts
Every element of the inputs list will be upgraded to an In instance if necessary.
- a Variable instance ``r`` will be upgraded like ``In(r)``
- a tuple ``(name, r)`` will be ``In(r, name=name)``
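The upgrade rule can be sketched as a small normalisation function (``FakeIn``/``FakeVariable`` are hypothetical stand-ins, not theano classes):

```python
class FakeVariable:
    def __init__(self, name=None):
        self.name = name

class FakeIn:
    def __init__(self, variable, name=None):
        self.variable = variable
        self.name = name if name is not None else variable.name

def upgrade_input(spec):
    """Normalise an element of the inputs list to a FakeIn instance."""
    if isinstance(spec, FakeIn):
        return spec                  # already an In instance
    if isinstance(spec, tuple):
        name, var = spec             # (name, r) -> In(r, name=name)
        return FakeIn(var, name=name)
    return FakeIn(spec)              # bare Variable r -> In(r)

r = FakeVariable('r')
print(upgrade_input(r).name)             # -> r
print(upgrade_input(('alpha', r)).name)  # -> alpha
```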
...

Outputs
The ``outputs`` argument to function can be one of
- ``None``, or
- a Variable or ``Out`` instance, or
- a list of Variables or ``Out`` instances.
An ``Out`` instance is a structure that lets us attach options to individual output ``Variable`` instances,
similarly to how ``In`` lets us attach options to individual input ``Variable`` instances.
**Out(variable, borrow=False)** returns an ``Out`` instance:
* ``borrow``
...
If a single ``Variable`` or ``Out`` instance is given as argument, then the compiled function will return a single value.
If a list of ``Variable`` or ``Out`` instances is given as argument, then the compiled function will return a list of their values.
Unlike numpy which does broadcasting dynamically, Theano needs
to know, for any operation which supports broadcasting, which
dimensions will need to be broadcasted. When applicable, this
information is given in the :term:`Type` of a :term:`Variable`.
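The contrast with numpy can be seen directly: numpy decides broadcasting at run time from the actual shapes, whereas Theano records which dimensions are broadcastable in the Variable's Type at graph-construction time. A numpy illustration of the dynamic behaviour:

```python
import numpy

# numpy broadcasts dynamically: a (3, 1) array combined with a (1, 4)
# array yields a (3, 4) result, decided from the run-time shapes alone
a = numpy.ones((3, 1))
b = numpy.arange(4).reshape(1, 4)
c = a + b
print(c.shape)  # -> (3, 4)
```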
See also:
...

Glossary of terminology
pure
WRITEME
type
WRITEME
Variable
A :ref:`variable` is the main data structure you work with when
using Theano. The symbolic inputs that you operate on are
Variables and what you get from applying various operations to
these inputs are also Variables. For example, when I type

>>> x = theano.tensor.ivector()
>>> y = -x

``x`` and ``y`` are both Variables, i.e. instances of the
:api:`Variable <theano.gof.graph.Variable>` class. The
:term:`Type` of both ``x`` and ``y`` is
``theano.tensor.ivector``.

For more information, see: :ref:`variable`.
view
WRITEME
...
A :term:`Tensor` is for storing a number of objects that
all have the same type. In computations, the storage for
:term:`TensorVariable` instances is a ``numpy.ndarray``.
Instances of ``numpy.ndarray`` have a ``dtype`` property
to indicate which data type (i.e., byte, float, double, python
object) can be stored in each element. The ``dtype`` property
of Tensors is a little different: it is a string which can be
converted to a numpy ``dtype`` object. Still, the meaning
is pretty much the same: elements of the ``numpy.ndarray``
corresponding to a :term:`TensorVariable` in a particular
computation must have the corresponding data type.
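The numpy side of this correspondence looks like the following (a plain numpy sketch; no theano required):

```python
import numpy

# each ndarray carries a dtype describing its element type
v = numpy.zeros(3, dtype='float64')
print(v.dtype)  # -> float64

# the string form used by Tensors converts to a numpy dtype object
assert numpy.dtype('float64') == v.dtype

# elements are coerced to that type on assignment
v[0] = 1
print(type(v[0]))  # -> <class 'numpy.float64'>
```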
...
WRITEME.
Variable
a Type-related graph node (a variable)
A Variable
(`Variable API <http://lgcm.iro.umontreal.ca/epydoc/theano.gof.graph.Variable-class.html>`_)
is theano's variable. It symbolically represents a value (which
can be a number, vector, matrix, tensor, etc.).
The inputs and outputs of every :term:`Op` are Variable instances.
The input and output arguments to create a :term:`function` are also Variables.
A Variable is like a strongly-typed variable in some other languages; each Variable contains a reference to a :term:`TTI` (Theano Type Instance) that defines the kind of value that can be associated to the Variable by a :term:`function`.
A Variable is a container for four important fields:
type
a :term:`TTI` defining the kind of value this Variable can have,
owner
either None (for graph roots) or the :term:`Apply` instance (i.e. result of applying an :term:`Op`) of which ``self`` is an output,
index
the integer such that ``owner.outputs[index] is this_variable`` (ignored if ``owner`` is None)
name
a string to use in pretty-printing and debugging.
There are two subclasses related to Variable:
:term:`Value`
a Variable with a data field.
:term:`Constant`
like ``Value``, but the data it contains cannot be modified.
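The four fields, and the owner/index back-pointers that tie a Variable to the Apply node that produced it, can be sketched with two tiny classes (``FakeVariable``/``FakeApply`` are illustrative stand-ins, not the real theano.gof classes):

```python
class FakeVariable:
    """Carries the four fields described above: type, owner, index, name."""
    def __init__(self, vtype, name=None):
        self.type = vtype
        self.owner = None   # None for graph roots
        self.index = None   # position in owner.outputs, if any
        self.name = name

class FakeApply:
    """Links an op to its input and output variables."""
    def __init__(self, op, inputs, outputs):
        self.op = op
        self.inputs = inputs
        self.outputs = outputs
        # wire each output back to this node
        for i, out in enumerate(outputs):
            out.owner = self
            out.index = i

x = FakeVariable('int64', name='x')
y = FakeVariable('int64', name='y')
z = FakeVariable('int64')
node = FakeApply('add', [x, y], [z])
print(z.owner is node, node.outputs[z.index] is z, x.owner)  # -> True True None
```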
...
e = d + b
theano.function([d,b], [e]) # this works. d's default value of 1.5 is ignored.
The Python variables ``a,b,c`` all refer to instances of type Variable.
The Variable referred to by ``a`` is also an instance of ``Constant``.
Theano.:term:`function` uses the :term:`Apply` instances' ``inputs`` field together with each Variable's ``owner`` field to determine which inputs are necessary to compute the function's outputs.
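That determination can be sketched as a walk backwards over ``owner`` fields: starting from the outputs, follow each Apply node's ``inputs`` until ownerless graph roots are reached (an illustrative, Theano-free sketch; ``FakeVariable``/``FakeApply``/``required_roots`` are hypothetical names):

```python
class FakeVariable:
    def __init__(self, name=None):
        self.owner = None   # None marks a graph root
        self.name = name

class FakeApply:
    def __init__(self, inputs, outputs):
        self.inputs = inputs
        self.outputs = outputs
        for out in outputs:
            out.owner = self

def required_roots(outputs):
    """Collect the ownerless (root) variables that the outputs depend on."""
    seen, roots, stack = set(), [], list(outputs)
    while stack:
        var = stack.pop()
        if id(var) in seen:
            continue
        seen.add(id(var))
        if var.owner is None:
            roots.append(var)               # a graph input (or constant)
        else:
            stack.extend(var.owner.inputs)  # walk backwards through the Apply
    return roots

a, b = FakeVariable('a'), FakeVariable('b')
c = FakeVariable('c'); FakeApply([a, b], [c])   # c = f(a, b)
e = FakeVariable('e'); FakeApply([c, b], [e])   # e = g(c, b)
names = sorted(v.name for v in required_roots([e]))
print(names)  # -> ['a', 'b']
```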
Scalar
...
Stabilizations are like :term:`optimizations <Optimization>` in the sense that they are often pattern-based sub-graph substitutions.
Stabilizations are unlike :term:`optimizations <Optimization>` in that
- they are typically applied even when intermediate variables in the subgraph have external :term:`clients`,
- they are typically prioritized over transformations which improve run-time speed, and
- they are typically not faster than the naive implementation.
...
* :term:`broadcastable <Broadcasting>` - which dimensions are broadcastable
* :term:`dtype` - what kind of elements will the tensor contain
See also :term:`TensorVariable`.
TensorVariable
:term:`Variables <Variable>` of type :term:`Tensor` are of class
TensorVariable (`TensorVariable API <http://lgcm.iro.umontreal.ca/epydoc/theano.tensor.TensorVariable-class.html>`_).
``TensorVariable`` adds operator overloading so that ``TensorVariable`` instances can be used
in mathematical expressions. When any input to an expression is a ``TensorVariable`` then the
expression will evaluate to a ``TensorVariable`` and a :term:`graph` corresponding to
the expression.
Many shortcuts exist for creating ``TensorVariable`` instances:
* ``<t>scalar`` - create a tensor of rank 0
* ``<t>vector`` - create a tensor of rank 1
...
# declare a symbolic floating-point vector using __call__
b = tensor.fvector()
# create a second Variable with the same TTI
c = tensor.fvector()
``tensor.fvector`` is a TTI because it is an instance of the ``theano.tensor.Tensor`` class, which is a subclass of ``theano.Type``.
Whenever you create a variable in theano (technically, a :term:`Variable`) it will contain a reference to a TTI.
That reference is typically constant during the lifetime of the Variable.
Many variables can refer to a single TTI, as do ``b`` and ``c`` above.
The TTI defines the kind of value which might end up in that variable when executing a :term:`function`.
In this sense, theano is like a strongly-typed language.
In our example above, ``b`` is a variable which is guaranteed to correspond to a ``numpy.ndarray`` of rank 1 when we try to do some computations with it.
Many :term:`Ops <Op>` will raise an exception if their inputs do not have the correct types (TTI references).
TTI references are also useful to do type-checking in pattern-based optimizations.
Type
:term:`Variables <Variable>` are strongly typed by :term:`Type` instances
...

Examples of parameterized Ops in theano:
``Reduce(<scalar op>, <axes>)``
reduces the specified axes using the provided scalar op.
``Add(<output type inferrer>)``
adds scalars and puts the result in a scalar whose type is inferred from the input types using ``output_type_inferrer(*inputs)``
``Composite(<graph>)``
makes a single Op out of a graph of scalar operations.
...

The ``make_node`` method is expected to have the following signature:
make_node(self, *inputs)
``inputs`` may be a list of anything that the user wants to provide as symbolic input (symbolic: standing for the actual values that will be passed when the graph is compiled into an executable function). [*The Theano intro should describe symbolic in greater depth, and we should link to that from here.*] This may or may not include Variable instances (but if you want the inputs of this Op to sometimes be outputs of another Op, then the inputs should be Variable instances). [*What else could they be? Constant, Values, ...*] The return value should be an instance of [GraphStructures Apply] (see the example below). Here are the tasks typically handled in ``make_node``.
* Check that the inputs are valid (type checking, etc.). [*Since we don't actually have values, what can we do besides type checking?*]
* If needed, wrap the inputs in Variable instances with the proper type.
* Make the Variable instances that will serve as the outputs of the node.
The ``inputs`` and ``outputs`` arguments to ``Apply`` must be lists of ``Variable`` instances (or instances of subclasses of ``Variable``). The inputs given to ``Apply`` do not have to be the same as the inputs passed to ``make_node``, but it is recommended that the order corresponds. [*why?*] The behavior of ``make_node`` should not depend on the structure of the graph of [*or?*] its inputs: it may look at the type and type fields of its inputs, but not at their owner field, because modifications to the graph structure do not use ``make_node``. [*???*]
Example:
...
def make_node(self, x, y):
# note 1: constant, int64 and Scalar are defined in theano.scalar
# note 2: constant(x) is equivalent to Constant(type = int64, data = x)
# note 3: the call int64() is equivalent to Variable(type = int64) or Variable(type = Scalar(dtype = 'int64'))
if isinstance(x, int):
x = constant(x)
elif not isinstance(x, Variable) or not x.type == int64:
raise TypeError("expected an int64 Scalar")
if isinstance(y, int):
y = constant(y)
elif not isinstance(y, Variable) or not y.type == int64:
raise TypeError("expected an int64 Scalar")
inputs = [x, y]
outputs = [int64()]
...
#...
add = Add() # I make an instance of Add
node1 = add.make_node(int64(), int64()) # I make a node with two Variable inputs
node2 = add.make_node(1, 2) # this works too
node3 = add.make_node(int64(), 79) # this works three
node4 = add.make_node(float64(), int64()) # this raises a TypeError
[*What type is an instance of Add? It's an Apply? But that's not a Variable, and cannot be used as input for another Op.*]
Two Apply nodes ``node1`` and ``node2`` are *assumed* by the compiler to represent the same behavior if:
1. ``node1.op == node2.op``
...

It is considered an *error* to have conditions 1 and 2 but not condition 3.
``__call__``
----------------
In ``Op``, ``__call__`` is defined in terms of ``make_node``. Instead of returning a node, it returns the output Variables directly, which is practical from a UI standpoint. Here is pseudocode:
* *node*: a pointer to an Apply instance - ``node`` is assumed to be produced by a previous call to ``self.make_node``.
* *inputs*: *not* the same as ``node.inputs`` - it is a list of values. [*i.e. actually data, not just symbolic stuff?*]
* *output_storage*: *not* the same as ``node.outputs`` - it is a list of lists of length 1 where the results of the computation must be put.
[*Can you explain better how inputs is not node.inputs and output_storage is not node.outputs?*]
...

Here is an example of a properly defined ``perform``:
# this does z = x + y
x, y = inputs # extract the two inputs
z, = output_storage # extract the one storage (the comma after z is not optional)
z[0] = x + y # we must put the result in z[0]
...
add = Add() # I make an instance of Add
...

grad
where:
* ``inputs`` is a list of Variable instances. It is assumed to be the ``inputs`` field of a node produced by ``make_node``.
* ``output_gradients`` is a list of Variable instances. They have the same properties as the outputs of the node, but are filled with gradient values.
Essentially, the semantics are:
...
return gz*dz/dx + gw*dw/dx, gz*dz/dy + gw*dw/dy
More specifically,
``grad`` must return a list or tuple of input gradients, as many as there are inputs. Let C be a Variable (currently assumed to be a scalar) that depends through a theano symbolic expression on the node outputs. Then each output_gradients[i] represents symbolically dC/doutputs[i]. The returned input gradients should represent symbolically dC/dinputs[i].
Example:
...
A ``Member`` represents a state variable (i.e., whose value remains after a ``Method`` is called). It will be named automatically after that field and it will be an implicit input of all ``Methods`` of the ``Module``. Its storage (i.e. where the value is stored) will be shared by all ``Methods`` of the ``Module``.
A ``Variable`` which is the result of a previous computation (as opposed to being ``updated``) is not a ``Member``. Internally this is called an External. You should not need to care about this.
For sharing state between modules, see ``Inner Module`` section.
...

Module Interface
def resolve(self, symbol, filter = None)
Resolves a symbol in this module. The symbol can be a string or a ``Variable``. If the string contains dots (e.g. ``"x.y"``), the module will resolve the symbol hierarchically in its inner modules. The filter argument is None or a class; it can be used, for example, to restrict the search to ``Member`` or ``Method`` instances.
currently, there is no way for a grad() method to distinguish between cases 3
...

Guillaume can you make sure to hit these points:
* There are a lot of tests that define their own epsilon, but this should be standardized. e.g. in test_elemwise.py ``self.failUnless((numpy.abs(f(xv) - zv) < 1e-10).all())``
* If the expected result of a test is that an Exception is thrown, how do we correctly detect and handle that?
...

class StructuredDot(gof.Op):

    ...

        if a.shape[1] != b.shape[0]:
            raise ValueError('shape mismatch in StructuredDot.perform', (a.shape, b.shape))
        result = a.dot(b)
        assert_is_dense(result)  # scipy 0.7 automatically converts to dense
        # dot of an NxM sparse matrix, with a Mx1 dense matrix, returns vector not matrix
        if result.ndim == 1:
            result = numpy.expand_dims(result, 1)
        elif result.ndim != 2:
            raise Exception('Output of structured dot should be a matrix (ndim=2)')
        assert result.ndim == 2
        if result.shape != (a.shape[0], b.shape[1]):
            if b.shape[0] == 1:
                raise Exception("a.shape=%s, b.shape=%s, result.shape=%s ??? This is probably because scipy.csc_matrix dot has a bug with singleton dimensions (i.e. b.shape[0]=1), for scipy 0.6. Use scipy 0.7. NB you have scipy version %s" % (a.shape, b.shape, result.shape, scipy.__version__))
            else:
                raise Exception("a.shape=%s, b.shape=%s, result.shape=%s ??? I have no idea why" % (a.shape, b.shape, result.shape))
        ## Commenting this out because result should be a numpy.ndarray since the assert above
        ## (JB 20090109)
        # out[0] = numpy.asarray(result)  # TODO: fix this really bad implementation
        #
        out[0] = result

    def grad(self, (a, b), (g_out,)):
        # a is sparse, b is dense, g_out is dense
...
def structured_dot(x, y):
@todo: Maybe the triple-transposition formulation (when x is dense)