Commit 99f22814 authored by Ian Goodfellow, committed by David Warde-Farley

Rename env and Env.

Changes all occurrences of the 'env' attribute to 'fgraph', and renames Env to FunctionGraph. Adds properties that warn about deprecated 'env' attribute.
Parent f477c6a1
@@ -224,14 +224,14 @@ Exercise 2
                        name = "predict")
 if any( [x.op.__class__.__name__=='Gemv' for x in
-        train.maker.env.toposort()]):
+        train.maker.fgraph.toposort()]):
     print 'Used the cpu'
 elif any( [x.op.__class__.__name__=='GpuGemm' for x in
-        train.maker.env.toposort()]):
+        train.maker.fgraph.toposort()]):
     print 'Used the gpu'
 else:
     print 'ERROR, not able to tell if theano used the cpu or the gpu'
-    print train.maker.env.toposort()
+    print train.maker.fgraph.toposort()
...
@@ -26,9 +26,9 @@ Global and local optimizations
 First, let's lay out the way optimizations work in Theano. There are
 two types of optimizations: *global* optimizations and *local*
-optimizations. A global optimization takes an ``Env`` object (an
-Env is a wrapper around a whole computation graph, you can see its
-:class:`documentation <Env>` for more details) and navigates through it
+optimizations. A global optimization takes a ``FunctionGraph`` object (a
+FunctionGraph is a wrapper around a whole computation graph, you can see its
+:class:`documentation <FunctionGraph>` for more details) and navigates through it
 in a suitable way, replacing some Variables by others in the process. A
 local optimization, on the other hand, is defined as a function on a
 *single* :ref:`apply` node and must return either ``False`` (to mean that
@@ -54,26 +54,26 @@ methods:
 .. class:: Optimizer

-    .. method:: apply(env)
+    .. method:: apply(fgraph)

-        This method takes an Env object which contains the computation graph
+        This method takes a FunctionGraph object which contains the computation graph
         and does modifications in line with what the optimization is meant
         to do. This is one of the main methods of the optimizer.

-    .. method:: add_requirements(env)
+    .. method:: add_requirements(fgraph)

-        This method takes an Env object and adds :ref:`features
-        <libdoc_gof_envfeature>` to it. These features are "plugins" that are needed
+        This method takes a FunctionGraph object and adds :ref:`features
+        <libdoc_gof_fgraphfeature>` to it. These features are "plugins" that are needed
         for the ``apply`` method to do its job properly.

-    .. method:: optimize(env)
+    .. method:: optimize(fgraph)

         This is the interface function called by Theano.

-        *Default:* this is defined by Optimizer as ``add_requirement(env);
-        apply(env)``.
+        *Default:* this is defined by Optimizer as ``add_requirements(fgraph);
+        apply(fgraph)``.

-See the section about :class:`Env` to understand how to define these
+See the section about :class:`FunctionGraph` to understand how to define these
 methods.
@@ -123,19 +123,19 @@ simplification described above:
     from theano.gof import toolbox

     class Simplify(gof.Optimizer):
-        def add_requirements(self, env):
-            env.extend(toolbox.ReplaceValidate())
+        def add_requirements(self, fgraph):
+            fgraph.extend(toolbox.ReplaceValidate())

-        def apply(self, env):
-            for node in env.toposort():
+        def apply(self, fgraph):
+            for node in fgraph.toposort():
                 if node.op == div:
                     x, y = node.inputs
                     z = node.outputs[0]
                     if x.owner and x.owner.op == mul:
                         a, b = x.owner.inputs
                         if y == a:
-                            env.replace_validate(z, b)
+                            fgraph.replace_validate(z, b)
                         elif y == b:
-                            env.replace_validate(z, a)
+                            fgraph.replace_validate(z, a)

     simplify = Simplify()
@@ -145,16 +145,16 @@ simplification described above:
 requirements we might want to know about?

 Here's how it works: first, in ``add_requirements``, we add the
-``ReplaceValidate`` :ref:`libdoc_gof_envfeature` located in
+``ReplaceValidate`` :ref:`libdoc_gof_fgraphfeature` located in
 :ref:`libdoc_gof_toolbox`. This feature adds the ``replace_validate``
-method to ``env``, which is an enhanced version of ``replace`` that
+method to ``fgraph``, which is an enhanced version of ``replace`` that
 does additional checks to ensure that we are not messing up the
 computation graph (note: if ``ReplaceValidate`` was already added by
 another optimizer, ``extend`` will do nothing). In a nutshell,
-``toolbox.ReplaceValidate`` grants access to ``env.replace_validate``,
-and ``env.replace_validate`` allows us to replace a Variable with
+``toolbox.ReplaceValidate`` grants access to ``fgraph.replace_validate``,
+and ``fgraph.replace_validate`` allows us to replace a Variable with
 another while respecting certain validation constraints. You can
-browse the list of :ref:`libdoc_gof_envfeaturelist` and see if some of
+browse the list of :ref:`libdoc_gof_fgraphfeaturelist` and see if some of
 them might be useful to write optimizations with. For example, as an
 exercise, try to rewrite Simplify using :class:`NodeFinder`. (Hint: you
 want to use the method it publishes instead of the call to toposort!)
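The replace-and-validate contract described above can be sketched in plain Python, outside of Theano. Everything below (``ToyGraph``, ``InvalidGraph``, the validator) is an illustrative stand-in, not Theano's actual API; the point is only the shape of the contract: apply a replacement, let every registered validator inspect the result, and roll the change back if any of them rejects it.

```python
class InvalidGraph(Exception):
    """Raised by a validator that rejects the modified graph."""

class ToyGraph:
    """Minimal stand-in for a FunctionGraph: output name -> expression."""
    def __init__(self, bindings, validators=()):
        self.bindings = dict(bindings)
        self.validators = list(validators)

    def replace_validate(self, name, new_expr):
        # Apply the replacement, then let every validator inspect the
        # result; undo the change if any of them rejects it.
        old_expr = self.bindings[name]
        self.bindings[name] = new_expr
        try:
            for validate in self.validators:
                validate(self.bindings)
        except InvalidGraph:
            self.bindings[name] = old_expr  # roll back
            raise

def no_empty_exprs(bindings):
    if any(expr == "" for expr in bindings.values()):
        raise InvalidGraph("empty expression")

g = ToyGraph({"z": "(a*b)/y"}, validators=[no_empty_exprs])
g.replace_validate("z", "b")      # accepted: z is now "b"
try:
    g.replace_validate("z", "")   # rejected: rolled back to "b"
except InvalidGraph:
    pass
print(g.bindings["z"])  # -> b
```

The real ``fgraph.replace_validate`` does considerably more bookkeeping, but the accept-or-revert behavior is the part optimizers rely on.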
@@ -169,7 +169,7 @@ so we check for that. If the numerator is a multiplication we put the
 two operands in ``a`` and ``b``, so
 we can now say that ``z == (a*b)/y``. If ``y == a`` then ``z == b`` and if
 ``y == b`` then ``z == a``. When either case happens then we can replace
-``z`` by either ``a`` or ``b`` using ``env.replace_validate`` - else we do
+``z`` by either ``a`` or ``b`` using ``fgraph.replace_validate``; otherwise we do
 nothing. You might want to check the documentation about :ref:`variable`
 and :ref:`apply` to get a better understanding of the
 pointer-following game you need to play to get ahold of the nodes of interest
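That pointer-following game can be demonstrated without Theano at all. The ``Variable``/``Apply`` classes below are toy stand-ins (real Theano nodes carry much more state), but the ``owner``/``inputs`` navigation is the same as in the Simplify optimizer above:

```python
class Variable:
    def __init__(self, name=None, owner=None):
        self.name = name
        self.owner = owner   # Apply node that produced this Variable, or None

class Apply:
    def __init__(self, op, inputs):
        self.op = op
        self.inputs = inputs
        self.outputs = [Variable(owner=self)]

def apply_op(op, *inputs):
    return Apply(op, list(inputs)).outputs[0]

def simplify_div(z):
    """If z == (a*b)/y with y identical to a (or b), return b (or a);
    otherwise return z unchanged."""
    node = z.owner
    if node is not None and node.op == 'div':
        x, y = node.inputs                       # z = x / y
        if x.owner is not None and x.owner.op == 'mul':
            a, b = x.owner.inputs                # x = a * b
            if y is a:
                return b
            elif y is b:
                return a
    return z

a, y = Variable('a'), Variable('y')
z = apply_op('div', apply_op('mul', y, a), y)    # z == (y*a)/y
print(simplify_div(z).name)  # -> a
```

Note that the inputs of ``z.owner`` are themselves Variables, whose ``owner`` field leads one step further up the graph; that is all the structure the simplification needs.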
@@ -185,7 +185,7 @@ Test time:
 >>> y = double('y')
 >>> z = double('z')
 >>> a = add(z, mul(div(mul(y, x), y), div(z, x)))
->>> e = gof.Env([x, y, z], [a])
+>>> e = gof.FunctionGraph([x, y, z], [a])
 >>> e
 [add(z, mul(div(mul(y, x), y), div(z, x)))]
 >>> simplify.optimize(e)
@@ -201,7 +201,7 @@ optimization you wrote. For example, consider the following:
 >>> y = double('y')
 >>> z = double('z')
 >>> a = div(mul(add(y, z), x), add(y, z))
->>> e = gof.Env([x, y, z], [a])
+>>> e = gof.FunctionGraph([x, y, z], [a])
 >>> e
 [div(mul(add(y, z), x), add(y, z))]
 >>> simplify.optimize(e)
@@ -233,7 +233,7 @@ for this somewhere in the future.
 .. note::

-   :class:`Env` is a Theano structure intended for the optimization
+   :class:`FunctionGraph` is a Theano structure intended for the optimization
    phase. It is used internally by function and Module and is rarely
    exposed to the end user. You can use it to test out optimizations,
    etc. if you are comfortable with it, but it is recommended to use
@@ -292,7 +292,7 @@ subset of them) and applies one or several local optimizers on them.
 >>> y = double('y')
 >>> z = double('z')
 >>> a = add(z, mul(div(mul(y, x), y), div(z, x)))
->>> e = gof.Env([x, y, z], [a])
+>>> e = gof.FunctionGraph([x, y, z], [a])
 >>> e
 [add(z, mul(div(mul(y, x), y), div(z, x)))]
 >>> simplify = gof.TopoOptimizer(local_simplify)
...
@@ -43,14 +43,14 @@ The subgraph given by the end user is wrapped in a structure called
 *FunctionGraph*. That structure defines several hooks on adding and
 removing (pruning) nodes as well as on modifying links between nodes
 (for example, modifying an input of an :ref:`apply` node) (see the
-article about :ref:`libdoc_gof_env` for more information).
+article about :ref:`libdoc_gof_fgraph` for more information).

 FunctionGraph provides a method to change the input of an Apply node from one
 Variable to another and a more high-level method to replace a Variable
 with another. This is the structure that :ref:`Optimizers
 <optimization>` work on.

-Some relevant :ref:`Features <libdoc_gof_envfeature>` are typically added to the
+Some relevant :ref:`Features <libdoc_gof_fgraphfeature>` are typically added to the
 FunctionGraph, namely to prevent any optimization from operating inplace on
 inputs declared as immutable.
...
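The hook mechanism described in that hunk — features observing every change made to the graph container — can be sketched in a few lines of plain Python. ``MiniFunctionGraph``, ``attach_feature``, and ``ChangeLogger`` are hypothetical names for this illustration; only the callback-on-``change_input`` pattern mirrors the real FunctionGraph:

```python
class MiniFunctionGraph:
    """Toy graph container: nodes stored by id, with feature hooks
    fired on every input change."""
    def __init__(self):
        self.nodes = {}      # node_id -> {'op': str, 'inputs': list}
        self.features = []   # objects exposing an on_change_input callback

    def attach_feature(self, feature):
        if feature not in self.features:   # attaching twice is a no-op
            self.features.append(feature)

    def add_node(self, node_id, op, inputs):
        self.nodes[node_id] = {'op': op, 'inputs': list(inputs)}

    def change_input(self, node_id, index, new_input):
        old = self.nodes[node_id]['inputs'][index]
        self.nodes[node_id]['inputs'][index] = new_input
        # Notify every attached feature about the modification.
        for feature in self.features:
            feature.on_change_input(node_id, index, old, new_input)

class ChangeLogger:
    def __init__(self):
        self.log = []
    def on_change_input(self, node_id, index, old, new):
        self.log.append((node_id, index, old, new))

fg = MiniFunctionGraph()
logger = ChangeLogger()
fg.attach_feature(logger)
fg.add_node('n1', 'add', ['x', 'y'])
fg.change_input('n1', 1, 'z')
print(logger.log)  # -> [('n1', 1, 'y', 'z')]
```

This is how features such as the immutable-input guard mentioned above can veto or record modifications without the optimizers knowing about them.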
@@ -35,13 +35,13 @@ train = theano.function(
 predict = theano.function(inputs=[x], outputs=prediction,
             name = "predict")

-if any( [x.op.__class__.__name__=='Gemv' for x in train.maker.env.toposort()]):
+if any( [x.op.__class__.__name__=='Gemv' for x in train.maker.fgraph.toposort()]):
     print 'Used the cpu'
-elif any( [x.op.__class__.__name__=='GpuGemm' for x in train.maker.env.toposort()]):
+elif any( [x.op.__class__.__name__=='GpuGemm' for x in train.maker.fgraph.toposort()]):
     print 'Used the gpu'
 else:
     print 'ERROR, not able to tell if theano used the cpu or the gpu'
-    print train.maker.env.toposort()
+    print train.maker.fgraph.toposort()
...
@@ -179,7 +179,7 @@ Here is the state of that vision as of 24 October 2011 (after Theano release
       doesn't apply to only 1 op.
     * Example of use: Determine if we should move computation to the
       GPU or not depending on the input size.
-    * Possible implementation note: allow Theano Variable in the env to
+    * Possible implementation note: allow Theano Variable in the fgraph to
       have more than 1 owner.
 * We have a CUDA backend for tensors of type `float32` only.
...
-.. _libdoc_gof_env:
+.. _libdoc_gof_fgraph:

 ================================================
-:mod:`env` -- Graph Container [doc TODO]
+:mod:`fgraph` -- Graph Container [doc TODO]
 ================================================

-.. module:: env
+.. module:: fgraph
    :platform: Unix, Windows
    :synopsis: Theano Internals
 .. moduleauthor:: LISA
@@ -14,17 +14,17 @@
 Guide
 =====

-Env
----
+FunctionGraph
+-------------

-.. _libdoc_gof_envfeature:
+.. _libdoc_gof_fgraphfeature:

-Env Features
-------------
+FunctionGraph Features
+----------------------

-.. _libdoc_gof_envfeaturelist:
+.. _libdoc_gof_fgraphfeaturelist:

-Env Feature List
-^^^^^^^^^^^^^^^^
+FunctionGraph Feature List
+^^^^^^^^^^^^^^^^^^^^^^^^^^

 * ReplaceValidate
 * DestroyHandler
@@ -32,7 +32,7 @@ Env Feature List
 Reference
 =========

-.. class:: Env
+.. class:: FunctionGraph

     ***TODO***
@@ -13,7 +13,7 @@
 .. toctree::
     :maxdepth: 1

-    env
+    fgraph
     toolbox
     type
...
@@ -16,7 +16,7 @@ Guide
 .. class:: History(object)

-    .. method:: revert(env, checkpoint)
+    .. method:: revert(fgraph, checkpoint)

         Reverts the graph to whatever it was at the provided
         checkpoint (undoes all replacements). A checkpoint at any
         given time can be obtained using self.checkpoint().

@@ -25,7 +25,7 @@ Guide
 .. class:: ReplaceValidate(History, Validator)

-    .. method:: replace_validate(env, var, new_var, reason=None)
+    .. method:: replace_validate(fgraph, var, new_var, reason=None)

 .. class:: NodeFinder(Bookkeeper)
...
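The checkpoint/revert behavior documented for ``History`` can be sketched with a log of undone-in-reverse replacements. The class below is an illustrative toy (the graph is just a dict), not Theano's implementation; a checkpoint is simply a position in the replacement log:

```python
class History:
    """Toy version of the History feature: record replacements so the
    graph (here just a dict of name -> expression) can be reverted to
    any earlier checkpoint."""
    def __init__(self, graph):
        self.graph = graph
        self._log = []            # (key, old_value) pairs, oldest first

    def checkpoint(self):
        return len(self._log)     # a checkpoint is a position in the log

    def replace(self, key, new_value):
        self._log.append((key, self.graph[key]))
        self.graph[key] = new_value

    def revert(self, checkpoint):
        # Undo replacements in reverse order back to the checkpoint.
        while len(self._log) > checkpoint:
            key, old_value = self._log.pop()
            self.graph[key] = old_value

g = {'z': '(a*b)/y'}
h = History(g)
cp = h.checkpoint()
h.replace('z', 'b')
h.replace('z', 'a')
h.revert(cp)
print(g['z'])  # -> (a*b)/y
```

``ReplaceValidate`` inherits this machinery, which is what lets a rejected replacement be undone cleanly.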
@@ -4,7 +4,7 @@
 :mod:`type` -- Interface for types of variables
 ================================================

-.. module:: env
+.. module:: fgraph
    :platform: Unix, Windows
    :synopsis: Interface for types of symbolic variables
 .. moduleauthor:: LISA
...
@@ -52,7 +52,7 @@ Theano also provides :func:`pydotprint` that creates a png image of the function
 >>> pp(gy)  # print out the gradient prior to optimization
 '((fill((x ** 2), 1.0) * 2) * (x ** (2 - 1)))'
 >>> f = function([x], gy)
->>> pp(f.maker.env.outputs[0])
+>>> pp(f.maker.fgraph.outputs[0])
 '(2.0 * x)'

 The parameter in T.dscalar('x') in the first line is the name of this variable

@@ -73,7 +73,7 @@ iteration number or other kinds of information in the name.
 2) The second function to print a graph is :func:`theano.printing.debugprint(variable_or_function, depth=-1)`

->>> theano.printing.debugprint(f.maker.env.outputs[0])
+>>> theano.printing.debugprint(f.maker.fgraph.outputs[0])
 Elemwise{mul,no_inplace} 46950805397392
  2.0 46950805310800
  x 46950804895504
...
-.. _env:
+.. _fgraph:

-===
-Env
-===
+=============
+FunctionGraph
+=============

 WRITEME

-.. _envfeature:
+.. _fgraphfeature:

 Feature
 =======
...
@@ -8,7 +8,7 @@ Advanced Topics (under construction)
 .. toctree::
     :maxdepth: 2

-    env
+    fgraph
     compilation
     ccodegen
     function
...
@@ -32,7 +32,7 @@ A good, simple way to do it would be to have those commands as methods of a structure
 >>> a, b, c = Tensor(), Tensor(), Tensor()
 >>> d = b * c
 >>> e = a + d
->>> debug = DebugLinker(Env([a, b, c], [e])).make_function()
+>>> debug = DebugLinker(FunctionGraph([a, b, c], [e])).make_function()
 >>> debug.set_breakpoint(d)
 >>> debug.debug(10, 20, 30)  # a, b, c = 10, 20, 30
 Now at: Mul(b, c)
...
@@ -114,7 +114,7 @@ Caching
 The current way of caching is from a hash of the generated code. That is inefficient because code has to be generated each time, which might be a costly process. Furthermore, usage of hashing in sets makes it difficult to ensure a consistent ordering of Ops in graphs where several orderings are valid, so the generated C code is potentially different each time. Here is a proposal for a better way to compute the hash:

 * Result_hash = Result version + Result desc
 * Op_hash = Op version + Op desc + input/output hashes
-* Env_hash = Env version + combination of the Op hashes and their traversal order wrt a consistent traversal method
+* FunctionGraph_hash = FunctionGraph version + combination of the Op hashes and their traversal order wrt a consistent traversal method

 The version could be set explicitly via a ``__version__`` field or it could simply be equal to the file's last modification date. We could also have a ``__nocache__`` field indicating that code produced by the Op or Result cannot be cached.
...
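The proposed hash composition can be sketched in plain Python. The names ``op_hash`` and ``graph_hash`` are hypothetical (nothing like this exists in Theano under these names); the sketch only shows why hashing the op hashes in a fixed traversal order yields a deterministic cache key, instead of the order-sensitive key that hashing code generated from a set produces:

```python
import hashlib

def op_hash(version, desc, input_hashes, output_hashes):
    # Op_hash = Op version + Op desc + input/output hashes.
    parts = [version, desc] + sorted(input_hashes) + sorted(output_hashes)
    return hashlib.sha1('|'.join(parts).encode()).hexdigest()

def graph_hash(graph_version, op_hashes_in_traversal_order):
    # FunctionGraph_hash = version + Op hashes combined in a *fixed*
    # traversal order, so the same graph always yields the same key.
    h = hashlib.sha1(graph_version.encode())
    for oh in op_hashes_in_traversal_order:
        h.update(oh.encode())
    return h.hexdigest()

ops = [op_hash('1', 'mul', ['r1'], ['r2']),
       op_hash('1', 'add', ['r2'], ['r3'])]
key = graph_hash('0.1', ops)
```

Running ``graph_hash`` twice over the same traversal gives the same key, while reversing the traversal gives a different one, which is exactly the property the proposal asks the "consistent traversal method" to pin down.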
@@ -62,7 +62,7 @@ Running the above code generates the following error message:
     Definition in:
       File "/u/desjagui/workspace/PYTHON/theano/gof/opt.py", line 1102, in apply
-        lopt_change = self.process_node(env, node, lopt)
+        lopt_change = self.process_node(fgraph, node, lopt)
       File "/u/desjagui/workspace/PYTHON/theano/gof/opt.py", line 882, in process_node
         replacements = lopt.transform(node)
       File "/u/desjagui/workspace/PYTHON/Theano/theano/tensor/blas.py", line 1030, in local_dot_to_dot22
...
@@ -44,7 +44,7 @@ the correct symbolic gradient.
 .. code-block:: python

-    pp(f.maker.env.outputs[0])
+    pp(f.maker.fgraph.outputs[0])
     '(2.0 * x)'

 After optimization there is only one Apply node left in the graph, which
...
@@ -46,7 +46,7 @@ file and run it.
     r = f()
     print 'Looping %d times took'%iters, time.time() - t0, 'seconds'
     print 'Result is', r
-    print 'Used the','cpu' if numpy.any( [isinstance(x.op,T.Elemwise) for x in f.maker.env.toposort()]) else 'gpu'
+    print 'Used the','cpu' if numpy.any( [isinstance(x.op,T.Elemwise) for x in f.maker.fgraph.toposort()]) else 'gpu'

 The program just computes the exp() of a bunch of random numbers.
 Note that we use the `shared` function to

@@ -103,7 +103,7 @@ after the T.exp(x) is replaced by a GPU version of exp().
     print 'Looping %d times took'%iters, time.time() - t0, 'seconds'
     print 'Result is', r
     print 'Numpy result is', numpy.asarray(r)
-    print 'Used the','cpu' if numpy.any( [isinstance(x.op,T.Elemwise) for x in f.maker.env.toposort()]) else 'gpu'
+    print 'Used the','cpu' if numpy.any( [isinstance(x.op,T.Elemwise) for x in f.maker.fgraph.toposort()]) else 'gpu'

 The output from this program is

@@ -158,7 +158,7 @@ that it has the un-wanted side-effect of really slowing things down.
     print 'Looping %d times took'%iters, time.time() - t0, 'seconds'
     print 'Result is', r
     print 'Numpy result is', numpy.asarray(r)
-    print 'Used the','cpu' if numpy.any( [isinstance(x.op,T.Elemwise) for x in f.maker.env.toposort()]) else 'gpu'
+    print 'Used the','cpu' if numpy.any( [isinstance(x.op,T.Elemwise) for x in f.maker.fgraph.toposort()]) else 'gpu'

 Running this version of the code takes just under 0.05 seconds, over 140x faster than
 the CPU implementation!
...
@@ -50,7 +50,7 @@ import gof
 from gof import \
     CLinker, OpWiseCLinker, DualLinker, Linker, LocalLinker, PerformLinker, \
     Container, \
-    InconsistencyError, Env, \
+    InconsistencyError, FunctionGraph, \
     Apply, Variable, Constant, \
     Op, \
     opt, \
...
@@ -85,19 +85,20 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
     things more convenient for the user. The shared variables are
     transformed into implicit inputs and implicit outputs. The
     optimizations don't see which variables are shared or not.

-    2. Env: determines whether a graph is valid. for example, suppose
+    2. FunctionGraph: determines whether a graph is valid. For example,
+    suppose
     you merge the two apply nodes in our example above, ie, do the
     addition and the tanh at the same time. If you propose a merge that
-    changes the resulting dtype or broadcastable pattern of V4, the env
+    changes the resulting dtype or broadcastable pattern of V4, the fgraph
     will detect this.

     inplace optimizations: say we have an apply node that
     does + on V1 and V2, with output V3. We can change the output to be
     V1, to use less memory. theano must be told that this optimization is
     happening though, so that other parts of the graph are given the
     correct (pre + or post +) version of V1.
-    env will raise an error if any of these types of
-    modifications causes an error
+    fgraph will raise an error if any of these types of
+    modifications causes an error.

-    env also adds a field called "clients" to all variables.
+    fgraph also adds a field called "clients" to all variables.
     clients is a list of apply nodes that use the variable. this makes it
     possible to traverse the graph in both directions. this is useful for
     determining whether to do some optimizations. for example, a fusion
...
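The "clients" idea from that docstring — an index from each variable to the apply nodes that consume it, enabling forward as well as backward traversal — can be sketched independently of Theano. The ``build_clients`` helper below is illustrative only; real fgraphs maintain this index incrementally rather than rebuilding it:

```python
def build_clients(nodes):
    """nodes: list of (node_id, op, input_vars, output_vars) tuples.
    Returns variable -> list of node_ids that consume it, so a variable
    can be walked *forward* to its users, not just backward via owners."""
    clients = {}
    for node_id, _op, inputs, _outputs in nodes:
        for var in inputs:
            clients.setdefault(var, []).append(node_id)
    return clients

# Mirror the docstring's example: V3 = V1 + V2, then V4 = tanh(V3).
nodes = [
    ('n1', 'add',  ['V1', 'V2'], ['V3']),
    ('n2', 'tanh', ['V3'],       ['V4']),
]
clients = build_clients(nodes)
print(clients['V3'])  # -> ['n2']
```

A fusion optimization, for instance, can check ``clients['V3']`` to see that the add's output feeds only the tanh before deciding to merge the two nodes.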
@@ -94,8 +94,8 @@ OPT_FAST_COMPILE.name = 'OPT_FAST_COMPILE'
 OPT_STABILIZE.name = 'OPT_STABILIZE'

 predefined_optimizers = {
-    None: (lambda env: None),
-    'None': (lambda env: None),
+    None: (lambda fgraph: None),
+    'None': (lambda fgraph: None),
     'merge': gof.MergeOptimizer(),
     'fast_run': OPT_FAST_RUN,
     'fast_run_stable': OPT_FAST_RUN_STABLE,
@@ -182,20 +182,20 @@ _output_guard = OutputGuard()
 class AddDestroyHandler(gof.Optimizer):
     """This optimizer performs two important functions:

-    1) it has a 'requirement' of the destroyhandler. This means that the env
+    1) it has a 'requirement' of the destroyhandler. This means that the fgraph
     will include it as a feature for this optimization, and keep this feature
     enabled for subsequent optimizations. All optimizations that work inplace
     on any of their inputs must run *after* this optimization to ensure that
-    the DestroyHandler has been included in the env.
+    the DestroyHandler has been included in the fgraph.

     2) It tries to replace each output with an Op that purports to destroy it
     (but it won't, I promise). If this replacement succeeds it means that
     there is a bug in theano. It should not be possible to destroy outputs.
     """
-    def apply(self, env):
-        for o in env.outputs:
+    def apply(self, fgraph):
+        for o in fgraph.outputs:
             try:
-                env.replace_validate(o, _output_guard(o),
+                fgraph.replace_validate(o, _output_guard(o),
                         reason='output_guard')
                 _logger.info("Output variable %s required output_guard, "
                         "how was this output left unprotected against "
@@ -206,12 +206,12 @@ class AddDestroyHandler(gof.Optimizer):
             # No guard necessary
             pass

-    def add_requirements(self, env):
-        super(AddDestroyHandler, self).add_requirements(env)
-        env.extend(gof.DestroyHandler())
+    def add_requirements(self, fgraph):
+        super(AddDestroyHandler, self).add_requirements(fgraph)
+        fgraph.extend(gof.DestroyHandler())

-class PrintCurrentEnv(gof.Optimizer):
+class PrintCurrentFunctionGraph(gof.Optimizer):
     """This optimizer is for debugging.

     Toss it into the optimization pipeline to see the state of things at any
@@ -220,10 +220,10 @@ class PrintCurrentEnv(gof.Optimizer):
     def __init__(self, header):
         self.header = header

-    def apply(self, env):
+    def apply(self, fgraph):
         import theano.printing
-        print "PrintCurrentEnv:", self.header
-        theano.printing.debugprint(env.outputs)
+        print "PrintCurrentFunctionGraph:", self.header
+        theano.printing.debugprint(fgraph.outputs)

 optdb = gof.SequenceDB()
@@ -237,21 +237,21 @@ optdb.register('canonicalize', gof.EquilibriumDB(),
 optdb.register('merge1.2', gof.MergeOptimizer(),
         1.2, 'fast_run', 'fast_compile')

-optdb.register('Print1.21', PrintCurrentEnv('Post-canonicalize'),
+optdb.register('Print1.21', PrintCurrentFunctionGraph('Post-canonicalize'),
         1.21,)  # 'fast_run', 'fast_compile')

 # replace unstable subgraphs
 optdb.register('stabilize', gof.EquilibriumDB(),
         1.5, 'fast_run')

-optdb.register('Print1.51', PrintCurrentEnv('Post-stabilize'),
+optdb.register('Print1.51', PrintCurrentFunctionGraph('Post-stabilize'),
         1.51,)  # 'fast_run', 'fast_compile')

 # misc special cases for speed
 optdb.register('specialize', gof.EquilibriumDB(),
         2, 'fast_run')

-optdb.register('Print2.01', PrintCurrentEnv('Post-specialize'),
+optdb.register('Print2.01', PrintCurrentFunctionGraph('Post-specialize'),
         2.01,)  # 'fast_run', 'fast_compile')

 # misc special cases for speed that break canonicalization
...
@@ -499,7 +499,7 @@ class Method(Component):
         mode = kwargs.pop('mode', None)
         if mode:
             f = self.build(mode, {}, True)
-            einputs, eoutputs = f.maker.env.inputs, f.maker.env.outputs
+            einputs, eoutputs = f.maker.fgraph.inputs, f.maker.fgraph.outputs
             updates = dict(((k, v) for k, v in zip(einputs[len(inputs):], eoutputs[len(outputs):])))
             inputs, outputs = einputs[:len(inputs)], eoutputs[:len(outputs)]
             rval += pprint(inputs, outputs, updates, False)
...
@@ -82,8 +82,8 @@ def rebuild_collect_shared(outputs,
         shared inputs, and their default_update (if applicable) to update_d
         and update_expr.
-        v can have an env attached to it, case in which we want to clone
-        constants ( to avoid having a constant belonging to two envs)
+        v can have an fgraph attached to it, case in which we want to clone
+        constants ( to avoid having a constant belonging to two fgraphs)
         '''
         # this co-recurses with clone_a
         assert v is not None
@@ -113,7 +113,7 @@ def rebuild_collect_shared(outputs,
             update_d[v] = v_update
             update_expr.append((v, v_update))
         if not copy_inputs_over or (isinstance(v, Constant) and
-                                    hasattr(v, 'env')):
+                                    hasattr(v, 'fgraph')):
             ### Cloning shared variables implies copying their underlying
             ### memory buffer ?? No.
             return clone_d.setdefault(v, v.clone())
...
@@ -48,7 +48,7 @@ class Profile_Maker(FunctionMaker):
         ret.profile = profile
         #initialize the timers
-        for i, node in enumerate(ret.maker.env.toposort()):
+        for i, node in enumerate(ret.maker.fgraph.toposort()):
             profile.apply_time[node] = 0.0
             profile.outputs_size[node] = [0.0] * len(node.outputs)
@@ -240,7 +240,7 @@ class ProfileMode(Mode):
         apply_time = {}
         for fn, ps in self.profile_stats.items():
-            for (i, node) in enumerate(fn.maker.env.toposort()):
+            for (i, node) in enumerate(fn.maker.fgraph.toposort()):
                 apply_time[(i, node)] = ps.apply_time[node]
         for (i, n), t in apply_time.items():
             if t == 0:
@@ -384,7 +384,7 @@ class ProfileMode(Mode):
             op_apply.setdefault(op,0)
             sop_apply.setdefault(type(a.op),0)
             op_time[op]+=t
-            nb_call = [v for k,v in fct_call.items() if k.maker.env is a.env][0]
+            nb_call = [v for k,v in fct_call.items() if k.maker.fgraph is a.fgraph][0]
             op_cimpl.setdefault(a.op, True)
             op_cimpl[a.op] = op_cimpl[a.op] and apply_cimpl.get(a, False)
             if t==0:
@@ -480,7 +480,7 @@ class ProfileMode(Mode):
         print
         print 'Apply-wise summary:'
         print '<% of local_time spent at this position> <cumulative %%> <apply time> <cumulative seconds> <time per call> [*] <nb_call> <Apply position> <Apply Op name>'
-        atimes = [(t*100/local_time, t, a, [v for k,v in fct_call.items() if k.maker.env is a[1].env][0]) for a, t in apply_time.items()]
+        atimes = [(t*100/local_time, t, a, [v for k,v in fct_call.items() if k.maker.fgraph is a[1].fgraph][0]) for a, t in apply_time.items()]
         atimes.sort()
         atimes.reverse()
         tot=0
@@ -509,23 +509,23 @@ class ProfileMode(Mode):
             print """\nProfile of Theano intermediate memory disabled.
 To enabled, put the Theano flag ProfileMode.profile_memory to True."""
         else:
-            fct_memory={}#env->dict(node->(outputs size))
+            fct_memory={}#fgraph->dict(node->(outputs size))
             var_mem = {}
             for node, val in outputs_size.items():
-                fct_memory.setdefault(node.env, {})
-                fct_memory[node.env][node]=val
+                fct_memory.setdefault(node.fgraph, {})
+                fct_memory[node.fgraph][node]=val
                 for out,v in zip(node.outputs,val):
                     var_mem[out]=v
             print
             print "Profile of Theano functions memory:"
             print "(This check only the output of each apply node. It don't check the temporary memory used by the op in the apply node.)"
             nb_skipped = 0
-            for env,nodes_mem in fct_memory.iteritems():
+            for fgraph,nodes_mem in fct_memory.iteritems():
                 size_sum=sum([sum(val) for key,val in nodes_mem.iteritems()])
                 if size_sum < min_memory_size:
                     nb_skipped += 1
                     continue
-                print "Theano fct:", [fct for fct in fct_call.keys() if fct.maker.env is env][0].name
+                print "Theano fct:", [fct for fct in fct_call.keys() if fct.maker.fgraph is fgraph][0].name
                 print " Max without gc, inplace and view (KB)",size_sum/1024
                 node_memory_size = 0
@@ -538,12 +538,12 @@ class ProfileMode(Mode):
                 items.sort(key=lambda a: a[1])
                 items.reverse()
-                order = env.toposort()
+                order = fgraph.toposort()
                 computed, last_user = gof.link.gc_helper(order)
                 for node in order:
                     post_thunk_old_storage.append([ input_idx
                         for input_idx,input in enumerate(node.inputs)
-                        if (input in computed) and (input not in env.outputs) and node == last_user[input]])
+                        if (input in computed) and (input not in fgraph.outputs) and node == last_user[input]])
                 for node,val in items[:n_apply_to_print]:
                     dmap = getattr(node.op,'destroy_map',None)
                     vmap = getattr(node.op,'view_map',None)
@@ -624,7 +624,7 @@ Test them first, as they are not guaranteed to always provide a speedup."""
 def get_scalar_ops(s):
     if isinstance(s, theano.scalar.Composite):
         l = []
-        for node in s.env.toposort():
+        for node in s.fgraph.toposort():
             l += get_scalar_ops(node.op)
         return l
     else:
...
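`get_scalar_ops` above recurses because a `Composite` scalar op wraps a whole inner graph of its own, so counting primitive ops means walking each nested graph. A toy version of the same recursion (the classes here are hypothetical stand-ins, not `theano.scalar`):

```python
# Toy stand-ins for the recursion in get_scalar_ops: a Composite holds
# inner nodes (stand-in for s.fgraph.toposort()); flattening collects
# every primitive op, depth first.
class Op(object):
    def __init__(self, name):
        self.name = name

class Node(object):
    def __init__(self, op):
        self.op = op

class Composite(Op):
    def __init__(self, name, inner_nodes):
        Op.__init__(self, name)
        self.inner_nodes = inner_nodes

def get_scalar_ops(op):
    if isinstance(op, Composite):
        ops = []
        for node in op.inner_nodes:
            ops += get_scalar_ops(node.op)
        return ops
    return [op]

add, mul = Op('add'), Op('mul')
inner = Composite('inner', [Node(add), Node(mul)])
outer = Composite('outer', [Node(inner), Node(add)])
print([o.name for o in get_scalar_ops(outer)])
```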
@@ -133,7 +133,7 @@ class ProfileStats(object):
     # time spent optimizing graph (FunctionMaker.__init__)
     validate_time = 0.0
-    # time spent in env.validate
+    # time spent in fgraph.validate
     # This is a subset of optimizer_time that is dominated by toposort()
     # when the destorymap feature is included.
@@ -506,7 +506,7 @@ class ProfileStats(object):
                   t * 100 / local_time,
                   t,
                   a,
-                  a.env.toposort().index(a),
+                  a.fgraph.toposort().index(a),
                   self.apply_callcount[a])
                  for a, t in self.apply_time.items()]
         atimes.sort()
@@ -671,7 +671,7 @@ if 0: # old code still to be ported from ProfileMode
         print "List of apply that don't have float64 as input but have float64 in outputs. Usefull to know if we forgot some cast when using floatX=float32 or gpu code."
         print '<Apply> <Apply position> <fct name> <inputs type> <outputs type>'
         for fct in fct_call.keys():
-            for idx, node in enumerate(fct.maker.env.toposort()):
+            for idx, node in enumerate(fct.maker.fgraph.toposort()):
                 if any(hasattr(i, 'dtype') and i.dtype == 'float64' for i in node.outputs) and not any(hasattr(i, 'dtype') and i.dtype == 'float64' for i in node.inputs):
                     print str(node), idx, fct.name, str([getattr(i,'dtype',None) for i in node.inputs]),str([getattr(i,'dtype',None) for i in node.outputs])
@@ -702,17 +702,17 @@ if 0: # old code still to be ported from ProfileMode
                 print fct.name, i.name, i.type, i
     if outputs_size:
-        fct_memory={}#env->dict(node->(outputs size))
+        fct_memory={}#fgraph->dict(node->(outputs size))
         var_mem = {}
         for node,val in outputs_size.items():
-            fct_memory.setdefault(node.env,{})
-            fct_memory[node.env][node]=val
+            fct_memory.setdefault(node.fgraph,{})
+            fct_memory[node.fgraph][node]=val
             for out,v in zip(node.outputs,val):
                 var_mem[out]=v
         print
         print "Profile of Theano functions memory:"
-        for env,nodes_mem in fct_memory.iteritems():
+        for fgraph,nodes_mem in fct_memory.iteritems():
-            print "Theano fct:", [fct for fct in fct_call.keys() if fct.maker.env is env][0].name
+            print "Theano fct:", [fct for fct in fct_call.keys() if fct.maker.fgraph is fgraph][0].name
             size_sum=sum([sum(val) for key,val in nodes_mem.iteritems()])
             print " Max without gc, inplace and view (KB)",size_sum/1024
@@ -726,12 +726,12 @@ if 0: # old code still to be ported from ProfileMode
             items.sort(key=lambda a: a[1])
             items.reverse()
-            order = env.toposort()
+            order = fgraph.toposort()
             computed, last_user = gc_helper(order)
             for node in order:
                 post_thunk_old_storage.append([ input_idx
                     for input_idx,input in enumerate(node.inputs)
-                    if (input in computed) and (input not in env.outputs) and node == last_user[input]])
+                    if (input in computed) and (input not in fgraph.outputs) and node == last_user[input]])
             for node,val in items[:n_apply_to_print]:
                 dmap = getattr(node.op,'destroy_map',None)
                 vmap = getattr(node.op,'view_map',None)
@@ -787,7 +787,7 @@ if 0: # old code still to be ported from ProfileMode
 def get_scalar_ops(s):
     if isinstance(s, theano.scalar.Composite):
         l = []
-        for node in s.env.toposort():
+        for node in s.fgraph.toposort():
             l+=get_scalar_ops(node.op)
         return l
     else: return [s]
...
@@ -23,7 +23,7 @@ class T_OpFromGraph(unittest.TestCase):
         yv = numpy.ones((2, 2), dtype=config.floatX)*3
         zv = numpy.ones((2, 2), dtype=config.floatX)*5
         #print function, function.__module__
-        #print fn.maker.env.toposort()
+        #print fn.maker.fgraph.toposort()
         fn(xv, yv, zv)
         assert numpy.all(8.0 == fn(xv, yv, zv))
         assert numpy.all(8.0 == fn(xv, yv, zv))
...
@@ -307,7 +307,7 @@ class T_function(unittest.TestCase):
     def test_constant_output(self):
         # Test that if the output is a constant, we respect the theano memory interface
         f = theano.function([],theano.tensor.constant([4]))
-        #print f.maker.env.toposort()
+        #print f.maker.fgraph.toposort()
         out = f()
         assert (out==4).all()
         out[0]=3
@@ -318,7 +318,7 @@ class T_function(unittest.TestCase):
         # Test that if the output is a constant and borrow, we respect the theano memory interface
         f = theano.function([],Out(theano.tensor.constant([4]), borrow=True))
-        #print f.maker.env.toposort()
+        #print f.maker.fgraph.toposort()
         out = f()
         assert (out==4).all()
         out[0]=3
@@ -521,9 +521,9 @@ class T_picklefunction(unittest.TestCase):
             return
         assert f.maker is not g.maker
-        assert f.maker.env is not g.maker.env
-        tf = f.maker.env.toposort()
-        tg = f.maker.env.toposort()
+        assert f.maker.fgraph is not g.maker.fgraph
+        tf = f.maker.fgraph.toposort()
+        tg = f.maker.fgraph.toposort()
         assert len(tf) == len(tg)
         for nf, ng in zip(tf, tg):
             assert nf.op == ng.op
...
@@ -215,7 +215,7 @@ def test_example_rnn():
     y[LAG:] = x[:-LAG, 0:n_out]
     if 0:
-        for i, node in enumerate(rnn.minimizer.step_cost.maker.env.toposort()):
+        for i, node in enumerate(rnn.minimizer.step_cost.maker.fgraph.toposort()):
             print i, node
     niter=1500
@@ -258,14 +258,14 @@ def test_WEIRD_STUFF():
     # for n in m.optimizer: print n.name
     if 0:
-        topo1=rnn1.minimizer.step_cost.maker.env.toposort()
-        topo2=rnn2.minimizer.step_cost.maker.env.toposort()
+        topo1=rnn1.minimizer.step_cost.maker.fgraph.toposort()
+        topo2=rnn2.minimizer.step_cost.maker.fgraph.toposort()
         for i in range(len(topo1)):
             print '1',i, topo1[i]
             print '2',i, topo2[i]
     if 0:
-        topo1=rnn1.minimizer.step.maker.env.toposort()
-        topo2=rnn2.minimizer.step.maker.env.toposort()
+        topo1=rnn1.minimizer.step.maker.fgraph.toposort()
+        topo2=rnn2.minimizer.step.maker.fgraph.toposort()
         for i in range(len(topo1)):
             print '1',i, topo1[i]
             print '2',i, topo2[i]
...
@@ -591,8 +591,8 @@ class Test_pfunc(unittest.TestCase):
         c = a + 10
         f = pfunc([b], c, givens={a: b})
-        assert len(f.maker.env.inputs) == 1
-        assert len(f.maker.env.outputs) == 1
+        assert len(f.maker.fgraph.inputs) == 1
+        assert len(f.maker.fgraph.outputs) == 1
     def test_givens_replaces_shared_variable2(self):
         a = shared(1., 'a')
...
@@ -44,8 +44,6 @@ import compiledir # adds config vars
 from fg import \
     InconsistencyError, MissingInputError, FunctionGraph
-#deprecated alias to support code written with old name
-Env = FunctionGraph
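The silent alias `Env = FunctionGraph` is deleted here; per the commit message, old code is instead supported by properties that warn about the deprecated `env` attribute. A minimal sketch of that shim, assuming a holder object with an `fgraph` attribute (illustrative, not Theano's exact code):

```python
import warnings

class FunctionMaker(object):
    """Illustrative holder: an object that used to expose '.env' keeps a
    deprecated property that forwards to '.fgraph' with a warning."""
    def __init__(self, fgraph):
        self.fgraph = fgraph

    @property
    def env(self):
        # Deprecated alias kept so old call sites keep working, loudly.
        warnings.warn("'env' is deprecated; use 'fgraph' instead",
                      DeprecationWarning, stacklevel=2)
        return self.fgraph

maker = FunctionMaker(fgraph=object())
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert maker.env is maker.fgraph      # old spelling still works...
assert caught[0].category is DeprecationWarning  # ...but now warns
```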
 from destroyhandler import \
     DestroyHandler
...
(Diff collapsed.)
@@ -23,27 +23,27 @@ class DestroyHandler(object):
         self.map = {}
         self.do_imports_on_attach=do_imports_on_attach
-    def on_attach(self, env):
+    def on_attach(self, fgraph):
-        dh = self.map.setdefault(env, DestroyHandlerHelper2(do_imports_on_attach=self.do_imports_on_attach))
+        dh = self.map.setdefault(fgraph, DestroyHandlerHelper2(do_imports_on_attach=self.do_imports_on_attach))
-        dh.on_attach(env)
+        dh.on_attach(fgraph)
-    def on_detach(self, env):
+    def on_detach(self, fgraph):
-        self.map[env].on_detach(env)
+        self.map[fgraph].on_detach(fgraph)
-    def on_import(self, env, op):
+    def on_import(self, fgraph, op):
-        self.map[env].on_import(env, op)
+        self.map[fgraph].on_import(fgraph, op)
-    def on_prune(self, env, op):
+    def on_prune(self, fgraph, op):
-        self.map[env].on_prune(env, op)
+        self.map[fgraph].on_prune(fgraph, op)
-    def on_change_input(self, env, node, i, r, new_r):
+    def on_change_input(self, fgraph, node, i, r, new_r):
-        self.map[env].on_change_input(env, node, i, r, new_r)
+        self.map[fgraph].on_change_input(fgraph, node, i, r, new_r)
-    def validate(self, env):
+    def validate(self, fgraph):
-        self.map[env].validate(env)
+        self.map[fgraph].validate(fgraph)
-    def orderings(self, env):
+    def orderings(self, fgraph):
-        return self.map[env].orderings(env)
+        return self.map[fgraph].orderings(fgraph)
 def _dfs_toposort(i, r_out, orderings):
@@ -165,14 +165,14 @@ def fast_inplace_check(inputs):
     :type inputs: list
     :param inputs: inputs Variable that you want to use as inplace destination
     """
-    env = inputs[0].env
+    fgraph = inputs[0].fgraph
-    protected_inputs = [f.protected for f in env._features if isinstance(f,theano.compile.function_module.Supervisor)]
+    protected_inputs = [f.protected for f in fgraph._features if isinstance(f,theano.compile.function_module.Supervisor)]
     protected_inputs = sum(protected_inputs,[])#flatten the list
-    protected_inputs.extend(env.outputs)
+    protected_inputs.extend(fgraph.outputs)
     inputs = [i for i in inputs if
               not isinstance(i,graph.Constant)
-              and not env.destroyers(i)
+              and not fgraph.destroyers(i)
               and i not in protected_inputs]
     return inputs
@@ -211,15 +211,15 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
     """
     def __init__(self, do_imports_on_attach=True):
-        self.env = None
+        self.fgraph = None
        self.do_imports_on_attach = do_imports_on_attach
-    def on_attach(self, env):
+    def on_attach(self, fgraph):
        #boilerplate from old implementation
-        if self.env is not None:
+        if self.fgraph is not None:
-            raise Exception("A DestroyHandler instance can only serve one Env.")
+            raise Exception("A DestroyHandler instance can only serve one FunctionGraph. (Matthew 6:24)")
        for attr in ('destroyers', 'destroy_handler'):
-            if hasattr(env, attr):
+            if hasattr(fgraph, attr):
                raise toolbox.AlreadyThere("DestroyHandler feature is already present or in conflict with another plugin.")
        def get_destroyers_of(r):
@@ -229,10 +229,10 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
            except Exception:
                return []
-        env.destroyers = get_destroyers_of
+        fgraph.destroyers = get_destroyers_of
-        env.destroy_handler = self
+        fgraph.destroy_handler = self
-        self.env = env
+        self.fgraph = fgraph
        self.destroyers = set() #set of Apply instances with non-null destroy_map
        self.view_i = {} # variable -> variable used in calculation
        self.view_o = {} # variable -> set of variables that use this one as a direct input
@@ -242,7 +242,7 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
        self.debug_all_apps = set()
        if self.do_imports_on_attach:
-            toolbox.Bookkeeper.on_attach(self, env)
+            toolbox.Bookkeeper.on_attach(self, fgraph)
    def refresh_droot_impact(self):
        if self.stale_droot:
@@ -278,20 +278,20 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
        return droot, impact, root_destroyer
-    def on_detach(self, env):
+    def on_detach(self, fgraph):
-        if env is not self.env:
+        if fgraph is not self.fgraph:
-            raise Exception("detaching wrong env", env)
+            raise Exception("detaching wrong fgraph", fgraph)
        del self.destroyers
        del self.view_i
        del self.view_o
        del self.clients
        del self.stale_droot
-        assert self.env.destroyer_handler is self
+        assert self.fgraph.destroyer_handler is self
-        delattr(self.env, 'destroyers')
+        delattr(self.fgraph, 'destroyers')
-        delattr(self.env, 'destroy_handler')
+        delattr(self.fgraph, 'destroy_handler')
-        self.env = None
+        self.fgraph = None
-    def on_import(self, env, app):
+    def on_import(self, fgraph, app):
        """Add Apply instance to set which must be computed"""
        if app in self.debug_all_apps: raise ProtocolError("double import")
@@ -321,7 +321,7 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
        self.stale_droot = True
-    def on_prune(self, env, app):
+    def on_prune(self, fgraph, app):
        """Remove Apply instance from set which must be computed"""
        if app not in self.debug_all_apps: raise ProtocolError("prune without import")
        self.debug_all_apps.remove(app)
@@ -353,10 +353,10 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
        self.stale_droot = True
-    def on_change_input(self, env, app, i, old_r, new_r):
+    def on_change_input(self, fgraph, app, i, old_r, new_r):
        """app.inputs[i] changed from old_r to new_r """
        if app == 'output':
-            # app == 'output' is special key that means Env is redefining which nodes are being
+            # app == 'output' is special key that means FunctionGraph is redefining which nodes are being
            # considered 'outputs' of the graph.
            pass
        else:
@@ -391,7 +391,7 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
        self.stale_droot = True
-    def validate(self, env):
+    def validate(self, fgraph):
        """Return None
        Raise InconsistencyError when
@@ -402,14 +402,14 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
        #print '\nVALIDATE'
        if self.destroyers:
            try:
-                ords = self.orderings(env)
+                ords = self.orderings(fgraph)
            except Exception, e:
                #print 'orderings failed with:', type(e), e.args
                raise
            #print 'orderings:', ords
            try:
-                ### graph.io_toposort(env.inputs, env.outputs, ords)
+                ### graph.io_toposort(fgraph.inputs, fgraph.outputs, ords)
-                _dfs_toposort(env.inputs, env.outputs, ords)
+                _dfs_toposort(fgraph.inputs, fgraph.outputs, ords)
            except ValueError, e:
                #print 'not passing.', ords
                if 'cycles' in str(e):
@@ -423,7 +423,7 @@ class DestroyHandlerHelper2(toolbox.Bookkeeper):
            pass
        return True
-    def orderings(self, env):
+    def orderings(self, fgraph):
        """Return orderings induced by destructive operations.
        Raise InconsistencyError when
...
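The `validate` method above calls `_dfs_toposort(fgraph.inputs, fgraph.outputs, ords)` and treats a cycle as an inconsistency: the extra "must run before" edges contributed by destructive ops must still admit a topological order. A simplified sketch of that check (not Theano's `_dfs_toposort`, which also walks the real dependency graph):

```python
# Simplified sketch: depth-first topological sort over explicit
# orderings edges; hitting a node already on the DFS stack means a
# cycle, so no valid evaluation order exists.
def toposort(nodes, orderings):
    # orderings: node -> set of nodes that must run first
    state = {}   # node -> 'visiting' | 'done'
    order = []

    def visit(n):
        if state.get(n) == 'done':
            return
        if state.get(n) == 'visiting':
            raise ValueError("graph contains cycles")
        state[n] = 'visiting'
        for dep in orderings.get(n, ()):
            visit(dep)
        state[n] = 'done'
        order.append(n)

    for n in nodes:
        visit(n)
    return order

print(toposort(['a', 'b', 'c'], {'b': {'a'}, 'c': {'b'}}))
try:
    toposort(['a', 'b'], {'a': {'b'}, 'b': {'a'}})
except ValueError as e:
    print("rejected:", e)
```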
@@ -35,9 +35,6 @@ class FunctionGraph(utils.object2):
     on which the outputs depend. Variables of type Constant are
     not counted as inputs.
-    Historically, the FunctionGraph was called an Env. Many other objects refer
-    to the FunctionGraph they belong to as their "env".
     The FunctionGraph supports the replace operation which allows to replace a
     variable in the subgraph by another, e.g. replace (x + x).out by (2
     * x).out. This is the basis for optimization in theano.
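The replace operation described in this docstring can be sketched on a toy expression tree (illustrative only; the real `FunctionGraph.replace` also maintains client lists and notifies attached features):

```python
# Toy illustration of replace: swap the subgraph x + x for 2 * x in a
# small expression tree, then check the result is unchanged.
class Apply(object):
    def __init__(self, op, inputs):
        self.op, self.inputs = op, inputs

    def eval(self, values):
        # leaves are either variable names looked up in `values`
        # or literal numbers that pass through unchanged
        vals = [i.eval(values) if isinstance(i, Apply) else values.get(i, i)
                for i in self.inputs]
        return vals[0] + vals[1] if self.op == 'add' else vals[0] * vals[1]

def replace(node, old, new):
    # substitute `new` for `old` everywhere below `node`
    node.inputs = [new if i is old else i for i in node.inputs]
    for i in node.inputs:
        if isinstance(i, Apply):
            replace(i, old, new)

x_plus_x = Apply('add', ['x', 'x'])
out = Apply('mul', [x_plus_x, 'y'])         # (x + x) * y
two_x = Apply('mul', [2, 'x'])

replace(out, x_plus_x, two_x)               # now (2 * x) * y
print(out.eval({'x': 3, 'y': 5}))
```

The substitution preserves the computed value ((3 + 3) * 5 and (2 * 3) * 5 are both 30), which is exactly the contract an optimization rewrite must keep.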
@@ -90,10 +87,13 @@ class FunctionGraph(utils.object2):
     - feature.on_setup_variable(function_graph, variable):
       WRITEME
+    Historically, the FunctionGraph was called an Env. Keep this in mind
+    while reading out-of-date documentation, e-mail support threads, etc.
     """
     ### Special ###
-    # TODO: document which things that features can do to the env
+    # TODO: document which things that features can do to the fgraph
     def __init__(self, inputs, outputs, features=None):
         """
@@ -149,17 +149,17 @@ class FunctionGraph(utils.object2):
     ### Setup a Variable ###
     def __setup_r__(self, r):
-        # sets up r so it belongs to this env
+        # sets up r so it belongs to this fgraph
-        if hasattr(r, 'env') and r.env is not None and r.env is not self:
+        if hasattr(r, 'fgraph') and r.fgraph is not None and r.fgraph is not self:
-            raise Exception("%s is already owned by another env" % r)
+            raise Exception("%s is already owned by another fgraph" % r)
-        r.env = self
+        r.fgraph = self
         r.clients = []
         #self.execute_callbacks('on_setup_variable', r)
     def __setup_node__(self, node):
-        # sets up node so it belongs to this env
+        # sets up node so it belongs to this fgraph
-        if hasattr(node, 'env') and node.env is not self:
+        if hasattr(node, 'fgraph') and node.fgraph is not self:
-            raise Exception("%s is already owned by another env" % node)
+            raise Exception("%s is already owned by another fgraph" % node)
         if (hasattr(node.op, 'view_map') and
             not all([isinstance(view, (list, tuple))
                      for view in node.op.view_map.values()])):
@@ -172,7 +172,7 @@ class FunctionGraph(utils.object2):
             raise Exception("Op '%s' have a bad destroy map '%s',"
                             " the values must be tuples or lists." % (
                                 str(node.op), str(node.op.destroy_map)))
-        node.env = self
+        node.fgraph = self
         node.deps = {}
         #self.execute_callbacks('on_setup_node', node)
@@ -188,10 +188,10 @@ class FunctionGraph(utils.object2):
        them back to what they were originally.
        """
        for node in self.nodes:
-            del node.env
+            del node.fgraph
            del node.deps
        for variable in self.variables:
-            del variable.env
+            del variable.fgraph
            del variable.clients
        self.nodes = set()
        self.variables = set()
@@ -256,7 +256,7 @@ class FunctionGraph(utils.object2):
        for r in variables:
            if r.owner is None and not isinstance(r, graph.Constant) and r not in self.inputs:
                raise MissingInputError("Undeclared input", r)
-            if not getattr(r, 'env', None) is self:
+            if not getattr(r, 'fgraph', None) is self:
                self.__setup_r__(r)
            self.variables.add(r)
@@ -269,11 +269,11 @@ class FunctionGraph(utils.object2):
        if check:
            for node in new_nodes:
-                if hasattr(node, 'env') and node.env is not self:
+                if hasattr(node, 'fgraph') and node.fgraph is not self:
-                    raise Exception("%s is already owned by another env" % node)
+                    raise Exception("%s is already owned by another fgraph" % node)
                for r in node.inputs:
-                    if hasattr(r, 'env') and r.env is not self:
+                    if hasattr(r, 'fgraph') and r.fgraph is not self:
-                        raise Exception("%s is already owned by another env" % r)
+                        raise Exception("%s is already owned by another fgraph" % r)
                    if r.owner is None and not isinstance(r, graph.Constant) and r not in self.inputs:
                        #Verbose error message
@@ -353,7 +353,7 @@ class FunctionGraph(utils.object2):
                self.__setup_r__(input)
                self.variables.add(input)
            self.__add_clients__(input, [(node, i)])
-        assert node.env is self
+        assert node.fgraph is self
        self.execute_callbacks('on_import', node)
@@ -370,7 +370,7 @@ class FunctionGraph(utils.object2):
    def __prune__(self, node):
        if node not in self.nodes:
            raise Exception("%s does not belong to this FunctionGraph and cannot be pruned." % node)
-        assert node.env is self
+        assert node.fgraph is self
        # If node's outputs have no clients, removes it from the graph
        # and recursively tries to prune its inputs. If at least one
        # of the op's outputs is an output to the graph or has a client
@@ -410,7 +410,7 @@ class FunctionGraph(utils.object2):
                                   r, new_r)
            self.outputs[i] = new_r
        else:
-            if node.env is not self:
+            if node.fgraph is not self:
                raise Exception("Cannot operate on %s because it does not"
                                " belong to this FunctionGraph" % node)
            r = node.inputs[i]
@@ -442,7 +442,7 @@ class FunctionGraph(utils.object2):
        This is the main interface to manipulate the subgraph in FunctionGraph.
        For every node that uses r as input, makes it use new_r instead.
        """
-        if r.env is not self:
+        if r.fgraph is not self:
            raise Exception("Cannot replace %s because it does not belong to this FunctionGraph" % r, str(reason))
if not r.type == new_r.type: if not r.type == new_r.type:
raise TypeError("The type of the replacement must be the same as the type of the original Variable.", r, new_r, r.type, new_r.type, str(reason)) raise TypeError("The type of the replacement must be the same as the type of the original Variable.", r, new_r, r.type, new_r.type, str(reason))
...@@ -611,10 +611,10 @@ class FunctionGraph(utils.object2): ...@@ -611,10 +611,10 @@ class FunctionGraph(utils.object2):
excess = self.nodes.difference(nodes) excess = self.nodes.difference(nodes)
raise Exception("The nodes are inappropriately cached. missing, in excess: ", missing, excess) raise Exception("The nodes are inappropriately cached. missing, in excess: ", missing, excess)
for node in nodes: for node in nodes:
if node.env is not self: if node.fgraph is not self:
raise Exception("Node should belong to the FunctionGraph.", node) raise Exception("Node should belong to the FunctionGraph.", node)
for i, variable in enumerate(node.inputs): for i, variable in enumerate(node.inputs):
if variable.env is not self: if variable.fgraph is not self:
raise Exception("Input of node should belong to the FunctionGraph.", variable, (node, i)) raise Exception("Input of node should belong to the FunctionGraph.", variable, (node, i))
if (node, i) not in variable.clients: if (node, i) not in variable.clients:
raise Exception("Inconsistent clients list.", (node, i), variable.clients) raise Exception("Inconsistent clients list.", (node, i), variable.clients)
...@@ -626,7 +626,7 @@ class FunctionGraph(utils.object2): ...@@ -626,7 +626,7 @@ class FunctionGraph(utils.object2):
for variable in variables: for variable in variables:
if variable.owner is None and variable not in self.inputs and not isinstance(variable, graph.Constant): if variable.owner is None and variable not in self.inputs and not isinstance(variable, graph.Constant):
raise Exception("Undeclared input.", variable) raise Exception("Undeclared input.", variable)
if variable.env is not self: if variable.fgraph is not self:
raise Exception("Variable should belong to the FunctionGraph.", variable) raise Exception("Variable should belong to the FunctionGraph.", variable)
for node, i in variable.clients: for node, i in variable.clients:
if node == 'output': if node == 'output':
......
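The `replace` hunk above shows the two preconditions FunctionGraph enforces before rewiring a graph: the variable being replaced must belong to this graph, and the replacement must have the same type. A minimal, self-contained sketch of that contract follows; the `Graph` and `Var` classes here are hypothetical stand-ins, not Theano's actual implementation, and the client-rewiring step is elided:

```python
class Var(object):
    def __init__(self, type_, graph=None):
        self.type = type_
        self.fgraph = graph  # set when the variable is attached to a graph

class Graph(object):
    def replace(self, r, new_r):
        # Same two checks as FunctionGraph.replace: ownership, then type.
        if getattr(r, 'fgraph', None) is not self:
            raise Exception("Cannot replace %s: not owned by this graph" % r)
        if r.type != new_r.type:
            raise TypeError("Replacement must have the same type",
                            r.type, new_r.type)
        # ... the real implementation rewires every client of r to new_r ...
        return True

g = Graph()
a = Var("float32", g)
assert g.replace(a, Var("float32"))  # same type: accepted
try:
    g.replace(a, Var("float64"))     # type mismatch: rejected
except TypeError:
    pass
```

Rejecting type mismatches up front is what lets later compilation stages assume every substitution preserved the graph's type signature.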
@@ -12,6 +12,7 @@ __docformat__ = "restructuredtext en"
 from copy import copy
 import theano
+import warnings
 from theano.gof import utils
 from theano.gof.python25 import deque
@@ -119,6 +120,23 @@ class Apply(utils.object2):
             raise AttributeError("%s.default_output is out of range." % self.op)
         return self.outputs[do]
+
+    @property
+    def env(self):
+        warnings.warn("Apply.env is deprecated, it has been renamed 'fgraph'")
+        return self.fgraph
+
+    @env.setter
+    def env(self,value):
+        warnings.warn("Apply.env is deprecated, it has been renamed 'fgraph'")
+        self.fgraph = value
+
+    @env.deleter
+    def env(self):
+        warnings.warn("Apply.env is deprecated, it has been renamed 'fgraph'")
+        del self.fgraph
+
     out = property(default_output,
                    doc = "alias for self.default_output()")
     """Alias for self.default_output()"""
@@ -234,7 +252,7 @@ class Variable(utils.object2):
     Using the Variables' owner field and the Apply nodes' inputs fields, one can navigate a graph
     from an output all the way to the inputs. The opposite direction is not possible until an
-    Env has annotated the Variables with the clients field, ie, before the compilation process
+    FunctionGraph has annotated the Variables with the clients field, ie, before the compilation process
     has begun a Variable does not know which Apply nodes take it as input.
     **Code Example**
@@ -338,6 +356,24 @@ class Variable(utils.object2):
         raise NotImplementedError('Subclasses of Variable must provide __ge__',
                                   self.__class__.__name__)
+
+    @property
+    def env(self):
+        warnings.warn("Variable.env is deprecated, it has been renamed 'fgraph'")
+        return self.fgraph
+
+    @env.setter
+    def env(self,value):
+        warnings.warn("Variable.env is deprecated, it has been renamed 'fgraph'")
+        self.fgraph = value
+
+    @env.deleter
+    def env(self):
+        warnings.warn("Variable.env is deprecated, it has been renamed 'fgraph'")
+        del self.fgraph
+
 class Constant(Variable):
     """
     A :term:`Constant` is a `Variable` with a `value` field that cannot be changed at runtime.
...
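The `env` properties added above follow a standard back-compatibility pattern: the old attribute name survives as a property that warns on every access and forwards to the new attribute, so existing code keeps working while being nudged toward the rename. A stripped-down sketch of the same pattern (`Node` is a hypothetical stand-in for `Apply`/`Variable`; like the patch, it emits a plain `UserWarning`, though `DeprecationWarning` would be the more conventional category):

```python
import warnings

class Node(object):
    """Hypothetical class with a deprecated 'env' alias for 'fgraph'."""

    @property
    def env(self):
        warnings.warn("env is deprecated, it has been renamed 'fgraph'")
        return self.fgraph

    @env.setter
    def env(self, value):
        warnings.warn("env is deprecated, it has been renamed 'fgraph'")
        self.fgraph = value

    @env.deleter
    def env(self):
        warnings.warn("env is deprecated, it has been renamed 'fgraph'")
        del self.fgraph

n = Node()
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    n.env = "g"           # warns, stores the value on n.fgraph
    assert n.fgraph == "g"
    assert n.env == "g"   # warns again, reads n.fgraph
assert len(caught) == 2
```

Because a property defined on the class is a data descriptor, it intercepts reads, writes, and deletes of `obj.env` even though instances only ever store `fgraph` in their `__dict__`.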
(diff collapsed)
@@ -21,7 +21,7 @@ from theano import config
 import cc
 import graph
 import utils
-from fg import FunctionGraph as Env
+from fg import FunctionGraph
 class CLinkerObject(object):
@@ -563,7 +563,7 @@ class Op(utils.object2, PureOp, CLinkerOp):
             #logger.debug('Compiling node %i of graph' % node_idx)
             if self._op_use_c_code:
                 try:
-                    e = Env(*graph.clone(node.inputs, node.outputs))
+                    e = FunctionGraph(*graph.clone(node.inputs, node.outputs))
                     e_no_recycling = [new_o
                                       for (new_o, old_o) in zip(e.outputs, node.outputs)
...
(diff collapsed)
@@ -69,11 +69,11 @@ if 0:
                 candidates = tracks
             tracks = []
-        def apply(self, env):
+        def apply(self, fgraph):
             tasks = defaultdict(list)
             if self.max_use_ratio is not None:
-                max_uses = self.max_use_ratio * len(env.nodes)
+                max_uses = self.max_use_ratio * len(fgraph.nodes)
                 runs = defaultdict(int)
             else:
                 runs = None
@@ -91,14 +91,14 @@ if 0:
                 self.backtrack(new_r.owner, tasks)
             # # == NOT IDEAL == #
-            # for node in env.nodes:
+            # for node in fgraph.nodes:
             #     importer(node)
-            for node in env.toposort():
+            for node in fgraph.toposort():
                 tasks[node].extend(lopt for track, i, lopt in self.fetch_tracks0(node.op))
-            u = self.attach_updater(env, importer, pruner, chin)
+            u = self.attach_updater(fgraph, importer, pruner, chin)
             print 'KEYS', map(hash, tasks.keys())
             while tasks:
                 for node in tasks.iterkeys():
@@ -108,11 +108,11 @@ if 0:
                     if runs is not None and runs[lopt] >= max_uses:
                         print >>sys.stderr, 'Warning: optimization exceeded its maximal use ratio: %s, %s' % (lopt, max_uses)
                         continue
-                    success = self.process_node(env, node, lopt)
+                    success = self.process_node(fgraph, node, lopt)
                     if success:
                         if runs is not None: runs[lopt] += 1
                         break
-            self.detach_updater(env, u)
+            self.detach_updater(fgraph, u)
         # def match(self, node, candidates):
         #     candidates[:] = [candidate
...
@@ -7,8 +7,6 @@ from theano.gof.type import Type
 from theano.gof.graph import Variable, Apply, Constant
 from theano.gof.op import Op
 from theano.gof import fg
-env = fg
-from theano.gof import toolbox
 def as_variable(x):
...
@@ -69,12 +69,12 @@ def inputs():
     return x, y, z
-def perform_linker(env):
-    lnk = PerformLinker().accept(env)
+def perform_linker(fgraph):
+    lnk = PerformLinker().accept(fgraph)
     return lnk
-def Env(inputs, outputs):
+def FunctionGraph(inputs, outputs):
     e = fg.FunctionGraph(inputs, outputs)
     return e
@@ -83,7 +83,7 @@ class TestPerformLinker(unittest.TestCase):
     def test_thunk(self):
         x, y, z = inputs()
         e = mul(add(x, y), div(x, y))
-        fn, i, o = perform_linker(Env([x, y, z], [e])).make_thunk()
+        fn, i, o = perform_linker(FunctionGraph([x, y, z], [e])).make_thunk()
         i[0].data = 1
         i[1].data = 2
         fn()
@@ -92,26 +92,26 @@ class TestPerformLinker(unittest.TestCase):
     def test_function(self):
         x, y, z = inputs()
         e = mul(add(x, y), div(x, y))
-        fn = perform_linker(Env([x, y, z], [e])).make_function()
+        fn = perform_linker(FunctionGraph([x, y, z], [e])).make_function()
         assert fn(1.0, 2.0, 3.0) == 1.5
     def test_constant(self):
         x, y, z = inputs()
         y = Constant(tdouble, 2.0)
         e = mul(add(x, y), div(x, y))
-        fn = perform_linker(Env([x], [e])).make_function()
+        fn = perform_linker(FunctionGraph([x], [e])).make_function()
         assert fn(1.0) == 1.5
     def test_input_output_same(self):
         x, y, z = inputs()
-        fn = perform_linker(Env([x], [x])).make_function()
+        fn = perform_linker(FunctionGraph([x], [x])).make_function()
         assert 1.0 is fn(1.0)
     def test_input_dependency0(self):
         x, y, z = inputs()
         a, d = add(x, y), div(x, y)
         e = mul(a, d)
-        fn = perform_linker(Env(*graph.clone([x, y, a], [e]))).make_function()
+        fn = perform_linker(FunctionGraph(*graph.clone([x, y, a], [e]))).make_function()
         assert fn(1.0, 2.0, 9.0) == 4.5
     def test_skiphole(self):
@@ -119,12 +119,12 @@ class TestPerformLinker(unittest.TestCase):
         a = add(x, y)
         r = raise_err(a)
         e = add(r, a)
-        fn = perform_linker(Env(*graph.clone([x, y, r], [e]))).make_function()
+        fn = perform_linker(FunctionGraph(*graph.clone([x, y, r], [e]))).make_function()
         assert fn(1.0, 2.0, 4.5) == 7.5
-def wrap_linker(env, linkers, wrapper):
-    lnk = WrapLinker(linkers, wrapper).accept(env)
+def wrap_linker(fgraph, linkers, wrapper):
+    lnk = WrapLinker(linkers, wrapper).accept(fgraph)
     return lnk
@@ -138,7 +138,7 @@ class TestWrapLinker(unittest.TestCase):
         x, y, z = inputs()
         e = mul(add(x, y), div(x, y))
         fn, i, o = wrap_linker(
-            Env([x, y, z], [e]),
+            FunctionGraph([x, y, z], [e]),
             [PerformLinker(allow_gc=False)], wrap).make_thunk()
         i[0].data = 1
         i[1].data = 2
@@ -156,7 +156,7 @@ class TestWrapLinker(unittest.TestCase):
         x, y, z = inputs()
         e = mul(add(x, y), div(x, y))
         fn, i, o = wrap_linker(
-            Env([x, y, z], [e]),
+            FunctionGraph([x, y, z], [e]),
             [PerformLinker(allow_gc=False)], wrap).make_thunk()
         i[0].data = 1
         i[1].data = 2
...
@@ -38,9 +38,9 @@ class TestCallbacks(unittest.TestCase):
                 linker=vm.VM_Linker(callback=self.callback)))
         f(1, 2, 3)
-        assert sum(self.n_callbacks.values()) == len(f.maker.env.toposort())
+        assert sum(self.n_callbacks.values()) == len(f.maker.fgraph.toposort())
         f(1, 2, 3)
-        assert sum(self.n_callbacks.values()) == len(f.maker.env.toposort()) * 2
+        assert sum(self.n_callbacks.values()) == len(f.maker.fgraph.toposort()) * 2
     def test_callback_with_ifelse(self):
...
(diff collapsed)
@@ -215,14 +215,14 @@ class Stack(VM):
     """
     def __init__(self, nodes, thunks, pre_call_clear,
-                 storage_map, compute_map, env, allow_gc,
+                 storage_map, compute_map, fgraph, allow_gc,
                  dependencies=None, callback=None):
         super(Stack, self).__init__(nodes, thunks, pre_call_clear)
         self.allow_gc = allow_gc
         self.message = ""
-        self.base_apply_stack = [o.owner for o in env.outputs if o.owner]
-        self.outputs = env.outputs
+        self.base_apply_stack = [o.owner for o in fgraph.outputs if o.owner]
+        self.outputs = fgraph.outputs
         self.storage_map = storage_map
         self.apply_time = {}
         self.outputs_size = {}
@@ -230,7 +230,7 @@ class Stack(VM):
         self.node_idx = node_idx = {}
         self.callback = callback
-        ords = env.orderings()
+        ords = fgraph.orderings()
         for i, node in enumerate(self.nodes):
             node_idx[node] = i
@@ -477,15 +477,15 @@ class VM_Linker(link.LocalLinker):
             'node', 'thunk', 'storage_map', and 'compute_map'.
         """
-        self.env = None
+        self.fgraph = None
         self.allow_gc = allow_gc
         self.use_cloop = use_cloop
         self.callback = callback
         self.updated_vars = {}
-    def accept(self, env, no_recycling=None):
+    def accept(self, fgraph, no_recycling=None):
         """
-        :param env: a PerformLinker can have accepted one Env instance
+        :param fgraph: a PerformLinker can have accepted one FunctionGraph instance
         at a time.
         :param no_recycling: WRITEME
@@ -494,9 +494,9 @@ class VM_Linker(link.LocalLinker):
         """
         if no_recycling is None:
             no_recycling = []
-        if self.env is not None and self.env is not env:
-            return type(self)().accept(env, no_recycling)
-        self.env = env
+        if self.fgraph is not None and self.fgraph is not fgraph:
+            return type(self)().accept(fgraph, no_recycling)
+        self.fgraph = fgraph
         self.no_recycling = no_recycling
         return self
@@ -565,7 +565,7 @@ class VM_Linker(link.LocalLinker):
             vm = Stack(
                 nodes, thunks, pre_call_clear,
                 storage_map, compute_map,
-                self.env, self.allow_gc,
+                self.fgraph, self.allow_gc,
                 dependencies=deps,
                 callback=self.callback)
         elif self.use_cloop:
@@ -576,7 +576,7 @@ class VM_Linker(link.LocalLinker):
                 nodes_idx[node] = i
                 for v in node.inputs + node.outputs:
                     vars_idx.setdefault(v, len(vars_idx))
-            for v in self.env.inputs + self.env.outputs:
+            for v in self.fgraph.inputs + self.fgraph.outputs:
                 vars_idx.setdefault(v, len(vars_idx))
             nodes_idx_inv = {}
@@ -627,10 +627,10 @@ class VM_Linker(link.LocalLinker):
                     var_owner[i] = nodes_idx[var.owner]
             is_lazy_list = [int(th.lazy) for th in thunks]
-            output_vars = [vars_idx[v] for v in self.env.outputs]
+            output_vars = [vars_idx[v] for v in self.fgraph.outputs]
             # builds the list of prereqs induced by e.g. destroy_handler
-            ords = self.env.orderings()
+            ords = self.fgraph.orderings()
             node_prereqs = []
             node_output_size = []
             for i, node in enumerate(nodes):
@@ -694,7 +694,7 @@ class VM_Linker(link.LocalLinker):
             vm = Stack(
                 nodes, thunks, pre_call_clear,
                 storage_map, compute_map,
-                self.env, self.allow_gc,
+                self.fgraph, self.allow_gc,
                 dependencies=deps
                 )
         return vm
@@ -702,12 +702,12 @@ class VM_Linker(link.LocalLinker):
     def make_all(self, profiler=None, input_storage=None,
                  output_storage = None,
                  ):
-        env = self.env
-        order = list(env.toposort())
+        fgraph = self.fgraph
+        order = list(fgraph.toposort())
         no_recycling = self.no_recycling
         input_storage, output_storage, storage_map = link.map_storage(
-            env, order, input_storage, output_storage)
+            fgraph, order, input_storage, output_storage)
         compute_map = {}
         for k in storage_map:
             compute_map[k] = [k.owner is None]
@@ -725,7 +725,7 @@ class VM_Linker(link.LocalLinker):
                 clear_after_this_thunk = []
                 for input in node.inputs:
                     if ((input in computed)
-                            and (input not in env.outputs)
+                            and (input not in fgraph.outputs)
                             and (node == last_user[input])):
                         clear_after_this_thunk.append(storage_map[input])
                 post_thunk_clear.append(clear_after_this_thunk)
@@ -742,8 +742,8 @@ class VM_Linker(link.LocalLinker):
         return (vm,
                 [link.Container(input, storage)
-                 for input, storage in zip(env.inputs, input_storage)],
+                 for input, storage in zip(fgraph.inputs, input_storage)],
                 [link.Container(output, storage, True)
-                 for output, storage in zip(env.outputs, output_storage)],
+                 for output, storage in zip(fgraph.outputs, output_storage)],
                 thunks,
                 order)
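The `VM_Linker.accept` hunk above encodes a one-graph-per-linker rule: a linker already bound to a different FunctionGraph hands the new graph to a fresh instance of its own class instead of rebinding itself. A minimal sketch of that idiom (hypothetical `Linker` class, with the `no_recycling` bookkeeping omitted):

```python
class Linker(object):
    def __init__(self):
        self.fgraph = None

    def accept(self, fgraph):
        # Already bound to another graph: delegate to a new instance of the
        # same class rather than mutating this one.
        if self.fgraph is not None and self.fgraph is not fgraph:
            return type(self)().accept(fgraph)
        self.fgraph = fgraph
        return self

g1, g2 = object(), object()
l1 = Linker()
assert l1.accept(g1) is l1              # first graph binds in place
l2 = l1.accept(g2)
assert l2 is not l1 and l2.fgraph is g2  # second graph gets a new linker
assert l1.fgraph is g1                   # original binding untouched
```

Using `type(self)()` rather than `Linker()` keeps the idiom correct under subclassing: a subclass that inherits `accept` clones itself, not the base class.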
@@ -536,11 +536,11 @@ def cond_merge_ifs_false(node):
 class CondMerge(gof.Optimizer):
     """ Graph Optimizer that merges different cond ops """
-    def add_requirements(self, env):
-        env.extend(gof.toolbox.ReplaceValidate())
+    def add_requirements(self, fgraph):
+        fgraph.extend(gof.toolbox.ReplaceValidate())
-    def apply(self, env):
-        nodelist = list(env.toposort())
+    def apply(self, fgraph):
+        nodelist = list(fgraph.toposort())
         cond_nodes = filter(lambda s: isinstance(s.op, IfElse), nodelist)
         if len(cond_nodes) < 2:
             return False
@@ -581,7 +581,7 @@ class CondMerge(gof.Optimizer):
         else:
             old_outs += proposal.outputs
         pairs = zip(old_outs, new_outs)
-        env.replace_all_validate(pairs, reason='cond_merge')
+        fgraph.replace_all_validate(pairs, reason='cond_merge')
 @gof.local_optimizer([None])
...
@@ -66,7 +66,7 @@ def execute(execute=True, verbose=True, M=2000, N=2000, K=2000,
     if any([x.op.__class__.__name__ == 'Gemm' for x in
-            f.maker.env.toposort()]):
+            f.maker.fgraph.toposort()]):
         c_impl = f.profile.apply_cimpl.values()
         assert len(c_impl) == 1
         if c_impl[0]:
@@ -74,11 +74,11 @@ def execute(execute=True, verbose=True, M=2000, N=2000, K=2000,
         else:
             impl = 'CPU (without direct Theano binding to blas but with numpy/scipy binding to blas)'
     elif any([x.op.__class__.__name__ == 'GpuGemm' for x in
-              f.maker.env.toposort()]):
+              f.maker.fgraph.toposort()]):
         impl = 'GPU'
     else:
         impl = 'ERROR, unable to tell if Theano used the cpu or the gpu:\n'
-        impl += str(f.maker.env.toposort())
+        impl += str(f.maker.fgraph.toposort())
     t0 = 0
     t1 = -1
...
@@ -8,8 +8,8 @@ y = theano.tensor.fvector()
 x = theano.shared(numpy.zeros(1,dtype='float32'))
 f1 = theano.function([y],updates={x:y})
 f2 = theano.function([],theano.sandbox.cuda.host_from_gpu(x))
-print f1.maker.env.toposort()
-print f2.maker.env.toposort()
+print f1.maker.fgraph.toposort()
+print f2.maker.fgraph.toposort()
 for i in [1,10,100,1000, 10000, 100000,1000000, 10000000]:
     o = numpy.zeros(i, dtype='float32')
     t0=time.time();f1(o);t1=time.time();
...
@@ -50,14 +50,14 @@ def test_pycuda_elemwise_source_module():
                          mode=mode_with_gpu)
     assert any([isinstance(node.op, theano.sandbox.cuda.GpuElemwise)
-                for node in f.maker.env.toposort()])
+                for node in f.maker.fgraph.toposort()])
     assert any([isinstance(node.op, PycudaElemwiseSourceModuleOp)
-                for node in f2.maker.env.toposort()])
+                for node in f2.maker.fgraph.toposort()])
     assert any([isinstance(node.op, PycudaElemwiseSourceModuleOp)
-                for node in f3.maker.env.toposort()])
+                for node in f3.maker.fgraph.toposort()])
     assert any([isinstance(node.op,
                            PycudaElemwiseSourceModuleMakeThunkOp)
-                for node in f4.maker.env.toposort()])
+                for node in f4.maker.fgraph.toposort()])
     val1 = numpy.asarray(numpy.random.rand(*shape), dtype='float32')
     val2 = numpy.asarray(numpy.random.rand(*shape), dtype='float32')
@@ -73,15 +73,15 @@ def test_pycuda_elemwise_kernel():
     x = T.fmatrix('x')
     y = T.fmatrix('y')
     f = theano.function([x, y], x + y, mode=mode_with_gpu)
-    print f.maker.env.toposort()
+    print f.maker.fgraph.toposort()
     mode_pycuda = mode_with_gpu.including("local_pycuda_gpu_elemwise_kernel")
     f2 = theano.function([x, y], x + y, mode=mode_pycuda)
-    print f2.maker.env.toposort()
+    print f2.maker.fgraph.toposort()
     assert any([isinstance(node.op, theano.sandbox.cuda.GpuElemwise)
-                for node in f.maker.env.toposort()])
+                for node in f.maker.fgraph.toposort()])
     assert any([isinstance(node.op, PycudaElemwiseKernelOp)
-                for node in f2.maker.env.toposort()])
+                for node in f2.maker.fgraph.toposort()])
     val1 = numpy.asarray(numpy.random.rand(5, 5), dtype='float32')
     val2 = numpy.asarray(numpy.random.rand(5, 5), dtype='float32')
@@ -96,9 +96,9 @@ def test_pycuda_elemwise_kernel():
     z3 = T.ftensor3('y')
     f4 = theano.function([x3, y3, z3], x3 * y3 + z3, mode=mode_pycuda)
-    print f4.maker.env.toposort()
+    print f4.maker.fgraph.toposort()
     assert any([isinstance(node.op, PycudaElemwiseKernelOp)
-                for node in f4.maker.env.toposort()])
+                for node in f4.maker.fgraph.toposort()])
     val1 = numpy.random.rand(2, 2, 2)
     print val1
...
...@@ -81,11 +81,11 @@ def debugprint(obj, depth=-1, print_type=False, ...@@ -81,11 +81,11 @@ def debugprint(obj, depth=-1, print_type=False,
elif isinstance(obj, gof.Apply): elif isinstance(obj, gof.Apply):
results_to_print.extend(obj.outputs) results_to_print.extend(obj.outputs)
elif isinstance(obj, Function): elif isinstance(obj, Function):
results_to_print.extend(obj.maker.env.outputs) results_to_print.extend(obj.maker.fgraph.outputs)
order = obj.maker.env.toposort() order = obj.maker.fgraph.toposort()
elif isinstance(obj, (list, tuple)): elif isinstance(obj, (list, tuple)):
results_to_print.extend(obj) results_to_print.extend(obj)
elif isinstance(obj, gof.Env): elif isinstance(obj, gof.FunctionGraph):
results_to_print.extend(obj.outputs) results_to_print.extend(obj.outputs)
order = obj.toposort() order = obj.toposort()
else: else:
...@@ -536,16 +536,16 @@ def pydotprint(fct, outfile=None, ...@@ -536,16 +536,16 @@ def pydotprint(fct, outfile=None,
if isinstance(fct, Function): if isinstance(fct, Function):
mode = fct.maker.mode mode = fct.maker.mode
fct_env = fct.maker.env fct_fgraph = fct.maker.fgraph
if (not isinstance(mode, ProfileMode) if (not isinstance(mode, ProfileMode)
or not fct in mode.profile_stats): or not fct in mode.profile_stats):
mode = None mode = None
elif isinstance(fct, gof.Env): elif isinstance(fct, gof.FunctionGraph):
mode = None mode = None
fct_env = fct fct_fgraph = fct
else: else:
raise ValueError(('pydotprint expects as input a theano.function or ' raise ValueError(('pydotprint expects as input a theano.function or '
'the env of a function!'), fct) 'the FunctionGraph of a function!'), fct)
if not pydot_imported: if not pydot_imported:
        raise RuntimeError("Failed to import pydot. You must install pydot"
@@ -558,7 +558,7 @@ def pydotprint(fct, outfile=None,
         c2 = pd.Cluster('Right')
         c3 = pd.Cluster('Middle')
         cond = None
-        for node in fct_env.toposort():
+        for node in fct_fgraph.toposort():
             if (node.op.__class__.__name__ == 'IfElse'
                     and node.op.name == cond_highlight):
                 cond = node
@@ -626,7 +626,7 @@ def pydotprint(fct, outfile=None,
             all_strings.add(varstr)
         return varstr
-    topo = fct_env.toposort()
+    topo = fct_fgraph.toposort()
     apply_name_cache = {}
     def apply_name(node):
@@ -663,7 +663,7 @@ def pydotprint(fct, outfile=None,
     # Update the inputs that have an update function
     input_update = {}
-    outputs = list(fct_env.outputs)
+    outputs = list(fct_fgraph.outputs)
     if isinstance(fct, Function):
         for i in reversed(fct.maker.expanded_inputs):
             if i.update is not None:
@@ -756,7 +756,7 @@ def pydotprint(fct, outfile=None,
     if print_output_file:
         print 'The output file is available at', outfile
     if scan_graphs:
-        scan_ops = [(idx, x) for idx, x in enumerate(fct_env.toposort())
+        scan_ops = [(idx, x) for idx, x in enumerate(fct_fgraph.toposort())
                     if isinstance(x.op, theano.scan_module.scan_op.Scan)]
         path, fn = os.path.split(outfile)
         basename = '.'.join(fn.split('.')[:-1])
...
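The hunks above mechanically rename the `env` attribute to `fgraph`, and the commit message notes that deprecated `env` properties were added so old call sites keep working with a warning. A minimal sketch of that backward-compatibility pattern (the class and attribute names here are illustrative stand-ins, not Theano's actual implementation):

```python
import warnings

class FunctionMaker(object):
    """Toy stand-in for an object whose attribute was renamed to 'fgraph'."""
    def __init__(self, fgraph):
        self.fgraph = fgraph  # new canonical attribute name

    @property
    def env(self):
        # Deprecated alias: old code reading 'env' still works, but warns.
        warnings.warn("'env' is deprecated; use 'fgraph' instead.",
                      DeprecationWarning, stacklevel=2)
        return self.fgraph

maker = FunctionMaker(fgraph="dummy-graph")
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert maker.env == maker.fgraph  # old spelling resolves to the new one
assert caught[0].category is DeprecationWarning
```

Using a read-only property (rather than a plain alias assignment) is what makes every access observable, so each deprecated read can emit a warning.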
@@ -2442,6 +2442,6 @@ def profile_printer(fct_name, compile_time, fct_call_time, fct_call,
     print " (Useful to know if we forgot some cast when using floatX=float32 or gpu code)"
     print ' <Apply> <Apply position> <fct name> <inputs type> <outputs type>'
     for fct in fct_call.keys():
-        for idx, node in enumerate(fct.maker.env.toposort()):
+        for idx, node in enumerate(fct.maker.fgraph.toposort()):
             if any(hasattr(i,'dtype') and i.dtype=='float64' for i in node.outputs) and not any(hasattr(i,'dtype') and i.dtype=='float64' for i in node.inputs):
                 print ' ', str(node), idx, fct.name, str([getattr(i,'dtype',None) for i in node.inputs]),str([getattr(i,'dtype',None) for i in node.outputs])
@@ -74,17 +74,17 @@ register_opt()(theano.tensor.opt.local_track_shape_i)
 class InputToGpuOptimizer(Optimizer):
     """Transfert the input of a graph to the gpu if needed
     It should make this part of the optimizer faster we will will need only 1
-    pass on the env.
+    pass on the fgraph.
     """
     def __init__(self):
         Optimizer.__init__(self)
-    def add_requirements(self, env):
-        env.extend(toolbox.ReplaceValidate())
-        env.extend(DestroyHandler())
+    def add_requirements(self, fgraph):
+        fgraph.extend(toolbox.ReplaceValidate())
+        fgraph.extend(DestroyHandler())
-    def apply(self, env):
-        for input in env.inputs:
+    def apply(self, fgraph):
+        for input in fgraph.inputs:
             if isinstance(input.type, CudaNdarrayType):
                 return
@@ -98,7 +98,7 @@ class InputToGpuOptimizer(Optimizer):
                 new_input = host_from_gpu(gpu_from_host(input))
                 if new_input.type == input.type:
-                    env.replace_validate(input, new_input,
+                    fgraph.replace_validate(input, new_input,
                                          "InputToGpuOptimizer")
             except TypeError, e:
                 #as we currently only support float32, this can fail.
@@ -146,7 +146,7 @@ def dtype_in_elemwise_supported(op):
     """
     def get_all_basic_scalar(composite_op):
         l = []
-        for i in composite_op.env.toposort():
+        for i in composite_op.fgraph.toposort():
             if isinstance(i, theano.scalar.Composite):
                 l += get_all_basic_scalar(i)
             else:
@@ -629,7 +629,7 @@ def local_gpu_sum(node):
             # to make them a single dimension, do the sum, and then
             # reshape to get them back.
-            shape_of = node.env.shape_feature.shape_of
+            shape_of = node.fgraph.shape_feature.shape_of
             x_shape = shape_of[x]
@@ -1471,8 +1471,8 @@ def gpuScanOptimization(node):
         # handle graphs with inputs being Cuda Ndarrays
         tmp_in, tmp_out = gpu_reconstruct_graph(scan_ins,
                                                 scan_outs)
-        local_env = gof.Env(tmp_in, tmp_out)
-        _cmodule_key = gof.CLinker.cmodule_key_(local_env, [])
+        local_fgraph = gof.FunctionGraph(tmp_in, tmp_out)
+        _cmodule_key = gof.CLinker.cmodule_key_(local_fgraph, [])
         info['gpu_hash'] = hash(_cmodule_key)
         typeConstructor = lambda broadcastable, dtype: CudaNdarrayType(
@@ -1520,8 +1520,8 @@ def gpuScanOptimization(node):
         # handle graphs with inputs being Cuda Ndarrays
         tmp_in, tmp_out = gpu_reconstruct_graph(scan_ins,
                                                 scan_outs)
-        local_env = gof.Env(tmp_in, tmp_out)
-        _cmodule_key = gof.CLinker.cmodule_key_(local_env, [])
+        local_fgraph = gof.FunctionGraph(tmp_in, tmp_out)
+        _cmodule_key = gof.CLinker.cmodule_key_(local_fgraph, [])
         info['gpu_hash'] = hash(_cmodule_key)
         typeConstructor = lambda broadcastable, dtype: CudaNdarrayType(
             broadcastable=broadcastable)
...
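Besides the attribute rename, the hunks above replace the class itself: `gof.Env(...)` becomes `gof.FunctionGraph(...)`. A rough sketch of how a renamed class can keep its old name callable during a deprecation period (the minimal `FunctionGraph` here is a hypothetical stand-in, not Theano's real class):

```python
import warnings

class FunctionGraph(object):
    """Minimal stand-in: records a graph's input and output variables."""
    def __init__(self, inputs, outputs):
        self.inputs = list(inputs)
        self.outputs = list(outputs)

def Env(*args, **kwargs):
    # Deprecated constructor alias: old call sites still build the new class.
    warnings.warn("Env has been renamed to FunctionGraph",
                  DeprecationWarning, stacklevel=2)
    return FunctionGraph(*args, **kwargs)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    g = Env(['x'], ['y'])  # old spelling, new object
assert isinstance(g, FunctionGraph)
assert caught[0].category is DeprecationWarning
```

A factory function (instead of `Env = FunctionGraph`) lets the warning fire at construction time while `isinstance` checks against the new class keep working.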
@@ -89,7 +89,7 @@ def test_dot22scalar():
     f2 = theano.function(
         [a, b],
         tensor.dot(a, b) * numpy.asarray(4, 'float32'))
-    t = f.maker.env.toposort()
+    t = f.maker.fgraph.toposort()
     assert len(t) == 4
     assert isinstance(t[0].op, tcn.GpuFromHost)
     assert isinstance(t[1].op, tcn.GpuFromHost)
@@ -100,7 +100,7 @@ def test_dot22scalar():
     f = theano.function([a, b, scalar], tensor.dot(a, b) * scalar,
                         mode=mode_with_gpu)
     f2 = theano.function([a, b, scalar], tensor.dot(a, b) * scalar)
-    t = f.maker.env.toposort()
+    t = f.maker.fgraph.toposort()
     assert len(t) == 4
     assert isinstance(t[0].op, tcn.GpuFromHost)
     assert isinstance(t[1].op, tcn.GpuFromHost)
@@ -127,7 +127,7 @@ def test_gemm():
     f = pfunc([b, c], [], updates=[(a, tensor.dot(a, b) + tensor.exp(c))],
               mode=mode_with_gpu)
     assert any([node.op == tcn.blas.gpu_gemm_inplace
-                for node in f.maker.env.toposort()])
+                for node in f.maker.fgraph.toposort()])
     bval = my_rand(*b_shp)
     cval = my_rand(a_shp[0], b_shp[1])
@@ -170,7 +170,7 @@ def test_gemm_no_inplace():
               mode=mode_with_gpu)
     assert any([node.op == tcn.blas.gpu_gemm_no_inplace
-                for node in f.maker.env.toposort()])
+                for node in f.maker.fgraph.toposort()])
     bval = my_rand(*b_shp)
     bval2 = my_rand(*b_shp)
     rval = f(bval, bval2)
@@ -303,9 +303,9 @@ def test_downsample():
                    mode=mode_without_gpu)
         assert any([isinstance(node.op,
                                tcn.blas.GpuDownsampleFactorMax)
-                    for node in f.maker.env.toposort()])
+                    for node in f.maker.fgraph.toposort()])
         assert any([isinstance(node.op, DownsampleFactorMax)
-                    for node in f2.maker.env.toposort()])
+                    for node in f2.maker.fgraph.toposort()])
         assert numpy.allclose(f(), f2())
         # The grad is too slow on GT220 GPU
@@ -328,9 +328,9 @@ def test_downsample():
                     mode=mode_without_gpu)
         assert any([isinstance(node.op,
                                tcn.blas.GpuDownsampleFactorMaxGrad)
-                    for node in g.maker.env.toposort()])
+                    for node in g.maker.fgraph.toposort()])
         assert any([isinstance(node.op, DownsampleFactorMaxGrad)
-                    for node in g2.maker.env.toposort()])
+                    for node in g2.maker.fgraph.toposort()])
         assert numpy.allclose(g(), g2()), shp
         # We already check that the gpu version return
@@ -397,9 +397,9 @@ class TestVectorMatrixDot(TestCase):
         assert numpy.allclose(no_gpu_f(), gpu_f2(), atol=self.atol)
         # Assert that the gpu version actually uses gpu
         assert sum([node.op is gpu_gemv_inplace for node in
-                    gpu_f.maker.env.toposort()]) == 1
+                    gpu_f.maker.fgraph.toposort()]) == 1
         assert sum([node.op is gpu_gemv_inplace for node in
-                    gpu_f2.maker.env.toposort()]) == 1
+                    gpu_f2.maker.fgraph.toposort()]) == 1
         # Check double-strided m
         m.set_value(
@@ -426,9 +426,9 @@ class TestVectorMatrixDot(TestCase):
         assert numpy.allclose(no_gpu_f(), gpu_f2(), atol=self.atol)
         # Assert that the gpu version actually uses gpu
         assert sum([node.op is gpu_gemv_inplace for node in
-                    gpu_f.maker.env.toposort()]) == 1
+                    gpu_f.maker.fgraph.toposort()]) == 1
         assert sum([node.op is gpu_gemv_inplace for node in
-                    gpu_f2.maker.env.toposort()]) == 1
+                    gpu_f2.maker.fgraph.toposort()]) == 1
     def test_gemv1(self):
         ''' test vector1+dot(matrix,vector2) '''
@@ -452,9 +452,9 @@ class TestVectorMatrixDot(TestCase):
         assert numpy.allclose(no_gpu_f(), gpu_f2(), atol=self.atol)
         # Assert that the gpu version actually uses gpu
         assert sum([node.op is gpu_gemv_inplace for node in
-                    gpu_f2.maker.env.toposort()]) == 1
+                    gpu_f2.maker.fgraph.toposort()]) == 1
         assert sum([node.op is gpu_gemv_inplace for node in
-                    gpu_f.maker.env.toposort()]) == 1
+                    gpu_f.maker.fgraph.toposort()]) == 1
     def test_gemv2(self):
         ''' test vector1+dot(vector2,matrix) '''
@@ -477,9 +477,9 @@ class TestVectorMatrixDot(TestCase):
         assert numpy.allclose(no_gpu_f(), gpu_f2(), atol=self.atol)
         # Assert that the gpu version actually uses gpu
         assert sum([node.op is gpu_gemv_inplace for node in
-                    gpu_f2.maker.env.toposort()]) == 1
+                    gpu_f2.maker.fgraph.toposort()]) == 1
         assert sum([node.op is gpu_gemv_inplace for node in
-                    gpu_f.maker.env.toposort()]) == 1
+                    gpu_f.maker.fgraph.toposort()]) == 1
 class TestGpuGer(TestGer):
...
@@ -26,7 +26,7 @@ def test_nvidia_driver1():
     A = cuda.shared_constructor(a)
     f = theano.function(inputs=[], outputs=A.sum(), mode=mode_with_gpu,
                         profile=False)
-    topo = f.maker.env.toposort()
+    topo = f.maker.fgraph.toposort()
     assert len(topo) == 2
     assert sum(isinstance(node.op, B.GpuSum) for node in topo) == 1
     if not numpy.allclose(f(), a.sum()):
@@ -59,7 +59,7 @@ def test_nvidia_driver3():
     var = cuda.fvector()
     f = theano.function([var], var + 1, mode=mode_with_gpu,
                         profile=False)
-    topo = f.maker.env.toposort()
+    topo = f.maker.fgraph.toposort()
     assert any([isinstance(node.op, cuda.GpuElemwise) for node in topo])
     assert theano.sandbox.cuda.use.device_number is not None
...
@@ -108,7 +108,7 @@ def run_nnet(use_gpu, n_batch=60, n_in=1024, n_hid=2048, n_out=10,
                   updates=[(p, p - g) for p, g in izip(params, gparams)])
     if 0:
-        for i, n in enumerate(train.maker.env.toposort()):
+        for i, n in enumerate(train.maker.fgraph.toposort()):
             print i, n
     xval = my_rand(n_batch, n_in)
@@ -202,7 +202,7 @@ def run_conv_nnet1(use_gpu):
     #print 'building pfunc ...'
     train = pfunc([x,y,lr], [loss], mode=mode, updates=[(p, p-g) for p,g in zip(params, gparams)])
-    # for i, n in enumerate(train.maker.env.toposort()):
+    # for i, n in enumerate(train.maker.fgraph.toposort()):
     #     print i, n
     xval = my_rand(*shape_img)
@@ -291,7 +291,7 @@ def run_conv_nnet2(use_gpu):  # pretend we are training LeNet for MNIST
     #print 'building pfunc ...'
     train = pfunc([x,y,lr], [loss], mode=mode, updates=[(p, p-g) for p,g in zip(params, gparams)])
-    # for i, n in enumerate(train.maker.env.toposort()):
+    # for i, n in enumerate(train.maker.fgraph.toposort()):
     #     print i, n
     xval = my_rand(*shape_img)
@@ -389,7 +389,7 @@ def build_conv_nnet2_classif(use_gpu, isize, ksize, n_batch,
         theano.printing.debugprint(train)
     if use_gpu:
         # Check that GpuConv is used
-        topo = train.maker.env.toposort()
+        topo = train.maker.fgraph.toposort()
         assert len([n for n in topo if isinstance(n.op, tcn.blas.GpuConv)]) > 0
     shape_target = (n_batch,n_out)
...
@@ -77,10 +77,10 @@ def test_GpuCrossentropySoftmaxArgmax1HotWithBias():
     assert any([isinstance(node.op,
                            T.nnet.CrossentropySoftmaxArgmax1HotWithBias)
-                for node in classify.maker.env.toposort()])
+                for node in classify.maker.fgraph.toposort()])
     assert any([isinstance(node.op,
                            cuda.nnet.GpuCrossentropySoftmaxArgmax1HotWithBias)
-                for node in classify_gpu.maker.env.toposort()])
+                for node in classify_gpu.maker.fgraph.toposort()])
     out = classify(yy, b_values, dot_value)
     gout = classify_gpu(yy, b_values, dot_value)
@@ -129,10 +129,10 @@ def test_GpuCrossentropySoftmax1HotWithBiasDx():
     #theano.printing.debugprint(gpu_f)
     assert any([isinstance(node.op, T.nnet.CrossentropySoftmax1HotWithBiasDx)
-                for node in cpu_f.maker.env.toposort()])
+                for node in cpu_f.maker.fgraph.toposort()])
     assert any([isinstance(node.op,
                            cuda.nnet.GpuCrossentropySoftmax1HotWithBiasDx)
-                for node in gpu_f.maker.env.toposort()])
+                for node in gpu_f.maker.fgraph.toposort()])
     cpu_out = cpu_f(softmax_output_value)
     gpu_out = gpu_f(softmax_output_value)
@@ -177,8 +177,8 @@ def test_softmax_with_bias():
     f = theano.function([x], z, mode=mode_without_gpu)
     f_gpu = theano.function([x], z, mode=mode_with_gpu)
-    assert f.maker.env.toposort()[-1].op == T.nnet.softmax_with_bias
-    assert isinstance(f_gpu.maker.env.toposort()[-2].op,
+    assert f.maker.fgraph.toposort()[-1].op == T.nnet.softmax_with_bias
+    assert isinstance(f_gpu.maker.fgraph.toposort()[-2].op,
                       cuda.nnet.GpuSoftmaxWithBias)
     def cmp(n, m, catch=False):
@@ -222,8 +222,8 @@ def test_softmax():
     z = T.nnet.softmax(x)
     f = theano.function([x], z, mode=mode_without_gpu)
     f_gpu = theano.function([x], z, mode=mode_with_gpu)
-    assert f.maker.env.toposort()[-1].op == T.nnet.softmax
-    assert isinstance(f_gpu.maker.env.toposort()[-2].op,
+    assert f.maker.fgraph.toposort()[-1].op == T.nnet.softmax
+    assert isinstance(f_gpu.maker.fgraph.toposort()[-2].op,
                       cuda.nnet.GpuSoftmax)
     def cmp(n, m, catch=False):
...
@@ -32,7 +32,7 @@ def test_no_shared_var_graph():
     a=tensor.fmatrix()
     b=tensor.fmatrix()
     f = theano.function([a,b],[a+b], mode=mode_with_gpu)
-    l = f.maker.env.toposort()
+    l = f.maker.fgraph.toposort()
     assert len(l)==4
     assert numpy.any(isinstance(x.op,cuda.GpuElemwise) for x in l)
     assert numpy.any(isinstance(x.op,cuda.GpuFromHost) for x in l)
@@ -43,11 +43,11 @@ def test_int_pow():
     f = theano.function([a], (a*4).sum(), mode=mode_with_gpu)
-    op_names = [n.op.__class__.__name__ for n in f.maker.env.toposort()]
+    op_names = [n.op.__class__.__name__ for n in f.maker.fgraph.toposort()]
     assert op_names == ['GpuSum', 'GpuElemwise', 'HostFromGpu']
     f = theano.function([a], tensor.pow(a,4).sum(), mode=mode_with_gpu)
-    op_names = [n.op.__class__.__name__ for n in f.maker.env.toposort()]
+    op_names = [n.op.__class__.__name__ for n in f.maker.fgraph.toposort()]
     assert op_names == ['GpuElemwise', 'GpuSum', 'HostFromGpu']
     #theano.printing.debugprint(f)
@@ -66,7 +66,7 @@ def test_gpualloc():
     m = (x).dimshuffle(['x',0])
     v = tensor.alloc(1., *m.shape)
     f = theano.function([], v+x)
-    l = f.maker.env.toposort()
+    l = f.maker.fgraph.toposort()
     assert numpy.any(ininstance(x.op, cuda.GpuAlloc) for x in l )
@@ -75,7 +75,7 @@ def test_gpuspecifyshape():
     m = theano.tensor.specify_shape(x + numpy.float32(1), (3,))
     f = theano.function([], updates={x:m * numpy.float32(2)},
                         mode=mode_with_gpu)
-    l = f.maker.env.toposort()
+    l = f.maker.fgraph.toposort()
     assert not numpy.any([isinstance(x.op, cuda.HostFromGpu) for x in l])
@@ -85,7 +85,7 @@ def test_softmax():
     f = theano.function([x],tensor.nnet.nnet.Softmax()(x), mode=mode_with_gpu)
     f2 = theano.function([x],tensor.nnet.nnet.Softmax()(x), mode=mode_without_gpu)
-    assert isinstance(f.maker.env.toposort()[1].op,cuda.nnet.GpuSoftmax)
+    assert isinstance(f.maker.fgraph.toposort()[1].op,cuda.nnet.GpuSoftmax)
     xv=numpy.random.rand(7,8).astype('float32')
     assert numpy.allclose(f(xv),f2(xv))
@@ -96,7 +96,7 @@ def test_softmax_with_bias():
     f = theano.function([x,b],tensor.nnet.nnet.SoftmaxWithBias()(x,b), mode=mode_with_gpu)
     f2 = theano.function([x,b],tensor.nnet.nnet.SoftmaxWithBias()(x,b), mode=mode_without_gpu)
-    assert isinstance(f.maker.env.toposort()[2].op,cuda.nnet.GpuSoftmaxWithBias)
+    assert isinstance(f.maker.fgraph.toposort()[2].op,cuda.nnet.GpuSoftmaxWithBias)
     xv=numpy.random.rand(7,8).astype('float32')
     bv=numpy.random.rand(8).astype('float32')
     assert numpy.allclose(f(xv,bv),f2(xv,bv))
@@ -116,7 +116,7 @@ def test_opt_gpujoin_onlyajoin():
     f()
-    graph_nodes = f.maker.env.toposort()
+    graph_nodes = f.maker.fgraph.toposort()
     assert isinstance(graph_nodes[-1].op, cuda.HostFromGpu)
     assert isinstance(graph_nodes[-2].op, cuda.GpuJoin)
@@ -143,7 +143,7 @@ def test_opt_gpujoin_joinvectors_elemwise_then_minusone():
     #theano.printing.debugprint(f)
-    graph_nodes = f.maker.env.toposort()
+    graph_nodes = f.maker.fgraph.toposort()
     assert isinstance(graph_nodes[-1].op, cuda.HostFromGpu)
     assert isinstance(graph_nodes[-2].op, cuda.GpuSubtensor)
@@ -159,9 +159,9 @@ def test_print_op():
     b = tensor.fmatrix()
     f = theano.function([b],theano.printing.Print()(b)*2, mode=mode_with_gpu)
     #theano.printing.debugprint(f)
-    #print f.maker.env.toposort()
+    #print f.maker.fgraph.toposort()
     #[GpuFromHost(<TensorType(float32, matrix)>), <theano.printing.Print object at 0x3581210>(GpuFromHost.0), GpuElemwise{mul}(CudaNdarray{[[ 2.]]}, <theano.printing.Print object at 0x3581210>.0), HostFromGpu(GpuElemwise{mul}.0)]
-    topo = f.maker.env.toposort()
+    topo = f.maker.fgraph.toposort()
     assert topo[0].op == cuda.gpu_from_host
     assert isinstance(topo[1].op, theano.printing.Print)
     assert isinstance(topo[2].op, cuda.GpuElemwise)
@@ -180,7 +180,7 @@ def test_huge_elemwise_fusion():
     vars = [tensor.tanh(ttype) for x in range(7)]
     f = pfunc(vars, [vars[0] - vars[1] - vars[2] - vars[3] - vars[4] -
                      vars[5] - vars[6]], mode=mode_with_gpu)
-    topo = f.maker.env.toposort()
+    topo = f.maker.fgraph.toposort()
     #theano.printing.debugprint(f)
     #for i, node in enumerate(topo):
     #    print >> sys.stdout, i, node
@@ -200,7 +200,7 @@ def test_huge_elemwise_fusion():
     vars = [tensor.tanh(ttype) for x in range(7)]
     f = pfunc(vars, [vars[0] - vars[1] - vars[2] - vars[3] - vars[4] -
                      vars[5] - vars[6]], mode=mode_with_gpu)
-    topo = f.maker.env.toposort()
+    topo = f.maker.fgraph.toposort()
     #theano.printing.debugprint(f)
     assert len(topo) == 1
     assert sum([isinstance(node.op, cuda.GpuElemwise) for node in topo]) == 0
@@ -234,7 +234,7 @@ def test_huge_elemwise_fusion():
             if not isinstance(out.type, CudaNdarrayType):
                 out = cuda.gpu_from_host(out)
             f = pfunc([], [out], mode=mode_with_gpu)
-            topo = f.maker.env.toposort()
+            topo = f.maker.fgraph.toposort()
             #print shape, nb_var, use_tan, len(topo)
             assert (sum([isinstance(node.op, cuda.GpuElemwise)
                          for node in topo]) == len(topo) or
@@ -262,7 +262,7 @@ def test_local_gpu_elemwise_0():
     # the op are on the gpu.
     f = theano.function([a, b, c], [a + b + c], mode=mode_with_gpu)
     #theano.printing.debugprint(f)
-    topo = f.maker.env.toposort()
+    topo = f.maker.fgraph.toposort()
     assert sum(isinstance(node.op, cuda.GpuElemwise) for node in topo) == 1
     assert sum(isinstance(node.op, tensor.Elemwise) for node in topo) == 1
     f(a_v, b_v, c_v)
@@ -276,7 +276,7 @@ def test_local_gpu_elemwise_0():
     out_op = tensor.Elemwise(out_s)
     f = theano.function([a, b, c], [out_op(a, b, c)], mode=mode_with_gpu)
     #theano.printing.debugprint(f)
-    topo = f.maker.env.toposort()
+    topo = f.maker.fgraph.toposort()
     assert sum(isinstance(node.op, cuda.GpuElemwise) for node in topo) == 1
     assert sum(isinstance(node.op, tensor.Elemwise) for node in topo) == 1
     f(a_v, b_v, c_v)
@@ -290,7 +290,7 @@ def test_elemwise_fusion():
     b = tensor.fmatrix()
     c = tensor.fmatrix()
     f = pfunc([b, c], [a + b + c], mode=mode_with_gpu)
-    topo = f.maker.env.toposort()
+    topo = f.maker.fgraph.toposort()
     for i, node in enumerate(topo):
         print >> sys.stdout, i, node
     assert len(topo) == 4
@@ -314,20 +314,20 @@ class test_local_gpu_tensordot(unittest.TestCase):
         tdot1 = tensor.tensordot(x, y, 2)
         f1 = theano.function([x, y], tdot1, mode=mode_with_gpu)
-        topo1 = f1.maker.env.toposort()
+        topo1 = f1.maker.fgraph.toposort()
         assert topo1[-1].op == cuda.host_from_gpu
         # Let DebugMode debug
         f1(tensor1, tensor2)
         tdot2 = tensor.tensordot(x, y, axes=[(0, 3), (1, 0)])
         f2 = theano.function([x, y], tdot2, mode=mode_with_gpu)
-        topo2 = f2.maker.env.toposort()
+        topo2 = f2.maker.fgraph.toposort()
         assert topo2[-1].op == cuda.host_from_gpu
         f2(tensor1, tensor3)
         tdot3 = tensor.tensordot(x, y, axes=[(0, 3, 2), (1, 0, 2)])
         f3 = theano.function([x, y], tdot3, mode=mode_with_gpu)
-        topo3 = f3.maker.env.toposort()
+        topo3 = f3.maker.fgraph.toposort()
         assert topo3[-1].op == cuda.host_from_gpu
         f3(tensor1, tensor3)
...
...@@ -28,7 +28,7 @@ def test_shape_i(): ...@@ -28,7 +28,7 @@ def test_shape_i():
x = cuda.ftensor3() x = cuda.ftensor3()
v = cuda.CudaNdarray(numpy.zeros((3,4,5),dtype='float32')) v = cuda.CudaNdarray(numpy.zeros((3,4,5),dtype='float32'))
f = theano.function([x],x.shape[1]) f = theano.function([x],x.shape[1])
topo = f.maker.env.toposort() topo = f.maker.fgraph.toposort()
assert f(v)==4 assert f(v)==4
if theano.config.mode!='FAST_COMPILE': if theano.config.mode!='FAST_COMPILE':
assert len(topo)==1 assert len(topo)==1
...@@ -38,7 +38,7 @@ def test_shape(): ...@@ -38,7 +38,7 @@ def test_shape():
x = cuda.ftensor3() x = cuda.ftensor3()
v = cuda.CudaNdarray(numpy.zeros((3,4,5),dtype='float32')) v = cuda.CudaNdarray(numpy.zeros((3,4,5),dtype='float32'))
f = theano.function([x],x.shape) f = theano.function([x],x.shape)
topo = f.maker.env.toposort() topo = f.maker.fgraph.toposort()
assert numpy.all(f(v)==(3,4,5)) assert numpy.all(f(v)==(3,4,5))
if theano.config.mode!='FAST_COMPILE': if theano.config.mode!='FAST_COMPILE':
assert len(topo)==4 assert len(topo)==4
@@ -55,16 +55,16 @@ def test_softmax_optimizations():
     xe = op(x, one_of_n)

-    env = theano.gof.Env(
+    fgraph = theano.gof.FunctionGraph(
         [x, one_of_n],
         [op(softmax(x), one_of_n)])
-    assert env.outputs[0].owner.op == op
-    mode_with_gpu.optimizer.optimize(env)
-    assert str(env.outputs[0].owner.op) == 'OutputGuard'
-    assert env.outputs[0].owner.inputs[0].owner.op == cuda.host_from_gpu
-    assert env.outputs[0].owner.inputs[0].owner.inputs[0].owner.op == cuda.nnet.gpu_crossentropy_softmax_argmax_1hot_with_bias
+    assert fgraph.outputs[0].owner.op == op
+    mode_with_gpu.optimizer.optimize(fgraph)
+    assert str(fgraph.outputs[0].owner.op) == 'OutputGuard'
+    assert fgraph.outputs[0].owner.inputs[0].owner.op == cuda.host_from_gpu
+    assert fgraph.outputs[0].owner.inputs[0].owner.inputs[0].owner.op == cuda.nnet.gpu_crossentropy_softmax_argmax_1hot_with_bias

 def test_may_share_memory_cuda():
     from theano.misc.may_share_memory import may_share_memory
...
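The test above builds a `FunctionGraph` directly from input and output variables and then walks it via `outputs[0].owner`. The following toy sketch mimics that wrapper interface — inputs, outputs, `owner` links, and a `toposort()` over the apply nodes — so the access pattern is visible without Theano installed; `Variable`, `Apply`, and this `FunctionGraph` are simplified stand-ins, not the real implementation:

```python
class Variable(object):
    """A graph variable; 'owner' is the Apply node that produced it, if any."""
    def __init__(self, name):
        self.name = name
        self.owner = None

class Apply(object):
    """An application of an op to inputs, producing outputs."""
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs
        for out in outputs:
            out.owner = self

class FunctionGraph(object):
    """Wrapper around a whole computation graph, as in the diff above."""
    def __init__(self, inputs, outputs):
        self.inputs, self.outputs = inputs, outputs

    def toposort(self):
        # Depth-first walk back from the outputs; inputs of a node are
        # visited before the node itself, giving a topological order.
        order, seen = [], set()
        def visit(var):
            node = var.owner
            if node is None or id(node) in seen:
                return
            seen.add(id(node))
            for inp in node.inputs:
                visit(inp)
            order.append(node)
        for out in self.outputs:
            visit(out)
        return order

x = Variable('x')
y = Variable('y')
Apply('softmax', [x], [y])
fg = FunctionGraph([x], [y])
print([n.op for n in fg.toposort()])  # ['softmax']
print(fg.outputs[0].owner.op)         # softmax
```

In the real test, optimizers then rewrite the graph in place, which is why the assertions re-inspect `fgraph.outputs[0].owner` after `optimize(fgraph)` runs.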
@@ -11,7 +11,7 @@ def compare_fns(fns, input, reps=10):
     for implname, impl in fns.iteritems():
         try:
             print 'TOPOSORT', implname
-            for i, n in enumerate(impl.maker.env.toposort()):
+            for i, n in enumerate(impl.maker.fgraph.toposort()):
                 print i, n
         except Exception:
             pass
...
@@ -24,7 +24,7 @@ class DebugLinker(gof.WrapLinker):
                 linkers = linkers,
                 wrapper = self.wrapper)
-        self.env = None
+        self.fgraph = None
         self.compare_fn = compare_fn
@@ -46,11 +46,11 @@ class DebugLinker(gof.WrapLinker):
         if compare_variables is not None:
             self.debug_post.append(self.compare_variables)

-    def accept(self, env, no_recycling=None):
+    def accept(self, fgraph, no_recycling=None):
         if no_recycling is None:
             no_recycling = []
         return gof.WrapLinker.accept(self,
-                env=env,
+                fgraph=fgraph,
                 no_recycling=no_recycling)

     def store_value(self, i, node, *thunks):
@@ -103,19 +103,19 @@ class DebugLinker(gof.WrapLinker):
         raise exc

     def pre(self, f, inputs, order, thunk_groups):
-        env = f.env
-        for r in env.variables:
+        fgraph = f.fgraph
+        for r in fgraph.variables:
             if r.owner is None:
                 r.step = "value" # this will be overwritten if r is an input
             else:
                 r.step = None
             r.value = None
             r.original_value = None
-            if r.owner is None and r not in env.inputs:
+            if r.owner is None and r not in fgraph.inputs:
                 r.value = r.data
                 if self.copy_originals:
                     r.original_value = copy(r.data)
-        for idx, (i, r) in enumerate(zip(inputs, env.inputs)):
+        for idx, (i, r) in enumerate(zip(inputs, fgraph.inputs)):
             r.step = "input %i" % idx
             r.value = i
             if self.copy_originals:
...
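The commit message says the rename also "adds properties that warn about deprecated 'env' attribute", so old code reading `.env` keeps working while being nudged toward `.fgraph`. A minimal sketch of such a shim, assuming a standard property-plus-`DeprecationWarning` approach (the exact wording and warning class in Theano may differ; `FunctionGraphHolder` is a hypothetical name):

```python
import warnings

class FunctionGraphHolder(object):
    """Sketch of an object that renamed its 'env' attribute to 'fgraph'."""
    def __init__(self, fgraph):
        self.fgraph = fgraph

    @property
    def env(self):
        # Backwards-compatible accessor: warn, then forward to fgraph.
        warnings.warn("'env' is deprecated; use 'fgraph' instead",
                      DeprecationWarning, stacklevel=2)
        return self.fgraph

holder = FunctionGraphHolder("the graph")
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    value = holder.env  # old spelling still works, but warns

print(value)        # the graph
print(len(caught))  # 1
```

This lets downstream code like the `DebugLinker` hunks above be migrated file by file without breaking callers that still use the old attribute name.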
@@ -28,7 +28,7 @@ class Hint(Op):
     These ops are removed from the graph during canonicalization
     in order to not interfere with other optimizations.

     The idea is that prior to canonicalization, one or more Features of the
-    env should register the information contained in any Hint node, and
+    fgraph should register the information contained in any Hint node, and
     transfer that information out of the graph.
     """
@@ -57,9 +57,9 @@ def is_hint_node(node):

 def hints(variable):
-    if hasattr(variable, 'env'):
+    if hasattr(variable, 'fgraph'):
         try:
-            return variable.env.hints_feature.hints[variable]
+            return variable.fgraph.hints_feature.hints[variable]
         except AttributeError:
             return {}
     else:
@@ -76,7 +76,7 @@ def remove_hint_nodes(node):
         # transfer hints from graph to Feature
         try:
             for k, v in node.op.hints:
-                node.env.hints_feature.add_hint(node.inputs[0], k, v)
+                node.fgraph.hints_feature.add_hint(node.inputs[0], k, v)
         except AttributeError:
             pass
         return node.inputs
@@ -84,7 +84,7 @@ def remove_hint_nodes(node):

 class HintsFeature(object):
     """
-    Env Feature to track matrix properties
+    FunctionGraph Feature to track matrix properties

     This is a similar feature to variable 'tags'. In fact, tags are one way
     to provide hints.
@@ -129,15 +129,15 @@ class HintsFeature(object):
     #
     # Feature inteface
     #
     #
-    def on_attach(self, env):
-        assert not hasattr(env, 'hints_feature')
-        env.hints_feature = self
+    def on_attach(self, fgraph):
+        assert not hasattr(fgraph, 'hints_feature')
+        fgraph.hints_feature = self
         # Variable -> tuple(scalars) or None  (All tensor vars map to tuple)
         self.hints = {}
-        for node in env.toposort():
-            self.on_import(env, node)
+        for node in fgraph.toposort():
+            self.on_import(fgraph, node)

-    def on_import(self, env, node):
+    def on_import(self, fgraph, node):
         if node.outputs[0] in self.hints:
             # this is a revert, not really an import
             for r in node.outputs + node.inputs:
@@ -157,7 +157,7 @@ class HintsFeature(object):
             if k not in new_hints:
                 new_hints[k] = v

-    def on_change_input(self, env, node, i, r, new_r):
+    def on_change_input(self, fgraph, node, i, r, new_r):
         # TODO:
         # This tells us that r and new_r must have the same shape
         # if we didn't know that the shapes are related, now we do.
@@ -171,15 +171,15 @@ class HintsFeature(object):

 class HintsOptimizer(Optimizer):
-    """Optimizer that serves to add HintsFeature as an env feature.
+    """Optimizer that serves to add HintsFeature as an fgraph feature.
     """
     def __init__(self):
         Optimizer.__init__(self)

-    def add_requirements(self, env):
-        env.extend(HintsFeature())
+    def add_requirements(self, fgraph):
+        fgraph.extend(HintsFeature())

-    def apply(self, env):
+    def apply(self, fgraph):
         pass
 # -1 should make it run right before the first merge
 theano.compile.mode.optdb.register('HintsOpt',
...
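The `HintsFeature` hunks above show the Feature protocol: the graph's `extend` installs a feature via `on_attach`, which stores itself on the graph and seeds its state. A stripped-down sketch of that handshake, using only the calls visible in the diff (`ToyGraph` is a stand-in for `FunctionGraph`, and the `on_import`/`on_change_input` callbacks are omitted for brevity):

```python
class HintsFeature(object):
    """Minimal feature tracking per-variable hints, as in the diff above."""
    def on_attach(self, fgraph):
        # A feature may be attached to a graph only once.
        assert not hasattr(fgraph, 'hints_feature')
        fgraph.hints_feature = self
        # Variable -> dict of hint key/value pairs.
        self.hints = {}

    def add_hint(self, var, k, v):
        self.hints.setdefault(var, {})[k] = v

class ToyGraph(object):
    """Stand-in for FunctionGraph: extend() triggers the attach handshake."""
    def extend(self, feature):
        feature.on_attach(self)

g = ToyGraph()
g.extend(HintsFeature())
g.hints_feature.add_hint('x', 'psd', True)  # e.g. mark 'x' positive semi-definite
print(g.hints_feature.hints)  # {'x': {'psd': True}}
```

This is why `HintsOptimizer.add_requirements` only needs `fgraph.extend(HintsFeature())` and its `apply` can be a no-op: attaching the feature is the entire job.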
Diff collapsed.