Commit 163d88ea authored by Ian Goodfellow

merged
@@ -47,7 +47,7 @@ example:

    # test passes cleanly
    def test1(self):
-       self.failUnless(2+2 == 5)
+       self.assertTrue(2+2 == 5)

    # raises an exception, causes test to fail
    def test2(self):

@@ -221,13 +221,13 @@ Example:

    c = T.dot(a,b)
    f = theano.function([a,b],[c])
    cmp = f(self.avals,self.bvals) == numpy.dot(self.avals,self.bvals)
-   self.failUnless(numpy.all(cmp))
+   self.assertTrue(numpy.all(cmp))

Avoid hard-coding variables, as in the following case:

.. code-block:: python

-   self.failUnless(numpy.all(f(self.avals,self.bvals)==numpy.array([[25,25,30,28],[21,18,14,25]])))
+   self.assertTrue(numpy.all(f(self.avals,self.bvals)==numpy.array([[25,25,30,28],[21,18,14,25]])))

This makes the test case less manageable and forces the user to update
the variables each time the input is changed or possibly when the

@@ -238,15 +238,15 @@ idea.

Here is a list of useful functions, as defined by TestCase:

-* checking the state of boolean variables: assert, failUnless,
-  assertTrue, failIf, assertFalse
+* checking the state of boolean variables: assert,
+  assertTrue, assertFalse

-* checking for (in)equality constraints: assertEqual, failUnlessEqual,
-  assertNotEqual, failIfEqual
+* checking for (in)equality constraints: assertEqual,
+  assertNotEqual

-* checking for (in)equality constraints up to a given precision (very
-  useful in theano): assertAlmostEqual, failUnlessAlmostEqual,
-  assertNotAlmostEqual, failIfAlmostEqual
+* checking for (in)equality constraints up to a given precision (very
+  useful in theano): assertAlmostEqual,
+  assertNotAlmostEqual
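To see the modern assertion names this commit switches to in one place, here is a minimal self-contained sketch (a hypothetical TestCase, not taken from Theano's suite) exercising each family of checks:

```python
import unittest

class AssertionDemo(unittest.TestCase):
    # boolean checks
    def test_boolean(self):
        self.assertTrue(2 + 2 == 4)
        self.assertFalse(2 + 2 == 5)

    # (in)equality checks
    def test_equality(self):
        self.assertEqual(3 * 7, 21)
        self.assertNotEqual(3 * 7, 20)

    # approximate equality: by default the difference is rounded
    # to 7 decimal places before comparing to zero
    def test_almost_equal(self):
        self.assertAlmostEqual(0.1 + 0.2, 0.3)
        self.assertNotAlmostEqual(0.1, 0.2)

# Run the suite programmatically (avoids unittest.main()'s sys.exit)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AssertionDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note that ``assertAlmostEqual`` is what makes floating-point comparisons such as ``0.1 + 0.2 == 0.3`` practical, which is why the guide recommends it for Theano tests.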
Checking for errors

@@ -270,11 +270,11 @@ Example:

    b = T.dmatrix()
    c = T.dot(a,b) # we expect this to fail
    # above should fail as dot operates on 2D tensors only
-   self.failUnlessRaises(TypeError, func)
+   self.assertRaises(TypeError, func)

-Useful functions, as defined by TestCase:
+Useful function, as defined by TestCase:

-* assertRaises, failUnlessRaises
+* assertRaises
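A small self-contained sketch of ``assertRaises`` (the Theano ``T.dot`` call is replaced here by a plain operation that raises ``TypeError``, since this example is meant to run without Theano):

```python
import unittest

class RaisesDemo(unittest.TestCase):
    def test_callable_form(self):
        def func():
            # stands in for the doc's failing T.dot call
            return "a" + 1  # TypeError: cannot add str and int
        self.assertRaises(TypeError, func)

    def test_context_manager_form(self):
        # since Python 2.7, assertRaises can also be a context manager
        with self.assertRaises(ZeroDivisionError):
            1 / 0

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RaisesDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```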
Test Cases and Theano Modes

...
@@ -17,6 +17,17 @@ Theano has been powering large-scale computationally intensive scientific invest

since 2007. But it is also approachable enough to be used in the classroom
(IFT6266 at the University of Montreal).

+.. image:: images/talk2010.gif
+    :scale: 75%
+    :align: left
+
+**NEW!** You can watch a quick (20 minute) introduction to Theano given as a talk at `SciPy 2010 <http://conference.scipy.org/scipy2010/>`_ via streaming (or downloaded) video:
+
+`Transparent GPU Computing With Theano`_.
+James Bergstra, SciPy 2010, June 30, 2010.
+
+.. _Transparent GPU Computing With Theano: http://www.archive.org/details/Scipy2010-JamesBergstra-TransparentGpuComputingWithTheano

Download
========

@@ -24,7 +35,7 @@ Theano is now `available on PyPI`_, and can be installed via ``easy_install

Theano``, or by downloading and unpacking the tarball and typing ``python
setup.py install``.

Those interested in bleeding-edge features should obtain the latest development
version, available via::

    hg clone http://hg.assembla.com/theano Theano

@@ -37,6 +48,25 @@ installation and configuration, see :ref:`installing Theano <install>`.

.. _available on PyPI: http://pypi.python.org/pypi/Theano

+Citing Theano
+==============
+
+If you use Theano for academic research, you are highly encouraged (though not
+required) to cite the following paper:
+
+* J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R.
+  Pascanu, G. Desjardins, J. Turian, D. Warde-Farley and Y.
+  Bengio. `"Theano: A CPU and GPU Math Expression Compiler"
+  <http://www.iro.umontreal.ca/~lisa/pointeurs/theano_scipy2010.pdf>`_.
+  *Proceedings of the Python for Scientific Computing Conference (SciPy)
+  2010. June 30 - July 3, Austin, TX* (`BibTeX
+  <http://www.iro.umontreal.ca/~lisa/publications2/index.php/export/publication/461/bibtex>`_)
+
+Theano is primarily developed by academics, and so citations matter a lot to
+us. As an added benefit, you increase Theano's exposure and potential user
+(and developer) base, which is to the benefit of all users of Theano. Thanks
+in advance!

Documentation
=============

@@ -62,7 +92,7 @@ Community

    "Thank YOU for correcting it so quickly. I wish all packages I worked
    with would have such an active maintenance - this is as good as it
    gets :-)"

    (theano-users, Aug 2, 2010)

@@ -77,7 +107,7 @@ Community

* Ask/view questions/answers at `metaoptimize/qa/tags/theano/`_ (it's like stack overflow for machine learning)
* We try to stay organized with `Theano's Trac <http://trac-hg.assembla.com/theano/report/1>`__
* Come visit us in Montreal! Most of the developers are students in the LISA_ group at the `University of Montreal`_.

...
@@ -113,6 +113,8 @@ import theano and print the config variable, as in:

    to use a specific device. If we are not able to use the GPU, either we fall back
    on the CPU, or an error is raised, depending on the :attr:`force_device` flag.

+   This flag's value cannot be modified during the program execution.

.. attribute:: force_device

    Bool value: either ``True`` or ``False``

@@ -122,6 +124,8 @@ import theano and print the config variable, as in:

    If ``True``, we raise an error if we cannot use the specified :attr:`device`.
    If ``False``, we fall back to the CPU.

+   This flag's value cannot be modified during the program execution.

.. attribute:: init_gpu_device

    String value: either ``''``, ``'gpu'``, ``'gpu0'``, ``'gpu1'``, ``'gpu2'``,

@@ -136,6 +140,8 @@ import theano and print the config variable, as in:

    This flag is useful to run GPU-specific tests on a particular GPU, instead
    of using the default one.

+   This flag's value cannot be modified during the program execution.

.. attribute:: floatX

    String value: either 'float64' or 'float32'.

@@ -210,6 +216,8 @@ import theano and print the config variable, as in:

    It is also recommended you put this into your .theanorc, so this setting
    will always be used.

+   This flag's value cannot be modified during the program execution.

.. attribute:: home

    Default: env-variable $HOME

@@ -222,6 +230,8 @@ import theano and print the config variable, as in:

    This directory stores the architecture-dependent compilation directories.

+   This flag's value cannot be modified during the program execution.

.. attribute:: compiledir

    Default: $HOME/.theano/<arch-identifier>

@@ -229,6 +239,8 @@ import theano and print the config variable, as in:

    This directory stores dynamically-compiled modules for a particular
    architecture.

+   This flag's value cannot be modified during the program execution.

.. attribute:: config.blas.ldflags

    Default: '-lblas'

@@ -258,6 +270,8 @@ import theano and print the config variable, as in:

    use the flags: optimizer_excluding:inplace_opt, where
    inplace_opt is the name of that optimization.

+   This flag's value cannot be modified during the program execution.

.. attribute:: optimizer_including

    Default: ""

@@ -265,6 +279,8 @@ import theano and print the config variable, as in:

    A list of optimizer tags that we want included in the default Mode.
    If multiple tags, separate them by ':'.

+   This flag's value cannot be modified during the program execution.

.. attribute:: optimizer_requiring

    Default: ""

@@ -272,6 +288,8 @@ import theano and print the config variable, as in:

    A list of optimizer tags that we require for the optimizer in the default Mode.
    If multiple tags, separate them by ':'.

+   This flag's value cannot be modified during the program execution.

.. attribute:: nocleanup

    Bool value: either True or False

@@ -282,4 +300,4 @@ import theano and print the config variable, as in:

    This means it removes files that it tried to compile but that failed.
    Set to True to keep the source files that failed to compile, to
    debug them.
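To make the attributes above concrete, here is a sketch of how such flags are typically set in a ``.theanorc`` file. The flag names come from the attributes documented in this diff; the specific values (``gpu0``, ``float32``, ``inplace_opt``) are illustrative examples only, not recommendations:

```ini
[global]
device = gpu0
force_device = False
floatX = float32
optimizer_excluding = inplace_opt

[blas]
ldflags = -lblas
```

The same flags can usually be passed for a single run via the ``THEANO_FLAGS`` environment variable, e.g. ``THEANO_FLAGS='device=gpu0,floatX=float32' python script.py``.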
@@ -126,11 +126,11 @@ Guillaume can you make sure to hit these points:

* What is the right eq function to use?

-* There are a lot of tests that define their own epsilon, but this should be standardized. e.g. in test_elemwise.py ``self.failUnless((numpy.abs(f(xv) - zv) < 1e-10).all())``
+* There are a lot of tests that define their own epsilon, but this should be standardized. e.g. in test_elemwise.py ``self.assertTrue((numpy.abs(f(xv) - zv) < 1e-10).all())``

* If the expected variable of a test is that an Exception is thrown, how do we correctly detect and handle that?
-  nosetests has ``failUnlessRaises``
+  nosetests has ``assertRaises``

* Convention is that all test files must start with ``test_``, not
  ``_test_``, so rename all that use the old convention?

...
@@ -99,7 +99,8 @@ Computing gradients

Now let's use Theano for a slightly more sophisticated task: create a
function which computes the derivative of some expression ``y`` with
-respect to its parameter ``x``. For instance, we can compute the
+respect to its parameter ``x``. To do this we will use the macro ``T.grad``.
+For instance, we can compute the
gradient of :math:`x^2` with respect to :math:`x`. Note that:
:math:`d(x^2)/dx = 2 \cdot x`.

@@ -158,12 +159,13 @@ logistic is: :math:`ds(x)/dx = s(x) \cdot (1 - s(x))`.

    array([[ 0.25      ,  0.19661193],
           [ 0.19661193,  0.10499359]])

-The resulting function computes the gradient of its first argument
-with respect to the second. In this way, Theano can be used for
-`automatic differentiation <http://en.wikipedia.org/wiki/Automatic_differentiation>`_.
-As opposed to what this page tells, Theano does efficient symbolic differentiation
-even for functions with many inputs.
+In general, for any **scalar** expression ``s``, ``T.grad(s, w)`` provides
+the Theano expression for computing :math:`\frac{\partial s}{\partial w}`. In
+this way Theano can be used for doing **efficient** symbolic differentiation
+(the expression returned by ``T.grad`` will be optimized during compilation),
+even for functions with many inputs. (See `automatic differentiation <http://en.wikipedia.org/wiki/Automatic_differentiation>`_ for a description
+of symbolic differentiation.)
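The identity used in this tutorial passage, :math:`d(x^2)/dx = 2 \cdot x`, can be checked without Theano at all. The sketch below uses a plain-Python central finite difference to approximate the derivative and compares it against the analytic gradient (``numeric_grad`` is a hypothetical helper written for this illustration, not part of Theano's API):

```python
def numeric_grad(f, x, eps=1e-6):
    """Central finite-difference approximation of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: x ** 2          # the expression whose gradient we want
analytic = lambda x: 2 * x    # d(x^2)/dx = 2*x

# the numeric estimate should agree with the analytic gradient
for x in [0.0, 0.5, -3.0, 94.2]:
    assert abs(numeric_grad(f, x) - analytic(x)) < 1e-4
```

This kind of finite-difference check is also the standard way Theano-era test suites verified symbolic gradients produced by ``T.grad``.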
.. note::

...
@@ -69,8 +69,8 @@ FancyModule = Module

from printing import \
    pprint, pp

-import scan as scan_module
-from scan import scan, map, reduce, foldl, foldr, Scan, ScanGrad
+import scan_module
+from scan_module import scan, map, reduce, foldl, foldr, clone

import tensor
import scalar

...
@@ -22,7 +22,8 @@ from theano.compile.function_module import (FunctionMaker,
                                            SymbolicInputKit,
                                            SymbolicOutput,
                                            Supervisor,
-                                           view_tree_set)
+                                           view_tree_set,
+                                           insert_deepcopy)
from theano.compile.mode import Mode, register_mode

AddConfigVar('DebugMode.patience',

@@ -1403,42 +1404,8 @@ class _Maker(FunctionMaker): #inheritance buys a few helper functions

        env.equivalence_tracker = equivalence_tracker

        # optimize the env
        optimizer(env)

-       # This loop was inserted to remove aliasing between outputs when they all
-       # evaluete to the same value. Originally it was OK for outputs to be aliased,
-       # but some of the outputs can be shared variables, and is not good for shared
-       # variables to be aliased. It might be possible to optimize this by making sure
-       # there is no aliasing only between shared variables.
-       #import pdb;pdb.set_trace()
-       assert len(inputs) == len(env.inputs)
-       updated_env_inputs = [env_i for ii, env_i in zip(inputs, env.inputs) if getattr(ii, 'update', False)]
-       for out_i in xrange(len(env.outputs)):
-           views_of_output_i = set()
-           view_tree_set(alias_root(env.outputs[out_i]), views_of_output_i)
-           copied = False
-           # do not allow outputs to be aliased
-           for j in xrange(out_i+1, len(env.outputs)):
-               if env.outputs[j] in views_of_output_i:
-                   #import pdb;pdb.set_trace()
-                   env.change_input('output', out_i, deep_copy_op(env.outputs[out_i]))
-                   copied = True
-                   break
-           if not copied:
-               for input_j in env.inputs:
-                   # do not allow outputs to be aliased to an inputs (j), unless
-                   # a) that j'th input has been 'destroyed' by e.g. in-place computations
-                   # b) that j'th input is a shared variable that is also being updated
-                   if hasattr(env,'get_destroyers_of') and env.get_destroyers_of(input_j):
-                       continue
-                   if input_j in updated_env_inputs:
-                       continue
-                   if input_j in views_of_output_i:
-                       #import pdb;pdb.set_trace()
-                       env.change_input('output', out_i, deep_copy_op(env.outputs[out_i]))
-                       break
+       theano.compile.function_module.insert_deepcopy(env, inputs, outputs+additional_outputs)

        if i:
            li = env.equivalence_tracker.event_list

...
@@ -3,19 +3,18 @@

"""
__docformat__ = "restructuredtext en"

+import copy
import copy_reg
+import cPickle
import itertools
+import time

-import sys, time, copy
+import numpy

+import theano
+from theano import gof
from theano.gof.python25 import partial
-import numpy
-import theano.gof
-#from theano import gof

import mode as mode_module
-from io import *
+from io import In, SymbolicInput, SymbolicInputKit, SymbolicOutput

import logging
@@ -62,43 +61,8 @@ def infer_reuse_pattern(env, outputs_to_disown):

    # remove from rval all of the inputs, constants, values.
    rval = set(r for r in rval if r.owner is not None)

-   if 0:
-       # DEBUG STUFF
-       # verify that we return a superset of what we've been returning so far...
-       rval0 = _old_infer_reuse_pattern(env, outputs_to_disown)
-       rval0_set = set(rval0)
-       for r in rval0_set:
-           assert r in rval

    return rval

-def _old_infer_reuse_pattern(env, outputs_to_disown):
-   """
-   Given an env and a list of variables, returns the list of all
-   variables which may share the same underlying data storage as any of
-   the specified variables. Used internally by function, FunctionMaker.
-   This list is also refered to as no_recycling sometimes.
-   """
-   do_not_reuse = list()
-   seen = set()
-   def walk(r):
-       if r.owner is None or r in seen:
-           return
-       seen.add(r)
-       do_not_reuse.append(r)
-       node = r.owner
-       op = node.op
-       dmap = getattr(op, 'destroy_map', {})
-       vmap = getattr(op, 'view_map', {})
-       for l in dmap.values() + vmap.values():
-           for i in l:
-               walk(node.inputs[i])
-   for output in outputs_to_disown:
-       walk(output)
-   return do_not_reuse

class Supervisor:
    """

@@ -517,6 +481,8 @@ class Function(object):

            #TODO: provide a Param option for skipping the filter if we
            #      really want speed.
            s = self.input_storage[i]
+           # See this email thread for a discussion about None as input:
+           # https://groups.google.com/group/theano-dev/browse_thread/thread/920a5e904e8a8525/4f1b311a28fc27e5
            if arg is None:
                s.storage[0] = arg
            else:
@@ -772,6 +738,55 @@ class SanityCheckFunction(Function):

###
### FunctionMaker
###

+def insert_deepcopy(env, wrapped_inputs, wrapped_outputs):
+    """
+    Insert deep copies into the env to break aliasing of outputs.
+    """
+    # This loop was inserted to remove aliasing between outputs when they all
+    # evaluate to the same value. Originally it was OK for outputs to be aliased,
+    # but some of the outputs can be shared variables, and it is not good for
+    # shared variables to be aliased. It might be possible to optimize this by
+    # making sure there is no aliasing only between shared variables.
+
+    # If some outputs are constant, we add a deep copy to respect the memory contract.
+    # We don't insert a deep copy when output.borrow is True for all concerned outputs.
+
+    assert len(wrapped_inputs) == len(env.inputs)
+    assert len(wrapped_outputs) == len(env.outputs)
+
+    updated_env_inputs = [env_i for i, env_i in zip(wrapped_inputs, env.inputs) if getattr(i, 'update', False)]
+
+    # We can't use env.inputs as this does not include constant values.
+    all_graph_inputs = gof.graph.inputs(env.outputs)
+
+    for i in xrange(len(env.outputs)):
+        views_of_output_i = set()
+        view_tree_set(alias_root(env.outputs[i]), views_of_output_i)
+        copied = False
+        # do not allow outputs to be aliased
+        for j in xrange(i+1, len(env.outputs)):
+            # We could skip the deep copy if both outputs have borrow==True:
+            # and not (wrapped_outputs[i].borrow and wrapped_outputs[j].borrow)
+            if env.outputs[j] in views_of_output_i:
+                env.change_input('output', i, deep_copy_op(env.outputs[i]))
+                copied = True
+                break
+        if not copied:
+            for input_j in all_graph_inputs:
+                # do not allow outputs to be aliased to an input (j), unless
+                # a) that j'th input has been 'destroyed' by e.g. in-place computations
+                # b) that j'th input is a shared variable that is also being updated
+                if hasattr(env, 'get_destroyers_of') and env.get_destroyers_of(input_j):
+                    continue
+                if input_j in updated_env_inputs:
+                    continue
+                # We could skip deep_copy_op if the input and the output have borrow==True
+                if input_j in views_of_output_i:
+                    env.change_input('output', i, deep_copy_op(env.outputs[i]))
+                    break
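The aliasing problem that ``insert_deepcopy`` guards against can be illustrated without any Theano machinery. The following plain-Python sketch (none of these names come from Theano) shows why two outputs sharing the same underlying storage surprise the caller, and how a deep copy breaks the link:

```python
import copy

# Two "outputs" that are views of the same underlying storage.
storage = [1, 2, 3]
out_a = storage
out_b = storage          # aliased: mutating one mutates the other

out_a[0] = 99
assert out_b[0] == 99    # surprise for the caller who only touched out_a

# Breaking the aliasing with a deep copy, analogous to what
# insert_deepcopy does for env outputs, gives out_a its own storage.
out_a = copy.deepcopy(storage)
out_a[0] = 0
assert out_b[0] == 99    # out_b is no longer affected
```

This is also why the commit worries specifically about shared variables: an aliased shared variable would be silently updated whenever an unrelated output is mutated.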
NODEFAULT = ['NODEFAULT']

class FunctionMaker(object):
    """`FunctionMaker` is the class to `create` `Function` instances.

@@ -876,41 +891,8 @@ class FunctionMaker(object):

            mode.optimizer_time += end_optimizer - start_optimizer

        _logger.debug('Optimizing took %f seconds' % (end_optimizer - start_optimizer))

-       # This loop was inserted to remove aliasing between outputs when they all
-       # evaluete to the same value. Originally it was OK for outputs to be aliased,
-       # but some of the outputs can be shared variables, and is not good for shared
-       # variables to be aliased. It might be possible to optimize this by making sure
-       # there is no aliasing only between shared variables.
-       assert len(inputs) == len(env.inputs)
-       updated_env_inputs = [env_i for i, env_i in zip(inputs, env.inputs) if getattr(i, 'update', False)]
-       for i in xrange(len(env.outputs)):
-           views_of_output_i = set()
-           view_tree_set(alias_root(env.outputs[i]), views_of_output_i)
-           copied = False
-           # do not allow outputs to be aliased
-           for j in xrange(i+1, len(env.outputs)):
-               if env.outputs[j] in views_of_output_i:
-                   env.change_input('output', i, deep_copy_op(env.outputs[i]))
-                   copied = True
-                   break
-           if not copied:
-               for input_j in env.inputs:
-                   # do not allow outputs to be aliased to an inputs (j), unless
-                   # a) that j'th input has been 'destroyed' by e.g. in-place computations
-                   # b) that j'th input is a shared variable that is also being updated
-                   if hasattr(env,'get_destroyers_of') and env.get_destroyers_of(input_j):
-                       continue
-                   if input_j in updated_env_inputs:
-                       continue
-                   if input_j in views_of_output_i:
-                       env.change_input('output', i, deep_copy_op(env.outputs[i]))
-                       break
+       # Add deep copy to respect the memory interface
+       insert_deepcopy(env, inputs, outputs+additional_outputs)

        # initialize the linker
        if not hasattr(linker, 'accept'):

...
@@ -105,16 +105,34 @@ class OutputGuard(gof.Op):

        z[0] = x

    def __str__(self):
        return '%s' % self.__class__.__name__

    def c_code(self, node, nodename, inp, out, sub):
        x, = inp
        z, = out
-       return """
-       Py_XDECREF(%(z)s);
-       %(z)s = %(x)s;
-       Py_XINCREF(%(z)s);
-       """ % locals()
+       if isinstance(node.inputs[0].type, theano.scalar.Scalar):
+           # Scalars are C objects on the stack, and should not be inc/decrefed
+           return """
+           %(z)s = %(x)s;
+           """ % locals()
+       elif (isinstance(node.inputs[0].type,
+                        (theano.tensor.TensorType,
+                         theano.sandbox.cuda.CudaNdarrayType,
+                         theano.tensor.raw_random.RandomStateType)) or
+             node.inputs[0].type.__class__.__name__ == 'SparseType'
+             ):
+           # These are Python object types
+           return """
+           Py_XDECREF(%(z)s);
+           %(z)s = %(x)s;
+           Py_XINCREF(%(z)s);
+           """ % locals()
+       # Else, no C code for you
+       return super(OutputGuard, self).c_code(node, nodename, inp, out, sub)

    def c_code_cache_version(self):
-       return (1,)
+       return (2,)

_output_guard = OutputGuard()

class AddDestroyHandler(gof.Optimizer):

...
(Diff collapsed.)
@@ -25,7 +25,7 @@ AddConfigVar('ProfileMode.min_memory_size',

AddConfigVar('ProfileMode.profile_memory',
        """Enable profiling of memory used by Theano functions""",
-       BoolParam(True))
+       BoolParam(False))

class Profile_Maker(FunctionMaker):
    def create(self, input_storage=None, trustme=False):

@@ -146,7 +146,7 @@ class ProfileMode(Mode):

        if isinstance(linker, str) or linker is None:
            linker = predefined_linkers[linker]

-       if config.ProfileMode.profile_memory:
+       if not config.ProfileMode.profile_memory:
            p_thunk = profile_thunk
        else:
            p_thunk = profile_thunk2

...
@@ -475,15 +475,15 @@ class Test_check_isfinite(unittest.TestCase):

        # if TensorType.filter_checks_isfinite were true, these would raise ValueError
        # if not, DebugMode will check internally, and raise InvalidValueError
        # passing an invalid value as an input should trigger ValueError
-       self.failUnlessRaises(debugmode.InvalidValueError, f,
-               numpy.log([3, -4, 5]).astype(config.floatX))
+       self.assertRaises(debugmode.InvalidValueError, f,
+               numpy.log([3, -4, 5]).astype(config.floatX))
-       self.failUnlessRaises(debugmode.InvalidValueError, f,
-               (numpy.asarray([0, 1.0, 0])/0).astype(config.floatX))
+       self.assertRaises(debugmode.InvalidValueError, f,
+               (numpy.asarray([0, 1.0, 0])/0).astype(config.floatX))
-       self.failUnlessRaises(debugmode.InvalidValueError, f,
-               (numpy.asarray([1.0, 1.0, 1.0])/0).astype(config.floatX))
+       self.assertRaises(debugmode.InvalidValueError, f,
+               (numpy.asarray([1.0, 1.0, 1.0])/0).astype(config.floatX))

        # generating an invalid value internally should trigger InvalidValueError
-       self.failUnlessRaises(debugmode.InvalidValueError, g,
-               numpy.asarray([3,-4,5], dtype=config.floatX))
+       self.assertRaises(debugmode.InvalidValueError, g,
+               numpy.asarray([3,-4,5], dtype=config.floatX))

        # this should disable the exception

@@ -505,4 +505,3 @@ class Test_check_isfinite(unittest.TestCase):

        print infs
        f(infs)
-       return
@@ -7,35 +7,35 @@ from theano.tensor.nnet import sigmoid

class NNet(object):

    def __init__(self,
                 input = tensor.dvector('input'),
                 target = tensor.dvector('target'),
                 n_input=1, n_hidden=1, n_output=1, lr=1e-3, **kw):
        super(NNet, self).__init__(**kw)

        self.input = input
        self.target = target
        self.lr = shared(lr, 'learning_rate')
        self.w1 = shared(numpy.zeros((n_hidden, n_input)), 'w1')
        self.w2 = shared(numpy.zeros((n_output, n_hidden)), 'w2')
        print self.lr.type

        self.hidden = sigmoid(tensor.dot(self.w1, self.input))
        self.output = tensor.dot(self.w2, self.hidden)
        self.cost = tensor.sum((self.output - self.target)**2)

        self.sgd_updates = {
            self.w1: self.w1 - self.lr * tensor.grad(self.cost, self.w1),
            self.w2: self.w2 - self.lr * tensor.grad(self.cost, self.w2)}

        self.sgd_step = pfunc(
            params = [self.input, self.target],
            outputs = [self.output, self.cost],
            updates = self.sgd_updates)

        self.compute_output = pfunc([self.input], self.output)
        self.output_from_hidden = pfunc([self.hidden], self.output)

class TestNnet(unittest.TestCase):

@@ -52,8 +52,7 @@ class TestNnet(unittest.TestCase):

            mean_cost += cost
        mean_cost /= float(len(data))
        print 'Mean cost at epoch %s: %s' % (epoch, mean_cost)
-       self.failUnless(abs(mean_cost - 0.20588975452) < 1e-6)
+       self.assertTrue(abs(mean_cost - 0.20588975452) < 1e-6)

        # Just call functions to make sure they do not crash.
        out = nnet.compute_output(input)
        out = nnet.output_from_hidden(numpy.ones(10))
@@ -27,23 +27,23 @@ class Test_pfunc(unittest.TestCase):
         b = shared(1)
         f1 = pfunc([a], a+b)
         f2 = pfunc([Param(a, default=44)], a + b, updates={b: b + 1})
-        self.failUnless(b.get_value() == 1)
-        self.failUnless(f1(3) == 4)
-        self.failUnless(f2(3) == 4)
-        self.failUnless(b.get_value() == 2)
-        self.failUnless(f1(3) == 5)
+        self.assertTrue(b.get_value() == 1)
+        self.assertTrue(f1(3) == 4)
+        self.assertTrue(f2(3) == 4)
+        self.assertTrue(b.get_value() == 2)
+        self.assertTrue(f1(3) == 5)
         b.set_value(0)
-        self.failUnless(f1(3) == 3)
+        self.assertTrue(f1(3) == 3)
         # Example #2.
         a = tensor.lscalar()
         b = shared(7)
         f1 = pfunc([a], a + b)
         f2 = pfunc([a], a * b)
-        self.failUnless(f1(5) == 12)
+        self.assertTrue(f1(5) == 12)
         b.set_value(8)
-        self.failUnless(f1(5) == 13)
-        self.failUnless(f2(4) == 32)
+        self.assertTrue(f1(5) == 13)
+        self.assertTrue(f2(4) == 32)
     def test_shared(self):
@@ -317,25 +317,25 @@ class Test_pfunc(unittest.TestCase):
         x = shared(0)
         assign = pfunc([], [], updates = {x: 3})
         assign()
-        self.failUnless(x.get_value() == 3)
+        self.assertTrue(x.get_value() == 3)
         # Basic increment function.
         x.set_value(0)
         inc = pfunc([], [], updates = {x: x + 1})
         inc()
-        self.failUnless(x.get_value() == 1)
+        self.assertTrue(x.get_value() == 1)
         # Increment by a constant value.
         x.set_value(-1)
         y = shared(2)
         inc_by_y = pfunc([], [], updates = {x: x + y})
         inc_by_y()
-        self.failUnless(x.get_value() == 1)
+        self.assertTrue(x.get_value() == 1)
     def test_duplicate_updates(self):
         x, y = dmatrices('x', 'y')
         z = shared(numpy.ones((2,3)))
-        self.failUnlessRaises(ValueError, theano.function, [x,y], [z], updates=[(z, z+x+y), (z, z-x)])
+        self.assertRaises(ValueError, theano.function, [x,y], [z], updates=[(z, z+x+y), (z, z-x)])
     def test_givens(self):
         x = shared(0)
@@ -419,9 +419,9 @@ class Test_pfunc(unittest.TestCase):
         print x.get_value()
         assert x.get_value() == 6
-        self.failUnlessRaises(TypeError, pfunc, [], [x], no_default_updates=(x))
-        self.failUnlessRaises(TypeError, pfunc, [], [x], no_default_updates=x)
-        self.failUnlessRaises(TypeError, pfunc, [], [x], no_default_updates='canard')
+        self.assertRaises(TypeError, pfunc, [], [x], no_default_updates=(x))
+        self.assertRaises(TypeError, pfunc, [], [x], no_default_updates=x)
+        self.assertRaises(TypeError, pfunc, [], [x], no_default_updates='canard')
         # Mix explicit updates and no_default_updates
         g1 = pfunc([], [x], updates=[(x,x-1)], no_default_updates=True)
@@ -582,7 +582,7 @@ class Test_pfunc(unittest.TestCase):
         assert y.get_value() == 2
         # a is needed as input if y.default_update is used
-        self.failUnlessRaises(TypeError, pfunc, [], x)
+        self.assertRaises(TypeError, pfunc, [], x)
     def test_default_updates_partial_graph(self):
         a = shared(0)
...
@@ -34,7 +34,7 @@ class Test_SharedVariable(unittest.TestCase):
         assert shared([]).type == generic
         def badfunc():
             shared(7, bad_kw=False)
-        self.failUnlessRaises(TypeError, badfunc)
+        self.assertRaises(TypeError, badfunc)
     def test_strict_generic(self):
@@ -119,38 +119,38 @@ class Test_SharedVariable(unittest.TestCase):
         b = shared(numpy.int64(7), strict=True)
         assert b.type == theano.tensor.lscalar
-        self.failUnlessRaises(TypeError, f, b, 8.23)
+        self.assertRaises(TypeError, f, b, 8.23)
         b = shared(numpy.int32(7), strict=True)
         assert b.type == theano.tensor.iscalar
-        self.failUnlessRaises(TypeError, f, b, 8.23)
+        self.assertRaises(TypeError, f, b, 8.23)
         b = shared(numpy.int16(7), strict=True)
         assert b.type == theano.tensor.wscalar
-        self.failUnlessRaises(TypeError, f, b, 8.23)
+        self.assertRaises(TypeError, f, b, 8.23)
         b = shared(numpy.int8(7), strict=True)
         assert b.type == theano.tensor.bscalar
-        self.failUnlessRaises(TypeError, f, b, 8.23)
+        self.assertRaises(TypeError, f, b, 8.23)
         b = shared(numpy.float64(7.234), strict=True)
         assert b.type == theano.tensor.dscalar
-        self.failUnlessRaises(TypeError, f, b, 8)
+        self.assertRaises(TypeError, f, b, 8)
         b = shared(numpy.float32(7.234), strict=True)
         assert b.type == theano.tensor.fscalar
-        self.failUnlessRaises(TypeError, f, b, 8)
+        self.assertRaises(TypeError, f, b, 8)
         b = shared(numpy.float(7.234), strict=True)
         assert b.type == theano.tensor.dscalar
-        self.failUnlessRaises(TypeError, f, b, 8)
+        self.assertRaises(TypeError, f, b, 8)
         b = shared(7.234, strict=True)
         assert b.type == theano.tensor.dscalar
-        self.failUnlessRaises(TypeError, f, b, 8)
+        self.assertRaises(TypeError, f, b, 8)
         c = shared(numpy.zeros((5,5), dtype='float32'))
-        self.failUnlessRaises(TypeError, f, b, numpy.random.rand(5,5))
+        self.assertRaises(TypeError, f, b, numpy.random.rand(5,5))
@@ -160,40 +160,40 @@ class Test_SharedVariable(unittest.TestCase):
         b = shared(numpy.int64([7]), strict=True)
         assert b.type == theano.tensor.lvector
-        self.failUnlessRaises(TypeError, f, b, 8.23)
+        self.assertRaises(TypeError, f, b, 8.23)
         b = shared(numpy.int32([7]), strict=True)
         assert b.type == theano.tensor.ivector
-        self.failUnlessRaises(TypeError, f, b, 8.23)
+        self.assertRaises(TypeError, f, b, 8.23)
         b = shared(numpy.int16([7]), strict=True)
         assert b.type == theano.tensor.wvector
-        self.failUnlessRaises(TypeError, f, b, 8.23)
+        self.assertRaises(TypeError, f, b, 8.23)
         b = shared(numpy.int8([7]), strict=True)
         assert b.type == theano.tensor.bvector
-        self.failUnlessRaises(TypeError, f, b, 8.23)
+        self.assertRaises(TypeError, f, b, 8.23)
         b = shared(numpy.float64([7.234]), strict=True)
         assert b.type == theano.tensor.dvector
-        self.failUnlessRaises(TypeError, f, b, 8)
+        self.assertRaises(TypeError, f, b, 8)
         b = shared(numpy.float32([7.234]), strict=True)
         assert b.type == theano.tensor.fvector
-        self.failUnlessRaises(TypeError, f, b, 8)
+        self.assertRaises(TypeError, f, b, 8)
         #numpy.float([7.234]) don't work
         # b = shared(numpy.float([7.234]), strict=True)
         # assert b.type == theano.tensor.dvector
-        # self.failUnlessRaises(TypeError, f, b, 8)
+        # self.assertRaises(TypeError, f, b, 8)
         #This generate a generic type. Should we cast? I don't think.
         # b = shared([7.234], strict=True)
         # assert b.type == theano.tensor.dvector
-        # self.failUnlessRaises(TypeError, f, b, 8)
+        # self.assertRaises(TypeError, f, b, 8)
         c = shared(numpy.zeros((5,5), dtype='float32'))
-        self.failUnlessRaises(TypeError, f, b, numpy.random.rand(5,5))
+        self.assertRaises(TypeError, f, b, numpy.random.rand(5,5))
@@ -252,7 +252,7 @@ class Test_SharedVariable(unittest.TestCase):
         assert b.get_value()==8
         c = shared(numpy.zeros((5,5), dtype='float32'), allow_downcast=True)
-        self.failUnlessRaises(TypeError, f, b, numpy.random.rand(5,5))
+        self.assertRaises(TypeError, f, b, numpy.random.rand(5,5))
@@ -306,4 +306,4 @@ class Test_SharedVariable(unittest.TestCase):
         assert b.get_value() == 8
         c = shared(numpy.zeros((5,5), dtype='float32'), allow_downcast=True)
-        self.failUnlessRaises(TypeError, f, b, numpy.random.rand(5,5))
+        self.assertRaises(TypeError, f, b, numpy.random.rand(5,5))
@@ -364,6 +364,8 @@ def pre_constant_merge(vars):
     def recursive_merge(var):
         if var in seen_var:
             return var
+        if not hasattr(var, 'owner'):
+            return var
         if var.owner and hasattr(var.owner, "env"):
             return var
         seen_var.add(var)
@@ -1164,7 +1166,7 @@ def pre_greedy_local_optimizer(list_optimizations, out):
        be needed to call this function multiple time.
     '''
     def local_recursive_function( list_opt, out, optimized_vars, depth):
-        if not out.owner :
+        if not getattr(out, 'owner', None):
             return [out], optimized_vars
         node = out.owner
         if hasattr(node, 'env'):
...
@@ -4,6 +4,9 @@ They all allow different way to print a graph or the result of an Op in a graph(
 import sys, os, StringIO
 from copy import copy
+import numpy
+
+import theano
 import gof
 from theano import config
 from gof import Op, Apply
@@ -388,8 +391,9 @@ default_colorCodes = {'GpuFromHost' : 'red',
 def pydotprint(fct, outfile=None,
                compact=True, format='png', with_ids=False,
-               high_contrast=False, cond_highlight = None, colorCodes = None):
+               high_contrast=False, cond_highlight = None, colorCodes = None,
+               max_label_size=50):
     """
     print to a file in png format the graph of op of a compile theano fct.
@@ -402,6 +406,12 @@ def pydotprint(fct, outfile=None,
                           the border
     :param colorCodes: dictionary with names of ops as keys and colors as
                        values
+    :param cond_highlight: Highlights a lazy if by surrounding each of the 3
+                           possible categories of ops with a border. The categories
+                           are: ops that are on the left branch, ops that are on the
+                           right branch, ops that are on both branches.
+                           As an alternative you can provide the node that represents
+                           the lazy if.
     In the graph, box are an Apply Node(the execution of an op) and ellipse are variable.
     If variable have name they are used as the text(if multiple var have the same name, they will be merged in the graph).
@@ -422,7 +432,8 @@ def pydotprint(fct, outfile=None,
     if outfile is None:
         outfile = os.path.join(config.compiledir,'theano.pydotprint.' +
                                config.device + '.' + format)
-    if isinstance(fct, Function):
+    if isinstance(fct, (Function, theano.scan_module.scan_utils.ScanInnerFunction)):
         mode = fct.maker.mode
         fct_env = fct.maker.env
         if not isinstance(mode,ProfileMode) or not mode.fct_call.has_key(fct):
@@ -431,7 +442,7 @@ def pydotprint(fct, outfile=None,
         mode = None
         fct_env = fct
     else:
-        raise ValueError(('pydotprint expects as input a theano.function or'
+        raise ValueError(('pydotprint expects as input a theano.function or '
                           'the env of a function!'), fct)
     try:
@@ -460,12 +471,12 @@ def pydotprint(fct, outfile=None,
         left = set(recursive_pass(cond.inputs[1],[]))
         right =set(recursive_pass(cond.inputs[2],[]))
-        middle = left.intersecton(right)
+        middle = left.intersection(right)
         left = left.difference(middle)
         right = right.difference(middle)
         middle = list(middle)
-        left = list(middle)
-        right = list(middle)
+        left = list(left)
+        right = list(right)
     var_str={}
     all_strings = set()
@@ -478,11 +489,9 @@ def pydotprint(fct, outfile=None,
         if var.name is not None:
             varstr = 'name='+var.name+" "+str(var.type)
         elif isinstance(var,gof.Constant):
-            dstr = 'val='+str(var.data)
+            dstr = 'val='+str(numpy.asarray(var.data))
             if '\n' in dstr:
                 dstr = dstr[:dstr.index('\n')]
-            if len(dstr) > 30:
-                dstr = dstr[:27]+'...'
             varstr = '%s [%s]'% (dstr, str(var.type))
         elif var in input_update and input_update[var].variable.name is not None:
             varstr = input_update[var].variable.name+" "+str(var.type)
@@ -491,6 +500,8 @@ def pydotprint(fct, outfile=None,
             varstr = str(var.type)
         if (varstr in all_strings) or with_ids:
             varstr += ' id=' + str(len(var_str))
+        if len(varstr) > max_label_size:
+            varstr = varstr[:max_label_size-3]+'...'
         var_str[var]=varstr
         all_strings.add(varstr)
@@ -512,6 +523,8 @@ def pydotprint(fct, outfile=None,
             else: pf = time*100/mode.fct_call_time[fct]
             prof_str=' (%.3fs,%.3f%%,%.3f%%)'%(time,pt,pf)
         applystr = str(node.op).replace(':','_')
+        if len(applystr)>max_label_size:
+            applystr = applystr[:max_label_size-3]+'...'
         if (applystr in all_strings) or with_ids:
             applystr = applystr+' id='+str(topo.index(node))
         applystr += prof_str
@@ -557,6 +570,8 @@ def pydotprint(fct, outfile=None,
         for id,var in enumerate(node.inputs):
             varstr=var_name(var)
             label=str(var.type)
+            if len(label)>max_label_size:
+                label = label[:max_label_size-3]+'...'
             if len(node.inputs)>1:
                 label=str(id)+' '+label
             if var.owner is None:
@@ -580,6 +595,8 @@ def pydotprint(fct, outfile=None,
             label=str(var.type)
             if len(node.outputs)>1:
                 label=str(id)+' '+label
+            if len(label)>max_label_size:
+                label = label[:max_label_size-3]+'...'
             if out:
                 g.add_edge(pd.Edge(astr, varstr, label=label))
                 if high_contrast:
@@ -617,7 +634,8 @@ def pydotprint_variables(vars,
                          outfile=None,
                          format='png',
                          depth = -1,
-                         high_contrast = True, colorCodes = None):
+                         high_contrast = True, colorCodes = None,
+                         max_label_size=50):
     ''' Identical to pydotprint just that it starts from a variable instead
     of a compiled function. Could be useful ? '''
@@ -647,18 +665,21 @@ def pydotprint_variables(vars,
             dstr = 'val='+str(var.data)
             if '\n' in dstr:
                 dstr = dstr[:dstr.index('\n')]
-            if len(dstr) > 30:
-                dstr = dstr[:27]+'...'
             varstr = '%s [%s]'% (dstr, str(var.type))
         else:
             #a var id is needed as otherwise var with the same type will be merged in the graph.
             varstr = str(var.type)
+        if len(dstr) > max_label_size:
+            dstr = dstr[:max_label_size-1]+'...'
         varstr += ' ' + str(len(var_str))
         var_str[var]=varstr
         return varstr
     def apply_name(node):
-        return str(node.op).replace(':','_')
+        name = str(node.op).replace(':','_')
+        if len(name) > max_label_size:
+            name = name[:max_label_size-3]+'...'
+        return name
     def plot_apply(app, d):
         if d == 0:
@@ -666,6 +687,8 @@ def pydotprint_variables(vars,
         if app in my_list:
             return
         astr = apply_name(app) + '_' + str(len(my_list.keys()))
+        if len(astr) > max_label_size:
+            astr = astr[:max_label_size-3]+'...'
         my_list[app] = astr
         use_color = None
@@ -685,6 +708,8 @@ def pydotprint_variables(vars,
         for i,nd in enumerate(app.inputs):
             if nd not in my_list:
                 varastr = var_name(nd) + '_' + str(len(my_list.keys()))
+                if len(varastr) > max_label_size:
+                    varastr = varastr[:max_label_size-3]+'...'
                 my_list[nd] = varastr
                 if nd.owner is not None:
                     g.add_node(pd.Node(varastr))
@@ -703,6 +728,8 @@ def pydotprint_variables(vars,
         for i,nd in enumerate(app.outputs):
             if nd not in my_list:
                 varastr = var_name(nd) + '_' + str(len(my_list.keys()))
+                if len(varastr) > max_label_size:
+                    varastr = varastr[:max_label_size-3]+'...'
                 my_list[nd] = varastr
                 color = None
                 if nd in vars:
...
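The printing.py changes above thread a new `max_label_size=50` parameter through `pydotprint` and `pydotprint_variables`, replacing the old hard-coded 30-character clamp on constant values, and apply the same truncation at every label site. The clamp in isolation looks like this (a standalone sketch; the function name is illustrative, not Theano's):

```python
def truncate_label(label, max_label_size=50):
    # Clamp long graph labels the way the patch does: keep the first
    # max_label_size - 3 characters and append a three-char ellipsis,
    # so the result is exactly max_label_size characters long.
    if len(label) > max_label_size:
        label = label[:max_label_size - 3] + '...'
    return label

short = truncate_label('TensorType(float64, matrix)')
long_ = truncate_label('A' * 80, max_label_size=20)
```

Truncating before the `id=` suffix is appended (as the patch does for apply nodes) keeps the disambiguating id visible even for very long op names.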
@@ -144,7 +144,7 @@ outdated!""")
     GpuJoin, fscalar, fvector, fmatrix, frow, fcol,
     ftensor3, ftensor4, scalar, vector, matrix, row, col,
     tensor3, tensor4)
-from basic_ops import host_from_gpu, gpu_from_host
+from basic_ops import host_from_gpu, gpu_from_host, as_cuda_array
 import opt
 import cuda_ndarray
...
@@ -31,6 +31,14 @@ def as_cuda_ndarray_variable(x):
     tensor_x = tensor.as_tensor_variable(x)
     return gpu_from_host(tensor_x)
+def as_cuda_array(obj):
+    if isinstance(obj, numpy.ndarray):
+        return cuda_ndarray.cuda_ndarray.CudaNdarray(obj)
+    elif isinstance(obj, cuda_ndarray.cuda_ndarray.CudaNdarray):
+        return obj
+    else:
+        raise TypeError("Don't know how to cast to a CudaNdarray object")
 class HostFromGpu(Op):
     def __eq__(self, other):
         return type(self) == type(other)
...
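The new `as_cuda_array` helper is a small type dispatcher: copy host ndarrays to the device type, pass device arrays through unchanged, and reject anything else. A minimal stand-in of that pattern (the `FakeCudaNdarray` class below is a dummy, since the real `CudaNdarray` needs a GPU):

```python
import numpy

class FakeCudaNdarray(object):
    """Stand-in for cuda_ndarray.CudaNdarray, which requires a GPU."""
    def __init__(self, arr):
        self.arr = numpy.asarray(arr)

def as_cuda_array(obj):
    # Mirror the dispatch in the patch: host arrays get wrapped into
    # the device type, device arrays pass through, anything else fails.
    if isinstance(obj, numpy.ndarray):
        return FakeCudaNdarray(obj)
    elif isinstance(obj, FakeCudaNdarray):
        return obj
    else:
        raise TypeError("Don't know how to cast to a CudaNdarray object")
```

Passing device values through by identity (rather than re-wrapping) is what makes the helper safe to use as the `cast_value_` hook in the shared-variable tests below: casting twice is a no-op.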
@@ -611,7 +611,7 @@ def gpu_print_wrapper(op, cnda):
 @register_opt()
 @local_optimizer([])
-def local_print_op(node):
+def local_gpu_print_op(node):
     if isinstance(node.op, tensor.printing.Print):
         x, = node.inputs
         if x.owner and x.owner.op == host_from_gpu:
...
@@ -919,7 +919,7 @@ test_shared_options = theano.tensor.tests.test_sharedvar.makeSharedTester(
     test_internal_type_ = lambda a: isinstance(a,cuda_ndarray.CudaNdarray),
     theano_fct_ = theano.tensor.exp,
     ref_fct_ = numpy.exp,
-    cast_value_ = cuda_ndarray.CudaNdarray,
+    cast_value_ = cuda.as_cuda_array,
     op_by_matrix_ = True)
 #This test the case when the shared constructor view an ndarray as input
...
@@ -759,7 +759,7 @@ class MRG_RandomStreams(object):
             assert ndim==1
             bcast = bcast+(pvals.type.broadcastable[-1],)
             unis = self.uniform(size=size, ndim=1)
-            op = multinomial.Multinomial(dtype)
+            op = multinomial.MultinomialFromUniform(dtype)
             return op(pvals, unis)
         else:
             raise NotImplementedError(("MRG_RandomStreams.multinomial only"
...
@@ -59,7 +59,6 @@ class T_solve(unittest.TestCase):
         x=scipy.linalg.solve(A,b)
         Ax = numpy.dot(A,x)
         are = tensor.numeric_grad.abs_rel_err(Ax, b)
-        self.failUnless(numpy.all(are < 1.0e-5), (are, Ax, b))
+        self.assertTrue(numpy.all(are < 1.0e-5), (are, Ax, b))
         #print A,b
         #print numpy.dot(A,x)
+import copy
+
 import numpy
 import theano
-from theano import tensor, shared, function
+from theano import tensor, function
 import multinomial
 from theano.compile.mode import get_default_mode, predefined_linkers
+import theano.sandbox.cuda as cuda
-def run_with_c(f):
-    mode = get_default_mode()
-    linker_orig = mode.linker
-    if linker_orig == predefined_linkers['py']:
-        mode.linker = predefined_linkers['c|py']
-    try:
-        f(mode)
-    finally:
-        mode.linker = linker_orig
+def get_mode(gpu):
+    mode = get_default_mode()
+    mode = copy.copy(mode)
+    if gpu:
+        mode = mode.including('gpu', 'gpu_local_optimizations', 'local_cut_gpu_host_gpu', 'local_gpu_multinomial')
+    if isinstance(mode.linker, theano.gof.PerformLinker):
+        mode.linker = predefined_linkers['c|py']
+    return mode
+
+def run_with_c(f, gpu=False):
+    mode = get_mode(gpu)
+    f(mode, gpu)
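The refactor above replaces `run_with_c`'s save-and-restore of `mode.linker` with a `copy.copy` of the mode before mutating it, so the shared default mode is never changed in place and no `try/finally` cleanup is needed. The pattern in isolation (the `Mode` class here is a toy stand-in, not Theano's):

```python
import copy

class Mode(object):
    """Toy stand-in for a Theano compilation mode."""
    def __init__(self, linker):
        self.linker = linker

default_mode = Mode('py')

def get_mode():
    # Copy before mutating: the caller gets a private mode whose linker
    # can be swapped without affecting the shared default.
    mode = copy.copy(default_mode)
    if mode.linker == 'py':
        mode.linker = 'c|py'
    return mode
```

Compared with the old save/restore approach, this is also safe if the test body raises or if two tests tweak the mode concurrently, since each gets its own copy.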
-def test_multimomial_0():
-    # This tests the multinomial Op directly, not going through the
+def test_multinomial_0():
+    # This tests the MultinomialFromUniform Op directly, not going through the
     # multinomial() call in GPU random generation.
-    p = tensor.matrix()
-    u = tensor.vector()
-    m = multinomial.Multinomial('auto')(p,u)
-    def body(mode):
+    p = tensor.fmatrix()
+    u = tensor.fvector()
+    m = multinomial.MultinomialFromUniform('auto')(p,u)
+    def body(mode, gpu):
         #the m*2 allows the multinomial to reuse output
         f = function([p,u], m*2, allow_input_downcast=True, mode=mode)
+        if gpu:
+            assert any([type(node.op) is multinomial.GpuMultinomialFromUniform for node in f.maker.env.toposort()])
         # test that both first and second samples can be drawn
         assert numpy.allclose(f([[1,0], [0,1]], [.1, .1]),
@@ -50,16 +57,19 @@ def test_multinomial_0():
         assert numpy.allclose(r, [[0,2]]), r
     run_with_c(body)
+    if cuda.cuda_available:
+        run_with_c(body, True)
 #TODO: check a bigger example (make sure blocking on GPU is handled correctly)
 def test_multinomial_large():
     # DEBUG_MODE will test this on GPU
-    def body(mode):
+    def body(mode, gpu):
         p = tensor.fmatrix()
         u = tensor.fvector()
-        m = multinomial.Multinomial('auto')(p,u)
+        m = multinomial.MultinomialFromUniform('auto')(p,u)
         f = function([p,u], m*2, allow_input_downcast=True, mode=mode)
+        if gpu:
+            assert any([type(node.op) is multinomial.GpuMultinomialFromUniform for node in f.maker.env.toposort()])
         pval = numpy.arange(10000 * 4, dtype='float32').reshape((10000, 4))+0.1
         pval = pval / pval.sum(axis=1)[:,None]
@@ -72,21 +82,43 @@ def test_multinomial_large():
         asdf = numpy.asarray([0, 0, 2, 0])+0*pval
         assert numpy.allclose(mval, asdf) #broadcast over all rows
     run_with_c(body)
+    if cuda.cuda_available:
+        run_with_c(body, True)
 def test_multinomial_dtypes():
     p = tensor.dmatrix()
     u = tensor.dvector()
-    m = multinomial.Multinomial('auto')(p,u)
+    m = multinomial.MultinomialFromUniform('auto')(p,u)
     assert m.dtype == 'float64', m.dtype
     p = tensor.fmatrix()
     u = tensor.fvector()
-    m = multinomial.Multinomial('auto')(p,u)
+    m = multinomial.MultinomialFromUniform('auto')(p,u)
     assert m.dtype == 'float32', m.dtype
     p = tensor.fmatrix()
     u = tensor.fvector()
-    m = multinomial.Multinomial('float64')(p,u)
+    m = multinomial.MultinomialFromUniform('float64')(p,u)
     assert m.dtype == 'float64', m.dtype
+def test_gpu_opt():
+    if not cuda.cuda_available:
+        # Skip test if cuda_ndarray is not available.
+        from nose.plugins.skip import SkipTest
+        raise SkipTest('Optional package cuda not available')
+    # We test the case where we put the op on the gpu when the output is moved to the gpu.
+    p = tensor.fmatrix()
+    u = tensor.fvector()
+    m = multinomial.MultinomialFromUniform('auto')(p,u)
+    assert m.dtype == 'float32', m.dtype
+    m_gpu = cuda.gpu_from_host(m)
+    f = function([p,u], m_gpu, allow_input_downcast=True, mode=get_mode(True))
+    assert any([type(node.op) is multinomial.GpuMultinomialFromUniform for node in f.maker.env.toposort()])
+    pval = numpy.arange(10000 * 4, dtype='float32').reshape((10000, 4))+0.1
+    pval = pval / pval.sum(axis=1)[:,None]
+    uval = numpy.ones_like(pval[:,0]) * 0.5
+    mval = f(pval,uval)
mval = f(pval,uval)
...@@ -104,6 +104,7 @@ class Scalar(Type): ...@@ -104,6 +104,7 @@ class Scalar(Type):
def c_headers(self): def c_headers(self):
l=['<math.h>'] l=['<math.h>']
l.append('<numpy/arrayscalars.h>')
if config.lib.amdlibm: if config.lib.amdlibm:
l+=['<amdlibm.h>'] l+=['<amdlibm.h>']
return l return l
...@@ -127,18 +128,19 @@ class Scalar(Type): ...@@ -127,18 +128,19 @@ class Scalar(Type):
def dtype_specs(self): def dtype_specs(self):
try: try:
return {'float32': (numpy.float32, 'npy_float32', 'PyFloat_Check', 'PyFloat_AsDouble', 'PyFloat_FromDouble'), return {# dtype: (py_type, c_type, cls_name)
'float64': (numpy.float64, 'npy_float64', 'PyFloat_Check', 'PyFloat_AsDouble', 'PyFloat_FromDouble'), 'float32': (numpy.float32, 'npy_float32', 'Float32'),
'complex128': (numpy.complex128, 'theano_complex128', 'PyComplex_Check', 'PyComplex_AsCComplex', 'PyComplex_FromCComplex'), 'float64': (numpy.float64, 'npy_float64', 'Float64'),
'complex64': (numpy.complex64, 'theano_complex64', None, None, None), 'complex128': (numpy.complex128, 'theano_complex128', 'Complex128'),
'uint8': (numpy.uint8, 'npy_uint8', 'PyInt_Check', 'PyInt_AsLong', 'PyInt_FromLong'), 'complex64': (numpy.complex64, 'theano_complex64', 'Complex64'),
'int8': (numpy.int8, 'npy_int8', 'PyInt_Check', 'PyInt_AsLong', 'PyInt_FromLong'), 'uint8': (numpy.uint8, 'npy_uint8', 'UInt8'),
'uint16': (numpy.uint16, 'npy_uint16', 'PyInt_Check', 'PyInt_AsLong', 'PyInt_FromLong'), 'int8': (numpy.int8, 'npy_int8', 'Int8'),
'int16': (numpy.int16, 'npy_int16', 'PyInt_Check', 'PyInt_AsLong', 'PyInt_FromLong'), 'uint16': (numpy.uint16, 'npy_uint16', 'UInt16'),
'uint32': (numpy.uint32, 'npy_uint32', 'PyInt_Check', 'PyInt_AsLong', 'PyInt_FromLong'), 'int16': (numpy.int16, 'npy_int16', 'Int16'),
'int32': (numpy.int32, 'npy_int32', 'PyInt_Check', 'PyInt_AsLong', 'PyInt_FromLong'), 'uint32': (numpy.uint32, 'npy_uint32', 'UInt32'),
'uint64': (numpy.uint64, 'npy_uint64', 'PyInt_Check', 'PyInt_AsLong', 'PyInt_FromLong'), 'int32': (numpy.int32, 'npy_int32', 'Int32'),
'int64': (numpy.int64, 'npy_int64', 'PyInt_Check', 'PyInt_AsLong', 'PyInt_FromLong') 'uint64': (numpy.uint64, 'npy_uint64', 'UInt64'),
'int64': (numpy.int64, 'npy_int64', 'Int64')
}[self.dtype] }[self.dtype]
except KeyError: except KeyError:
raise TypeError("Unsupported dtype for %s: %s" % (self.__class__.__name__, self.dtype)) raise TypeError("Unsupported dtype for %s: %s" % (self.__class__.__name__, self.dtype))
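The rewritten `dtype_specs` table maps each dtype to a NumPy scalar constructor, a C typedef, and the capitalized class name from which the C code now builds `Py<Cls>ArrType_Type` and `PyArrayScalar_*` identifiers, replacing the old (and wrong) `PyInt_Check`/`PyFloat_Check` columns. A sketch of how such a table is consumed (trimmed to two dtypes; the helper name is illustrative):

```python
import numpy

# dtype: (py_type, c_type, cls_name) -- mirrors the patched table.
dtype_specs = {
    'float32': (numpy.float32, 'npy_float32', 'Float32'),
    'int64': (numpy.int64, 'npy_int64', 'Int64'),
}

def pyarr_type(dtype):
    # Build the C identifier of the numpy scalar type object,
    # e.g. 'PyFloat32ArrType_Type', the way c_extract now does.
    return 'Py%sArrType_Type' % dtype_specs[dtype][2]
```

Keying the C-side identifiers off one `cls_name` column is what lets a single `c_extract`/`c_sync` template serve every dtype, including the unsigned ints for which `PyInt_Check` was simply the wrong predicate.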
...@@ -173,37 +175,37 @@ class Scalar(Type): ...@@ -173,37 +175,37 @@ class Scalar(Type):
     def c_extract(self, name, sub):
         specs = self.dtype_specs()
-        #TODO: This is the wrong code, but we don't know what to change it to.
-        # For example, a numpy.uint8 is not a PyInt, so PyInt_Check
-        # is simply the wrong function to call.
-        # Look at PyArrayScalar api for how to cast to/from PyArrayScalar objects.
-        # numpy.uint* numpy.float* are all constructors of PyArrayScalar objects.
-        #
         return """
-        if (!%(check)s(py_%(name)s))
+        if (!PyObject_TypeCheck(py_%(name)s, &%(pyarr_type)s))
         {
             PyErr_Format(PyExc_ValueError,
-                "Scalar check failed");
+                "Scalar check failed (%(dtype)s)");
             %(fail)s
         }
-        %(name)s = (%(dtype)s)%(conv)s(py_%(name)s);
+        PyArray_ScalarAsCtype(py_%(name)s, &%(name)s);
         """ % dict(sub,
                    name = name,
                    dtype = specs[1],
-                   check = specs[2],
-                   conv = specs[3])
+                   pyarr_type = 'Py%sArrType_Type' % specs[2])
     def c_sync(self, name, sub):
         specs = self.dtype_specs()
         return """
         Py_XDECREF(py_%(name)s);
-        py_%(name)s = %(conv)s((%(dtype)s)%(name)s);
+        py_%(name)s = PyArrayScalar_New(%(cls)s);
         if (!py_%(name)s)
         {
-            Py_XINCREF(Py_None);
             py_%(name)s = Py_None;
+            PyErr_Format(PyExc_MemoryError,
+                "Instantiation of new Python scalar failed (%(dtype)s)");
+            %(fail)s
         }
+        PyArrayScalar_ASSIGN(py_%(name)s, %(cls)s, %(name)s);
-        """ % dict(name = name,
+        """ % dict(sub,
+                   name = name,
                    dtype = specs[1],
-                   conv = specs[4])
+                   cls = specs[2])

     def c_cleanup(self, name, sub):
         return ""
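A rough Python-level analogue of what the new `c_sync` does (illustrative only; the real code runs at the C level with `PyArrayScalar_New`, `PyArrayScalar_ASSIGN`, and `PyArray_ScalarAsCtype`):

```python
import numpy

# Build a fresh numpy array scalar from a raw C-level value, then read
# it back -- the round trip the new c_extract/c_sync pair performs.
c_value = 250
py_scalar = numpy.uint8(c_value)   # analogue of PyArrayScalar_New + ASSIGN
print(type(py_scalar).__name__)    # uint8
print(int(py_scalar) == c_value)   # True: analogue of PyArray_ScalarAsCtype
```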
@@ -340,6 +342,7 @@ class Scalar(Type):
         return ""

     def c_code_cache_version(self):
+        return (10, numpy.__version__) # Use the correct type checking and conversion functions
         return (9, numpy.__version__) # Make operators work with 64 and 128 arguments at the same time
         return (8, numpy.__version__) # put const around operators and added unary '-' operator
         # no need to put lib.amdlibm here as c_compile_args() are put in the key.
...
@@ -39,7 +39,7 @@ class test_ScalarOps(unittest.TestCase):
     #so this is not a silent bug.
     def tes_mod(self):
         """
         We add this test because not all languages and C implementations give
         the same sign to the result. This checks that the c_code of `Mod` is
         implemented the same way as Python's. That is what we want.
         """
@@ -49,7 +49,7 @@ class test_ScalarOps(unittest.TestCase):
             (1,2), (-1,2), (1,-2), (-1,-2),
             (5,3), (-5,3), (5,-3), (-5,-3)
             ):
-            self.failUnless(fn(a,b) == a%b, (a,))
+            self.assertTrue(fn(a,b) == a%b, (a,))
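The sign discrepancy this test guards against is easy to show: C's `%` (mirrored by `math.fmod`) takes the sign of the dividend, while Python's `%` takes the sign of the divisor, so the c_code of `Mod` must not use C's operator naively:

```python
import math

# Same inputs, different sign conventions:
print((-5) % 3)          # 1    (Python semantics, what Mod must match)
print(math.fmod(-5, 3))  # -2.0 (C-style semantics)
```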
 class test_composite(unittest.TestCase):
@@ -106,72 +106,72 @@ class test_logical(unittest.TestCase):
         x, y, z = inputs()
         fn = gof.DualLinker().accept(Env([x,y], [x > y])).make_function()
         for a,b in ((3.,9), (3,0.9), (3,3)):
-            self.failUnless(fn(a,b) == (a>b))
+            self.assertTrue(fn(a,b) == (a>b))

     def test_lt(self):
         x, y, z = inputs()
         fn = gof.DualLinker().accept(Env([x,y], [x < y])).make_function()
         for a,b in ((3.,9), (3,0.9), (3,3)):
-            self.failUnless(fn(a,b) == (a<b))
+            self.assertTrue(fn(a,b) == (a<b))

     def test_le(self):
         x, y, z = inputs()
         fn = gof.DualLinker().accept(Env([x,y], [x <= y])).make_function()
         for a,b in ((3.,9), (3,0.9), (3,3)):
-            self.failUnless(fn(a,b) == (a<=b))
+            self.assertTrue(fn(a,b) == (a<=b))

     def test_ge(self):
         x, y, z = inputs()
         fn = gof.DualLinker().accept(Env([x,y], [x >= y])).make_function()
         for a,b in ((3.,9), (3,0.9), (3,3)):
-            self.failUnless(fn(a,b) == (a>=b))
+            self.assertTrue(fn(a,b) == (a>=b))

     def test_eq(self):
         x, y, z = inputs()
         fn = gof.DualLinker().accept(Env([x,y], [eq(x,y)])).make_function()
         for a,b in ((3.,9), (3,0.9), (3,3)):
-            self.failUnless(fn(a,b) == (a==b))
+            self.assertTrue(fn(a,b) == (a==b))

     def test_neq(self):
         x, y, z = inputs()
         fn = gof.DualLinker().accept(Env([x,y], [neq(x,y)])).make_function()
         for a,b in ((3.,9), (3,0.9), (3,3)):
-            self.failUnless(fn(a,b) == (a!=b))
+            self.assertTrue(fn(a,b) == (a!=b))

     def test_or(self):
         x, y, z = ints('xyz')
         fn = gof.DualLinker().accept(Env([x,y], [x|y])).make_function()
         for a,b in ((0,1), (0,0), (1,0), (1,1)):
-            self.failUnless(fn(a,b) == (a|b), (a,b))
+            self.assertTrue(fn(a,b) == (a|b), (a,b))

     def test_xor(self):
         x, y, z = ints('xyz')
         fn = gof.DualLinker().accept(Env([x,y], [x^y])).make_function()
         for a,b in ((0,1), (0,0), (1,0), (1,1)):
-            self.failUnless(fn(a,b) == (a ^ b), (a,b))
+            self.assertTrue(fn(a,b) == (a ^ b), (a,b))

     def test_and(self):
         x, y, z = ints('xyz')
         fn = gof.DualLinker().accept(Env([x,y], [and_(x, y)])).make_function()
         for a,b in ((0,1), (0,0), (1,0), (1,1)):
-            self.failUnless(fn(a,b) == (a & b), (a,b))
+            self.assertTrue(fn(a,b) == (a & b), (a,b))

         x, y, z = ints('xyz')
         fn = gof.DualLinker().accept(Env([x,y], [x & y])).make_function()
         for a,b in ((0,1), (0,0), (1,0), (1,1)):
-            self.failUnless(fn(a,b) == (a & b), (a,b))
+            self.assertTrue(fn(a,b) == (a & b), (a,b))

     def test_not(self):
         x, y, z = ints('xyz')
         fn = gof.DualLinker().accept(Env([x,y], [invert(x)])).make_function()
         for a,b in ((0,1), (0,0), (1,0), (1,1)):
-            self.failUnless(fn(a,b) == ~a, (a,))
+            self.assertTrue(fn(a,b) == ~a, (a,))

         x, y, z = ints('xyz')
         fn = gof.DualLinker().accept(Env([x,y], [~x])).make_function()
         for a,b in ((0,1), (0,0), (1,0), (1,1)):
-            self.failUnless(fn(a,b) == ~a, (a,))
+            self.assertTrue(fn(a,b) == ~a, (a,))
 class test_div(unittest.TestCase):
@@ -196,7 +196,3 @@ class test_div(unittest.TestCase):

 if __name__ == '__main__':
     unittest.main()
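For context on the mechanical rename running through this diff: `failUnless` was the old `unittest` alias for `assertTrue` (deprecated and later removed from the standard library), and the second argument becomes the failure message, which is why these tests pass the input tuple along. A minimal standalone illustration:

```python
import unittest

class TestMsg(unittest.TestCase):
    def test_reports_inputs(self):
        for a, b in ((0, 1), (1, 1)):
            # On failure, the tuple (a, b) would appear in the report,
            # identifying which inputs broke the assertion.
            self.assertTrue((a | b) == (a or b), (a, b))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMsg)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```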