Commit 795be453 authored by Pascal Lamblin

merge

......@@ -39,9 +39,26 @@ Some kinds of errors can only be detected for certain input value combinations.
In the example above, there is no way to guarantee that a future call to say,
``f(-1)`` won't cause a problem. DebugMode is not a silver bullet.
If you instantiate DebugMode using the constructor ``compile.DebugMode``
rather than the keyword ``DEBUG_MODE`` you can configure its behaviour via
constructor arguments. See :api:`DebugMode` for details.
The keyword version of DebugMode (which you get by using ``mode='DEBUG_MODE'``)
is quite strict, and can raise several different Exception types.
The following are DebugMode exceptions you might encounter:
DebugModeError
--------------
This is a generic error. All the other exceptions inherit from this one.
This error is typically not raised directly.
However, you can use ``except DebugModeError: ...`` to catch any of the more
specific types of Exception.
For detailed documentation see :api:`DebugModeError`.
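The catch-all pattern above can be sketched with placeholder classes; these are stand-ins for illustration only, while the real hierarchy lives in ``theano.compile.debugmode``:

```python
# Placeholder classes sketching the exception hierarchy described above;
# the real classes live in theano.compile.debugmode.
class DebugModeError(Exception):
    """Generic base class; the specific DebugMode exceptions inherit from it."""

class InvalidValueError(DebugModeError):
    """Raised when an Op produces a value invalid for its output Type."""

def run_checked(thunk):
    try:
        thunk()
    except DebugModeError as e:
        # A single except clause catches every specific DebugMode exception.
        return type(e).__name__
    return "ok"

def bad():
    raise InvalidValueError("NaN produced")

print(run_checked(bad))            # prints: InvalidValueError
print(run_checked(lambda: None))   # prints: ok
```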
BadCLinkerOutput
----------------
......@@ -105,18 +122,6 @@ whereby we debug in DEBUG_MODE and then run the full-size jobs in FAST_RUN.
For detailed documentation see :api:`StochasticOrder`.
FloatError
----------
This happens when invalid floating-point values such as NaN and Inf are
introduced into the computations. It indicates which Op created the first
NaN.
Currently this exception is never raised because the check is not being
performed, but the plan is that it will be. (see ticket #320)
For detailed documentation see :api:`FloatError`.
InvalidValueError
-----------------
......@@ -126,14 +131,11 @@ an output that is invalid with respect to the type of the corresponding output
variable. For example, it is raised if an Op returns a complex-valued ndarray
for a ``dscalar`` Type.
For detailed documentation see :api:`InvalidValueError`.
DebugModeError
--------------
This can also be triggered when floating-point values such as NaN and Inf are
introduced into the computations. It indicates which Op created the first
NaN. These floating-point values can be allowed by passing the
``check_isfinite=False`` argument to DebugMode.
This is a generic error and not very informative on its own. You'll generally
have to look at the stack trace and then at the code to figure out why
DebugMode is complaining.
For detailed documentation see :api:`DebugModeError`.
......@@ -531,3 +531,63 @@ class Test_ViewMap(unittest.TestCase):
# input, but guarantees correctness.
#custom_op.view_map = {0:[0], 1:[1]}
#f([1,2,3,4],[5,6,7,8])
class Test_check_isfinite(unittest.TestCase):
def setUp(self):
print 'Up'
self.old_val = theano.tensor.TensorType.filter_checks_isfinite
def tearDown(self):
print 'Down'
theano.tensor.TensorType.filter_checks_isfinite = self.old_val
def test_check_isfinite(self):
x = theano.tensor.dvector()
f = theano.function([x], (x+2) * 5, mode='DEBUG_MODE')
# this should work
f(numpy.log([3, 4, 5]))
# this should raise InvalidValueError
try:
# insert a NaN
f(numpy.log([3, -4, 5]))
assert False
except debugmode.InvalidValueError:
pass
# this should raise InvalidValueError
try:
# insert a NaN and an Inf
f(numpy.asarray([0, 1.0, 0])/0)
assert False
except debugmode.InvalidValueError:
pass
# this should raise InvalidValueError
try:
# insert several Inf
f(numpy.asarray([1.0, 1.0, 1.0])/0)
assert False
except debugmode.InvalidValueError:
pass
# this should disable the exception
theano.tensor.TensorType.filter_checks_isfinite = False
# insert several Inf
f(numpy.asarray([1.0, 1.0, 1.0])/0)
def test_check_isfinite_disabled(self):
x = theano.tensor.dvector()
f = theano.function([x], (x+2) * 5, mode=debugmode.DebugMode(check_isfinite=False))
# the DestroyMap checker should be triggered by NaN != NaN
try:
f(numpy.log([3, -4, 5]))
assert False
except debugmode.BadDestroyMap:
pass
# Inf should go through
f(numpy.asarray([1.0, 1.0, 1.0])/0)
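The comment about the DestroyMap checker relies on a basic IEEE-754 property, sketched here without any Theano dependency: NaN compares unequal to everything, including itself, so an equality-based consistency check fires once a NaN slips through with ``check_isfinite`` disabled, while Inf passes.

```python
import math

# NaN compares unequal to itself, so equality-based consistency checks
# (like the DestroyMap checker in the test above) fail on NaN inputs.
nan = float('nan')
inf = float('inf')
print(nan == nan)    # prints: False
print(math.isnan(nan))  # prints: True
print(inf == inf)    # prints: True  (Inf, unlike NaN, equals itself)
```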
......@@ -435,6 +435,9 @@ class T_module(unittest.TestCase):
"""Test that we can manipulate the mutable, strict, etc. flags (see SymbolicInput) of
Method inputs"""
if default_mode == 'FAST_COMPILE':
return
M = Module()
M.x = T.dvector()
M.y = T.dvector()
......@@ -598,7 +601,7 @@ def test_method_updates():
m = M.make()
m.f([9,9])
assert m.x is None
assert numpy.all(xval == [0, 1])
assert numpy.all(m.f[M.x] == [0, 1])
# when a variable is listed explicitly and in an update, then there's a problem.
......
......@@ -644,6 +644,8 @@ class Abs(UnaryScalarOp):
return "%(z)s = abs(%(x)s);" % locals()
if type in float_types:
return "%(z)s = fabs(%(x)s);" % locals()
if type in complex_types:
return "%(z)s = sqrt(%(x)s.real*%(x)s.real + %(x)s.imag*%(x)s.imag);" % locals()
#complex, other?
raise NotImplementedError('type not supported', type)
abs_ = Abs(same_out)
......
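The complex branch added to ``Abs`` above computes the complex modulus. A small Python sketch of the same formula (the function name here is illustrative, not part of Theano):

```python
import math

def complex_abs(z):
    # Same formula as the generated C code above:
    # sqrt(re*re + im*im), the modulus of a complex number.
    return math.sqrt(z.real * z.real + z.imag * z.imag)

print(complex_abs(complex(3, 4)))  # prints: 5.0
```

This agrees with Python's built-in ``abs`` on complex numbers.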
......@@ -164,6 +164,11 @@ def value(x, name=None, ndim=None):
class TensorType(Type):
"""Symbolic `Type` representing a numpy.ndarray value."""
filter_checks_isfinite = False
"""
When this is True, strict filtering rejects data containing NaN or Inf entries. (Used in `DebugMode`)
"""
def __init__(self, dtype, broadcastable, name = None):
"""Initialize self.dtype and self.broadcastable.
......@@ -199,6 +204,8 @@ class TensorType(Type):
raise TypeError("%s expected a ndarray object with dtype = %s (got %s)." % (self, self.dtype, data.dtype))
if not data.ndim == self.ndim:
raise TypeError("%s expected a ndarray object with %s dimensions (got %s)." % (self, self.ndim, data.ndim))
if self.filter_checks_isfinite and (not numpy.all(numpy.isfinite(data))):
raise TypeError("non-finite elements not allowed")
return data
else:
data = numpy.asarray(data, dtype = self.dtype)
......
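The ``filter_checks_isfinite`` check added to ``TensorType.filter`` above can be sketched without NumPy or Theano; ``TensorTypeSketch`` is a hypothetical stand-in showing only the flag's effect on plain Python lists:

```python
import math

class TensorTypeSketch:
    # Class-level flag mirroring TensorType.filter_checks_isfinite:
    # when True, strict filtering rejects data containing NaN or Inf.
    filter_checks_isfinite = False

    def filter(self, data):
        if self.filter_checks_isfinite and not all(math.isfinite(v) for v in data):
            raise TypeError("non-finite elements not allowed")
        return data

t = TensorTypeSketch()
t.filter([1.0, float('inf')])  # accepted: the flag is off by default
TensorTypeSketch.filter_checks_isfinite = True
try:
    t.filter([1.0, float('nan')])
except TypeError as e:
    print(e)  # prints: non-finite elements not allowed
```

Because the flag is a class attribute, flipping it affects every instance, which is why the test above saves and restores the old value in ``setUp``/``tearDown``.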