Commit d5d97138 authored by Olivier Delalleau

Merged -- no conflict

@@ -243,30 +243,21 @@ You can also run them in-place from the Mercurial checkout directory by typing

     nosetests

-``THEANO_FLAGS`` is an environment variable that define Theano flags
-(:ref:`libdoc_config`). For Windows users, you can remove it or see the
-documentation to know how to configure them differently.
-
 .. note::
-    The tests should be run with the ``THEANO_FLAGS`` ``device=cpu`` (default).
-    Otherwise, it will generate false errors. If you have a GPU, it will
-    automatically be used to run GPU-related tests.
+    The tests should be run with the :attr:`~config.device` option set to
+    ``cpu``, e.g. by prefixing the ``nosetests`` command with
+    ``THEANO_FLAGS=device=cpu``. If you have a GPU, it will automatically
+    be used to run GPU-related tests.

-    If you want the GPU-related tests to run on a specific GPU device, and not
+    If you want GPU-related tests to run on a specific GPU device, and not
     the default one, you should use :attr:`~config.init_gpu_device`, for
-    instance ``THEANO_FLAGS=init_gpu_device=gpu1``.
+    instance ``THEANO_FLAGS=device=cpu,init_gpu_device=gpu1``.

     All tests should pass except those marked as ``KnownFailureTest``. If some
     test fails on your machine, you are encouraged to tell us what went wrong on
     the ``theano-users@googlegroups.com`` mailing list.

-.. note::
-
-    ``warn.ignore_bug_before=all`` removes warnings that you don't need to see
-    here. It is also recommended for a new user to set this flag to a
-    different value in their ``.theanorc`` file. See
-    :attr:`.config.warn.ignore_bug_before` for more details.
-
 Troubleshooting: Make sure you have a BLAS library
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
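The flags added in this hunk can also live in a ``.theanorc`` configuration file instead of being passed on every command line. A minimal sketch (the section and option names follow Theano's configuration-file conventions; the ``gpu1`` value is only an example for a machine with several GPUs):

```ini
[global]
device = cpu
init_gpu_device = gpu1
```

With this file in place, plain ``nosetests`` behaves like the prefixed command in the note above.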
@@ -586,17 +577,22 @@ used within a MinGW Shell (not available if you only installed Python(x,y)).
   trying to compile any Theano function would result in a compilation error
   due to the system being unable to find 'blas.dll').

-- You should be able to run the test-suite by executing ``nosetests`` within
-  the Theano installation directory (if you installed Nose manually as described
-  above, this may only work in a MinGW shell).
-  Please note that at this time, the test suite may be broken under Windows.
-  In particular, many tests will probably fail while running the test-suite,
-  due to insufficient memory resources (in which case you will probably get an
-  error of the type ``"Not enough storage is available to process this
-  command"``). A script named ``run_individual_tests.py`` found
-  in ``Theano\theano\tests`` is under development as a workaround, but is not
-  fully functional yet.
+Testing your installation
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Currently, due to a memory fragmentation issue in Windows, the
+test-suite breaks at some point when using ``nosetests``, with many error
+messages looking like: ``DLL load failed: Not enough storage is available
+to process this command``. As a result, you should instead run
+
+.. code-block:: bash
+
+    python theano/tests/run_tests_in_batch.py
+
+This will run tests in batches of 100, which should avoid memory errors.
+Note that this script calls ``nosetests``, which may require being run from
+within a MinGW shell if you installed Nose manually as described above.

 Compiling a faster BLAS
 ~~~~~~~~~~~~~~~~~~~~~~~
...
@@ -245,13 +245,13 @@ import theano and print the config variable, as in:

 .. attribute:: config.warn.ignore_bug_before

-    String value: 'None', 'all', '0.3'
+    String value: 'None', 'all', '0.3', '0.4'

     Default: 'None'

     When we fix a Theano bug that generated bad results under some
-    circonstances, we also make Theano raise a warning when it encounter
-    the same circonstances again. This helps to detect if said bug
+    circumstances, we also make Theano raise a warning when it encounters
+    the same circumstances again. This helps to detect if said bug
     had affected your past experiments, as you only need to run your
     experiment again with the new version, and you do not have to
     understand the Theano internals that triggered the bug. A better
@@ -263,8 +263,8 @@ import theano and print the config variable, as in:

     You can set its value to the first version of Theano
     that you used (probably 0.3 or higher)

-    `None` mean that all warnings will be displayed.
-    `all` mean to hide all warnings.
+    `None` means that all warnings will be displayed.
+    `all` means all warnings will be ignored.

     It is recommended that you put a version, so that you will see future
     warnings.
...
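As a worked example of the recommendation above, a user who started with Theano 0.3 could put the following in their ``.theanorc`` (the ``[warn]`` section maps to the ``warn.*`` flags; the version value is illustrative):

```ini
[warn]
ignore_bug_before = 0.3
```

This silences warnings about bugs fixed before 0.3 while still surfacing warnings about bugs found later.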
@@ -211,7 +211,7 @@ AddConfigVar('numpy.seterr_invalid',

 ###
 AddConfigVar('warn.ignore_bug_before',
         "If 'None', we warn about all Theano bugs found by default. If 'all', we don't warn about Theano bugs found by default. If a version, we print only the warnings relative to Theano bugs found after that version. Warnings for specific bugs can be configured with specific [warn] flags.",
-        EnumStr('None', 'all', '0.3', allow_override=False),
+        EnumStr('None', 'all', '0.3', '0.4', allow_override=False),
         in_c_key=False)

 default_0_3 = True
...
@@ -920,7 +920,8 @@ test_shared_options = theano.tensor.tests.test_sharedvar.makeSharedTester(
     theano_fct_ = theano.tensor.exp,
     ref_fct_ = numpy.exp,
     cast_value_ = cuda.as_cuda_array,
-    op_by_matrix_ = True)
+    op_by_matrix_ = True,
+    name='test_shared_options')

 # This tests the case when the shared constructor views an ndarray as input
 test_shared_options2 = theano.tensor.tests.test_sharedvar.makeSharedTester(
@@ -937,7 +938,8 @@ test_shared_options2 = theano.tensor.tests.test_sharedvar.makeSharedTester(
     theano_fct_ = theano.tensor.exp,
     ref_fct_ = numpy.exp,
     cast_value_ = numpy.asarray,
-    op_by_matrix_ = True)
+    op_by_matrix_ = True,
+    name='test_shared_options2')

 if __name__ == '__main__':
     test_many_arg_elemwise()
...
@@ -584,7 +584,9 @@ test_shared_options=theano.tensor.tests.test_sharedvar.makeSharedTester(
     test_internal_type_ = scipy.sparse.issparse,
     theano_fct_ = lambda a: dense_from_sparse(a*2.),
     ref_fct_ = lambda a: numpy.asarray((a*2).todense()),
-    cast_value_ = scipy.sparse.csr_matrix)
+    cast_value_ = scipy.sparse.csr_matrix,
+    name='test_shared_options',
+    )

 if __name__ == '__main__':
...
@@ -304,8 +304,23 @@ def rand_of_dtype(shape, dtype):
     else:
         raise TypeError()

-def makeBroadcastTester(op, expected, checks = {}, **kwargs):
-    name = str(op) + "Tester"
+def makeBroadcastTester(op, expected, checks={}, name=None, **kwargs):
+    if name is None:
+        name = str(op)
+    # Here we ensure the test name matches the name of the variable defined in
+    # this script. This is needed to properly identify the test, e.g. with the
+    # --with-id option of nosetests, or simply to rerun a specific test that
+    # failed.
+    capitalize = False
+    if name.startswith('Elemwise{') and name.endswith(',no_inplace}'):
+        # For instance: Elemwise{add,no_inplace} -> Add
+        name = name[9:-12]
+        capitalize = True
+    elif name.endswith('_inplace'):
+        # For instance: sub_inplace -> SubInplace
+        capitalize = True
+    if capitalize:
+        name = ''.join([x.capitalize() for x in name.split('_')])
+    name += "Tester"
     if kwargs.has_key('inplace'):
         if kwargs['inplace']:
             _expected = expected
@@ -504,7 +519,7 @@ if config.floatX=='float32':
     # float32.
     # This is probably caused by our way of computing the gradient error.
     div_grad_rtol=0.025
-DivTester = makeBroadcastTester(op = true_div,
+TrueDivTester = makeBroadcastTester(op = true_div,
     expected = lambda x, y: check_floatX((x, y), x / y),
     good = _good_broadcast_div_mod_normal_float,
     # integers = (randint(2, 3), randint_nonzero(2, 3)),
@@ -513,7 +528,7 @@ DivTester = makeBroadcastTester(op = true_div,
     grad = _grad_broadcast_div_mod_normal,
     grad_rtol=div_grad_rtol,
     )
-DivInplaceTester = makeBroadcastTester(op = inplace.true_div_inplace,
+TrueDivInplaceTester = makeBroadcastTester(op = inplace.true_div_inplace,
     expected = lambda x, y: x / y,
     good = _good_broadcast_div_mod_normal_float_inplace,
     grad = _grad_broadcast_div_mod_normal,
...
@@ -23,7 +23,9 @@ def makeSharedTester(shared_constructor_,
                      theano_fct_,
                      ref_fct_,
                      cast_value_ = numpy.asarray,
-                     op_by_matrix_ = False):
+                     op_by_matrix_=False,
+                     name=None,
+                     ):
     """
     This is a generic fct to allow reusing the same test function
     for many shared variables of many types.
@@ -46,7 +48,12 @@ def makeSharedTester(shared_constructor_,
     :param ref_fct_: A reference function that should return the same value as theano_fct_
     :param cast_value_: A callable that casts an ndarray into the internal shared variable representation
     :param op_by_matrix_: When we do an inplace operation on an internal type object, should we do it with a scalar or a matrix of the same value?
+    :param name: This string is used to set the returned class' __name__
+                 attribute. This is needed for nosetests to properly tag the
+                 test with its correct name, rather than use the generic
+                 SharedTester name. This parameter is mandatory (keeping the
+                 default None value will raise an error), and must be set to
+                 the name of the variable that will hold the returned class.

     :note:
         We must use /= as sparse types don't support other inplace operations.
@@ -607,7 +614,8 @@ def makeSharedTester(shared_constructor_,
         assert not x_shared.type.values_eq(x, y)
         assert not x_shared.type.values_eq_approx(x, y)

+    assert name is not None
+    SharedTester.__name__ = name
     return SharedTester
@@ -625,4 +633,5 @@ test_shared_options=makeSharedTester(
     theano_fct_ = lambda a: a*2,
     ref_fct_ = lambda a: numpy.asarray((a*2)),
     cast_value_ = numpy.asarray,
-    op_by_matrix_ = False)
+    op_by_matrix_ = False,
+    name='test_shared_options')
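The `__name__` assignment added at the end of `makeSharedTester` is what makes nosetests report a meaningful test name. The pattern can be sketched on its own (a hypothetical minimal factory, not Theano's actual code):

```python
def make_tester(value, name=None):
    # Build a fresh test class around a captured value.
    class SharedTester(object):
        def test_roundtrip(self):
            # Placeholder check standing in for the real shared-variable tests.
            assert value == value

    # Force callers to pass the variable name that will hold the class, so
    # test runners report e.g. 'test_int_options' rather than the generic
    # 'SharedTester' that every generated class would otherwise share.
    assert name is not None
    SharedTester.__name__ = name
    return SharedTester

test_int_options = make_tester(3, name='test_int_options')
```

Without the rename, every class produced by the factory collides on the same `SharedTester` name, which breaks nosetests' `--with-id` bookkeeping.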
@@ -6,28 +6,28 @@ __contact__ = "delallea@iro"

 """
-Run this script to run tests individually.
+Run this script to run tests in small batches rather than all at the same time.
+
+If no argument is provided, then the whole Theano test-suite is run.
+Otherwise, only tests found in the directory given as argument are run.

 This script performs three tasks:
 1. Run `nosetests --collect-only --with-id` to collect test IDs
-2. Run `nosetests --with-id X` for X = 1 to the total number of tests
+2. Run `nosetests --with-id i1 ... iN` with batches of N indices, until all
+   tests have been run (currently N=100)
 3. Run `nosetests --failed` to re-run only tests that failed
    => The output of this 3rd step is the one you should care about

 One reason to use this script is if you are a Windows user, and see errors like
 "Not enough storage is available to process this command" when trying to simply
-run `nosetests` in your Theano installation directory.
-By using this script, nosetests is run on each test individually, which is much
-slower but should at least let you run the test suite.
-
-Note that this script is a work-in-progress and is not fully functional at this
-point: the way some tests are defined in the Theano test-suite seems to confuse
-the nosetests' TestID module, probably leading to not running all tests, as
-well as to some unexpected test failures.
-
-You can also provide a single command-line argument, which should be an integer
-number N (default = 1), in order to run batches of N tests rather than run tests
-one at a time. It will be faster (but may fail under Windows if N is too large).
+run `nosetests` in your Theano installation directory. This error is apparently
+caused by memory fragmentation: at some point Windows runs out of contiguous
+memory to load the C modules compiled by Theano in the test-suite.
+
+By using this script, nosetests is run on a small subset (batch) of tests until
+all tests are run. Note that this is slower, in particular because of the
+initial cost of importing theano and loading the C module cache on each call of
+nosetests.
 """
...
@@ -37,35 +37,55 @@ import theano

 def main():
-    theano_install_dir = os.path.join(os.path.dirname(theano.__file__), '..')
-    os.chdir(theano_install_dir)
-    # It seems like weird things happen if we keep the same IDs file around
-    # (the number of test items in it changes from one run to another)
+    if len(sys.argv) == 1:
+        tests_dir = os.path.join(os.path.dirname(theano.__file__), '..')
+    else:
+        assert len(sys.argv) == 2
+        tests_dir = sys.argv[1]
+    assert os.path.isdir(tests_dir)
+    os.chdir(tests_dir)
+    # It seems safer to fully regenerate the list of tests on each call.
     if os.path.isfile('.noseids'):
         os.remove('.noseids')
     # Collect test IDs.
+    print """\
+####################
+# COLLECTING TESTS #
+####################"""
     assert subprocess.call(['nosetests', '--collect-only', '--with-id']) == 0
-    data = cPickle.load(
-        open(os.path.join(theano_install_dir, '.noseids'), 'rb'))
+    noseids_file = os.path.join(tests_dir, '.noseids')
+    data = cPickle.load(open(noseids_file, 'rb'))
     ids = data['ids']
     n_tests = len(ids)
+    assert n_tests == max(ids)
     # Run tests.
-    n_batch = 1
-    if len(sys.argv) >= 2:
-        n_batch = int(sys.argv[1])
-    has_error = 0
+    n_batch = 100
+    failed = []
+    print """\
+###################################
+# RUNNING TESTS IN BATCHES OF %s #
+###################################""" % n_batch
     for test_id in xrange(1, n_tests + 1, n_batch):
         test_range = range(test_id, min(test_id + n_batch, n_tests + 1))
-        rval = subprocess.call(['nosetests', '-v', '--with-id'] +
-                               map(str, test_range))
-        has_error += rval
-    if has_error:
+        # We suppress all output because we want the user to focus only on the
+        # failed tests, which are re-run (with output) below.
+        dummy_out = open(os.devnull, 'w')
+        rval = subprocess.call(['nosetests', '-q', '--with-id'] +
+                               map(str, test_range), stdout=dummy_out.fileno(),
+                               stderr=dummy_out.fileno())
+        # Recover failed test indices from the 'failed' field of the '.noseids'
+        # file. We need to do it after each batch because otherwise this field
+        # gets erased.
+        failed += cPickle.load(open(noseids_file, 'rb'))['failed']
+        print '%s%% done (failed: %s)' % ((test_range[-1] * 100) // n_tests,
+                                          len(failed))
+    if failed:
         # Re-run only failed tests
         print """\
-###########################
-# RE-RUNNING FAILED TESTS #
-###########################"""
-        subprocess.call(['nosetests', '-v', '--failed'])
+################################
+# RE-RUNNING FAILED TESTS ONLY #
+################################"""
+        subprocess.call(['nosetests', '-v', '--with-id'] + failed)
         return 0
     else:
         print """\
@@ -76,4 +96,3 @@ def main():

 if __name__ == '__main__':
     sys.exit(main())
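The batching loop in `main` can be summarized by a small helper; this is a restatement for clarity, not code from the script:

```python
def batch_ranges(n_tests, n_batch):
    """Yield the 1-based test-ID lists passed to each nosetests call.

    Mirrors the script's loop: xrange(1, n_tests + 1, n_batch) with each
    batch clipped so the last one stops at n_tests.
    """
    for start in range(1, n_tests + 1, n_batch):
        yield list(range(start, min(start + n_batch, n_tests + 1)))

# With 7 tests in batches of 3, the IDs are grouped as:
# [1, 2, 3], [4, 5, 6], [7]
```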