Commit 0dedcd95 authored by James Bergstra

merge

...@@ -14,28 +14,31 @@ Guide
=====

The DebugMode evaluation mode includes a number of self-checks and assertions
that can help to diagnose several kinds of programmer errors that can lead to
incorrect output.

It is much slower to evaluate a function or method with DebugMode than
it would be in ``'FAST_RUN'`` or even ``'FAST_COMPILE'``. We recommend you use
DebugMode during development, but not when you launch 1000 processes on
a cluster.

DebugMode can be used as follows:

.. code-block:: python

    x = tensor.dvector('x')
    f = theano.function([x], 10*x, mode='DEBUG_MODE')
    f(5)
    f(0)
    f(7)

It can also be used by setting an environment variable ``THEANO_DEFAULT_MODE=DEBUG_MODE``.

It can also be used by passing a DebugMode instance as the mode, as in

>>> f = theano.function([x], 10*x, mode=DebugMode(check_c_code=False))
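The heart of these self-checks is cross-checking independent implementations of the same computation. The following standalone sketch (plain NumPy, not Theano's actual internals) shows the kind of comparison DebugMode makes between an Op's ``perform`` (Python) and ``c_code`` (C) outputs:

```python
import numpy as np

def cross_check(py_impl, c_impl, inputs, atol=1e-8):
    # Run both implementations of the same Op on identical inputs and
    # insist that the results agree, the way DebugMode cross-checks
    # ``perform`` against ``c_code``.
    out_py = py_impl(*inputs)
    out_c = c_impl(*inputs)
    if not np.allclose(out_py, out_c, atol=atol):
        raise AssertionError("implementations disagree: %r vs %r"
                             % (out_py, out_c))
    return out_py

# Two ways of computing 10*x agree, so the check passes silently.
result = cross_check(lambda x: 10 * x,
                     lambda x: np.multiply(x, 10.0),
                     [np.array([5.0, 0.0, 7.0])])
```

When the two implementations disagree beyond the tolerance, DebugMode raises an exception (``BadCLinkerOutput``, described below) instead of silently returning either answer.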
If any problem is detected, DebugMode will raise an exception according to
what went wrong, either at call time (``f(5)``) or compile time (
...@@ -79,11 +82,7 @@ Reference
If there are internal errors, this mode will raise a `DebugModeError` exception.
.. attribute:: stability_patience = config.THEANO_DEBUGMODE_PATIENCE

    When checking for the stability of optimization, recompile the graph this many times.
    Default 10.
...@@ -136,7 +135,7 @@ is quite strict, and can raise several different Exception types.

The following are DebugMode exceptions you might encounter:
.. class:: DebugModeError(Exception)

    This is a generic error. All the other exceptions inherit from this one.
    This error is typically not raised directly.
...@@ -145,7 +144,7 @@ The following are DebugMode exceptions you might encounter:

.. class:: BadCLinkerOutput(DebugModeError)

    This exception means that Python (``perform``) and C (``c_code``) for an Op
    didn't compute the same thing as they were supposed to.
...@@ -153,7 +152,7 @@ The following are DebugMode exceptions you might encounter:

.. class:: BadOptimization(DebugModeError)

    This exception indicates that an Optimization replaced one variable (say V1)
    with another one (say V2) but at runtime, the values for V1 and V2 were
...@@ -167,7 +166,7 @@ The following are DebugMode exceptions you might encounter:

.. class:: BadDestroyMap(DebugModeError)

    This happens when an Op's ``perform()`` or ``c_code()`` modifies an input that it wasn't
    supposed to. If either the ``perform`` or ``c_code`` implementation of an Op
...@@ -177,7 +176,7 @@ The following are DebugMode exceptions you might encounter:

    For detailed documentation on the ``destroy_map`` attribute, see :ref:`inplace`.
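The check behind ``BadDestroyMap`` can be sketched in plain NumPy (a simplified illustration, not Theano's actual implementation): snapshot every input before calling an op, and compare afterwards.

```python
import numpy as np

def check_no_undeclared_destroy(op, inputs):
    # Snapshot the inputs, run the op, then verify no input was
    # modified in place -- roughly what DebugMode does for ops that
    # declare no destroy_map.
    snapshots = [np.copy(a) for a in inputs]
    out = op(*inputs)
    for before, after in zip(snapshots, inputs):
        if not np.array_equal(before, after):
            raise AssertionError("op modified an input it never "
                                 "declared as destroyed")
    return out

a = np.ones(3)
ok = check_no_undeclared_destroy(lambda x: x + 1, [a])   # passes: input untouched

def sneaky(x):
    x += 1            # in-place modification, never declared
    return x

caught = False
try:
    check_no_undeclared_destroy(sneaky, [np.ones(3)])
except AssertionError:
    caught = True     # DebugMode would raise BadDestroyMap here
```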
.. class:: BadViewMap(DebugModeError)

    This happens when an Op's perform() or c_code() creates an alias or alias-like
    dependency between an input and an output... and it didn't warn the
...@@ -186,7 +185,7 @@ The following are DebugMode exceptions you might encounter:

    For detailed documentation on the ``view_map`` attribute, see :ref:`views`.

.. class:: StochasticOrder(DebugModeError)

    This happens when an optimization does not perform the same graph operations
    in the same order when run several times in a row. This can happen if any
...@@ -195,7 +194,7 @@ The following are DebugMode exceptions you might encounter:

    whereby we debug in DEBUG_MODE and then run the full-size jobs in FAST_RUN.
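Where such stochasticity comes from can be shown with a tiny pure-Python example (hypothetical node names, not Theano code): iterating over an unordered container gives no guaranteed order, so a graph rewriter that does this can visit nodes differently from run to run; sorting restores determinism.

```python
# Hypothetical op names standing in for nodes an optimizer must visit.
nodes = {'add', 'mul', 'neg', 'exp'}

# list(nodes) depends on set iteration order, which is not guaranteed
# to be stable across interpreter runs (string hashing is randomized).
unstable_order = list(nodes)

# Sorting by a fixed key gives a deterministic visiting order,
# removing the run-to-run stochasticity.
stable_order = sorted(nodes)
```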
.. class:: InvalidValueError(DebugModeError)

    This happens when some Op's ``perform`` or ``c_code`` implementation computes
    an output that is invalid with respect to the type of the corresponding output
......
...@@ -104,22 +104,22 @@ The first argument you pass is the `dtype` and the second is the

Where `dtype` is one of (a complete list of supported dtypes):
================= =================== =================
dtype             domain              bits
================= =================== =================
``'int8'``        signed integer      8
``'int16'``       signed integer      16
``'int32'``       signed integer      32
``'int64'``       signed integer      64
``'uint8'``       unsigned integer    8
``'uint16'``      unsigned integer    16
``'uint32'``      unsigned integer    32
``'uint64'``      unsigned integer    64
``'float32'``     floating point      32
``'float64'``     floating point      64
``'complex64'``   complex             64 (two float32)
``'complex128'``  complex             128 (two float64)
================= =================== =================
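These dtype strings follow the NumPy naming convention, so (as a quick sanity check in NumPy rather than Theano) the bit widths in the table match the corresponding NumPy itemsizes:

```python
import numpy as np

# Bit widths from the table above, checked against NumPy's itemsize,
# which is reported in bytes.
table = {'int8': 8, 'int16': 16, 'int32': 32, 'int64': 64,
         'uint8': 8, 'uint16': 16, 'uint32': 32, 'uint64': 64,
         'float32': 32, 'float64': 64,
         'complex64': 64, 'complex128': 128}
for name, bits in table.items():
    assert np.dtype(name).itemsize * 8 == bits
```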
The broadcastable pattern indicates both the number of dimensions and
......
...@@ -23,7 +23,7 @@ Theano provides a 'Print' Op to do this.

    x = theano.tensor.dvector('x')
    x_printed = theano.printing.Print('this is a very important value')(x)
    f = theano.function([x], x * 5)
    f_with_print = theano.function([x], x_printed * 5)
...@@ -75,15 +75,25 @@ Check out this one:

    super(PrintEverythingMode, self).__init__(wrap_linker, optimizer='fast_run')

When you use ``mode=PrintEverythingMode()`` as the mode for Function or Method,
then you should see (potentially a lot of) output. Every Apply node will be printed out,
along with its position in the graph, the arguments to the ``perform`` or
``c_code`` and the output it computed.
>>> x = T.dscalar('x')
>>> f = function([x], [5*x], mode=PrintEverythingMode())
>>> f(3)
>>> # print: 0 Elemwise{mul,no_inplace}(5, x) [array(5, dtype=int8), array(3.0)] [array(15.0)]
>>> # print: [array(15.0)]
Admittedly, this may be a huge amount of
output to read through if you are using big tensors... but you can choose to
put logic inside of the print_eval function that would, for example, only
print something out if a certain kind of Op was used, at a certain program
position, or if a particular value shows up in one of the inputs or outputs.
Use your imagination :)
.. TODO: documentation for link.WrapLinkerMany

This can be a really powerful debugging tool.
Note the call to ``fn`` inside the call to ``print_eval``; without it, the graph wouldn't get computed at all!
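The essential shape of such a wrapper can be shown in plain Python (a sketch with made-up node descriptions, not the real WrapLinker API): the wrapper prints its bookkeeping and then must call ``fn`` to actually perform the computation.

```python
def print_eval(i, node_desc, fn):
    # Print the node's position and description, then evaluate it.
    # Forgetting to call fn() would print the trace but compute nothing.
    print(i, node_desc)
    return fn()

# Two stand-in "Apply nodes" of a tiny graph computing (5*3) + 1.
out0 = print_eval(0, "mul(5, x)", lambda: 5 * 3)
out1 = print_eval(1, "add(tmp, 1)", lambda: out0 + 1)
```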
...@@ -274,9 +274,10 @@ shared variable, but you do *not* want to use its value. In this case, you can u

for the purpose of one particular function.

>>> fn_of_state = state * 2 + inc
>>> foo = lscalar() # the type (lscalar) must match the shared variable we
>>> # are replacing with the ``givens`` list
>>> skip_shared = function([inc, foo], fn_of_state,
...                        givens=[(state, foo)])
>>> skip_shared(1, 3) # we're using 3 for the state, not state.value
array(7)
>>> state.value # old state still there, but we didn't use it
......
...@@ -15,9 +15,9 @@ is controlled by the value of the ``mode`` parameter.

Theano defines the following modes by name:

- ``'FAST_COMPILE'``: Apply just a few graph optimizations, but use C implementations where possible.
- ``'FAST_RUN'``: Apply all optimizations, and use C implementations where possible.
- ``'DEBUG_MODE'``: Verify the correctness of all optimizations, and compare C and Python
  implementations. This mode can take much longer than the other modes,
  but can identify many kinds of problems.
...@@ -43,24 +43,24 @@ DEBUG_MODE ``compile.debugmode.DebugMode()``

Using DebugMode
===============

While normally you should use the ``FAST_RUN`` or ``FAST_COMPILE`` mode,
it is useful at first (especially when you are defining new kinds of
expressions or new optimizations) to run your code using the DebugMode
(available via ``mode='DEBUG_MODE'``). The DebugMode is designed to
do several self-checks and assertions that can help to diagnose
possible programming errors that can lead to incorrect output. Note that
``DEBUG_MODE`` is much slower than ``FAST_RUN`` or ``FAST_COMPILE``, so
use it only during development (not when you launch 1000 processes on a
cluster!).
DebugMode is used as follows:

.. code-block:: python

    x = T.dvector('x')
    f = theano.function([x], 10*x, mode='DEBUG_MODE')
    f(5)
    f(0)
...@@ -68,7 +68,7 @@ DebugMode is used as follows:

If any problem is detected, DebugMode will raise an exception according to
what went wrong, either at call time (``f(5)``) or compile time (
``f = theano.function([x], 10*x, mode='DEBUG_MODE')``). These exceptions
should *not* be ignored; talk to your local Theano guru or email the
users list if you cannot make the exception go away.
...@@ -77,13 +77,11 @@ Some kinds of errors can only be detected for certain input value combinations.

In the example above, there is no way to guarantee that a future call to, say,
``f(-1)`` won't cause a problem. DebugMode is not a silver bullet.

If you instantiate DebugMode using the constructor (see :class:`DebugMode`)
rather than the keyword ``DEBUG_MODE`` you can configure its behaviour via
constructor arguments. See :ref:`DebugMode <debugMode>` for details.

The keyword version of DebugMode (which you get by using ``mode='DEBUG_MODE'``)
is quite strict.
.. _using_profilemode:
......
...@@ -50,7 +50,7 @@ Broadcasting

Numpy does *broadcasting* of arrays of different shapes during
arithmetic operations. What this means in general is that the smaller
array (or scalar) is *broadcasted* across the larger array so that they have
compatible shapes. The example below shows an instance of
*broadcasting*:

...@@ -59,7 +59,7 @@ compatible shapes. The example below shows an instance of

>>> a * b
array([2., 4., 6.])

The smaller array ``b`` (actually a scalar here, which works like a 0-d array) is *broadcasted* to the same size
as ``a`` during the multiplication. This trick is often useful in
simplifying how expressions are written. More details about *broadcasting*
can be found in the `numpy user guide <http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html>`__.
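Since Theano mirrors NumPy's broadcasting semantics, the behaviour above can be reproduced directly in NumPy; the second half of this sketch also shows broadcasting between arrays of different rank:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = 2.0                 # a Python scalar acts like a 0-d array
c = a * b               # b is broadcast across a

# Broadcasting also aligns trailing dimensions of higher-rank arrays:
m = np.ones((2, 3))
row = np.array([10.0, 20.0, 30.0])
d = m + row             # row is broadcast across both rows of m
```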