Commit f87940d8 authored by Frederic

Update doc about DebugMode and ProfileMode.

Parent 87aa4cf2
@@ -19,7 +19,7 @@ I wrote a new optimization, but it's not getting used...
 Remember that you have to register optimizations with the :ref:`optdb`
 for them to get used by the normal modes like FAST_COMPILE, FAST_RUN,
-and DEBUG_MODE.
+and DebugMode.
 I wrote a new optimization, and it changed my results even though I'm pretty sure it is correct.
...
@@ -168,7 +168,7 @@ not modify any of the inputs.
 TODO: EXPLAIN DESTROYMAP and VIEWMAP BETTER AND GIVE EXAMPLE.
 When developing an Op, you should run computations in DebugMode, by using
-argument ``mode='DEBUG_MODE'`` to ``theano.function``. DebugMode is
+argument ``mode='DebugMode'`` to ``theano.function``. DebugMode is
 slow, but it can catch many common violations of the Op contract.
 TODO: Like what? How? Talk about Python vs. C too.
...
@@ -289,7 +289,7 @@ Example:
 f = T.function([a,b],[c],mode='FAST_RUN')
 m = theano.Module()
-minstance = m.make(mode='DEBUG_MODE')
+minstance = m.make(mode='DebugMode')
 Whenever possible, unit tests should omit this parameter. Leaving
 out the mode will ensure that unit tests use the default mode.
@@ -306,7 +306,7 @@ type this:
 THEANO_FLAGS='mode=FAST_COMPILE' nosetests
 THEANO_FLAGS='mode=FAST_RUN' nosetests
-THEANO_FLAGS='mode=DEBUG_MODE' nosetests
+THEANO_FLAGS='mode=DebugMode' nosetests
 .. _random_value_in_tests:
...
@@ -29,7 +29,7 @@ DebugMode can be used as follows:
 x = tensor.dvector('x')
-f = theano.function([x], 10*x, mode='DEBUG_MODE')
+f = theano.function([x], 10*x, mode='DebugMode')
 f(5)
 f(0)
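The check at the heart of DebugMode can be summarised in plain Python: run the same computation through more than one implementation (for example, an Op's Python and C versions) and verify that the results agree. The sketch below is hypothetical and not Theano's actual API; the function and parameter names are invented for illustration.

```python
# Hypothetical sketch (not Theano's API): run every implementation of an
# operation on the same inputs and raise if any two results disagree.
def debug_apply(implementations, *inputs, tol=1e-8):
    """Run each implementation on the same inputs and verify they agree."""
    results = [impl(*inputs) for impl in implementations]
    reference = results[0]
    for r in results[1:]:
        if abs(r - reference) > tol:
            raise ValueError("implementations disagree: %r vs %r"
                             % (reference, r))
    return reference

# Two ways to compute 10*x, standing in for the Python and C implementations.
py_impl = lambda x: 10 * x
c_impl = lambda x: x * 10.0
print(debug_apply([py_impl, c_impl], 5))  # 50
```

The real DebugMode performs additional checks (for example on outputs' storage and on destroy/view behaviour) that this sketch omits.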
@@ -42,7 +42,7 @@ It can also be used by passing a DebugMode instance as the mode, as in
 If any problem is detected, DebugMode will raise an exception according to
 what went wrong, either at call time (``f(5)``) or compile time (
-``f = theano.function(x, 10*x, mode='DEBUG_MODE')``). These exceptions
+``f = theano.function(x, 10*x, mode='DebugMode')``). These exceptions
 should *not* be ignored; talk to your local Theano guru or email the
 users list if you cannot make the exception go away.
@@ -51,7 +51,7 @@ In the example above, there is no way to guarantee that a future call to, say,
 ``f(-1)`` won't cause a problem. DebugMode is not a silver bullet.
 If you instantiate DebugMode using the constructor ``compile.DebugMode``
-rather than the keyword ``DEBUG_MODE`` you can configure its behaviour via
+rather than the keyword ``DebugMode`` you can configure its behaviour via
 constructor arguments.
 Reference
@@ -133,7 +133,7 @@ Reference
-The keyword version of DebugMode (which you get by using ``mode='DEBUG_MODE'``)
+The keyword version of DebugMode (which you get by using ``mode='DebugMode'``)
 is quite strict, and can raise several different Exception types.
 The following are DebugMode exceptions you might encounter:
@@ -200,7 +200,7 @@ The following are DebugMode exceptions you might encounter:
 in the same order when run several times in a row. This can happen if any
 steps are ordered by ``id(object)`` somehow, such as via the default object
 hash function. A stochastic optimization invalidates the pattern of work
-whereby we debug in DEBUG_MODE and then run the full-size jobs in FAST_RUN.
+whereby we debug in DebugMode and then run the full-size jobs in FAST_RUN.
 .. class:: InvalidValueError(DebugModeError)
...
@@ -534,7 +534,7 @@ dimensions, see :meth:`_tensor_py_operators.dimshuffle`.
-.. function:: shape_padright(x,n_ones = 1)
+.. function:: shape_padright(x, n_ones=1)
 Reshape `x` by right-padding the shape with `n_ones` 1s. Note that all
 these new dimensions will be broadcastable. To make them non-broadcastable
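The behaviour described for ``shape_padright`` can be sketched with a NumPy analogue. This is illustrative only; Theano's own version operates on symbolic tensors and marks the new dimensions as broadcastable.

```python
import numpy as np

def shape_padright(x, n_ones=1):
    """NumPy sketch of the documented behaviour: append `n_ones`
    size-1 dimensions on the right of x's shape."""
    x = np.asarray(x)
    return x.reshape(x.shape + (1,) * n_ones)

a = np.zeros((3, 4))
print(shape_padright(a).shape)     # (3, 4, 1)
print(shape_padright(a, 2).shape)  # (3, 4, 1, 1)
```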
@@ -599,7 +599,7 @@ dimensions, see :meth:`_tensor_py_operators.dimshuffle`.
 Create a matrix by filling the shape of `a` with `b`
-.. function:: eye(n, m = None, k = 0, dtype=theano.config.floatX)
+.. function:: eye(n, m=None, k=0, dtype=theano.config.floatX)
 :param n: number of rows in output (value or theano scalar)
 :param m: number of columns in output (value or theano scalar)
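The documented ``eye(n, m=None, k=0, dtype=...)`` signature mirrors NumPy's, so a concrete analogue is easy to show (a plain dtype string stands in for ``theano.config.floatX``; this is NumPy, not Theano code):

```python
import numpy as np

def eye(n, m=None, k=0, dtype="float64"):
    """NumPy sketch of the documented signature: an n-by-m matrix with
    ones on the k-th diagonal and zeros elsewhere."""
    return np.eye(n, m, k, dtype=dtype)

print(eye(3))          # 3x3 identity matrix
print(eye(2, 4, k=1))  # 2x4 matrix, ones on the first superdiagonal
```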
@@ -1067,11 +1067,11 @@ Mathematical
 Returns a variable representing the exponential of a, ie e^a.
-.. function:: maximum(a,b)
+.. function:: maximum(a, b)
 Returns a variable representing the element-wise maximum of a and b
-.. function:: minimum(a,b)
+.. function:: minimum(a, b)
 Returns a variable representing the element-wise minimum of a and b
...
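The element-wise semantics of ``maximum`` and ``minimum`` match NumPy's functions of the same name, which makes a quick concrete check easy (a NumPy analogue, not Theano code):

```python
import numpy as np

# Element-wise maximum/minimum; both functions broadcast their arguments.
a = np.array([1, 5, 3])
b = np.array([4, 2, 3])
print(np.maximum(a, b))  # [4 5 3]
print(np.minimum(a, b))  # [1 2 3]
```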
@@ -102,7 +102,7 @@ You can detect those problems by running the code without this
 optimization, using the Theano flag
 ``optimizer_excluding=local_shape_to_shape_i``. You can also obtain the
 same effect by running in the modes ``FAST_COMPILE`` (it will not apply this
-optimization, nor most other optimizations) or ``DEBUG_MODE`` (it will test
+optimization, nor most other optimizations) or ``DebugMode`` (it will test
 before and after all optimizations (much slower)).
...
@@ -277,7 +277,7 @@ Tips for Improving Performance on GPU
 the GPU, *float32* tensor ``shared`` variables are stored on the GPU by default to
 eliminate transfer time for GPU ops using those variables.
 * If you aren't happy with the performance you see, try building your functions with
-``mode='PROFILE_MODE'``. This should print some timing information at program
+``mode='ProfileMode'``. This should print some timing information at program
 termination. Is time being used sensibly? If an op or Apply is
 taking more time than its share, then if you know something about GPU
 programming, have a look at how it's implemented in theano.sandbox.cuda.
...
@@ -91,15 +91,15 @@ print predict(D[0])
 # 2. Profiling
 #
 # same code as above but run with following command lines:
-# THEANO_FLAGS=mode=PROFILE_MODE,device=gpu python program_name.py
-# THEANO_FLAGS=mode=PROFILE_MODE,device=cpu python program_name.py
+# THEANO_FLAGS=mode=ProfileMode,device=gpu python program_name.py
+# THEANO_FLAGS=mode=ProfileMode,device=cpu python program_name.py
 # for GPU and CPU
 # 2.1 Profiling output for CPU computations
-$ THEANO_FLAGS=mode=PROFILE_MODE,device=cpu python program_name.py
+$ THEANO_FLAGS=mode=ProfileMode,device=cpu python program_name.py
 Used the cpu
 target values for D
 prediction on D
@@ -192,7 +192,7 @@ Test them first, as they are not guaranteed to always provide a speedup.
 # 2.2 Profiling output for GPU computations
-$ THEANO_FLAGS=mode=PROFILE_MODE,device=gpu python program_name.py
+$ THEANO_FLAGS=mode=ProfileMode,device=gpu python program_name.py
 Using gpu device 0: GeForce GTX 580
 Used the gpu
 target values for D
...