Commit e493f9cd, authored by Mehdi Mirza, committed by memimo

cleanup ProfileMode deprecation in docs

Parent 5a0d273c
......@@ -21,13 +21,13 @@ Description
* Mathematical symbolic expression compiler
* Dynamic C/CUDA code generation
* Efficient symbolic differentiation
* Theano computes derivatives of functions with one or many inputs.
* Speed and stability optimizations
* Gives the right answer for ``log(1+x)`` even if x is really tiny.
* Works on Linux, Mac and Windows
* Transparent use of a GPU
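The ``log(1+x)`` bullet above refers to numerical stability; a plain-NumPy sketch (an analogue, not Theano code) shows why the naive formula fails for tiny ``x``:

```python
import numpy as np

x = 1e-18
# 1.0 + 1e-18 rounds to exactly 1.0 in float64, so the naive form loses x.
naive = np.log(1.0 + x)   # 0.0
# log1p evaluates log(1 + x) accurately for small x.
stable = np.log1p(x)      # 1e-18

print(naive, stable)
```

Theano's stability optimizations rewrite such expressions automatically; here NumPy's ``log1p`` plays the role of the stabilized form.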
......@@ -38,7 +38,7 @@ Description
* Extensive unit-testing and self-verification
* Detects and diagnoses many types of errors
* On CPU, common machine learning algorithms are 1.6x to 7.5x faster than competitive alternatives
* including specialized implementations in C/C++, NumPy, SciPy, and Matlab
......@@ -79,7 +79,7 @@ Exercise 1
f = theano.function([a], out) # compile function
print f([0,1,2])
# prints `array([0,2,1026])`
theano.printing.pydotprint_variables(b, outfile="f_unoptimized.png", var_with_name_simple=True)
theano.printing.pydotprint(f, outfile="f_optimized.png", var_with_name_simple=True)
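The hunk omits the definition of ``out``; the printed result is consistent with ``out = a + a ** 10`` (an assumption inferred from the output, not shown in the source). The arithmetic can be checked in plain NumPy:

```python
import numpy as np

a = np.asarray([0, 1, 2])
out = a + a ** 10        # presumed expression behind the compiled function
print(out.tolist())      # [0, 2, 1026]
```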
......@@ -101,12 +101,12 @@ Real example
import numpy
import theano
import theano.tensor as T
rng = numpy.random
N = 400
feats = 784
D = (rng.randn(N, feats), rng.randint(size=N,low=0, high=2))
training_steps = 10000
# Declare Theano symbolic variables
x = T.matrix("x")
y = T.vector("y")
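The variables above set up a logistic-regression example (the rest of the block lies outside this hunk). Its forward pass can be sketched in plain NumPy, with hypothetical weights ``w`` and bias ``b`` standing in for the trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
N, feats = 400, 784
x = rng.standard_normal((N, feats))
w = rng.standard_normal(feats) / np.sqrt(feats)  # hypothetical weights
b = 0.0                                          # hypothetical bias

p = 1.0 / (1.0 + np.exp(-x.dot(w) - b))  # P(y == 1 | x), logistic sigmoid
pred = p > 0.5                           # class decision
print(pred.shape)                        # (400,)
```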
......@@ -176,7 +176,7 @@ Theano flags
Theano can be configured with flags. They can be defined in two ways:
-* With an environment variable: ``THEANO_FLAGS="mode=ProfileMode,ProfileMode.profile_memory=True"``
+* With an environment variable: ``THEANO_FLAGS="profile=True,profile_memory=True"``
* With a configuration file that defaults to ``~/.theanorc``
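Equivalently, the two profiling flags above could be placed in ``~/.theanorc`` (a sketch of the config-file form; option names taken from the flag string):

```ini
[global]
profile = True
profile_memory = True
```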
......@@ -185,7 +185,7 @@ Exercise 2
-----------
.. code-block:: python
import numpy
import theano
import theano.tensor as T
......@@ -268,7 +268,7 @@ GPU
* Only 32-bit floats are supported (being worked on)
* Only 1 GPU per process
* Use the Theano flag ``device=gpu`` to tell Theano to use the GPU device
* Use ``device=gpu{0, 1, ...}`` to specify which GPU if you have more than one
* Shared variables with float32 dtype are by default moved to the GPU memory space
......@@ -277,7 +277,7 @@ GPU
* Be sure to use ``floatX`` (``theano.config.floatX``) in your code
* Cast inputs before putting them into a shared variable
* Casting "problem": int32 combined with float32 upcasts to float64
* A new casting mechanism is being developed
* Insert a manual cast in your code or use [u]int{8,16}
* Insert a manual cast around the mean operator (which involves a division by the length, which is an int64!)
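The upcast described in these bullets follows standard type-promotion rules; a NumPy analogue (not Theano code) of both the problem and the manual-cast workaround:

```python
import numpy as np

i32 = np.array([1, 2], dtype=np.int32)
f32 = np.array([1.0, 2.0], dtype=np.float32)

print((i32 + f32).dtype)                     # float64: the unwanted upcast
print((i32.astype(np.float32) + f32).dtype)  # float32: manual cast fixes it

# [u]int{8,16} fit losslessly in float32, so they do not force an upcast.
i16 = np.array([1, 2], dtype=np.int16)
print((i16 + f32).dtype)                     # float32
```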
......@@ -295,7 +295,7 @@ Symbolic variables
------------------
* Number of dimensions
* T.scalar, T.vector, T.matrix, T.tensor3, T.tensor4
* Dtype
......@@ -322,7 +322,7 @@ Creating symbolic variables: Broadcastability
Details regarding symbolic broadcasting...
* Broadcastability must be specified when creating the variable
* The only shortcuts with broadcastable dimensions are **T.row** and **T.col**
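The **T.row** and **T.col** shortcuts declare the shapes ``(1, n)`` and ``(n, 1)`` broadcastable; the effect corresponds to ordinary NumPy broadcasting:

```python
import numpy as np

row = np.arange(3).reshape(1, 3)   # analogue of T.row: shape (1, n)
col = np.arange(2).reshape(2, 1)   # analogue of T.col: shape (n, 1)

# Each size-1 dimension is stretched to match the other operand.
print((row + col).shape)           # (2, 3)
```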
......@@ -358,7 +358,7 @@ Benchmarks
.. image:: ../hpcs2011_tutorial/pics/mlp.png
**Convolutional Network**:
256x256 images convolved with 6 7x7 filters,
downsampled to 6x50x50, tanh, convolution with 16 6x7x7 filters, elementwise
......
......@@ -104,7 +104,7 @@ Exercise 5
-----------
- In the last exercises, do you see a speed up with the GPU?
-- Where does it come from? (Use ProfileMode)
+- Where does it come from? (Use profile=True)
- Is there something we can do to speed up the GPU version?
......
......@@ -21,13 +21,13 @@ Description
* Mathematical symbolic expression compiler
* Dynamic C/CUDA code generation
* Efficient symbolic differentiation
* Theano computes derivatives of functions with one or many inputs.
* Speed and stability optimizations
* Gives the right answer for ``log(1+x)`` even if x is really tiny.
* Works on Linux, Mac and Windows
* Transparent use of a GPU
......@@ -38,7 +38,7 @@ Description
* Extensive unit-testing and self-verification
* Detects and diagnoses many types of errors
* On CPU, common machine learning algorithms are 1.6x to 7.5x faster than competitive alternatives
* including specialized implementations in C/C++, NumPy, SciPy, and Matlab
......@@ -76,7 +76,7 @@ Exercise 1
f = theano.function([a], out) # compile function
print f([0, 1, 2])
# prints `array([0, 2, 1026])`
theano.printing.pydotprint_variables(b, outfile="f_unoptimized.png", var_with_name_simple=True)
theano.printing.pydotprint(f, outfile="f_optimized.png", var_with_name_simple=True)
......@@ -133,7 +133,7 @@ Theano flags
Theano can be configured with flags. They can be defined in two ways:
-* With an environment variable: ``THEANO_FLAGS="mode=ProfileMode,ProfileMode.profile_memory=True"``
+* With an environment variable: ``THEANO_FLAGS="profile=True,profile_memory=True"``
* With a configuration file that defaults to ``~/.theanorc``
......@@ -142,7 +142,7 @@ Exercise 2
-----------
.. code-block:: python
import numpy
import theano
import theano.tensor as tt
......@@ -225,7 +225,7 @@ GPU
* Only 32-bit floats are supported (being worked on)
* Only 1 GPU per process. See the wiki page on using multiple processes for multiple GPUs.
* Use the Theano flag ``device=gpu`` to tell Theano to use the GPU device
* Use ``device=gpu{0, 1, ...}`` to specify which GPU if you have more than one
* Shared variables with float32 dtype are by default moved to the GPU memory space
......@@ -234,7 +234,7 @@ GPU
* Be sure to use ``floatX`` (``theano.config.floatX``) in your code
* Cast inputs before putting them into a shared variable
* Casting "problem": int32 combined with float32 upcasts to float64
* Insert a manual cast in your code or use [u]int{8,16}
* The mean operator is being worked on so that its output stays in float32.
......@@ -256,7 +256,7 @@ Symbolic variables
------------------
* Number of dimensions
* tt.scalar, tt.vector, tt.matrix, tt.tensor3, tt.tensor4
* Dtype
......@@ -283,7 +283,7 @@ Creating symbolic variables: Broadcastability
Details regarding symbolic broadcasting...
* Broadcastability must be specified when creating the variable
* The only shortcuts with broadcastable dimensions are **tt.row** and **tt.col**
......
......@@ -23,7 +23,7 @@ Theano defines the following modes by name:
- ``'DebugMode'``: A mode for debugging. See :ref:`DebugMode <debugmode>` for details.
- ``'ProfileMode'``: Deprecated, use the Theano flag :attr:`config.profile`.
- ``'DEBUG_MODE'``: Deprecated. Use the string DebugMode.
-- ``'PROFILE_MODE'``: Deprecated. Use the string ProfileMode.
+- ``'PROFILE_MODE'``: Deprecated, use the Theano flag :attr:`config.profile`.
The default mode is typically ``FAST_RUN``, but it can be controlled via the
configuration variable :attr:`config.mode`, which can be
......@@ -70,4 +70,3 @@ Reference
Return a new Mode instance like this one, but with an
optimizer modified by requiring the given tags.
......@@ -17,7 +17,7 @@ You can profile your
functions using either of the following two options:
1. Use Theano flag :attr:`config.profile` to enable profiling.
- To enable the memory profiler use the Theano flag:
:attr:`config.profile_memory` in addition to :attr:`config.profile`.
- Moreover, to enable the profiling of Theano optimization phase,
......@@ -30,8 +30,8 @@ functions using either of the following two options:
2. Pass the argument :attr:`profile=True` to the function :func:`theano.function <function.function>`. And then call :attr:`f.profile.print_summary()` for a single function.
- Use this option when you want to profile not all the
functions but one or more specific function(s).
- You can also combine the profile of many functions:
.. testcode::
profile = theano.compile.ProfileStats()
......@@ -68,6 +68,15 @@ compare equal, if their parameters differ (the scalar being
executed). So the class section will merge more Apply nodes than the
Ops section.
Note that the profile also shows which Ops were running a C implementation.
Developers wishing to optimize the performance of their graph should
focus on the worst-offending Ops and Apply nodes: either by optimizing
an implementation, providing a missing C implementation, or by writing
a graph optimization that eliminates the offending Op altogether.
You should strongly consider emailing one of our lists about your
issue before spending too much time on this.
Here is an example output when we disable some Theano optimizations to
give you a better idea of the difference between sections. With all
optimizations enabled, there would be only one op left in the graph.
......
......@@ -213,8 +213,8 @@ Tips for Improving Performance on GPU
frequently-accessed data (see :func:`shared()<shared.shared>`). When using
the GPU, *float32* tensor ``shared`` variables are stored on the GPU by default to
eliminate transfer time for GPU ops using those variables.
-* If you aren't happy with the performance you see, try building your functions with
-  ``mode='ProfileMode'``. This should print some timing information at program
+* If you aren't happy with the performance you see, try running your script with
+  the ``profile=True`` flag. This should print some timing information at program
termination. Is time being used sensibly? If an op or Apply is
taking more time than its share, then if you know something about GPU
programming, have a look at how it's implemented in theano.sandbox.cuda.
......@@ -339,7 +339,7 @@ to the exercise in section :ref:`Configuration Settings and Compiling Mode<using
Is there an increase in speed from CPU to GPU?
-Where does it come from? (Use ``ProfileMode``)
+Where does it come from? (Use the ``profile=True`` flag.)
What can be done to further increase the speed of the GPU version? Put your ideas to test.
......