Commit 72066bc3 authored by Frédéric Bastien

Merge pull request #1592 from yosinski/master

Some doc / tutorial updates
@@ -27,7 +27,7 @@ More precisely, if *A* is a tensor you want to compute
    for i in xrange(k):
        result = result * A

There are three things here that we need to handle: the initial value
assigned to ``result``, the accumulation of results in ``result``, and
the unchanging variable ``A``. Unchanging variables are passed to scan as
``non_sequences``. Initialization occurs in ``outputs_info``, and the accumulation
@@ -67,7 +67,7 @@ Next we initialize the output as a tensor with same shape and dtype as ``A``,
filled with ones. We give ``A`` to scan as a non sequence parameter and
specify the number of steps ``k`` to iterate over our lambda expression.
Scan returns a tuple containing our result (``result``) and a
dictionary of updates (empty in this case). Note that the result
is not a matrix, but a 3D tensor containing the value of ``A**k`` for
each step. We want the last value (after ``k`` steps) so we compile
...
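The accumulation pattern this hunk describes (ones as the initial value, repeated elementwise multiplication by ``A``, one output kept per step) can be sketched in plain Python. This is a hypothetical illustration of what scan computes, not Theano code, and ``power_steps`` is an invented name:

```python
# Plain-Python sketch of the loop that scan expresses symbolically:
# start from ones (the initial value), multiply elementwise by A at
# each of the k steps, and keep every intermediate result, as scan does.
# power_steps is an invented name for illustration; this is not Theano code.
def power_steps(A, k):
    result = [1.0] * len(A)   # initial value: ones with A's shape
    steps = []                # scan's output: one entry per step
    for _ in range(k):
        result = [r * a for r, a in zip(result, A)]
        steps.append(list(result))
    return steps              # steps[-1] holds A**k elementwise
```

For example, ``power_steps([2.0, 3.0], 3)`` returns three step results ending in ``[8.0, 27.0]``; taking only the last entry mirrors compiling a function that returns the final value after ``k`` steps.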
@@ -99,6 +99,7 @@ The second step is to combine *x* and *y* into their sum *z*:
*x* and *y*. You can use the :ref:`pp <libdoc_printing>`
function to pretty-print out the computation associated to *z*.
>>> from theano import pp
>>> print pp(z)
(x + y)
...
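As a rough illustration of what a pretty-printer like ``pp`` does, here is a toy expression graph and renderer. These are not Theano's actual classes; ``Expr``, ``Var``, and ``Add`` are invented stand-ins showing the idea of walking a symbolic graph and rendering each node as text:

```python
# Toy expression graph sketch, NOT Theano's implementation.
class Expr(object):
    def __add__(self, other):
        return Add(self, other)


class Var(Expr):
    def __init__(self, name):
        self.name = name


class Add(Expr):
    def __init__(self, left, right):
        self.left = left
        self.right = right


def pp(expr):
    # Leaves render as their name; Add nodes render as "(left + right)".
    if isinstance(expr, Var):
        return expr.name
    return "(%s + %s)" % (pp(expr.left), pp(expr.right))


x, y = Var("x"), Var("y")
z = x + y
# pp(z) renders the symbolic sum as "(x + y)"
```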
@@ -125,7 +125,7 @@ as it will be useful later on.
Mode
====
Every time :func:`theano.function <function.function>` is called,
the symbolic relationships between the input and output Theano *variables*
are optimized and compiled. The way this compilation occurs
is controlled by the value of the ``mode`` parameter.
@@ -133,11 +133,11 @@ is controlled by the value of the ``mode`` parameter.
Theano defines the following modes by name:

- ``'FAST_COMPILE'``: Apply just a few graph optimizations and only use Python implementations.
- ``'FAST_RUN'``: Apply all optimizations and use C implementations where possible.
- ``'DebugMode'``: Verify the correctness of all optimizations, and compare C and Python
  implementations. This mode can take much longer than the other modes, but can identify
  several kinds of problems.
- ``'ProfileMode'``: Same optimizations as ``FAST_RUN``, but print some profiling information.
The default mode is typically ``FAST_RUN``, but it can be controlled via
the configuration variable :attr:`config.mode`,
@@ -167,7 +167,7 @@ A mode is composed of 2 things: an optimizer and a linker. Some modes,
like ``ProfileMode`` and ``DebugMode``, add logic around the optimizer and
linker. ``ProfileMode`` and ``DebugMode`` use their own linker.

You can select which linker to use with the Theano flag :attr:`config.linker`.
Here is a table to compare the different linkers.
============= ========= ================= ========= ===
...