Commit bf3880d4 authored by Eric Larsen, committed by Frederic

Correct Theano's tutorial: edits after rebase

Parent edfd9f24
@@ -37,7 +37,7 @@ Inputs and Outputs are lists of Theano variables.
.. note::
See the :ref:`dev_start_guide` for information about git, github, the
development workflow and how to make a quality contribution.
-----------
@@ -96,7 +96,7 @@ implement the :func:`perform`
and/or :func:`c_code <Op.c_code>` methods (and other related :ref:`c methods
<cop>`), or the :func:`make_thunk` method. ``perform`` allows you
to easily wrap an existing Python function into Theano. ``c_code``
and the related methods allow the op to generate C code that will be
compiled and linked by Theano. On the other hand, ``make_thunk``
will be called only once during compilation and should generate
a ``thunk``: a standalone function that when called will do the wanted computations.
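As a rough illustration of the ``perform`` contract, here is a pure-Python
sketch (it deliberately avoids importing Theano, and the class name is
illustrative): ``perform`` receives the runtime input values and writes each
result into the pre-allocated cell ``output_storage[i][0]``.

.. code-block:: python

    import numpy as np

    class DoubleOpSketch(object):
        """Illustrative stand-in for a Theano op that doubles its input."""

        def perform(self, node, inputs, output_storage):
            # `inputs` holds the runtime values; each result is written
            # into the pre-allocated storage cell for that output.
            x, = inputs
            output_storage[0][0] = x * 2

    storage = [[None]]
    DoubleOpSketch().perform(None, [np.arange(3.0)], storage)
    # storage[0][0] is now array([0., 2., 4.])

The real ``perform`` method receives an actual ``Apply`` node as its first
argument, but the storage convention is the part that matters here.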
@@ -177,7 +177,7 @@ Try it!:
print out
How To Test it
--------------
Theano has some functions to simplify testing. These help test the
``infer_shape``, ``grad`` and ``R_op`` methods. Put the following code
@@ -284,20 +284,36 @@ For instance, to verify the Rop method of the DoubleOp, you can use this:
def test_double_rop(self):
self.check_rop_lop(DoubleRop()(self.x), self.in_shape)
**Testing GPU Ops**
Ops to be executed on the GPU should inherit from ``theano.sandbox.cuda.GpuOp``
and not from ``theano.Op``. This allows Theano to distinguish them. Currently, we
use this to test if the NVIDIA driver works correctly with our sum reduction code on the
GPU.
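The mechanism is plain subclassing; a pure-Python sketch (these classes are
stand-ins for the real ``theano.Op`` and ``theano.sandbox.cuda.GpuOp``):

.. code-block:: python

    class Op(object):
        """Stand-in for theano.Op."""

    class GpuOp(Op):
        """Stand-in for theano.sandbox.cuda.GpuOp."""

    class GpuDouble(GpuOp):
        """A hypothetical op meant to run on the GPU."""

    class Double(Op):
        """The same hypothetical op, CPU version."""

    # Theano can tell GPU ops apart with a simple isinstance check.
    assert isinstance(GpuDouble(), GpuOp)
    assert not isinstance(Double(), GpuOp)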
Running Your Tests
------------------
You can run the command ``nosetests`` in the Theano folder to run all of Theano's
tests, including yours if they are somewhere in the directory structure.
The following command lines have these purposes:
* ``nosetests test_file.py``: run all the tests in the file *test_file.py*.
* ``nosetests test_file.py:test_DoubleRop``: run only the tests found inside the
  test class *test_DoubleRop*.
* ``nosetests test_file.py:test_DoubleRop.test_double_op``: run only the test *test_double_op*
in the class *test_DoubleRop*.
More documentation on ``nosetests`` is available here:
`nosetests <http://readthedocs.org/docs/nose/en/latest/>`_.
Alternatively, you can add the following block of code at the end of the test file
and run the file so as to have the test *test_DoubleRop.test_double_op* performed:
.. code-block:: python
@@ -307,17 +323,6 @@ You can also add this block the end of the test file and run the file:
t.test_double_rop()
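The pattern of such a runner block looks roughly like this (a sketch with a
dummy test body, since the tutorial's full test class is not reproduced here):

.. code-block:: python

    import unittest

    class test_DoubleRop(unittest.TestCase):
        """Dummy stand-in for the tutorial's test class."""

        def setUp(self):
            self.x = 21

        def test_double_rop(self):
            # Placeholder check standing in for the real Rop verification.
            assert self.x * 2 == 42

    if __name__ == '__main__':
        # Instantiate the test case by test name and run it directly,
        # without going through nosetests.
        t = test_DoubleRop("test_double_rop")
        t.setUp()
        t.test_double_rop()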
@@ -376,9 +381,10 @@ For more details see :ref:`random_value_in_tests`.
**A Final Note:**
Documentation
-------------
A more extensive discussion of this section's content may be found in the advanced
tutorial :ref:`Extending Theano <extending>`.
See :ref:`metadocumentation` for information on how to generate
the documentation.
@@ -24,15 +24,15 @@ internals cannot be modified.
Faster gcc optimization
-----------------------
You can enable faster gcc optimization with the ``cxxflags``. This list of flags was suggested on the mailing list::
cxxflags=-march=native -O3 -ffast-math -ftree-loop-distribution -funroll-loops -ftracer
Use it at your own risk. Some people have warned that the ``-ftree-loop-distribution`` optimization produced wrong results in the past.
Also the ``-march=native`` flag must be used with care if you have NFS. In that case, you MUST set the compiledir to a local path of the computer.
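For instance, the flags and a local compiledir can be set together in
``~/.theanorc`` (a sketch; the ``/tmp`` path is only an example):

.. code-block:: cfg

    [global]
    # Local (non-NFS) compilation cache.
    base_compiledir = /tmp/theano_compiledir

    [gcc]
    cxxflags = -march=native -O3 -ffast-math -ftree-loop-distribution -funroll-loops -ftracer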
Related Projects
----------------
We try to list other Theano-related projects in this `wiki page <https://github.com/Theano/Theano/wiki/Related-projects>`_.
@@ -5,14 +5,16 @@
Some general Remarks
=====================
.. TODO: This discussion is awkward. Even with this beneficial reordering (28 July 2012)
.. its purpose and message are for the moment unclear.
Limitations
-----------
Theano offers a good amount of flexibility, but has some limitations too.
You must answer for yourself the following question: How can my algorithm be cleverly written
so as to make the most of what Theano can do?
- *While*- or *for*-Loops within an expression graph are supported, but only via
@@ -253,7 +253,7 @@ Tips for Improving Performance on GPU
Changing the Value of Shared Variables
--------------------------------------
To change the value of a ``shared`` variable, e.g. to provide new data to processes,
use ``shared_variable.set_value(new_value)``. For a lot more detail about this,
see :ref:`aliasing`.
@@ -339,7 +339,7 @@ What can be done to further increase the speed of the GPU version?
.. Note::
* Only 32 bits floats are currently supported (development is in progress).
* ``shared`` variables with *float32* dtype are by default moved to the GPU memory space.
* There is a limit of one GPU per process.
@@ -487,7 +487,7 @@ Modify and execute to work for a matrix of shape (20, 10).
return thunk
Use this code to test it:
>>> x = theano.tensor.fmatrix()
>>> f = theano.function([x], PyCUDADoubleOp()(x))
@@ -507,7 +507,7 @@ Modify and execute to multiply two matrices: *x* * *y*.
Modify and execute to return two outputs: *x + y* and *x - y*.
(Currently, *elemwise fusion* generates computation with only 1 output.)
Modify and execute to support *strides* (i.e. so as not to constrain the input to be *C-contiguous*).
-------------------------------------------