evaluate mathematical expressions involving multi-dimensional
arrays efficiently. Theano features:
* **tight integration with numpy** -- Use `numpy.ndarray` in Theano-compiled functions.
* **near-transparent use of a GPU** -- Accelerate data-intensive calculations [JAN 2010].
* **symbolic differentiation** -- Let Theano do your derivatives.
* **speed and stability optimizations** -- Write ``log(1+exp(x))`` and get the right answer.
* **dynamic C code generation** -- Evaluate expressions faster.
Theano has been powering large-scale computationally intensive scientific investigations
since 2007. But it is also approachable enough to be used in the classroom
(IFT6266 at the University of Montreal).
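The stability optimization above is easiest to appreciate with a plain-numpy sketch (no Theano required): evaluated naively, ``log(1+exp(x))`` overflows for large ``x``, while an algebraically equivalent rewrite, of the kind an optimizing expression compiler can substitute, does not.

```python
import numpy as np

x = np.float64(800.0)

with np.errstate(over='ignore'):
    # naive evaluation: exp(800) overflows to inf, so the log is inf
    naive = np.log(1.0 + np.exp(x))

# equivalent "softplus" rewrite for x > 0: log(1+exp(x)) = x + log1p(exp(-x))
stable = x + np.log1p(np.exp(-x))

print(naive)   # inf
print(stable)  # 800.0
```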
Download
========

We recommend the latest development version, available via::

    hg clone http://hg.assembla.com/theano Theano
For installation and configuration, see :ref:`installing Theano <install>`.
Documentation
=============

Roughly in order of what you'll want to check out:

* :ref:`introduction` -- What is Theano?
* :ref:`tutorial` -- Learn the basics.
* :ref:`libdoc` -- All Theano's functionality, module by module.
* :ref:`extending` -- Learn to add a Type, Op, or graph optimization.
* :ref:`internal` -- How to maintain Theano, LISA-specific tips, and more...

You can download the latest `PDF documentation <http://pylearn.org/theano/theano.pdf>`_, rather than reading it online.
Community
=========

* Register and post to `theano-users`_ if you want to talk to all Theano users.
* Register and post to `theano-dev`_ if you want to talk to the developers.
* We try to stay organized with `Theano's Trac <trac/>`__.
* Come visit us in Montreal! Most of the developers are students in the LISA_ group at the `University of Montreal`_.
.. toctree::
   :maxdepth: 1
   LICENSE
.. _theano-dev: http://groups.google.com/group/theano-dev
.. _internal:

======================
Internal Documentation
======================
If you're feeling ambitious, go fix some `pylint
.. _libdoc_compile:

==============================================================
:mod:`compile` -- Transforming Expression Graphs to Functions
==============================================================

.. module:: compile
.. _libdoc_config:

=======================================
:mod:`config` -- Theano Configuration
=======================================

.. module:: config
   :platform: Unix, Windows
   :synopsis: Library configuration attributes.
.. moduleauthor:: LISA
Guide
=====
The config module contains many attributes that modify Theano's behavior. Many of these
attributes are consulted during the import of the ``theano`` module and many are assumed to be
read-only.
*As a rule, the attributes in this module should not be modified by user code.*
Reference
=========
.. envvar:: THEANO_FLAGS

   This is a list of comma-delimited key=value pairs that control Theano's behavior.
   For example, in bash, you can type:

   .. code-block:: bash

       THEANO_FLAGS=floatX=float32 python <myscript>.py

   which will cause Theano to run the script with ``floatX`` set to 'float32'.
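As a minimal sketch of how a comma-delimited key=value string like :envvar:`THEANO_FLAGS` can be parsed (the ``parse_flags`` helper is hypothetical, for illustration only; Theano's real parser lives inside its config machinery and may differ):

```python
def parse_flags(flags):
    """Turn a string like 'key1=val1,key2=val2' into a dict.

    Hypothetical helper, not Theano's actual parser.
    """
    result = {}
    for pair in flags.split(','):
        if not pair:
            continue  # tolerate empty segments such as trailing commas
        key, _, value = pair.partition('=')
        result[key.strip()] = value.strip()
    return result

print(parse_flags("floatX=float32,device=cpu"))
# {'floatX': 'float32', 'device': 'cpu'}
```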
.. attribute:: floatX

   String value: either 'float32' or 'float64'.

   Default: 'float64'

   Set with the ``floatX`` key in :envvar:`THEANO_FLAGS`.
.. attribute:: THEANO_DEFAULT_MODE

   String value: any valid mode string (see :func:`theano.function`).

   Default: 'FAST_RUN'

   Set with the 'THEANO_DEFAULT_MODE' environment variable.
.. _libdoc_floatX:

=======================================================================
:mod:`floatX` -- Switching Between 'float32' and 'float64'
=======================================================================

.. module:: floatX
   :platform: Unix, Windows
   :synopsis: easy switching between float32 and float64
.. moduleauthor:: LISA
Guide
=====
On the CPU, 'float32' computations are often twice as fast as 'float64'
and are half the size.
On GPUs, the speed difference between 'float32' and 'float64' is much greater.

Often we develop our code using double-precision expressions, and then wonder if
we might get the same answer much more quickly with single-precision arithmetic.
If we have used ``tensor.dmatrix`` and ``tensor.dvector`` and so on throughout
our code, it could be tedious to switch to single-precision Variables. To make
switching precisions easier, Theano provides the ``floatX`` module.

>>> from theano.floatX import xmatrix, xvector, xtensor4
>>> import numpy
>>> a = xvector('a')
>>> b = xmatrix()
>>> c = xtensor4()

These calls are identical to ``dvector``, ``dmatrix``, and ``dtensor4`` by default, but a
single environment variable can switch them to ``fvector``, ``fmatrix`` and ``ftensor4``.

You can set the floatX precision via ``floatX`` in the :envvar:`THEANO_FLAGS`.
It defaults to ``'float64'``. To set it to ``'float32'`` in *bash*, for example, type
``export THEANO_FLAGS=floatX=float32``.
To set it from within your program, call :func:`set_floatX`.

The current floatX precision is stored in ``theano.config.floatX`` as a string.
Its value is either 'float32' or 'float64'.
So it is easy to allocate a numpy vector of the floatX dtype:

>>> import theano.config as config
>>> print config.floatX  # either 'float32' or 'float64'
>>> x = numpy.asarray([1, 2, 3], dtype=config.floatX)
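The ``numpy.asarray`` call used here copies only when necessary: if the input is already an ndarray of the requested dtype, the same object is returned. A plain-numpy sketch (a literal dtype string stands in for ``theano.config.floatX``):

```python
import numpy as np

floatX = 'float32'  # stands in for theano.config.floatX

x = np.asarray([1, 2, 3], dtype=floatX)  # int list -> new float32 array
y = np.asarray(x, dtype=floatX)          # already float32: returned as-is

print(x.dtype)                           # float32
print(y is x)                            # True: no copy was made
print(np.dtype('float32').itemsize,      # 4 bytes ...
      np.dtype('float64').itemsize)      # ... vs 8: half the size
```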
Reference
=========
.. function:: xvector(name=None)

   Alias for either :func:`dvector` or :func:`fvector`

.. function:: xmatrix(name=None)

   Alias for either :func:`dmatrix` or :func:`fmatrix`

.. function:: xrow(name=None)

   Alias for either :func:`drow` or :func:`frow`

.. function:: xcol(name=None)

   Alias for either :func:`dcol` or :func:`fcol`

.. function:: xtensor3(name=None)

   Alias for either :func:`dtensor3` or :func:`ftensor3`

.. function:: xtensor4(name=None)

   Alias for either :func:`dtensor4` or :func:`ftensor4`

.. function:: set_floatX(dtype=config.floatX)

   Reset the :func:`xscalar`, ... :func:`xtensor4` aliases to return Variables with given dtype.
   This is called at import-time when setting floatX in :envvar:`THEANO_FLAGS`.
.. _libdoc_gof:

================================================
:mod:`gof` -- Theano Internals [doc TODO]
================================================
.. _libdoc_gradient:

===========================================
:mod:`gradient` -- Symbolic Differentiation
===========================================

.. module:: gradient
.. _libdoc_printing:

===============================================================
:mod:`printing` -- Graph Printing and Symbolic Print Statement
===============================================================

.. module:: printing
   :platform: Unix, Windows
   :synopsis: Provides the Print Op and graph-printing routines.
.. moduleauthor:: LISA
Guide
======
Intermediate values in a computation cannot be printed in
the normal python way with the print statement, because Theano has no *statements*.
Instead there is the `Print` Op.
>>> import theano.tensor as T
>>> from theano import function
>>> from theano.printing import Print
>>> x = T.dvector()
>>> hello_world_op = Print('hello world')
>>> printed_x = hello_world_op(x)
>>> f = function([x], printed_x)
>>> f([1, 2, 3])
>>> # output: "hello world __str__ = [ 1.  2.  3.]"
If you print more than one thing in a function like `f`, they will not
necessarily be printed in the order that you think. The order might even depend
on which graph optimizations are applied. Strictly speaking, the order of
printing is not completely defined by the interface --
the only hard rule is that if the input of some print output `a` is
ultimately used as an input to some other print input `b` (so that `b` depends on `a`),
then `a` will print before `b`.
Reference
==========
.. class:: Print(Op)

   This identity-like Op has the side effect of printing a message followed by its inputs
   when it runs. Default behaviour is to print the ``__str__`` representation. Optionally, one
   can pass a list of the input member functions to execute, or attributes to print.
   .. method:: __init__(message="", attrs=("__str__",))

      :type message: string
      :param message: prepend this to the output
      :type attrs: list of strings
      :param attrs: list of input node attributes or member functions to print.
          Functions are identified through ``callable()``, executed, and their
          return value printed.
   .. method:: __call__(x)

      :type x: a :class:`Variable`
      :param x: any symbolic variable
      :returns: symbolic identity(x)

      When you use the return-value from this function in a theano function,
      running the function will print the value that `x` takes in the graph.
.. _libdoc_scalar:

==============================================================
:mod:`scalar` -- Symbolic Scalar Types, Ops [doc TODO]
==============================================================
.. _libdoc_sparse:

===========================================================
:mod:`sparse` -- Symbolic Sparse Matrices [doc TODO]
===========================================================
.. _libdoc_tensor_elementwise:
.. note::

   Index-assignment is *not* supported.
   If you want to do something like ``a[5] = b`` or ``a[5] += b``, see :func:`setsubtensor`.
Operator Support
================
Python arithmetic operators are supported:
>>> a = T.itensor3()
>>> a + 3 # T.add(a, 3) -> itensor3
>>> 3 - a # T.sub(3, a)
>>> a * 3.5 # T.mul(a, 3.5) -> ftensor3 or dtensor3 (depending on autocasting)
>>> 2.2 / a # T.truediv(2.2, a)
>>> 2.2 // a # T.intdiv(2.2, a)
>>> 2.2**a # T.pow(2.2, a)
.. note::

   In-place operators are *not* supported. Theano's graph-optimizations
   will determine which intermediate values to use for in-place
   computations. If you would like to update the value of a
   :term:`shared variable`, consider using the ``updates`` argument to
   :func:`theano.function`.
Elementwise
===========
Casting
-------
Comparisons
-----------
.. note::

   Theano has no boolean dtype. Instead, all boolean tensors are represented
   in ``'int8'``.
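As a plain-numpy sketch of this convention (numpy comparisons actually return a boolean dtype, so the explicit cast below only mirrors Theano's ``'int8'`` representation):

```python
import numpy as np

a = np.array([1, 5, 3])
b = np.array([4, 2, 3])

# elementwise comparisons, stored as int8 the way Theano represents them
lt = (a < b).astype('int8')
eq = (a == b).astype('int8')

print(lt)  # [1 0 0]
print(eq)  # [0 0 1]
```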
.. function:: lt(a, b)

   Returns a symbolic ``'int8'`` tensor representing the result of logical less-than (a<b).

   Also available using syntax ``a < b``.
.. function:: gt(a, b)

   Returns a symbolic ``'int8'`` tensor representing the result of logical greater-than (a>b).

   Also available using syntax ``a > b``.
.. function:: le(a, b)

   Returns a variable representing the result of logical less than or
   equal (a<=b).

   :Parameter: *a* - symbolic Tensor (or compatible)
   :Parameter: *b* - symbolic Tensor (or compatible)
   :Return type: symbolic Tensor
   :Returns: a symbolic tensor representing the application of logical
       elementwise less than or equal.

   .. code-block:: python

       import theano.tensor as T
       x, y = T.dmatrices('x', 'y')
       z = T.le(x, y)
.. function:: ge(a, b)

   Returns a variable representing the result of logical greater than or
   equal (a>=b).

   :Parameter: *a* - symbolic Tensor (or compatible)
   :Parameter: *b* - symbolic Tensor (or compatible)
   :Return type: symbolic Tensor
   :Returns: a symbolic tensor representing the application of logical
       elementwise greater than or equal.

   .. code-block:: python

       import theano.tensor as T
       x, y = T.dmatrices('x', 'y')
       z = T.ge(x, y)
.. function:: eq(a, b)

   Returns a variable representing the result of logical equality (a==b).

   :Parameter: *a* - symbolic Tensor (or compatible)
   :Parameter: *b* - symbolic Tensor (or compatible)
   :Return type: symbolic Tensor
   :Returns: a symbolic tensor representing the application of logical
       elementwise equality.

   .. code-block:: python

       import theano.tensor as T
       x, y = T.dmatrices('x', 'y')
       z = T.eq(x, y)
.. function:: neq(a, b)

   Returns a variable representing the result of logical inequality
   (a!=b).

   :Parameter: *a* - symbolic Tensor (or compatible)
   :Parameter: *b* - symbolic Tensor (or compatible)
   :Return type: symbolic Tensor
   :Returns: a symbolic tensor representing the application of logical
       elementwise inequality.

   .. code-block:: python

       import theano.tensor as T
       x, y = T.dmatrices('x', 'y')
       z = T.neq(x, y)
Mathematical
------------
.. _libdoc_tensor:

==================================================
:mod:`tensor` -- Types and Ops for Symbolic numpy
==================================================

.. module:: tensor
Tutorial
========

Let's start an interactive session (e.g. ``python`` or ``ipython``) and import Theano.

>>> from theano import *