testgroup / pytensor / Commits
Commit 703007e7
authored Apr 07, 2009 by Olivier Breuleux

merge

Parents: 346343bc, 648fb338
Showing 31 changed files with 730 additions and 227 deletions (+730, -227)
NEWS.txt                                   +2    -0
doc/NEWS.txt                               +32   -0
doc/advanced/features.txt                  +4    -0
doc/advanced/module.txt                    +87   -6
doc/advanced_tutorial/cop.txt              +12   -12
doc/advanced_tutorial/ctype.txt            +0    -0
doc/advanced_tutorial/graphstructures.txt  +2    -2
doc/advanced_tutorial/inplace.txt          +13   -13
doc/advanced_tutorial/op.txt               +28   -28
doc/advanced_tutorial/optimization.txt     +23   -22
doc/advanced_tutorial/type.txt             +24   -24
doc/basic_tutorial/module.txt              +10   -4
doc/conf.py                                +1    -1
doc/contents.txt                           +1    -1
doc/index.txt                              +8    -3
doc/install.txt                            +1    -1
doc/internal/how_to_release.txt            +9    -3
doc/topics/unittest.txt                    +6    -0
theano/compile/debugmode.py                +6    -0
theano/compile/module.py                   +22   -70
theano/compile/tests/test_module.py        +45   -0
theano/gof/env.py                          +3    -0
theano/sandbox/test_theano_object.py       +101  -0
theano/sandbox/theano_object.py            +226  -0
theano/scalar/basic.py                     +1    -1
theano/sparse/tests/test_basic.py          +6    -6
theano/tensor/basic.py                     +22   -7
theano/tensor/tests/test_basic.py          +0    -0
theano/tensor/tests/test_nnet.py           +20   -20
theano/tensor/tests/test_xlogx.py          +3    -3
theano/tests/unittest_tools.py             +12   -0
NEWS.txt  (0 → 120000)
doc/NEWS.txt
\ No newline at end of file

doc/NEWS.txt  (0 → 100644)
.. _NEWS:

=============
Release Notes
=============

Theano 0.1
==========

*Release date: 2009-04-02*
What works
----------
- building symbolic expressions.
- arranging symbolic expressions into Modules so that multiple functions
can work on the same data.
- symbolic gradient descent.
- graph optimization.
- compilation to C for many kinds of expression.
- a debugging mode that checks that your expression results are correct,
using a variety of sanity checks.
What's missing?
---------------
- An algorithm library. We're missing a library of examples and standard
component implementations. Some examples will find their way into
the Theano repo, but standard algorithms will go into the 'pylearn'
project (toolbox style). Now that we have a stable foundation, we
can reach a consensus on style for algorithms.
doc/advanced/features.txt
@@ -5,6 +5,8 @@

List of Env Features
====================

See :api:`gof.env.Env`.

WRITEME

.. _nodefinder:

@@ -12,4 +14,6 @@ WRITEME

NodeFinder
==========

See :api:`gof.toolbox.NodeFinder`.

WRITEME
doc/advanced/module.txt
@@ -5,15 +5,82 @@

Module Interface
================
A Theano Module is like Theano's version of a file.
When you instantiate a ``Module()``, you are creating a blank file.
Into this file you can put both symbolic and non-symbolic objects.
Non-symbolic objects are like constants (technically literals) in the file.
Symbolic objects are like variables and functions.

The functions in a Module are called Methods.
The variables in a Module (and its submodules) are global, and Module
Methods have access to all of these global variables.

To use a Module, you need to compile it by calling ``Module.make()``.
The result of compiling a Module is a ModuleInstance: the compiled
version of your Theano file.
In the ModuleInstance, your symbolic variables have become containers
(containing None), and your Methods have become callable functions.
You should initialize the symbolic variables by calling
``ModuleInstance.initialize()`` (although ``make()`` will call it for you
on the top-level ModuleInstance).

You can compile a Module several times to create multiple ModuleInstances.
Each of these will have its own copy of all program literals.
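The file analogy above can be sketched in plain Python. The names below (``TinyModule``, ``TinyModuleInstance``) are hypothetical stand-ins, not the real ``compile.module`` API; they only mimic the declare, ``make()``, ``initialize()``, call lifecycle described here:

```python
# Hypothetical stand-ins for Module/ModuleInstance (NOT the real Theano
# API): a module declares named functions, make() produces an instance
# with its own storage, and initialize() fills that storage.
class TinyModuleInstance:
    def __init__(self, methods):
        self.storage = {}              # symbolic variables become containers
        self.methods = methods

    def initialize(self, **values):
        self.storage.update(values)    # fill the containers

    def call(self, name, *args):
        return self.methods[name](self.storage, *args)

class TinyModule:
    def __init__(self):
        self.methods = {}              # the "functions" of this Module

    def make(self, **init):
        inst = TinyModuleInstance(self.methods)
        inst.initialize(**init)        # make() initializes the top level
        return inst

def increment(storage, step):
    storage['x'] += step
    return storage['x']

m = TinyModule()
m.methods['increment'] = increment

# Each make() produces an independent instance with its own storage,
# just as each compiled ModuleInstance owns its copy of the literals.
a = m.make(x=0)
b = m.make(x=100)
print(a.call('increment', 5))    # 5
print(b.call('increment', 5))    # 105
```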
Module Graph
------------

Components can be grouped into a directed graph.
When we call ``make``, this graph is replicated with ComponentInstances
instead of Components. Whereas Components represent symbolic things
(i.e. Variables), ComponentInstances represent non-symbolic ones
(i.e. sparse matrices, ndarrays, callable functions).

.. index::
   single: Component
   single: component; Component

.. _component:

---------
Component
---------

All of the elements of what is called the "module system" or "modules"
are Components.
A Component subclass represents a symbolic Theano thing and implements
the ``build`` function, which is responsible for converting the symbolic
thing into a non-symbolic thing.

Compiling with make
-------------------

Conversion from a Component graph to a ComponentInstance graph is
performed by ``Component.make``. This method traverses the Component
graph in multiple passes.

In the first pass (the allocate pass), it creates storage for all
Variables contained in the graph (see ``Component.allocate``). These are
the module variables.

In the second pass (the build pass), it creates functions that (in
general) operate on these module variables. This pass also constructs
all ComponentInstance-derived instances, such as ModuleInstances. The
objects returned from this second pass are the return value of
``Component.make``.

The third pass (the initialize pass) is optional and not necessarily
recursive through the graph. Its purpose is to call the ``initialize``
method of the ComponentInstances built during the second pass. During
this pass the ComponentInstance graph is complete, so it is a good time
to fill the storage allocated in the first pass with sensible values.
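The three passes above can be compressed into a small sketch. Everything here is hypothetical shorthand for the real allocate/build machinery, but it shows the ordering constraint: storage exists before functions are built over it, and values arrive last:

```python
# Hypothetical sketch of the three passes of make(): allocate storage
# cells, build callables over them, then initialize the cells.
def make(variable_names, init_values):
    # Pass 1 (allocate): one storage cell per module variable.
    storage = {name: [None] for name in variable_names}

    # Pass 2 (build): functions closing over the shared storage cells.
    def make_reader(name):
        return lambda: storage[name][0]
    readers = {name: make_reader(name) for name in variable_names}

    # Pass 3 (initialize): fill the storage allocated in pass 1
    # with sensible values.
    for name, value in init_values.items():
        storage[name][0] = value

    return readers

fns = make(['weights', 'bias'], {'weights': 1.5, 'bias': 0.0})
print(fns['weights']())    # 1.5
```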
.. index::
   single: External
   single: component; External

.. _external:

--------
External
--------
@@ -27,7 +94,6 @@ WRITEME

   single: component; Member

.. _member:

------
Member
------
@@ -40,7 +106,6 @@ WRITEME

   single: component; Method

.. _method:

------
Method
------
@@ -53,13 +118,29 @@ WRITEME

   single: component; Module

.. _module:

------
Module
------

A Module instance can contain objects as attributes.
This makes it something like a class, in the same way that a Method is
analogous to a function.

A Module is meant to contain Components.
Attributes which are not Components themselves must at least be
transformable into Components by :api:`compile.module.wrap`. If a Module
contains something that is not convertible into a Component, then it is
not possible to compile that Module with ``make``.

Old Text
--------

In the Module system, the analog of the file is the `Module`, the analog
of the function is the `Method`, and the analog of the variable is the
`Member`. Module, Member, and Method all work at the symbolic level.
Once a graph of Modules, Members, and Methods is ready for use, it must
be compiled with a call to `make`, which will return an isomorphic
structure in which Modules have become `ModuleInstance`\ s, Members have
become `Container`\ s, and Methods have become `Function`\ s.
This structure contains numbers and functions, and is ready for computation.
doc/advanced_tutorial/cop.txt
@@ -35,7 +35,7 @@ There are less methods to define for an Op than for a Type:

   This must return C code that cleans up whatever c_code allocated and
   that we must free.

   *Default:* The default behavior is to do nothing.

.. function:: c_compile_args()
              c_headers()
@@ -118,14 +118,14 @@ version that it produces in the code I gave above.

.. code-block:: python

    from theano import gof

    class BinaryDoubleOp(gof.Op):

        def __init__(self, name, fn, ccode):
            self.name = name
            self.fn = fn
            self.ccode = ccode

        def make_node(self, x, y):
            if isinstance(x, (int, float)):
                x = gof.Constant(double, x)

@@ -134,29 +134,29 @@ version that it produces in the code I gave above.

            if x.type != double or y.type != double:
                raise TypeError('%s only works on doubles' % self.name)
            return gof.Apply(self, [x, y], [double()])

        def perform(self, node, (x, y), (z, )):
            z[0] = self.fn(x, y)

        def __str__(self):
            return self.name

        def c_code(self, node, name, (x, y), (z, ), sub):
            return self.ccode % locals()

    add = BinaryDoubleOp(name='add',
                         fn=lambda x, y: x + y,
                         ccode="%(z)s = %(x)s + %(y)s;")

    sub = BinaryDoubleOp(name='sub',
                         fn=lambda x, y: x - y,
                         ccode="%(z)s = %(x)s - %(y)s;")

    mul = BinaryDoubleOp(name='mul',
                         fn=lambda x, y: x * y,
                         ccode="%(z)s = %(x)s * %(y)s;")

    div = BinaryDoubleOp(name='div',
                         fn=lambda x, y: x / y,
                         ccode="%(z)s = %(x)s / %(y)s;")
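The pairing of a Python ``fn`` with a ``%``-substituted C template can be exercised without the rest of Theano. ``TinyBinaryOp`` below is a hypothetical stand-in (Python 3 syntax, not ``gof.Op``) that isolates just that pattern:

```python
# Minimal stand-in for the BinaryDoubleOp pattern above (hypothetical,
# not the real theano.gof.Op): pair a Python implementation with a C
# code template keyed by the same variable names.
class TinyBinaryOp:
    def __init__(self, name, fn, ccode):
        self.name = name
        self.fn = fn          # Python implementation
        self.ccode = ccode    # C template, filled by % substitution

    def perform(self, x, y):
        return self.fn(x, y)

    def c_code(self, x, y, z):
        # Substitute variable names into the C template, as
        # `self.ccode % locals()` does in the Op above.
        return self.ccode % {'x': x, 'y': y, 'z': z}

add = TinyBinaryOp('add', lambda x, y: x + y, "%(z)s = %(x)s + %(y)s;")
print(add.perform(2.0, 3.0))        # 5.0
print(add.c_code('a', 'b', 'out'))  # out = a + b;
```

The same instance thus carries both execution paths: ``perform`` for the Python backend and ``c_code`` for generated C.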
doc/advanced_tutorial/ctype.txt

(diff collapsed)
doc/advanced_tutorial/graphstructures.txt

@@ -146,12 +146,12 @@ Automatic wrapping

All nodes in the graph must be instances of ``Apply`` or ``Result``, but
``<Op subclass>.make_node()`` typically wraps constants to satisfy those
constraints. For example, the :api:`tensor.add <theano.tensor.basic.add>`
Op instance is written so that:

.. code-block:: python

    e = dscalar('x') + 1

builds the following graph:
doc/advanced_tutorial/inplace.txt

@@ -7,8 +7,8 @@ Views and inplace operations

Theano allows the definition of Ops which return a :term:`view` on one
of their inputs or operates :term:`inplace` on one or several
inputs. This allows more efficient operations on numpy's ``ndarray``
data type than would be possible otherwise.
However, in order to work correctly, these Ops need to
implement an additional interface.

@@ -29,7 +29,7 @@ Views

A "view" on an object ``x`` is an object ``y`` which shares memory
with ``x`` in some way. In other words, changing ``x`` might also
change ``y`` and vice versa. For example, imagine a ``vector`` structure
which contains two fields: an integer length and a pointer to a memory
buffer. Suppose we have:

@@ -51,7 +51,7 @@ range ``0xDEADBEFF - 0xDEADBFDF`` and z the range ``0xCAFEBABE -

considered to be a view of ``x`` and vice versa.

Suppose you had an Op which took ``x`` as input and returned
``y``. You would need to tell Theano that ``y`` is a view of ``x``. For
this purpose, you would set the ``view_map`` field as follows:

@@ -103,7 +103,7 @@ operation on ``x``.

.. code-block:: python

    x, y = dscalars('x', 'y')
    r1 = log(x)
    # r2 is x AFTER the add_inplace - x still represents the value before adding y

@@ -119,7 +119,7 @@ operation on ``x``.

Needless to say, this goes for user-defined inplace operations as
well: the modified input must figure in the list of outputs you
give to ``Apply`` in the definition of ``make_node``.

Also, for technical reasons but also because they are slightly
confusing to use as evidenced by the previous code, Theano does not

@@ -132,7 +132,7 @@ operation on ``x``.

introduces inconsistencies.

Take the previous definitions of ``x``, ``y`` and ``z`` and suppose an
Op which adds one to every byte of its input. If we give ``x`` as an
input to that Op, it can either allocate a new buffer of the same size
as ``x`` (that could be ``z``) and set that new buffer's bytes to the
variable of

@@ -141,7 +141,7 @@ it could add one to each byte *in* the buffer ``x``, therefore

changing it. That would be an inplace Op.
Theano needs to be notified of this fact. The syntax is similar to
that of ``view_map``:

.. code-block:: python

@@ -160,10 +160,10 @@ first input (position 0).

    myop.destroy_map = {1: [0]} # second output operates inplace on first input
    myop.destroy_map = {0: [0], # first output operates inplace on first input
                        1: [1]} # *AND* second output operates inplace on second input
    myop.destroy_map = {0: [0], # first output operates inplace on first input
                        1: [0]} # *AND* second output *ALSO* operates inplace on first input
    myop.destroy_map = {0: [0, 1]} # first output operates inplace on both the first and second input
    # unlike for views, the previous line is legal and supported

@@ -194,7 +194,7 @@ input(s)'s memory). From there, go to the previous section.

the value of ``x`` it might invert the order and that will
certainly lead to erroneous computations.

You can often identify an incorrect ``view_map`` or ``destroy_map``
by using :ref:`DebugMode`. *Be sure to use DebugMode when developing
a new Op that uses ``view_map`` and/or ``destroy_map``.*
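The memory sharing described under "Views" above is exactly what numpy slices exhibit, which makes for a quick self-contained illustration:

```python
import numpy as np

x = np.zeros(4)
y = x[::2]             # y is a view: it shares x's memory buffer
x[0] = 7.0             # writing through x ...
print(y[0])            # 7.0 ... is visible through y
assert y.base is x     # numpy records which array owns the buffer
```

This is the aliasing that ``view_map`` declares to Theano: a graph-level promise that an output's buffer may be one of its inputs' buffers.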
doc/advanced_tutorial/op.txt
@@ -12,16 +12,17 @@ computations. We'll start by defining multiplication.

Op's contract
=============

An Op (:api:`gof.op.Op`) is any object which defines the
following methods:

.. function:: make_node(*inputs)

   This method is responsible for creating output Variables of a
   suitable Type to serve as the outputs of this Op's application.
   This method should put these outputs into an Apply instance, and
   return the Apply instance.

   This method creates an Apply node representing the application of
   the Op on the inputs provided. If the Op cannot be applied on
   these inputs, it must raise an appropriate exception.

@@ -30,13 +31,13 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   ordered correctly: a subsequent ``self.make_node(*apply.inputs)``
   must produce something equivalent to the first ``apply``.

.. attribute:: default_output

   *Default:* None

   If this member variable is an integer, then the default
   implementation of ``__call__`` will return
   ``node.outputs[self.default_output]``, where ``node`` was returned
   by ``make_node``. Otherwise, the entire list of outputs will be
   returned.

@@ -45,7 +46,7 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   Syntactic shortcut to make_node which returns the output
   Variables of the Op.

   *Default:* this is done for you by Op.

.. function:: perform(node, inputs, output_storage)

@@ -64,26 +65,26 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   - ``output_storage``: This is a list of storage cells.
     A storage cell is a one-element list. It is forbidden to change
     the length of the list(s) contained in ``output_storage``. There is
     one storage cell for each output of the Op.

     The data you put in ``output_storage`` must match the type of the
     symbolic output. This is a situation where the ``node`` argument
     can come in handy.

     A function Mode may allow ``output_storage`` elements to persist
     between evaluations, or it may reset ``output_storage`` cells to
     hold a value of ``None``. This feature can allow ``perform`` to
     reuse memory between calls, for example.

   This method must be determined by the inputs. That is to say, if
   it is evaluated once on inputs A and returned B, then if ever
   inputs C, equal to A, are presented again, then outputs equal to
   B must be returned again.

   You must be careful about aliasing outputs to inputs, and making
   modifications to any of the inputs. See :ref:`Views and inplace
   operations <views_and_inplace>` before writing a ``perform``
   implementation that does either of these things.
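The storage-cell convention is easy to demonstrate on its own. The sketch below is a bare ``perform`` following the contract stated above (the multiplication Op itself is a made-up example): ``output_storage`` is just a list of one-element lists:

```python
# A minimal perform() following the storage-cell contract described
# above: one one-element list per output; write into cell[0] and
# never change the cell's length.
def perform(node, inputs, output_storage):
    x, y = inputs
    z, = output_storage        # exactly one output for this Op
    z[0] = x * y               # put the result into the storage cell

output_storage = [[None]]      # a Mode may hand us cells holding None
perform(None, (3.0, 4.0), output_storage)
print(output_storage[0][0])    # 12.0
```

Because the caller keeps a reference to the cell, mutating ``cell[0]`` is how results (and reused buffers) flow back without returning anything.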
.. function:: __eq__(other)

@@ -95,20 +96,21 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   (from perform) as this one, given identical inputs. This means it
   will produce the same output values, it will destroy the same
   inputs (same destroy_map), and will alias outputs to the same
   inputs (same view_map). For more details, see
   :ref:`views_and_inplace`.

.. function:: __hash__()

   If two Op instances compare equal, then they **must** return the
   same hash value.

   Equally important, this hash value must not change during the
   lifetime of self. Op instances should be immutable in this
   sense.

.. function:: __ne__(other)

   *Default:* ``(not (self==other))``

.. function:: grad(inputs, output_gradients)

@@ -116,30 +118,28 @@ An Op (:api:`gof.op.Op`) is any object which defines the following methods:

   If the Op you are defining is differentiable, you can define its
   gradient symbolically in this method.

   Both the ``inputs`` and ``output_gradients`` will be
   Variables. This method must return a list containing one Variable
   (or ``None``) for each input. Each returned Variable represents the
   gradient with respect to that input given the symbolic gradients
   with respect to each output.

   If the output is not differentiable with respect to any inputs,
   then this method should be defined to return
   ``[None for i in inputs]``.

   If this method is not defined, then Theano assumes it has been
   forgotten. Symbolic differentiation will fail on a graph that
   includes this Op.

For each method, the *default* is what :api:`theano.gof.op.Op` defines
for you. At a bare minimum, a new Op must define ``make_node`` and
``perform``, which have no defaults.

For more details, including the interface for providing a C
implementation of ``perform()``, refer to the documentation for :ref:`op`.

Defining an Op: ``mul``

@@ -252,7 +252,7 @@ AttributeError: 'int' object has no attribute 'type'

Automatic Constant Wrapping
---------------------------

Well, OK. We'd like our Op to be a bit more flexible. This can be done
by modifying ``make_node`` to accept Python ``int`` or ``float`` as
``x`` and/or ``y``:
doc/advanced_tutorial/optimization.txt
@@ -18,7 +18,7 @@ Env is a wrapper around a whole computation graph, you can see its

:ref:`documentation <env>` for more details) and navigates through it
in a suitable way, replacing some Variables by others in the process. A
local optimization, on the other hand, is defined as a function on a
*single* :ref:`apply` node and must return either ``False`` (to mean that
nothing is to be done) or a list of new Variables that we would like to
replace the node's outputs with. A :ref:`navigator` is a special kind
of global optimization which navigates the computation graph in some

@@ -49,7 +49,7 @@ methods:

   This method takes an Env object and adds :ref:`features
   <envfeature>` to it. These features are "plugins" that are needed
   for the ``apply`` method to do its job properly.

.. function:: optimize(env)

@@ -69,7 +69,7 @@ A local optimization is an object which defines the following methods:

.. function:: transform(node)

   This method takes an :ref:`apply` node and returns either ``False`` to
   signify that no changes are to be done or a list of Variables which
   matches the length of the node's ``outputs`` list. When the
   LocalOptimizer is applied by a Navigator, the outputs of the node

@@ -99,9 +99,9 @@ Here is the code for a global optimization implementing the

simplification described above:

.. code-block:: python

    from theano.gof import toolbox

    class Simplify(gof.Optimizer):

        def add_requirements(self, env):
            env.extend(toolbox.ReplaceValidate())

@@ -116,38 +116,39 @@ simplification described above:

                    env.replace_validate(z, b)
                elif y == b:
                    env.replace_validate(z, a)

    simplify = Simplify()
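The rewrite rule at the heart of ``Simplify`` can be tried out without Theano at all. The toy below (hypothetical, not the real optimizer) represents expressions as nested tuples and applies the same ``(a*b)/y`` simplification to a single node:

```python
# Toy analogue of the Simplify pass above (hypothetical, no Theano):
# expressions are nested tuples like ('div', ('mul', 'a', 'b'), 'b').
def simplify_div(expr):
    """Rewrite (a*b)/y into b when y == a, or into a when y == b."""
    if not (isinstance(expr, tuple) and expr[0] == 'div'):
        return expr                      # not a div node: nothing to do
    numerator, y = expr[1], expr[2]
    if isinstance(numerator, tuple) and numerator[0] == 'mul':
        a, b = numerator[1], numerator[2]
        if y == a:                       # z == (a*b)/a  ->  b
            return b
        if y == b:                       # z == (a*b)/b  ->  a
            return a
    return expr                          # no match: leave unchanged

print(simplify_div(('div', ('mul', 'a', 'b'), 'a')))  # b
print(simplify_div(('div', ('mul', 'a', 'b'), 'c')))  # unchanged
```

The real optimizer does the same check per node while walking the graph in topological order, using ``env.replace_validate`` instead of returning a new value.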
Here's how it works: first, in ``add_requirements``, we add the
``ReplaceValidate`` :ref:`envfeature` located in
:api:`theano.gof.toolbox`. This feature adds the ``replace_validate``
method to ``env``, which is an enhanced version of ``replace`` that
does additional checks to ensure that we are not messing up the
computation graph (note: if ``ReplaceValidate`` was already added by
another optimizer, ``extend`` will do nothing). In a nutshell,
``toolbox.ReplaceValidate`` grants access to ``env.replace_validate``,
and ``env.replace_validate`` allows us to replace a Variable with
another while respecting certain validation constraints. You can
browse the list of :ref:`features <envfeaturelist>` and see if some of
them might be useful to write optimizations with. For example, as an
exercise, try to rewrite Simplify using :ref:`nodefinder`. (Hint: you
want to use the method it publishes instead of the call to toposort!)
Then, in ``apply`` we do the actual job of simplification. We start by
iterating through the graph in topological order. For each node
encountered, we check if it's a ``div`` node. If not, we have nothing
to do here. If so, we put in ``x``, ``y`` and ``z`` the numerator,
denominator and quotient (output) of the division.
The simplification only occurs when the numerator is a multiplication,
so we check for that. If the numerator is a multiplication we put the
two operands in ``a`` and ``b``, so
we can now say that ``z == (a*b)/y``. If ``y==a`` then ``z==b`` and if
``y==b`` then ``z==a``. When either case happens then we can replace
``z`` by either ``a`` or ``b`` using ``env.replace_validate`` - else we do
nothing. You might want to check the documentation about :ref:`variable`
and :ref:`apply` to get a better understanding of the
pointer-following game you need to get ahold of the nodes of interest
for the simplification (``x``, ``y``, ``z``, ``a``, ``b``, etc.).
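The pointer-following described above can be sketched in plain Python on a toy graph representation. This is an illustration only: ``Node``, ``Var``, ``op`` and ``owner`` below are simplified stand-ins for Theano's real Apply/Variable objects, not the actual API.

```python
# Toy stand-ins for Theano's Apply/Variable graph, for illustration only.
class Node(object):
    def __init__(self, op, inputs, output):
        self.op = op
        self.inputs = inputs
        self.outputs = [output]
        output.owner = self  # record which node produced this variable

class Var(object):
    def __init__(self, name):
        self.name = name
        self.owner = None  # the Node that produced this Var, if any

def simplify_div(node):
    """Return the replacement for node's output if z == (a*b)/y with y in (a, b)."""
    if node.op != 'div':
        return None
    x, y = node.inputs  # numerator, denominator
    if x.owner is not None and x.owner.op == 'mul':
        a, b = x.owner.inputs
        if y is a:
            return b
        if y is b:
            return a
    return None

# Build z = (a*b)/a and check that the rule fires.
a, b = Var('a'), Var('b')
ab = Var('ab'); Node('mul', [a, b], ab)
z = Var('z'); div = Node('div', [ab, a], z)
print(simplify_div(div).name)  # -> 'b'
```

The real optimizer does the same chain of lookups (``z.owner.inputs``, ``x.owner.op``), then hands the chosen replacement to ``env.replace_validate`` instead of returning it.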
Test time:
...

places. Note that ``add(x, y)`` and ``add(y, x)`` are still considered
to be different because Theano has no clue that ``add`` is
commutative. You may write your own global optimizer to identify
computations that are identical with full knowledge of the rules of
arithmetic that your Ops implement. Theano might provide facilities
for this somewhere in the future.

.. note::
...

The local version of the above code would be the following:

.. code-block:: python

    class LocalSimplify(gof.LocalOptimizer):
        def transform(self, node):
            if node.op == div:
                ...
                # but it isn't now
                # TODO: do this and explain it
                return [] # that's not what you should do

    local_simplify = LocalSimplify()
The definition of transform is the inner loop of the global optimizer,

...
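To see how ``transform`` plugs into that inner loop, here is a pure-Python sketch of what a navigator does with a local optimizer. The dict-based nodes and the function names are illustrative only, not Theano's real navigator API.

```python
def apply_local_optimizer(transform, nodes):
    """Sketch of the 'inner loop': call transform on each node and collect
    the output replacements it proposes (None/False means no change)."""
    replacements = {}
    for node in nodes:
        new_outputs = transform(node)
        if new_outputs:
            for old, new in zip(node['outputs'], new_outputs):
                replacements[old] = new
    return replacements

# A local transform implementing x*1 -> x on toy dict-based nodes.
def drop_mul_by_one(node):
    if node['op'] == 'mul' and 1 in node['inputs']:
        others = [i for i in node['inputs'] if i != 1]
        return others if others else [1]
    return None

nodes = [{'op': 'mul', 'inputs': ['x', 1], 'outputs': ['y']},
         {'op': 'add', 'inputs': ['y', 'z'], 'outputs': ['w']}]
print(apply_local_optimizer(drop_mul_by_one, nodes))  # {'y': 'x'}
```

The benefit of the local formulation is exactly this separation: the rewrite rule knows nothing about graph traversal, so the same ``transform`` can be driven by different navigation strategies.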
doc/advanced_tutorial/type.txt
...
@@ -39,30 +39,30 @@ default values.
...
@@ -39,30 +39,30 @@ default values.
``filter(value, strict = True)`` does not raise an exception, the
``filter(value, strict = True)`` does not raise an exception, the
value is compatible with the Type.
value is compatible with the Type.
*Default
*:
True iff ``filter(value, strict = True)`` does not raise
*Default
:*
True iff ``filter(value, strict = True)`` does not raise
an exception.
an exception.
.. function:: values_eq(a, b)
.. function:: values_eq(a, b)
Returns True iff ``a`` and ``b`` are equal.
Returns True iff ``a`` and ``b`` are equal.
*Default
*:
``a == b``
*Default
:*
``a == b``
.. function:: values_eq_approx(a, b)
.. function:: values_eq_approx(a, b)
Returns True iff ``a`` and ``b`` are approximately equal, for a
Returns True iff ``a`` and ``b`` are approximately equal, for a
definition of "approximately" which varies from Type to Type.
definition of "approximately" which varies from Type to Type.
*Default
*:
``values_eq(a, b)``
*Default
:*
``values_eq(a, b)``
.. function:: make_variable(name=None)
.. function:: make_variable(name=None)
Makes a :term:`Variable` of this Type with the specified name, if
Makes a :term:`Variable` of this Type with the specified name, if
``name
is not None``. If ``name
is ``None``, then the Variable does
``name
`` is not ``None``. If ``name``
is ``None``, then the Variable does
not have a name. The Variable will have its ``type`` field set to
not have a name. The Variable will have its ``type`` field set to
the Type object.
the Type object.
*Default
*:
there is a generic definition of this in Type. The
*Default
:*
there is a generic definition of this in Type. The
Variable's ``type`` will be the object that defines this method (in
Variable's ``type`` will be the object that defines this method (in
other words, ``self``).
other words, ``self``).
...

   Syntactic shortcut to ``make_variable``.

   *Default:* ``make_variable``

.. function:: __eq__(other)

   Used to compare Type instances themselves.

   *Default:* ``object.__eq__``

.. function:: __hash__()

   Types should not be mutable, so it should be OK to define a hash
   function. Typically this function should hash all of the terms
   involved in ``__eq__``.

   *Default:* ``id(self)``
For each method, the *default* is what ``Type`` defines
for you. So, if you create an instance of ``Type`` or an

...

Defining double
===============

We are going to base Type ``double`` on Python's ``float``. We
must define ``filter`` and shall override ``values_eq_approx``.
...

graph in such a way that it produces slightly different variables, for
example because of numerical instability like rounding errors at the
end of the mantissa. For instance, ``a + a + a + a + a + a`` might not
actually produce the exact same output as ``6 * a`` (try with a=0.1),
but with ``values_eq_approx`` we don't necessarily mind.

We added an extra ``tolerance`` argument here. Since this argument is
not part of the API, it must have a default value, which we
chose to be 1e-4.
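The ``a + a + ... != 6 * a`` situation is easy to reproduce, and the relative-difference test used for ``double`` (the same formula that appears in the class below) absorbs it:

```python
def values_eq_approx(x, y, tolerance=1e-4):
    # Relative difference, as in the double Type's values_eq_approx.
    return abs(x - y) / (abs(x) + abs(y)) < tolerance

a = 0.1
lhs = a + a + a + a + a + a
rhs = 6 * a
print(lhs == rhs)                  # False: rounding at the end of the mantissa
print(values_eq_approx(lhs, rhs))  # True: the relative error is ~1e-16
```

A tolerance of 1e-4 is far looser than one ulp, so it also tolerates the larger drift introduced by rearranged computation orders after optimization.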
.. note::
   ``values_eq`` is never actually used by Theano, but it might be used
   internally in the future. Equality testing in
   :ref:`DebugMode <debugmode>` is done using ``values_eq_approx``.

**Putting them together**
...

the Type is to instantiate a plain Type and set the needed fields:

.. code-block:: python

    from theano import gof

    double = gof.Type()
    double.filter = filter
...

and define ``filter`` and ``values_eq_approx`` in the subclass:

.. code-block:: python

    from theano import gof

    class Double(gof.Type):
        def filter(self, x, strict=False):
            if strict and not isinstance(x, float):
                raise TypeError('Expected a float!')
            return float(x)
        def values_eq_approx(self, x, y, tolerance=1e-4):
            return abs(x - y) / (abs(x) + abs(y)) < tolerance

    double = Double()

``double`` is then an instance of Type ``Double``, which in turn is a
subclass of ``Type``.
There is a small issue with defining ``double`` this way. All
instances of ``Double`` are technically the same Type. However, different

...

False

Theano compares Types using ``==`` to see if they are the same.
This happens in DebugMode. Also, Ops can (and should) ensure that their inputs
have the expected Type by checking something like ``if x.type == lvector``.

There are several ways to make sure that equality testing works properly:
...

attempt to clear up the confusion:

  that Type instance. If you were to parse the C expression ``c = a +
  b;``, ``a``, ``b`` and ``c`` would all be Variable instances.

* A **subclass of Type** is a way of implementing
  a set of Type instances that share
  structural similarities. In the ``double`` example that we are doing,
  there is actually only one Type in that set, therefore the subclass
...

Final version
=============

.. code-block:: python

    from theano import gof

    class Double(gof.Type):
        def filter(self, x, strict=False):
            if strict and not isinstance(x, float):
                raise TypeError('Expected a float!')
            return float(x)
        def values_eq_approx(self, x, y, tolerance=1e-4):
            return abs(x - y) / (abs(x) + abs(y)) < tolerance
        def __str__(self):
            return "double"

    double = Double()
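The finished Type can be exercised on its own. Here is a sketch that substitutes a trivial stand-in for ``gof.Type`` (which adds machinery we don't need here) so the behaviour of ``filter`` and ``values_eq_approx`` can be run outside Theano:

```python
class Type(object):
    """Minimal stand-in for theano.gof.Type, for illustration only."""
    pass

class Double(Type):
    def filter(self, x, strict=False):
        if strict and not isinstance(x, float):
            raise TypeError('Expected a float!')
        return float(x)
    def values_eq_approx(self, x, y, tolerance=1e-4):
        return abs(x - y) / (abs(x) + abs(y)) < tolerance
    def __str__(self):
        return "double"

double = Double()
print(double.filter(3))                          # 3.0: non-strict mode casts
print(double.values_eq_approx(1.0, 1.0 + 1e-7))  # True
```

In strict mode ``double.filter(3, strict=True)`` raises ``TypeError``, which is the behaviour DebugMode relies on to catch values of the wrong kind.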
...

doc/basic_tutorial/module.txt
...
Now that we're familiar with the basics, we introduce Theano's more
advanced interface, Module. This interface allows you to define Theano
"files" which can have variables and methods sharing
those variables. The Module system simplifies the way to define complex
systems such as a neural network.
It also lets you load and save these complex systems using Python's pickle
mechanism.
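The idea can be previewed in plain Python before the Theano version: a class whose methods share (and update) one piece of state. The ``Accumulator`` name and methods here are illustrative only, not part of the Module API.

```python
class Accumulator(object):
    """Plain-Python analogue of the Module 'state' example:
    several methods read and update the same shared variable."""
    def __init__(self, state=0.0):
        self.state = state
    def inc(self, amount):
        self.state += amount
        return self.state
    def dec(self, amount):
        self.state -= amount
        return self.state

acc = Accumulator()
acc.inc(2.0)
acc.dec(0.5)
print(acc.state)  # 1.5
```

Module plays the same role at the symbolic level: member Variables are the shared state, Methods are the functions that read and update it, and ``make`` turns the symbolic description into a concrete object like ``acc``.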
Remake of the "state" example
...

This deserves to be broken up a bit...

>>> m = Module()

Here we instantiate an empty Module.
If you can imagine that Theano is a way of generating code (expression
graphs), then a ``Module()`` is like a fresh blank file.

>>> m.state = T.dscalar()
>>> m.inc = T.dscalar('inc')

Then we declare Variables for use in our Module.
Since we assign these input Variables as attributes of the Module,
they will be *member Variables* of the Module.
Member Variables are special in a few ways, which we will see shortly.
...

.. note::
   There is no need to name the Variable explicitly here. ``m.state`` will
   be given the name ``'state'`` automatically, because it is being assigned
   to the attribute named ``'state'``.

.. note::

...
doc/conf.py

...

# List of directories, relative to source directories, that shouldn't be searched
# for source files.
exclude_dirs = ['images', 'scripts', 'trac']

# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
...
doc/contents.txt

...

.. toctree::
   :maxdepth: 2

   introduction
   LICENSE
   install
   ...
   glossary
   links
   internal/index
   NEWS

.. examples/index

...
doc/index.txt

...

Theano is a Python library that allows you to define, optimize, and
efficiently evaluate mathematical expressions involving multi-dimensional
arrays.

The latest release is version `0.1
<http://pylearn.org/theano/downloads/Theano-0.1.tar.gz>`_.

...

You can download the latest `PDF documentation
<http://pylearn.org/theano/theano.pdf>`_, rather than reading it online.

You can go to the :ref:`Table of Contents <contents>`.

News
----

* 2009-04-01: Theano 0.1 released. See the :ref:`release notes <NEWS>`.

Choose your own adventure...
----------------------------

* You have no idea what Theano is and you read the :ref:`introduction
  <introduction>`.

...
doc/install.txt

...

- Install some kind of BLAS library (TODO: how?)
- Set ``THEANO_BLAS_LDFLAGS`` to something which will link against said BLAS
  library. E.g., ``THEANO_BLAS_LDFLAGS='-lcblas -latlas -lgfortran'``.

This advice has not been tested recently, so please inform us of your results.

...
doc/internal/how_to_release.txt

...

Clone the code::

...

Edit ``setup.py`` to contain the newest version number::

    cd Theano-0.X
    vi setup.py  # Edit the version "field"

The homepage must link to the download URL, for PyPi to correctly get the

...

Tag the release. The syntax is something like the following::

    hg tag Theano-0.X
    hg push

Now, package the release and move it to the static theano directory::

    cd ..
    tar cvf Theano-0.X.tar Theano-0.X
    gzip -9 Theano-0.X.tar
    mv Theano-0.X.tar.gz www/theano_static/downloads/
    ~/repos/theano/.hg/refresh-epydoc.sh

...

Finally, use setuptools to register and upload the release::

    cd Theano-0.X
    python setup.py register sdist bdist_egg upload
    # If you get an error message about needing to be identified, then store
    # your pypi information in ~/.pypirc
    # You can remove this file after upload.
    cd ..
    rm -Rf Theano-0.X

I wrote the above without actually running it. This needs to be
scrutinized when you actually do a release.

...
doc/topics/unittest.txt

...

Here is an example showing how to use verify_grad:

>>> # ...
>>> tensor.verify_grad(Flatten(), [a_val])

.. note::
   Although ``verify_grad`` is defined in ``theano.tensor.basic``, unittests
   should use the version of ``verify_grad`` defined in ``theano.tests.unittest_tools``.
   This is simply a wrapper function which takes care of seeding the random
   number generator appropriately before calling ``theano.tensor.basic.verify_grad``.
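The idea behind ``verify_grad`` can be sketched with a central finite difference. This is a scalar-only illustration with made-up helper names; the real helper works on tensor Ops, random input points, and random output projections.

```python
def finite_diff_grad(f, x, eps=1e-6):
    """Numerical derivative of a scalar function f at x by central differences."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def check_grad(f, grad_f, x, tol=1e-4):
    """Compare the claimed analytic gradient grad_f against the numerical one."""
    num = finite_diff_grad(f, x)
    sym = grad_f(x)
    return abs(num - sym) / (abs(num) + abs(sym) + 1e-12) < tol

# Verify d/dx x**2 == 2*x at x = 3.0
print(check_grad(lambda x: x * x, lambda x: 2 * x, 3.0))  # True
```

Seeding the random number generator (as the ``unittest_tools`` wrapper does) matters because the input points and projections are drawn randomly: with a fixed seed, a gradient failure is reproducible.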
makeTester and makeBroadcastTester
==================================

...
theano/compile/debugmode.py

...

class BadDestroyMap(DebugModeError):

...

        print >> sio, "  changed input type:", self.node.inputs[self.idx].type
        print >> sio, "  repr (old val):", repr(self.old_val)
        print >> sio, "  repr (new val):", repr(self.new_val)
        print >> sio, "  value dtype (new <space> old):", self.new_val.dtype, self.old_val.dtype
        print >> sio, "  value shape (new <space> old):", self.new_val.shape, self.old_val.shape
        print >> sio, "  value min (new <space> old):", self.new_val.min(), self.old_val.min()
        print >> sio, "  value max (new <space> old):", self.new_val.max(), self.old_val.max()
        print >> sio, "  value min (new-old):", (self.new_val - self.old_val).min()
        print >> sio, "  value max (new-old):", (self.new_val - self.old_val).max()
        print >> sio, ""
        print >> sio, "  Hint: this can also be caused by a deficient values_eq_approx() or __eq__() implementation that compares node input values"
        return sio.getvalue()

...
theano/compile/module.py

"""Classes implementing Theano's Module system.

For design notes, see doc/advanced/module.txt
"""
...

class External(_RComponent):

...

        rval += '\n= %s' % (pprint(self.r, dict(target = self.r)))
        return rval

class Member(_RComponent):
    """
    Member represents a Variable which is a state of a Composite. That

...
@@ -836,7 +770,10 @@ class ComponentDict(Composite):
...
@@ -836,7 +770,10 @@ class ComponentDict(Composite):
def
set
(
self
,
item
,
value
):
def
set
(
self
,
item
,
value
):
if
not
isinstance
(
value
,
Component
):
if
not
isinstance
(
value
,
Component
):
raise
TypeError
(
'ComponentDict may only contain Components.'
,
value
,
type
(
value
))
msg
=
"""
ComponentDict may only contain Components.
(Hint: maybe value here needs to be wrapped, see theano.compile.module.register_wrapper.)"""
raise
TypeError
(
msg
,
value
,
type
(
value
))
#value = value.bind(self, item)
#value = value.bind(self, item)
value
.
name
=
name_join
(
self
.
name
,
str
(
item
))
value
.
name
=
name_join
(
self
.
name
,
str
(
item
))
self
.
_components
[
item
]
=
value
self
.
_components
[
item
]
=
value
...
@@ -868,6 +805,16 @@ class ComponentDict(Composite):
...
@@ -868,6 +805,16 @@ class ComponentDict(Composite):
__autowrappers
=
[]
__autowrappers
=
[]
def
register_wrapper
(
condition
,
wrapper
):
def
register_wrapper
(
condition
,
wrapper
):
"""
:type condition: function x -> bool
:param condition: this function should return True iff `wrapper` can sensibly turn x into a
Component.
:type wrapper: function x -> Component
:param wrapper: this function should convert `x` into an instance of a Component subclass.
"""
__autowrappers
.
append
((
condition
,
wrapper
))
__autowrappers
.
append
((
condition
,
wrapper
))
def
wrapper
(
x
):
def
wrapper
(
x
):
...
@@ -881,8 +828,13 @@ def wrapper(x):
...
@@ -881,8 +828,13 @@ def wrapper(x):
def
wrap
(
x
):
def
wrap
(
x
):
"""
"""
Wraps x in a Component. Wrappers can be registered using
Wraps `x` in a `Component`. Wrappers can be registered using
register_wrapper to allow wrapping more types.
`register_wrapper` to allow wrapping more types.
It is necessary for Module attributes to be wrappable.
A Module with an attribute that is not wrappable as a Component, will cause
`Component.make` to fail.
"""
"""
w
=
wrapper
(
x
)
w
=
wrapper
(
x
)
if
w
is
not
None
:
if
w
is
not
None
:
...
...
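The registry pattern behind ``register_wrapper``/``wrap`` can be sketched in isolation. The names and the tagged-tuple "components" below are illustrative only; the real code dispatches to actual Component subclasses and has more special cases.

```python
_autowrappers = []

def register_wrapper(condition, wrapper):
    """condition: x -> bool; wrapper: x -> wrapped form (here just a tagged tuple)."""
    _autowrappers.append((condition, wrapper))

def wrap(x):
    """Return the first registered wrapping of x, or x unchanged if none applies."""
    for condition, wrapper in _autowrappers:
        if condition(x):
            return wrapper(x)
    return x

# Register two wrappers; note the list wrapper recurses through wrap().
register_wrapper(lambda x: isinstance(x, int), lambda x: ('Member', x))
register_wrapper(lambda x: isinstance(x, list),
                 lambda x: ('ComponentList', [wrap(i) for i in x]))

print(wrap(5))       # ('Member', 5)
print(wrap([1, 2]))  # ('ComponentList', [('Member', 1), ('Member', 2)])
```

This is why Module attributes like ``[1, 2, 3]`` in the tests below become Components automatically: assignment funnels through ``wrap``, which consults the registry.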
theano/compile/tests/test_module.py

...

class T_module(unittest.TestCase):

...

        self.assertRaises(NotImplementedError, c.set, "n", 1)

    def test_wrappable_as_tensor(self):
        M = Module()
        M.a = [1, 2, 3]
        M.make()
        m = M.make()
        print m.a
        print m.a[0], type(m.a[0]), m.a[0] == 1
        print list(m.a)
        assert list(m.a) == [1, 2, 3]
        assert m.a is not M.a
        try:
            m.a = [4, 5, 6]
            assert False
        except Exception, e:
            if e[0].startswith("Cannot set readonly"):
                pass
            else:
                raise
        try:
            m.a[0] = 4
            assert False
        except Exception, e:
            if e[0].startswith("Cannot set readonly"):
                pass
            else:
                raise

    def test_mixed_list(self):
        M = Module()
        M.a = [1, 2, T.lscalar()]
        m = M.make()
        assert list(m.a) == [1, 2, None]
        assert m.a is not M.a
        try:
            m.a[0] = 4
            assert False
        except Exception, e:
            if e[0].startswith("Cannot set readonly"):
                pass
            else:
                raise
        m.a[2] = 3
        assert list(m.a) == [1, 2, 3]

    def test_multiple_references():

...
theano/gof/env.py

...

class Env(utils.object2):

...

    def __str__(self):
        return "[%s]" % ", ".join(graph.as_string(self.inputs, self.outputs))

    def __repr__(self):
        return self.__str__()

    ### clone ###

...
theano/sandbox/test_theano_object.py (new file)

from theano_object import *

RUN_TESTS = False

def run(TF):
    def deco(f):
        if TF and RUN_TESTS:
            print 'running test', f.__name__
            f()
        return f if RUN_TESTS else None
    return deco

class MyModule(TheanoObject):
    def __init__(self, a=3, b=9):
        super(MyModule, self).__init__()
        self.a = self.symbolic_member(2)
        self.b = self.symbolic_member(3)
        self.c = 100 #a constant
        self.d = [self.symbolic_member(5), self.symbolic_member(6)]
        self.e = ['a', self.symbolic_member(6)]

    @symbolic_fn
    def add(self, x):
        return RVal(self.a + self.b + x)

    @symbolic_fn_opts(mode='FAST_COMPILE')
    def sub(self, x):
        outputs = (self.a - x, self.b - x)
        updates = {self.b: self.b - x}
        return RVal(outputs, updates)

    def normal_function(self, x):
        return self.add(x) + self.sub(x) #use numpy addition

    @symbolic_fn
    def use_submodule(self, x):
        return RVal(self.a + x + self.submodule.b)

@run(True)
def test_outputs():
    MM = MyModule(3, 4)
    assert MM.add(5) == 12
    assert MM.b.get() == 4
    MM.sub(3)
    assert MM.b.get() == 1 #test get()
    assert MM.add(5) == 9 #test that b's container is shared between add and sub
    MM.b.set(2) #test set
    assert MM.b.get() == 2 #test get()
    assert MM.add(5) == 10 #test that b's container is shared between add and sub

@run(True)
def test_submodule():
    MM = MyModule(1, 2)
    MM.submodule = MyModule(3, 4)
    assert MM.add(5) == 8
    MM.submodule.sub(7)
    assert MM.submodule.b.get() == -3
    assert MM.use_submodule(0) == -2 #self.a is 1 + self.submodule.b is -3

@run(False)
def test_misc_prints():
    MM = MyModule()
    print MM
    print 'add', MM.add(4)
    print 'b', MM.value(MM.b)
    print 'sub', MM.sub(45)
    print 'b', MM.value(MM.b)
    print MM.sub(23)
    print MM.add(9)
    print MM.add(19)
    print 'b', MM.value(MM.b)
    print 'a', MM.value(MM.a)
    MM.value_set(MM.a, 6)
    MM.value_set(MM.b, 6)
    print MM.add(6)
    try:
        MM.b = 5
    except Exception, e:
        print e
    MM.del_member(MM.b)
    try:
        print 'b', MM.value(MM.b)
    except Exception, e:
        print e
    MM.b = 'asdffd'
    try:
        print 'b', MM.value(MM.b)
    except Exception, e:
        print e
    try:
        print 'b', MM.value(MM.b)
    except Exception, e:
        print 'E', e
    print MM.b
    print 'a', MM.value(MM.a)
theano/sandbox/theano_object.py (new file, mode 100644)
"""DRAFT: TheanoObject
N.B. the gotcha with this design is listed in the documentation of `TheanoObject`
"""
import
theano
from
theano
import
tensor
import
numpy
def
theano_type
(
x
):
"""Return a theano Type instance suitable for containing value `x`."""
if
type
(
x
)
is
int
:
return
tensor
.
lscalar
else
:
raise
NotImplementedError
()
class
symbolic_fn_callable
(
object
):
"""This is the class whose instance you get when you access a symbolic function in a
`TheanoObject`.
When you call a symbolic function (`symbolic_fn`) of a TheanoObject the `__call__` of this
class handles your request.
You can also access the symbolic outputs and updates of a symbolic function though this
class.
.. code-block:: python
class T(TheanoObject):
@symbolic_fn
def add(self, x):
...
add_outputs = ...
add_updates = ...
return RVal(add_outputs, add_updates)
t = T()
t.add.outputs(5) # returns `add_outputs` from when `x=theano_type(5)`
t.add.updates(5) # returns `add_updates` from when `x=theano_type(5)`
t.add.theano_function(5) # returns the `Function` compiled when `x=theano_type(5)`
t.add(5) # runs the `Function` compiled when `x=theano_type(5)`
# with arguments `(5,)`
"""
def
__init__
(
self
,
fn
,
mode
):
self
.
fn
=
fn
self
.
mode
=
mode
def
on
(
self
,
o_self
):
"""Silly method to work with symbolic_fn.__get__"""
self
.
o_self
=
o_self
return
self
def
run_symbolic
(
self
,
*
args
,
**
kwargs
):
return
self
.
o_self
.
_get_method_impl
(
self
.
fn
,
self
.
o_self
,
args
,
kwargs
,
mode
=
self
.
mode
)
def
__call__
(
self
,
*
args
,
**
kwargs
):
return
self
.
run_symbolic
(
*
args
,
**
kwargs
)[
'theano_function'
](
*
args
,
**
kwargs
)
def
theano_function
(
self
,
*
args
,
**
kwargs
):
return
self
.
run_symbolic
(
*
args
,
**
kwargs
)[
'theano_function'
]
def
outputs
(
self
,
*
args
,
**
kwargs
):
return
self
.
run_symbolic
(
*
args
,
**
kwargs
)[
'outputs'
]
def
updates
(
self
,
*
args
,
**
kwargs
):
return
self
.
run_symbolic
(
*
args
,
**
kwargs
)[
'updates'
]
class
symbolic_fn
(
object
):
"""A property-like class for decorating symbolic functions in `TheanoObject`
"""
def
__init__
(
self
,
fn
,
mode
=
None
):
self
.
fn
=
fn
self
.
callable
=
symbolic_fn_callable
(
fn
,
mode
)
def
__get__
(
self
,
o_self
,
o_cls
):
return
self
.
callable
.
on
(
o_self
)
def
__set__
(
self
,
o_self
,
new_val
):
pass
#return NotImplemented
def
symbolic_fn_opts
(
**
kwargs
):
"""Return a decorator for symbolic_functions in a `TheanoObject`
`kwargs` passed here are passed to `theano.function` via `symbolic_fn`
"""
def
deco
(
f
):
return
symbolic_fn
(
f
,
**
kwargs
)
return
deco
class
RVal
(
object
):
"""A Return-Value object for a `symbolic_fn` """
outputs
=
[]
"""The method will compute values for the variables in this list"""
updates
=
{}
"""The method will update module variables in this dictionary
For items ``(k,v)`` in this dictionary, ``k`` must be a `symbolic_member` of some module.
On each call to this compiled function, the value of ``k`` will be replaced with the
computed value of the Variable ``v``.
"""
def
__init__
(
self
,
outputs
,
updates
=
{}):
self
.
outputs
=
outputs
assert
type
(
updates
)
is
dict
self
.
updates
=
updates
class
TheanoObject
(
object
):
"""Base for Theano-supported classes
This class provides support for symbolic_fn class attributes.
These will be compiled on demand so that they can be used just like normal (non-symbolic)
methods.
The symbolic functions in a TheanoObject can share member variables that have been created
using the `symbolic_member` method.
:note: Other variables (ones not created using ``self.symbolic_member``) referred to in the
body of a symbolic function will *not* be shared between symbolic functions, or between
symbolic functions and this class. These other variables will be locked away in the
closure of a symbolic function when that function is compiled.
:warning: It is not recommended for code to interleave
(a) changes to non-symbolic instance variables with
(b) calls to symbolic functions that use those instance variables.
A symbolic function may be
compiled multiple times because it must be compiled for each set of argument types.
Each time the function is compiled, the values of non-symbolic variables will be locked
into the compiled function. Subsequent changes to those non-symbolic instance variables
will not have any effect on the behaviour of the already-compiled symbolic function.
:todo: Is there an efficient way of recognizing when a compiled symbolic function is stale,
wrt the current values of the class's instance variables?
- One option is to re-evaluate symbolic functions symbolically and see if the graph can be
completely merged with the original graph. This is not fast enough to do all the time by
default though.
"""
def
__init__
(
self
):
self
.
module_method_cache
=
{}
def
_get_method_impl
(
self
,
fn
,
o_self
,
args
,
kwargs
,
mode
):
"""Retrieve information about the symbolic function (`fn`) in TheanoObject instance
`o_self`, being evaluated on arguments `args` and `kwargs`.
:rtype: dict with entries 'theano_function', 'outputs', 'updates'
:return: the theano function compiled for these arguments, the symbolic outputs of that
function, and the symbolic updates performed by that function.
:note: This function caches return values in self.`module_method_cache`.
:todo: This may at some point become a class-level cache rather than an instance-level
cache.
"""
if
kwargs
:
raise
NotImplementedError
()
cache
=
self
.
module_method_cache
args_types
=
tuple
(
theano_type
(
arg
)
for
arg
in
args
)
key
=
(
fn
,
args_types
)
if
key
not
in
cache
:
inputs
=
[
a
()
for
a
in
args_types
]
print
'compiling'
,
fn
,
'for inputs'
,
inputs
rval
=
fn
(
o_self
,
*
inputs
)
print
'compiling to compute outputs'
,
rval
.
outputs
if
isinstance
(
rval
.
outputs
,
(
tuple
,
list
)):
all_required_inputs
=
theano
.
gof
.
graph
.
inputs
(
rval
.
outputs
)
else
:
all_required_inputs
=
theano
.
gof
.
graph
.
inputs
([
rval
.
outputs
])
# construct In instances for the symbolic_member instances that can automatically be
# included here.
module_inputs
=
[
theano
.
compile
.
io
.
In
(
variable
=
v
,
value
=
v
.
_theanoclass_container
,
mutable
=
(
v
in
rval
.
updates
),
update
=
rval
.
updates
.
get
(
v
,
None
))
for
v
in
all_required_inputs
\
if
hasattr
(
v
,
'_theanoclass_container'
)
and
not
(
v
in
inputs
)]
cache
[
key
]
=
dict
(
theano_function
=
theano
.
function
(
inputs
+
module_inputs
,
rval
.
outputs
),
updates
=
rval
.
updates
,
outputs
=
rval
.
outputs
,
mode
=
mode
)
return
cache
[
key
]
def
symbolic_member
(
self
,
ival
,
name
=
None
):
"""Create a Variable instance to hold value `ival`.
This function also immediately creates a Container object for ival.
When the returned Variable is used as input to a `TheanoObject` `symbolic_fn`, (but
does not appear as an argument to that symbolic_fn), then this Container will be used to
retrieve (and store) values for the Variable.
This Variable's Container's contents can be retrieved by its `get()` method.
This Variable's Container's contents can be written using its `set(newval)` method.
"""
if
type
(
ival
)
is
not
int
:
raise
NotImplementedError
()
v
=
tensor
.
lscalar
(
name
)
v
.
_theanoclass_container
=
\
theano
.
gof
.
Container
(
v
,
storage
=
[
numpy
.
asarray
(
ival
,
dtype
=
'int64'
)],
readonly
=
False
)
assert
not
hasattr
(
v
,
'set'
)
assert
not
hasattr
(
v
,
'get'
)
v
.
get
=
lambda
:
v
.
_theanoclass_container
.
data
def
setval_in_v
(
newval
):
v
.
_theanoclass_container
.
data
=
newval
v
.
set
=
setval_in_v
return
v
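The core mechanics of `symbolic_fn` and `_get_method_impl` above are a descriptor that returns a callable bound to the instance, plus a cache keyed on (function, argument types) so each type signature is "compiled" only once. This idea can be illustrated without Theano; the sketch below is a simplified standalone analogue, where `symbolic_method`, `compile_count`, and `M` are illustrative names and an integer counter stands in for the call to `theano.function`:

```python
class symbolic_method(object):
    """Descriptor: accessing obj.f returns a callable that caches one
    'compiled' implementation per argument-type signature."""
    def __init__(self, fn):
        self.fn = fn
        self.compile_count = 0  # demo stand-in for calls to theano.function

    def __get__(self, obj, cls):
        def call(*args):
            # cache key mirrors TheanoObject: (function, argument types)
            key = (self.fn.__name__, tuple(type(a) for a in args))
            cache = obj.__dict__.setdefault('_method_cache', {})
            if key not in cache:
                self.compile_count += 1       # "compile" happens once per signature
                cache[key] = self.fn
            return cache[key](obj, *args)
        return call

class M(object):
    @symbolic_method
    def add(self, x):
        return x + 1

m = M()
assert m.add(3) == 4      # first int call triggers a "compilation"
assert m.add(5) == 6      # same signature: served from the cache
assert M.__dict__['add'].compile_count == 1
assert m.add(2.0) == 3.0  # new signature: "compiled" again
assert M.__dict__['add'].compile_count == 2
```

The cache lives on the instance (as `module_method_cache` does in `TheanoObject`); the counter lives on the descriptor purely for demonstration.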
theano/scalar/basic.py
@@ -766,7 +766,7 @@ tan = Tan(upgrade_to_float, name = 'tan')

 class Cosh(UnaryScalarOp):
     """
-    sinh(x) = (exp(x) + exp(-x)) / 2
+    cosh(x) = (exp(x) + exp(-x)) / 2
     """
     def impl(self, x):
         return math.cosh(x)
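The change above only corrects the docstring: the formula `(exp(x) + exp(-x)) / 2` is cosh, not sinh (sinh uses a minus sign). A quick numerical check of both identities:

```python
import math

x = 0.7
# cosh(x) = (e^x + e^-x) / 2  -- the corrected docstring
assert abs(math.cosh(x) - (math.exp(x) + math.exp(-x)) / 2) < 1e-12
# sinh(x) = (e^x - e^-x) / 2  -- what the old docstring actually described
assert abs(math.sinh(x) - (math.exp(x) - math.exp(-x)) / 2) < 1e-12
```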
theano/sparse/tests/test_basic.py
@@ -11,7 +11,7 @@ from theano import gof
 from theano.sparse.basic import _is_dense, _is_sparse, _is_dense_variable, _is_sparse_variable
 from theano.sparse.basic import _mtypes, _mtype_to_str
-from theano.tests import unittest_tools
+from theano.tests import unittest_tools as utt

 def eval_outputs(outputs):
@@ -19,7 +19,7 @@ def eval_outputs(outputs):
 class T_transpose(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()

     def test_transpose_csc(self):
         sp = sparse.csc_matrix(sparse.eye(5, 3))
@@ -126,7 +126,7 @@ class T_Add(unittest.TestCase):
 class T_conversion(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()

     def test0(self):
         a = tensor.as_tensor_variable(numpy.random.rand(5))
@@ -157,7 +157,7 @@ class T_conversion(unittest.TestCase):
 import scipy.sparse as sp
 class test_structureddot(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()

     def test_structuredot(self):
         bsize = 2
@@ -193,7 +193,7 @@ class test_structureddot(unittest.TestCase):
         assert _is_dense(c)
         assert numpy.all(outvals == c)
-        tensor.verify_grad(buildgraphCSC, [kernvals, imvals])
+        utt.verify_grad(buildgraphCSC, [kernvals, imvals])

         ##
         # Test compressed-sparse row matrices ###
@@ -215,7 +215,7 @@ class test_structureddot(unittest.TestCase):
         assert _is_dense(c)
         assert numpy.all(outvals == c)
-        tensor.verify_grad(buildgraphCSR, [kernvals, imvals])
+        utt.verify_grad(buildgraphCSR, [kernvals, imvals])

 if __name__ == '__main__':
theano/tensor/basic.py
@@ -20,7 +20,6 @@ from ..gof.python25 import partial
 from .. import compile, printing
 from ..printing import pprint, Print
-from ..tests import unittest_tools

 ### set up the external interface
 from elemwise import Elemwise, DimShuffle, CAReduce, Sum
@@ -159,6 +158,15 @@ def constant(x, name=None, ndim=None):
 def value(x, name=None, ndim=None):
     return constant_or_value(x, rtype=TensorValue, name=name, ndim=ndim)

+def _obj_is_wrappable_as_tensor(x):
+    try:
+        constant(x)
+        return True
+    except TypeError:
+        return False
+def _wrap_tensor_into_member(x):
+    return compile.module.Member(constant(x))
+compile.module.register_wrapper(_obj_is_wrappable_as_tensor, _wrap_tensor_into_member)

 class TensorType(Type):
@@ -250,11 +258,16 @@ class TensorType(Type):
         if type(a) is numpy.ndarray and type(b) is numpy.ndarray:
             if a.shape != b.shape:
                 return False
-            if a.shape == ():
-                ones = numpy.ones(2)
-                return numpy.allclose(ones * a, ones * b)
+            if a.dtype != b.dtype:
+                return False
+            if 'int' in str(a.dtype):
+                return numpy.all(a == b)
             else:
-                return numpy.allclose(a, b)
+                if a.shape == ():
+                    # for comparing scalars, use broadcasting.
+                    ones = numpy.ones(2)
+                    return numpy.allclose(ones * a, ones * b)
+                else:
+                    return numpy.allclose(a, b)
         return False

     def __hash__(self):
@@ -924,7 +937,8 @@ def argmax(x, axis=None):
 @constructor
 def min(x, axis=None):
-    if 'float' in str(x.dtype):
+    str_x_type = str(x.dtype)
+    if str_x_type.startswith('float') or str_x_type.startswith('int'):
         return -max(-x, axis=axis)
     else:
         # Be careful about unsigned integers, complex
@@ -932,7 +946,8 @@ def min(x, axis=None):
 @constructor
 def argmin(x, axis=None):
-    if 'float' in str(x.dtype):
+    str_x_type = str(x.dtype)
+    if str_x_type.startswith('float') or str_x_type.startswith('int'):
         return argmax(-x, axis=axis)
     else:
         # Be careful about unsigned integers, complex
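The `min`/`argmin` changes above rely on the identities min(x) = -max(-x) and argmin(x) = argmax(-x), which the patch now also applies to signed integer dtypes rather than floats only (the in-code comment still warns about unsigned integers, where negation wraps around, and complex values, which are not ordered). A plain-Python illustration of the identities, with no Theano involved:

```python
def min_via_max(xs):
    # min over reals via negation: min(x) == -max(-x)
    return -max(-x for x in xs)

def argmin_via_argmax(xs):
    # argmin(x) == argmax(-x)
    neg = [-x for x in xs]
    return neg.index(max(neg))

data = [3, -7, 2, 5]
assert min_via_max(data) == min(data) == -7
assert argmin_via_argmax(data) == data.index(min(data)) == 1
```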
theano/tensor/tests/test_basic.py (diff collapsed)
theano/tensor/tests/test_nnet.py
@@ -5,86 +5,86 @@ from theano import tensor as T
 from theano import gof
 import test_basic as TT
 import numpy
-from theano.tests import unittest_tools
+from theano.tests import unittest_tools as utt

 from theano.tensor.nnet import *

 class T_sigmoid(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()
     def test_elemwise(self):
-        TT.verify_grad(sigmoid, [numpy.random.rand(3, 4)])
+        utt.verify_grad(sigmoid, [numpy.random.rand(3, 4)])

 class T_softplus(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()
     def test_elemwise(self):
-        TT.verify_grad(softplus, [numpy.random.rand(3, 4)])
+        utt.verify_grad(softplus, [numpy.random.rand(3, 4)])

 class T_Softmax(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()
     def test0(self):
         def f(a):
             return softmax(a)[:, 0]
-        TT.verify_grad(f, [numpy.random.rand(3, 4)])
+        utt.verify_grad(f, [numpy.random.rand(3, 4)])
     def test1(self):
         def f(a):
             return softmax(a)[:, 1]
-        TT.verify_grad(f, [numpy.random.rand(3, 4)])
+        utt.verify_grad(f, [numpy.random.rand(3, 4)])
     def test2(self):
         def f(a):
             return softmax(a)[:, 2]
-        TT.verify_grad(f, [numpy.random.rand(3, 4)])
+        utt.verify_grad(f, [numpy.random.rand(3, 4)])
     def test3(self):
         def f(a):
             return softmax(a)[:, 3]
-        TT.verify_grad(f, [numpy.random.rand(3, 4)])
+        utt.verify_grad(f, [numpy.random.rand(3, 4)])

 class T_SoftmaxWithBias(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()
     def test0(self):
         def f(a, b):
             return softmax_with_bias(a, b)[:, 0]
-        TT.verify_grad(f, [numpy.random.rand(3, 4),
+        utt.verify_grad(f, [numpy.random.rand(3, 4),
             numpy.random.rand(4)])
     def test1(self):
         def f(a, b):
             return softmax_with_bias(a, b)[:, 1]
-        TT.verify_grad(f, [numpy.random.rand(3, 4),
+        utt.verify_grad(f, [numpy.random.rand(3, 4),
             numpy.random.rand(4)])
     def test2(self):
         def f(a, b):
             return softmax_with_bias(a, b)[:, 2]
-        TT.verify_grad(f, [numpy.random.rand(3, 4),
+        utt.verify_grad(f, [numpy.random.rand(3, 4),
             numpy.random.rand(4)])
     def test3(self):
         def f(a, b):
             return softmax_with_bias(a, b)[:, 3]
-        TT.verify_grad(f, [numpy.random.rand(3, 4),
+        utt.verify_grad(f, [numpy.random.rand(3, 4),
             numpy.random.rand(4)])

 class T_CrossentropySoftmax1Hot(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()
     def test0(self):
         y_idx = [0, 1, 3]
         def f(a, b):
             return crossentropy_softmax_1hot_with_bias(a, b, y_idx)[0]
-        TT.verify_grad(f, [numpy.random.rand(3, 4),
+        utt.verify_grad(f, [numpy.random.rand(3, 4),
             numpy.random.rand(4)])
     def test1(self):
         y_idx = [0, 1, 3]
         def f(a):
             return crossentropy_softmax_1hot(a, y_idx)[0]
-        TT.verify_grad(f, [numpy.random.rand(3, 4)])
+        utt.verify_grad(f, [numpy.random.rand(3, 4)])

 class T_prepend(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()
     def test0(self):
         """basic functionality"""
         x = tensor.matrix('x')
@@ -110,7 +110,7 @@ class T_prepend(unittest.TestCase):
 class T_solve(unittest.TestCase):
     def setUp(self):
-        self.rng = numpy.random.RandomState(unittest_tools.fetch_seed(666))
+        self.rng = numpy.random.RandomState(utt.fetch_seed(666))
     def test0(self):
         A = self.rng.randn(5, 5)
theano/tensor/tests/test_xlogx.py
@@ -8,11 +8,11 @@ import test_basic as TT
 import random
 import numpy.random
-from theano.tests import unittest_tools
+from theano.tests import unittest_tools as utt

 class T_XlogX(unittest.TestCase):
     def setUp(self):
-        unittest_tools.seed_rng()
+        utt.seed_rng()
     def test0(self):
         x = as_tensor_variable([1, 0])
@@ -23,7 +23,7 @@ class T_XlogX(unittest.TestCase):
 #        class Dummy(object):
 #            def make_node(self, a):
 #                return [xlogx(a)[:,2]]
-        TT.verify_grad(xlogx, [numpy.random.rand(3, 4)])
+        utt.verify_grad(xlogx, [numpy.random.rand(3, 4)])

 if __name__ == '__main__':
theano/tests/unittest_tools.py
 import unittest
 import numpy
+import theano.tensor as T
 import os, sys
@@ -40,3 +41,14 @@ def seed_rng(pseed=None):
             'instead of seed %i given as parameter' % (seed, pseed)
     numpy.random.seed(seed)
     return seed
+
+def verify_grad(op, pt, n_tests=2, rng=None, eps=1.0e-7, tol=0.0001):
+    """
+    Wrapper for tensor/basic.py:verify_grad
+    Takes care of seeding the random number generator if None is given
+    """
+    if rng is None:
+        seed_rng()
+        rng = numpy.random
+    T.verify_grad(op, pt, n_tests, rng, eps, tol)
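`verify_grad` compares an Op's symbolic gradient against a numeric finite-difference estimate, with `eps` as the perturbation and `tol` as the acceptance tolerance. The sketch below shows the underlying idea for scalar functions; `approx_grad` and `check_grad` are illustrative names, not Theano's API (the real `verify_grad` works on tensor-valued ops with random projections and seeded random points):

```python
def approx_grad(f, x, eps=1.0e-7):
    """Central-difference estimate of df/dx at scalar x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def check_grad(f, grad_f, pts, tol=0.0001):
    """Assert the analytic gradient grad_f matches the numeric estimate
    at every point in pts, to within a relative tolerance tol."""
    for x in pts:
        num = approx_grad(f, x)
        ana = grad_f(x)
        assert abs(num - ana) <= tol * max(1.0, abs(ana)), (x, num, ana)

# d/dx of x**2 is 2*x: the analytic gradient passes the check
check_grad(lambda x: x * x, lambda x: 2 * x, [0.5, -1.5, 3.0])
```

A deliberately wrong analytic gradient (say, `lambda x: 3 * x`) would trip the assertion, which is exactly the failure mode `verify_grad` is designed to catch in Op implementations.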