testgroup / pytensor · Commits

Commit b1f7979f
authored Aug 13, 2015 by Arnaud Bergeron
Fixup extending/* and delete associated tests.
Parent 3e303fc9
Showing 11 changed files with 103 additions and 763 deletions
doc/extending/cop.txt              +4    -6
doc/extending/ctype.txt            +5   -28
doc/extending/fibby.txt            +1    -6
doc/extending/graphstructures.txt  +45  -58
doc/extending/inplace.txt          +5    -0
doc/extending/op.txt               +24  -37
doc/extending/optimization.txt     +13  -21
doc/extending/other_ops.txt        +1    -1
doc/extending/type.txt             +0   -11
doc/extending/unittest.txt         +5    -5
theano/tests/test_tutorial.py      +0  -590
doc/extending/cop.txt
...
...
@@ -253,8 +253,10 @@ We will be defining C code for the multiplication Op on doubles.
**c_code**
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
.. testsetup::

   from theano import Op
   mul = Op()

.. testcode::
...
...
@@ -298,10 +300,6 @@ As before, I tried to organize the code in order to minimize
repetition. You can check that mul produces the same C code in this
version that it produces in the code I gave above.
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
.. testcode::
from theano import gof
...
...
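A side note on the ``c_code`` snippets in this file: the ``%(name)s`` placeholders in Theano's C templates are filled with generated variable names through plain Python ``%``-formatting. A minimal sketch of that substitution step (the ``V*`` names are made up for illustration, not real codegen output):

```python
# Theano-style C template: %(...)s slots are filled by Python %-formatting.
template = "%(output_name)s = %(x_name)s * %(y_name)s;"

# Hypothetical generated variable names, not real codegen output.
c_line = template % dict(x_name="V3", y_name="V5", output_name="V1")
print(c_line)  # prints: V1 = V3 * V5;
```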
doc/extending/ctype.txt
...
...
@@ -159,9 +159,7 @@ Defining the methods
.. testsetup::

   import theano

-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2

   double = theano.Type()
**c_declare**
...
...
@@ -193,9 +191,6 @@ your Type. If you wish people to develop operations that make use of
it, it's best to publish it somewhere.
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
**c_init**
.. testcode::
...
...
@@ -222,9 +217,6 @@ you should only assume that either ``c_init`` or ``c_extract`` has been
called, without knowing for sure which of the two.
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
**c_extract**
.. testcode::
...
...
@@ -261,9 +253,6 @@ using the ``PyFloat_AsDouble`` function (yet again provided by CPython's C
API) and we put it in our double variable that we declared previously.
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
**c_sync**
.. testcode::
...
...
@@ -323,9 +312,6 @@ than sorry.
do *NOT* decrease its reference count!
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
**c_cleanup**
.. testcode::
...
...
@@ -374,13 +360,7 @@ depends on the the relationship between Python and C with respect to
that Variable. For instance, imagine you define the following function
and call it:
-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_2
-.. testcode::
-
-    from theano import function
-    from theano.tensor import double
+.. code-block:: python

     x, y, z = double('x'), double('y'), double('z')
     a = add(x, y)
...
...
@@ -463,9 +443,6 @@ multiplication block.
Final version
=============
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
.. testcode::
from theano import gof
...
...
@@ -530,7 +507,7 @@ know how to generate C code.
You can implement c_code for this op. You register it like this:
-.. testcode::
+.. code-block:: python
theano.compile.ops.register_deep_copy_op_c_code(YOUR_TYPE_CLASS, THE_C_CODE, version=())
...
...
@@ -552,7 +529,7 @@ ViewOp to generate C code when working with this type, as
otherwise it will use Python code instead. This is achieved by
calling:
-.. testcode::
+.. code-block:: python
theano.compile.ops.register_view_op_c_code(YOUR_TYPE_CLASS, THE_C_CODE, version=())
...
...
@@ -572,7 +549,7 @@ Theano Variable that has a shape attribute (Shape_i returns only one of
the elements of the shape).
-.. testcode::
+.. code-block:: python
theano.compile.ops.register_shape_c_code(YOUR_TYPE_CLASS, THE_C_CODE, version=())
theano.compile.ops.register_shape_i_c_code(YOUR_TYPE_CLASS, THE_C_CODE, CHECK_INPUT, version=())
...
...
doc/extending/fibby.txt
...
...
@@ -26,9 +26,6 @@ clarity. For example, when you write C code that assumes memory is contiguous,
you should check the strides and alignment.
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_fibby.test_fibby_1
.. testcode::
import theano
...
...
@@ -145,7 +142,7 @@ the correct size for the output. This is essentially simulating the line
``y = x.copy()``.
-.. testcode::
+.. code-block:: c
Py_XDECREF(%(y)s);
%(y)s = (PyArrayObject*)PyArray_FromArray(
...
...
@@ -249,7 +246,6 @@ Here is some code to test that the optimization is applied only when needed.
# Test it does not apply when not needed
x = T.dvector()
f = function([x], fibby(x))
#theano.printing.debugprint(f)
# We call the function to make sure it runs.
# If you run in DebugMode, it will compare the C and Python outputs.
...
...
@@ -260,7 +256,6 @@ Here is some code to test that the optimization is applied only when needed.
# Test that the optimization gets applied.
f_zero = function([], fibby(T.zeros([5])))
#theano.printing.debugprint(f_zero)
# If you run in DebugMode, it will compare the output before
# and after the optimization.
...
...
doc/extending/graphstructures.txt
...
...
@@ -71,9 +71,6 @@ without any shortcuts, that will make the graph construction very explicit.
This is what you would normally type:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_graphstructures.test_graphstructures_1
.. testcode::
# create 3 Variables with owner = None
...
...
@@ -90,43 +87,40 @@ This is what you would normally type:
This is what you would type to build the graph explicitly:
-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_graphstructures.test_graphstructures_1

 .. testcode::

-    from theano.tensor import add, mul, Apply, Variable, TensorType
+    from theano.tensor import add, mul, Apply, Variable, Constant, TensorType

     # Instantiate a type that represents a matrix of doubles
-    float64_matrix = TensorType(dtype = 'float64',              # double
-                                broadcastable = (False, False)) # matrix
+    float64_matrix = TensorType(dtype='float64',              # double
+                                broadcastable=(False, False)) # matrix

     # We make the Variable instances we need.
-    x = Variable(type = float64_matrix, name = 'x')
-    y = Variable(type = float64_matrix, name = 'y')
-    z = Variable(type = float64_matrix, name = 'z')
+    x = Variable(type=float64_matrix, name='x')
+    y = Variable(type=float64_matrix, name='y')
+    z = Variable(type=float64_matrix, name='z')

     # This is the Variable that we want to symbolically represent y*z
-    mul_variable = Variable(type = float64_matrix)
+    mul_variable = Variable(type=float64_matrix)
     assert mul_variable.owner is None

     # Instantiate a symbolic multiplication
-    node_mul = Apply(op = mul,
-                     inputs = [y, z],
-                     outputs = [mul_variable])
+    node_mul = Apply(op=mul,
+                     inputs=[y, z],
+                     outputs=[mul_variable])
     # Fields 'owner' and 'index' are set by Apply
     assert mul_variable.owner is node_mul
     # 'index' is the position of mul_variable in node_mul's outputs
     assert mul_variable.index == 0

     # This is the Variable that we want to symbolically represent x+(y*z)
-    add_variable = Variable(type = float64_matrix)
+    add_variable = Variable(type=float64_matrix)
     assert add_variable.owner is None

     # Instantiate a symbolic addition
-    node_add = Apply(op = add,
-                     inputs = [x, mul_variable],
-                     outputs = [add_variable])
+    node_add = Apply(op=add,
+                     inputs=[x, mul_variable],
+                     outputs=[add_variable])
     # Fields 'owner' and 'index' are set by Apply
     assert add_variable.owner is node_add
     assert add_variable.index == 0
...
...
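The ``owner``/``index`` bookkeeping that ``Apply`` performs in this hunk can be sketched without Theano; the two tiny classes below are illustrative stand-ins, not the real API:

```python
class Variable:
    """Minimal stand-in for a graph variable (illustrative, not Theano's)."""
    def __init__(self, name=None):
        self.name = name
        self.owner = None   # the Apply node that produced this variable
        self.index = None   # position of this variable in owner.outputs

class Apply:
    """Minimal stand-in: wires each output back to its producing node."""
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs
        for i, out in enumerate(outputs):
            out.owner = self
            out.index = i

y, z = Variable('y'), Variable('z')
mul_variable = Variable()
node_mul = Apply('mul', inputs=[y, z], outputs=[mul_variable])
assert mul_variable.owner is node_mul and mul_variable.index == 0
```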
@@ -163,14 +157,13 @@ builds the following graph:
.. testcode::
-    node = Apply(op = add,
-                 inputs = [Variable(type = dscalar, name = 'x'),
-                           Constant(type = lscalar, data = 1)],
-                 outputs = [Variable(type = dscalar)])
+    node = Apply(op=add,
+                 inputs=[Variable(type=T.dscalar, name='x'),
+                         Constant(type=T.lscalar, data=1)],
+                 outputs=[Variable(type=T.dscalar)])
e = node.outputs[0]
Graph Structures
================
...
...
@@ -402,39 +395,34 @@ In both types of pairs, the second element of the tuple is an index,
such that: ``var.clients[*][0].inputs[index]`` or
``fgraph.outputs[index]`` is that variable.
-.. testcode::
-
-    import theano
-    v = theano.tensor.vector()
-    f = theano.function([v], (v+1).sum())
-    theano.printing.debugprint(f)
-
-    # Sorted list of all nodes in the compiled graph.
-    topo = f.maker.fgraph.toposort()
-    topo[0].outputs[0].clients
-    # [(Sum(Elemwise{add,no_inplace}.0), 0)]
-    topo[1].outputs[0].clients
-    # [('output', 0)]
-
-    # An internal variable
-    var = topo[0].outputs[0]
-    client = var.clients[0]
-    client
-    # (Sum(Elemwise{add,no_inplace}.0), 0)
-    type(client[0])
-    # <class 'theano.gof.graph.Apply'>
-    assert client[0].inputs[client[1]] is var
-
-    # An output of the graph
-    var = topo[1].outputs[0]
-    client = var.clients[0]
-    client
-    # ('output', 0)
-    assert f.maker.fgraph.outputs[client[1]] is var
-
-.. testoutput::
-
-    Sum{acc_dtype=float64} [@A] ''   1
\ No newline at end of file
+>>> import theano
+>>> v = theano.tensor.vector()
+>>> f = theano.function([v], (v+1).sum())
+>>> theano.printing.debugprint(f)
+Sum{acc_dtype=float64} [@A] ''   1
+ |Elemwise{add,no_inplace} [@B] ''   0
+   |TensorConstant{(1,) of 1.0} [@C]
+   |<TensorType(float64, vector)> [@D]
+>>> # Sorted list of all nodes in the compiled graph.
+>>> topo = f.maker.fgraph.toposort()
+>>> topo[0].outputs[0].clients
+[(Sum{acc_dtype=float64}(Elemwise{add,no_inplace}.0), 0)]
+>>> topo[1].outputs[0].clients
+[('output', 0)]
+>>> # An internal variable
+>>> var = topo[0].outputs[0]
+>>> client = var.clients[0]
+>>> client
+(Sum{acc_dtype=float64}(Elemwise{add,no_inplace}.0), 0)
+>>> type(client[0])
+<class 'theano.gof.graph.Apply'>
+>>> assert client[0].inputs[client[1]] is var
+>>> # An output of the graph
+>>> var = topo[1].outputs[0]
+>>> client = var.clients[0]
+>>> client
+('output', 0)
+>>> assert f.maker.fgraph.outputs[client[1]] is var
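The invariant exercised at the end of this hunk (``client[0].inputs[client[1]] is var``) can also be sketched in plain Python; the classes below are hypothetical stand-ins for Theano's graph types:

```python
class Var:
    def __init__(self):
        self.clients = []   # list of (consumer_node, input_index) pairs

class Node:
    def __init__(self, inputs):
        self.inputs = inputs
        for i, v in enumerate(inputs):    # register this node as a client
            v.clients.append((self, i))

v = Var()
n = Node([v])
consumer, idx = v.clients[0]
# The client entry points back at the exact input slot holding v.
assert consumer is n and consumer.inputs[idx] is v
```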
doc/extending/inplace.txt
...
...
@@ -55,6 +55,11 @@ Suppose you had an Op which took ``x`` as input and returned
purpose, you would set the ``view_map`` field as follows:
.. testsetup::

   from theano import Op
   myop = Op()

.. testcode::

   myop.view_map = {0: [0]}
...
...
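``myop.view_map = {0: [0]}`` declares that output 0 is a view of input 0, i.e. the two must share memory. A NumPy-based sketch of the contract this encodes (the helper function is illustrative, not part of Theano):

```python
import numpy as np

def check_view_map(view_map, inputs, outputs):
    """Illustrative check: each declared view must alias its input's memory."""
    for out_idx, in_idxs in view_map.items():
        in_idx = in_idxs[0]  # Theano allows a single input index per output
        assert np.shares_memory(outputs[out_idx], inputs[in_idx])

x = np.arange(6.0)
y = x[::2]                       # e.g. an Op whose output is a strided view
check_view_map({0: [0]}, inputs=[x], outputs=[y])
```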
doc/extending/op.txt
...
...
@@ -541,9 +541,6 @@ multiplication Op could take an arbitrary number of arguments.
First, we'll instantiate a ``mul`` Op:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode:: mul
from theano import gof
...
...
@@ -558,9 +555,6 @@ two. This function ensures that both inputs have the ``double`` type.
Since multiplying two doubles yields a double, this function makes an
Apply node with an output Variable of type ``double``.
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode:: mul
def make_node(x, y):
...
...
@@ -594,8 +588,6 @@ built-in type ``float`` because this is the type that ``double.filter()``
will always return, per our own definition. ``output_storage`` will
contain a single storage cell for the multiplication's variable.
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode:: mul
def perform(node, inputs, output_storage):
...
...
@@ -626,31 +618,32 @@ Here, ``z`` is a list of one element. By default, ``z == [None]``.
Trying out our new Op
=====================
-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_extending.test_extending_1

 In the following code, we use our new Op:

->>> import theano
->>> x, y = double('x'), double('y')
->>> z = mul(x, y)
->>> f = theano.function([x, y], z)
->>> f(5, 6)
-30.0
->>> f(5.6, 6.7)
-37.519999999999996
+.. doctest:: mul
+
+   >>> import theano
+   >>> x, y = double('x'), double('y')
+   >>> z = mul(x, y)
+   >>> f = theano.function([x, y], z)
+   >>> f(5, 6)
+   30.0
+   >>> f(5.6, 6.7)
+   37.519999999999996
Note that there is an implicit call to
``double.filter()`` on each argument, so if we give integers as inputs
they are magically cast to the right type. Now, what if we try this?
->>> x = double('x')
->>> z = mul(x, 2)
-Traceback (most recent call last):
-  File "<stdin>", line 1, in <module>
-  File "/u/breuleuo/hg/theano/theano/gof/op.py", line 207, in __call__
-  File "<stdin>", line 2, in make_node
-AttributeError: 'int' object has no attribute 'type'
+.. doctest:: mul
+
+   >>> x = double('x')
+   >>> z = mul(x, 2)
+   Traceback (most recent call last):
+     File "<stdin>", line 1, in <module>
+     File "/u/breuleuo/hg/theano/theano/gof/op.py", line 207, in __call__
+     File "<stdin>", line 2, in make_node
+   AttributeError: 'int' object has no attribute 'type'
Automatic Constant Wrapping
---------------------------
...
...
@@ -659,8 +652,6 @@ Well, OK. We'd like our Op to be a bit more flexible. This can be done
by modifying ``make_node`` to accept Python ``int`` or ``float`` as
``x`` and/or ``y``:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode:: mul
def make_node(x, y):
...
...
@@ -677,16 +668,15 @@ Whenever we pass a Python int or float instead of a Variable as ``x`` or
``y``, ``make_node`` will convert it to :ref:`constant` for us. ``gof.Constant``
is a :ref:`variable` we statically know the value of.
-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_op.test_op_1
-
->>> x = double('x')
->>> z = mul(x, 2)
->>> f = theano.function([x], z)
->>> f(10)
-20.0
->>> f(3.4)
-6.7999999999999998
+.. doctest:: mul
+
+   >>> x = double('x')
+   >>> z = mul(x, 2)
+   >>> f = theano.function([x], z)
+   >>> f(10)
+   20.0
+   >>> f(3.4)
+   6.8
Now the code works the way we want it to.
...
...
@@ -707,9 +697,6 @@ operations ``add``, ``sub`` and ``div``, code for ``make_node`` can be
shared between these Ops. Here is revised implementation of these four
arithmetic operators:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
from theano import gof
...
...
doc/extending/optimization.txt
...
...
@@ -119,9 +119,6 @@ Global optimization
Here is the code for a global optimization implementing the
simplification described above:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
.. testcode::
import theano
...
...
@@ -182,9 +179,6 @@ pointer-following game you need to get ahold of the nodes of interest
for the simplification (``x``, ``y``, ``z``, ``a``, ``b``, etc.).
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
Test time:
>>> from theano.scalar import float64, add, mul, true_div
...
...
@@ -222,8 +216,8 @@ computation, using the ``merge_optimizer`` defined in
``theano.gof.opt``.
>>> from theano.gof.opt import merge_optimizer
->>> merge_optimizer.optimize(e)
-(0, 0.0001430511474609375, None, None, {}, 1, 0)
+>>> merge_optimizer.optimize(e)  # doctest: +ELLIPSIS
+(0, ..., None, None, {}, 1, 0)
>>> e
[true_div(mul(*1 -> add(y, z), x), *1)]
>>> simplify.optimize(e)
...
...
@@ -254,9 +248,6 @@ Local optimization
The local version of the above code would be the following:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
.. testcode::
...
...
@@ -295,9 +286,6 @@ with a :ref:`navigator`. Basically, a :ref:`navigator` is a global
optimizer that loops through all nodes in the graph (or a well-defined
subset of them) and applies one or several local optimizers on them.
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_2
>>> x = float64('x')
>>> y = float64('y')
>>> z = float64('z')
...
...
@@ -307,7 +295,7 @@ subset of them) and applies one or several local optimizers on them.
[add(z, mul(true_div(mul(y, x), y), true_div(z, x)))]
>>> simplify = gof.TopoOptimizer(local_simplify)
>>> simplify.optimize(e)
-(<theano.gof.opt.TopoOptimizer object at 0x7f3219787f90>, 1, 5, 3, 0.00017309188842773438, 0.00020599365234375, 6.4849853515625e-05)
+(<theano.gof.opt.TopoOptimizer object at 0x...>, 1, 5, 3, ..., ..., ...)
>>> e
[add(z, mul(x, true_div(z, x)))]
...
...
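The ``local_simplify`` rewrite exercised above replaces ``div(mul(a, b), y)`` by ``b`` when ``y == a`` (or by ``a`` when ``y == b``). A pure-Python sketch of that transformation on tuple-based expression trees (illustrative only, not Theano's graph types):

```python
# Tuple-based expression trees stand in for Theano graphs (illustrative only).
def local_simplify(expr):
    """Rewrite div(mul(a, b), y) to b when y == a, or to a when y == b."""
    if isinstance(expr, tuple) and expr[0] == 'div':
        num, den = expr[1], expr[2]
        if isinstance(num, tuple) and num[0] == 'mul':
            a, b = num[1], num[2]
            if den == a:
                return b
            if den == b:
                return a
    return expr   # no match: leave the node unchanged

assert local_simplify(('div', ('mul', 'y', 'x'), 'y')) == 'x'
assert local_simplify(('div', 'z', 'x')) == ('div', 'z', 'x')
```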
@@ -334,6 +322,9 @@ Theano defines some shortcuts to make LocalOptimizers:
Replaces all occurrences of the first pattern by the second pattern.
See :class:`PatternSub`.
.. testsetup::

   from theano.scalar import identity

.. testcode::
...
...
@@ -438,9 +429,9 @@ Query
A Query is built by the following call:
-.. testcode::
+.. code-block:: python

-    theano.gof.Query(include, require = None, exclude = None, subquery = None)
+    theano.gof.Query(include, require=None, exclude=None, subquery=None)
.. class:: Query
...
...
@@ -481,20 +472,21 @@ Optimizer:
 .. testcode::

     from theano.gof import Query
     from theano.compile import optdb

     # This is how the optimizer for the fast_run mode is defined
-    fast_run = optdb.query(Query(include = ['fast_run']))
+    fast_run = optdb.query(Query(include=['fast_run']))

     # This is how the optimizer for the fast_compile mode is defined
-    fast_compile = optdb.query(Query(include = ['fast_compile']))
+    fast_compile = optdb.query(Query(include=['fast_compile']))

     # This is the same as fast_run but no optimizations will replace
     # any operation by an inplace version. This assumes, of course,
     # that all inplace operations are tagged as 'inplace' (as they
     # should!)
-    fast_run_no_inplace = optdb.query(Query(include = ['fast_run'], exclude = ['inplace']))
-    fast_run_no_inplace = fast_run.excluding('inplace')
+    fast_run_no_inplace = optdb.query(Query(include=['fast_run'],
+                                            exclude=['inplace']))
Registering an Optimizer
...
...
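The ``Query``/``optdb`` calls above select optimizations by tag. The filtering they describe can be sketched as a simple set computation (hypothetical registry and names, not the real ``optdb`` API):

```python
def query(opts, include, exclude=()):
    """Keep optimizations carrying every 'include' tag and no 'exclude' tag."""
    return [name for name, tags in opts.items()
            if set(include) <= tags and not (set(exclude) & tags)]

# Hypothetical registry mapping optimization names to their tags.
opts = {'inplace_add': {'fast_run', 'inplace'},
        'merge':       {'fast_run', 'fast_compile'}}

assert query(opts, include=['fast_run']) == ['inplace_add', 'merge']
assert query(opts, include=['fast_run'], exclude=['inplace']) == ['merge']
```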
doc/extending/other_ops.txt
...
...
@@ -90,7 +90,7 @@ and (like in SciPy) they do not support broadcasting operations by default
formats for sparse type: ``csr`` and ``csc``. So in ``make_mode()``,
you can create output variables like this:
-.. testcode::
+.. code-block:: python
out_format = inputs[0].format # or 'csr' or 'csc' if the output format is fixed
SparseType(dtype=inputs[0].dtype, format=out_format).make_variable()
...
...
doc/extending/type.txt
...
...
@@ -176,8 +176,6 @@ must define ``filter`` and shall override ``values_eq_approx``.
**filter**
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
# Note that we shadow Python's function ``filter`` with this
...
...
@@ -246,8 +244,6 @@ contract. Recall that Type defines default implementations for all
required methods of the interface, except ``filter``. One way to make
the Type is to instantiate a plain Type and set the needed fields:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
from theano import gof
...
...
@@ -260,8 +256,6 @@ the Type is to instantiate a plain Type and set the needed fields:
Another way to make this Type is to make a subclass of ``gof.Type``
and define ``filter`` and ``values_eq_approx`` in the subclass:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. code-block:: python
from theano import gof
...
...
@@ -331,9 +325,6 @@ There are several ways to make sure that equality testing works properly:
#. Define ``Double.__eq__`` so that instances of type Double
are equal. For example:
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
def __eq__(self, other):
...
...
@@ -387,8 +378,6 @@ attempt to clear up the confusion:
Final version
=============
.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_extending.test_extending_1
.. testcode::
from theano import gof
...
...
doc/extending/unittest.txt
...
...
@@ -236,16 +236,16 @@ Example:
     def test_validity(self):
         a = T.dmatrix('a')
         b = T.dmatrix('b')
-        c = T.dot(a,b)
-        f = theano.function([a, b], [c])
-        cmp = f(self.avals, self.bvals) == numpy.dot(self.avals, self.bvals)
+        c = T.dot(a, b)
+        f = theano.function([a, b], [c])
+        cmp = f(self.avals, self.bvals) == numpy.dot(self.avals, self.bvals)
         self.assertTrue(numpy.all(cmp))
Avoid hard-coding variables, as in the following case:
-.. testcode:: writeUnitest
+.. code-block:: python

-    self.assertTrue(numpy.all(f(self.avals, self.bvals)==numpy.array([[25,25,30,28],[21,18,14,25]])))
+    self.assertTrue(numpy.all(f(self.avals, self.bvals) ==
+                              numpy.array([[25, 25, 30, 28], [21, 18, 14, 25]])))
This makes the test case less manageable and forces the user to update
the variables each time the input is changed or possibly when the
...
...
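The guideline in this hunk, comparing against a reference computation instead of hard-coded literals, can be sketched as follows (``dot_under_test`` is a stand-in for the compiled Theano function, not real API):

```python
import numpy

avals = numpy.array([[1., 2.], [3., 4.]])
bvals = numpy.array([[5., 6.], [7., 8.]])

def dot_under_test(a, b):
    # Stand-in for the function under test (e.g. a compiled Theano graph).
    return numpy.dot(a, b)

# Compare against the reference implementation, not a hard-coded array:
# the check survives any change to avals/bvals.
assert numpy.allclose(dot_under_test(avals, bvals), numpy.dot(avals, bvals))
```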
theano/tests/test_tutorial.py
...
...
@@ -22,475 +22,6 @@ from theano.sandbox.rng_mrg import MRG_RandomStreams
from
theano.tensor.shared_randomstreams
import
RandomStreams
class
T_extending
(
unittest
.
TestCase
):
# All tests here belong to files in
# http://deeplearning.net/software/theano/extending
# Theano/doc/extending/*.txt
# Any change you do here also add it to the tutorial!
# This belongs to an entire folder since code-snippets are connected
# from one file to another .. and they do not make sense on their
# own.
def
test_extending_1
(
self
):
# Note that we shadow Python's function ``filter`` with this
# definition.
def
filter
(
x
,
strict
=
False
,
allow_downcast
=
None
):
if
strict
:
if
isinstance
(
x
,
float
):
return
x
else
:
raise
TypeError
(
'Expected a float!'
)
else
:
return
float
(
x
)
def
values_eq_approx
(
x
,
y
,
tolerance
=
1e-4
):
return
abs
(
x
-
y
)
/
(
abs
(
x
)
+
abs
(
y
))
<
tolerance
from
theano
import
gof
double
=
gof
.
Type
()
double
.
filter
=
filter
double
.
values_eq_approx
=
values_eq_approx
from
theano
import
gof
class
Double
(
gof
.
Type
):
def
filter
(
self
,
x
,
strict
=
False
):
if
strict
and
not
isinstance
(
x
,
float
):
raise
TypeError
(
'Expected a float!'
)
return
float
(
x
)
def
values_eq_approx
(
self
,
x
,
y
,
tolerance
=
1e-4
):
return
abs
(
x
-
y
)
/
(
abs
(
x
)
+
abs
(
y
))
<
tolerance
# Added to make those tests pass in DebugMode
@staticmethod
def
may_share_memory
(
a
,
b
):
return
a
is
b
double
=
Double
()
def
__eq__
(
self
,
other
):
return
type
(
self
)
is
Double
and
type
(
other
)
is
Double
from
theano
import
gof
class
Double
(
gof
.
Type
):
def
filter
(
self
,
x
,
strict
=
False
,
allow_downcast
=
None
):
if
strict
and
not
isinstance
(
x
,
float
):
raise
TypeError
(
'Expected a float!'
)
return
float
(
x
)
def
values_eq_approx
(
self
,
x
,
y
,
tolerance
=
1e-4
):
return
abs
(
x
-
y
)
/
(
abs
(
x
)
+
abs
(
y
))
<
tolerance
def
__str__
(
self
):
return
"double"
# Added to make those tests pass in DebugMode
@staticmethod
def
may_share_memory
(
a
,
b
):
return
a
is
b
double
=
Double
()
from
theano
import
gof
mul
=
gof
.
Op
()
def
make_node
(
x
,
y
):
if
x
.
type
!=
double
or
y
.
type
!=
double
:
raise
TypeError
(
'mul only works on doubles'
)
return
gof
.
Apply
(
mul
,
[
x
,
y
],
[
double
()])
mul
.
make_node
=
make_node
def
perform
(
node
,
inputs
,
output_storage
):
x
,
y
=
inputs
[
0
],
inputs
[
1
]
z
=
output_storage
[
0
]
z
[
0
]
=
x
*
y
mul
.
perform
=
perform
x
,
y
=
double
(
'x'
),
double
(
'y'
)
z
=
mul
(
x
,
y
)
f
=
theano
.
function
([
x
,
y
],
z
)
assert
f
(
5
,
6
)
==
30.0
assert
f
(
5.6
,
6.7
)
==
37.519999999999996
x
=
double
(
'x'
)
self
.
assertRaises
(
AttributeError
,
mul
,
x
,
2
)
def
make_node
(
x
,
y
):
if
isinstance
(
x
,
(
int
,
float
)):
x
=
gof
.
Constant
(
double
,
x
)
if
isinstance
(
y
,
(
int
,
float
)):
y
=
gof
.
Constant
(
double
,
y
)
if
x
.
type
!=
double
or
y
.
type
!=
double
:
raise
TypeError
(
'mul only works on doubles'
)
return
gof
.
Apply
(
mul
,
[
x
,
y
],
[
double
()])
mul
.
make_node
=
make_node
x
=
double
(
'x'
)
z
=
mul
(
x
,
2
)
f
=
theano
.
function
([
x
],
z
)
assert
f
(
10
)
==
20.0
assert
f
(
3.4
)
==
6.7999999999999998
from
theano
import
gof
class
BinaryDoubleOp
(
gof
.
Op
):
__props__
=
(
"name"
,
"fn"
)
def
__init__
(
self
,
name
,
fn
):
self
.
name
=
name
self
.
fn
=
fn
def
make_node
(
self
,
x
,
y
):
if
isinstance
(
x
,
(
int
,
float
)):
x
=
gof
.
Constant
(
double
,
x
)
if
isinstance
(
y
,
(
int
,
float
)):
y
=
gof
.
Constant
(
double
,
y
)
if
x
.
type
!=
double
or
y
.
type
!=
double
:
raise
TypeError
(
'
%
s only works on doubles'
%
self
.
name
)
return
gof
.
Apply
(
self
,
[
x
,
y
],
[
double
()])
def
perform
(
self
,
node
,
inp
,
out
):
x
,
y
=
inp
z
,
=
out
z
[
0
]
=
self
.
fn
(
x
,
y
)
def
__str__
(
self
):
return
self
.
name
add
=
BinaryDoubleOp
(
name
=
'add'
,
fn
=
lambda
x
,
y
:
x
+
y
)
sub
=
BinaryDoubleOp
(
name
=
'sub'
,
fn
=
lambda
x
,
y
:
x
-
y
)
mul
=
BinaryDoubleOp
(
name
=
'mul'
,
fn
=
lambda
x
,
y
:
x
*
y
)
div
=
BinaryDoubleOp
(
name
=
'div'
,
fn
=
lambda
x
,
y
:
x
/
y
)
def
test_extending_2
(
self
):
'''
This test fails in DebugMode for the same reasons the test in
tensor/tests/test_basic.py:T_scalarfromtensor.test0
fails on debug mode ( as much as I could tell - Razvan )
'''
from
theano
import
gof
class
Double
(
gof
.
Type
):
def
filter
(
self
,
x
,
strict
=
False
,
allow_downcast
=
None
):
if
strict
and
not
isinstance
(
x
,
float
):
raise
TypeError
(
'Expected a float!'
)
return
float
(
x
)
def
values_eq_approx
(
self
,
x
,
y
,
tolerance
=
1e-4
):
return
abs
(
x
-
y
)
/
(
abs
(
x
)
+
abs
(
y
))
<
tolerance
def
__str__
(
self
):
return
"double"
# Added to make those tests pass in DebugMode
@staticmethod
def
may_share_memory
(
a
,
b
):
return
a
is
b
double
=
Double
()
class
BinaryDoubleOp
(
gof
.
Op
):
__props__
=
(
"name"
,
"fn"
)
def
__init__
(
self
,
name
,
fn
):
self
.
name
=
name
self
.
fn
=
fn
def
make_node
(
self
,
x
,
y
):
if
isinstance
(
x
,
(
int
,
float
)):
x
=
gof
.
Constant
(
double
,
x
)
if
isinstance
(
y
,
(
int
,
float
)):
y
=
gof
.
Constant
(
double
,
y
)
if
x
.
type
!=
double
or
y
.
type
!=
double
:
raise
TypeError
(
'
%
s only works on doubles'
%
self
.
name
)
return
gof
.
Apply
(
self
,
[
x
,
y
],
[
double
()])
def
perform
(
self
,
node
,
inp
,
out
):
x
,
y
=
inp
z
,
=
out
z
[
0
]
=
self
.
fn
(
x
,
y
)
def
__str__
(
self
):
return
self
.
name
add
=
BinaryDoubleOp
(
name
=
'add'
,
fn
=
lambda
x
,
y
:
x
+
y
)
sub
=
BinaryDoubleOp
(
name
=
'sub'
,
fn
=
lambda
x
,
y
:
x
-
y
)
mul
=
BinaryDoubleOp
(
name
=
'mul'
,
fn
=
lambda
x
,
y
:
x
*
y
)
div
=
BinaryDoubleOp
(
name
=
'div'
,
fn
=
lambda
x
,
y
:
x
/
y
)
def
c_declare
(
name
,
sub
,
check_input
=
True
):
return
"""
double
%(name)
s;
"""
%
dict
(
name
=
name
)
double
.
c_declare
=
c_declare
def
c_init
(
name
,
sub
):
return
"""
%(name)
s = 0.0;
"""
%
dict
(
name
=
name
)
double
.
c_init
=
c_init
def
c_extract
(
name
,
sub
,
check_input
=
True
):
if
(
check_input
):
pre
=
"""
if (!PyFloat_Check(py_
%(name)
s)) {
PyErr_SetString(PyExc_TypeError, "expected a float");
%(fail)
s
}"""
%
dict
(
name
=
name
,
fail
=
sub
[
'fail'
])
else
:
pre
=
""
return
pre
+
"""
%(name)
s = PyFloat_AsDouble(py_
%(name)
s);
"""
%
dict
(
name
=
name
,
fail
=
sub
[
'fail'
])
double
.
c_extract
=
c_extract
def
c_sync
(
name
,
sub
):
return
"""
Py_XDECREF(py_
%(name)
s);
py_
%(name)
s = PyFloat_FromDouble(
%(name)
s);
if (!py_
%(name)
s) {
printf("PyFloat_FromDouble failed on:
%%
f
\\
n",
%(name)
s);
Py_XINCREF(Py_None);
py_
%(name)
s = Py_None;
}
"""
%
dict
(
name
=
name
)
double
.
c_sync
=
c_sync
def
c_cleanup
(
name
,
sub
):
return
""
double
.
c_cleanup
=
c_cleanup
from
theano
import
function
x
,
y
,
z
=
double
(
'x'
),
double
(
'y'
),
double
(
'z'
)
a
=
add
(
x
,
y
)
b
=
mul
(
a
,
z
)
f
=
function
([
x
,
y
,
z
],
b
)
assert
f
(
1.0
,
2.0
,
3.0
)
==
9.0
from
theano
import
gof
class
Double
(
gof
.
Type
):
def
filter
(
self
,
x
,
strict
=
False
,
allow_downcast
=
None
):
if
strict
and
not
isinstance
(
x
,
float
):
raise
TypeError
(
'Expected a float!'
)
return
float
(
x
)
def
values_eq_approx
(
self
,
x
,
y
,
tolerance
=
1e-4
):
return
abs
(
x
-
y
)
/
(
x
+
y
)
<
tolerance
def
__str__
(
self
):
return
"double"
def
c_declare
(
self
,
name
,
sub
,
check_input
=
True
):
return
"""
double
%(name)
s;
"""
%
dict
(
name
=
name
)
def
c_init
(
self
,
name
,
sub
):
return
"""
%(name)
s = 0.0;
"""
%
dict
(
name
=
name
)
def
c_extract
(
self
,
name
,
sub
,
check_input
=
True
):
if
(
check_input
):
pre
=
"""
if (!PyFloat_Check(py_
%(name)
s)) {
PyErr_SetString(PyExc_TypeError, "expected a float");
%(fail)
s
}
"""
%
dict
(
sub
,
name
=
name
)
else
:
pre
=
""
return
pre
+
"""
%(name)
s = PyFloat_AsDouble(py_
%(name)
s);
"""
%
dict
(
sub
,
name
=
name
)
def
c_sync
(
self
,
name
,
sub
):
return
"""
Py_XDECREF(py_
%(name)
s);
py_
%(name)
s = PyFloat_FromDouble(
%(name)
s);
if (!py_
%(name)
s) {
printf("PyFloat_FromDouble failed on:
%%
f
\\
n",
%(name)
s);
Py_XINCREF(Py_None);
py_
%(name)
s = Py_None;
}
"""
%
dict
(
name
=
name
)
def
c_cleanup
(
self
,
name
,
sub
):
return
""
# Added to make those tests pass in DebugMode
@staticmethod
def
may_share_memory
(
a
,
b
):
return
a
is
b
double
=
Double
()
def
c_code
(
node
,
name
,
input_names
,
output_names
,
sub
):
x_name
,
y_name
=
input_names
[
0
],
input_names
[
1
]
output_name
=
output_names
[
0
]
return
"""
%(output_name)
s =
%(x_name)
s *
%(y_name)
s;
"""
%
locals
()
mul
.
c_code
=
c_code
from
theano
import
gof
class
BinaryDoubleOp
(
gof
.
Op
):
__props__
=
(
"name"
,
"fn"
,
"ccode"
)
def
__init__
(
self
,
name
,
fn
,
ccode
):
self
.
name
=
name
self
.
fn
=
fn
self
.
ccode
=
ccode
def
make_node
(
self
,
x
,
y
):
if
isinstance
(
x
,
(
int
,
float
)):
x
=
gof
.
Constant
(
double
,
x
)
if
isinstance
(
y
,
(
int
,
float
)):
y
=
gof
.
Constant
(
double
,
y
)
if
x
.
type
!=
double
or
y
.
type
!=
double
:
raise
TypeError
(
'
%
s only works on doubles'
%
self
.
name
)
return
gof
.
Apply
(
self
,
[
x
,
y
],
[
double
()])
def
perform
(
self
,
node
,
inp
,
out
):
x
,
y
=
inp
z
,
=
out
z
[
0
]
=
self
.
fn
(
x
,
y
)
def
__str__
(
self
):
return
self
.
name
def
c_code
(
self
,
node
,
name
,
inp
,
out
,
sub
):
x
,
y
=
inp
z
,
=
out
return
self
.
ccode
%
locals
()
add
=
BinaryDoubleOp
(
name
=
'add'
,
fn
=
lambda
x
,
y
:
x
+
y
,
ccode
=
"
%(z)
s =
%(x)
s +
%(y)
s;"
)
sub
=
BinaryDoubleOp
(
name
=
'sub'
,
fn
=
lambda
x
,
y
:
x
-
y
,
ccode
=
"
%(z)
s =
%(x)
s -
%(y)
s;"
)
mul
=
BinaryDoubleOp
(
name
=
'mul'
,
fn
=
lambda
x
,
y
:
x
*
y
,
ccode
=
"
%(z)
s =
%(x)
s *
%(y)
s;"
)
div
=
BinaryDoubleOp
(
name
=
'div'
,
fn
=
lambda
x
,
y
:
x
/
y
,
ccode
=
"
%(z)
s =
%(x)
s /
%(y)
s;"
)
from
theano.gof
import
toolbox
class
Simplify
(
gof
.
Optimizer
):
def
add_requirements
(
self
,
fgraph
):
fgraph
.
attach_feature
(
toolbox
.
ReplaceValidate
())
def
apply
(
self
,
fgraph
):
for
node
in
fgraph
.
toposort
():
if
node
.
op
==
div
:
x
,
y
=
node
.
inputs
z
=
node
.
outputs
[
0
]
if
x
.
owner
and
x
.
owner
.
op
==
mul
:
a
,
b
=
x
.
owner
.
inputs
if
y
==
a
:
fgraph
.
replace_validate
(
z
,
b
)
elif
y
==
b
:
fgraph
.
replace_validate
(
z
,
a
)
simplify
=
Simplify
()
x
=
double
(
'x'
)
y
=
double
(
'y'
)
z
=
double
(
'z'
)
a
=
add
(
z
,
mul
(
div
(
mul
(
y
,
x
),
y
),
div
(
z
,
x
)))
e
=
gof
.
FunctionGraph
([
x
,
y
,
z
],
[
a
])
simplify
.
optimize
(
e
)
class
LocalSimplify
(
gof
.
LocalOptimizer
):
def
transform
(
self
,
node
):
if
node
.
op
==
div
:
x
,
y
=
node
.
inputs
if
x
.
owner
and
x
.
owner
.
op
==
mul
:
a
,
b
=
x
.
owner
.
inputs
if
y
==
a
:
return
[
b
]
elif
y
==
b
:
return
[
a
]
return
False
def
tracks
(
self
):
# This should be needed for the EquilibriumOptimizer
# but it isn't now
# TODO: do this and explain it
return
[]
# that's not what you should do
local_simplify
=
LocalSimplify
()
x
=
double
(
'x'
)
y
=
double
(
'y'
)
z
=
double
(
'z'
)
a
=
add
(
z
,
mul
(
div
(
mul
(
y
,
x
),
y
),
div
(
z
,
x
)))
e
=
gof
.
FunctionGraph
([
x
,
y
,
z
],
[
a
])
simplify
=
gof
.
TopoOptimizer
(
local_simplify
)
simplify
.
optimize
(
e
)
    def test_as_op(self):
        import theano
        import numpy
        from theano.compile.ops import as_op

        def infer_shape_numpy_dot(node, input_shapes):
            ashp, bshp = input_shapes
            return [ashp[:-1] + bshp[-1:]]

        @as_op(itypes=[theano.tensor.fmatrix, theano.tensor.fmatrix],
               otypes=[theano.tensor.fmatrix],
               infer_shape=infer_shape_numpy_dot)
        def numpy_add(a, b):
            return numpy.add(a, b)

        def infer_shape_numpy_add_sub(node, input_shapes):
            ashp, bshp = input_shapes
            # Both inputs should have that same shape, so we just
            # return one of them.
            return [ashp[0]]

        @as_op(itypes=[theano.tensor.fmatrix, theano.tensor.fmatrix],
               otypes=[theano.tensor.fmatrix],
               infer_shape=infer_shape_numpy_add_sub)
        def numpy_add(a, b):
            return numpy.add(a, b)

        @as_op(itypes=[theano.tensor.fmatrix, theano.tensor.fmatrix],
               otypes=[theano.tensor.fmatrix],
               infer_shape=infer_shape_numpy_add_sub)
        def numpy_sub(a, b):
            # numpy has no ``sub``; the ufunc is ``numpy.subtract``.
            return numpy.subtract(a, b)
class T_using_gpu(unittest.TestCase):
    # All tests here belong to
    # http://deeplearning.net/software/theano/tutorial/using_gpu.html

    ...
    ...
@@ -684,127 +215,6 @@ class Fibby(theano.Op):
        return (1,)
class T_fibby(unittest.TestCase):
    # All tests here belong to
    # http://deeplearning.net/software/theano/extending/fibby.html
    # Theano/doc/extending/fibby.txt
    # Any change you do here also add it to the tutorial !

    def test_fibby_1(self):
        # The definition of class Fibby is done outside of the test,
        # so the object can be pickled.
        fibby = Fibby()

        from theano.tensor.opt import (get_scalar_constant_value,
                                       NotScalarConstantError)

        # Remove any fibby(zeros(...))
        @theano.tensor.opt.register_specialize
        @theano.gof.local_optimizer([fibby])
        def fibby_of_zero(node):
            if node.op == fibby:
                x = node.inputs[0]
                try:
                    if numpy.all(0 == get_scalar_constant_value(x)):
                        return [x]
                except NotScalarConstantError:
                    pass

        # Test it does not apply when not needed
        x = T.dvector()
        f = function([x], fibby(x))
        # theano.printing.debugprint(f)

        # We call the function to make sure it runs.
        # If you run in DebugMode, it will compare the C and Python outputs.
        f(numpy.random.rand(5))
        topo = f.maker.fgraph.toposort()
        assert len(topo) == 1
        assert isinstance(topo[0].op, Fibby)

        # Test that the optimization gets applied.
        f_zero = function([], fibby(T.zeros([5])))
        # theano.printing.debugprint(f_zero)

        # If you run in DebugMode, it will compare the output before
        # and after the optimization.
        f_zero()

        # Check that the optimization removes the Fibby Op.
        # For security, the Theano memory interface ensures that the output
        # of the function is always memory not aliased to the input.
        # That is why there is a DeepCopyOp op.
        topo = f_zero.maker.fgraph.toposort()
        assert len(topo) == 1
        assert isinstance(topo[0].op, theano.compile.ops.DeepCopyOp)
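The `fibby_of_zero` optimization above is valid because a fibby-like Op maps a zero vector to a zero vector. A pure-NumPy stand-in makes that concrete; note the recurrence below is an assumption for illustration (the real definition lives in `doc/extending/fibby.txt` and the `Fibby` class), not the Op's confirmed code.

```python
import numpy

def fibby_py(x):
    # Assumed fibby-like recurrence: the first two entries are copied from
    # the input, then each entry combines the two previous outputs with the
    # current input.
    y = x.copy()
    for i in range(2, len(x)):
        y[i] = y[i - 1] * y[i - 2] + x[i]
    return y

# On an all-zero input every term of the recurrence is zero, so the
# output is the input unchanged: fibby(zeros(...)) can be rewritten to
# the zeros themselves, which is exactly what fibby_of_zero does.
print(fibby_py(numpy.zeros(5)))
```

The zero-preservation property holds for any recurrence of this shape, which is all the optimization relies on.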
class T_graphstructures(unittest.TestCase):
    # All tests here belong to
    # http://deeplearning.net/software/theano/extending/graphstructures.html
    # Theano/doc/extending/graphstructures.txt
    # Any change you do here also add it to the tutorial !

    def test_graphstructures_1(self):
        x = T.dmatrix('x')
        y = T.dmatrix('y')
        z = x + y

        x = T.matrix('x')
        y = T.matrix('y')
        z = T.matrix('z')
        # create 2 Variables (one for 'e', one intermediate for y*z)
        # create 2 Apply instances (one for '+', one for '*')
        e = x + y * z

        from theano.tensor import add, mul, Apply, Variable, TensorType

        # Instantiate a type that represents a matrix of doubles
        float64_matrix = TensorType(dtype='float64',               # double
                                    broadcastable=(False, False))  # matrix

        # We make the Variable instances we need.
        x = Variable(type=float64_matrix, name='x')
        y = Variable(type=float64_matrix, name='y')
        z = Variable(type=float64_matrix, name='z')

        # This is the Variable that we want to symbolically represent y*z
        mul_variable = Variable(type=float64_matrix)
        assert mul_variable.owner is None

        # Instantiate a symbolic multiplication
        node_mul = Apply(op=mul,
                         inputs=[y, z],
                         outputs=[mul_variable])
        # Fields 'owner' and 'index' are set by Apply
        assert mul_variable.owner is node_mul
        # 'index' is the position of mul_variable in node_mul's outputs
        assert mul_variable.index == 0

        # This is the Variable that we want to symbolically represent x+(y*z)
        add_variable = Variable(type=float64_matrix)
        assert add_variable.owner is None

        # Instantiate a symbolic addition
        node_add = Apply(op=add,
                         inputs=[x, mul_variable],
                         outputs=[add_variable])
        # Fields 'owner' and 'index' are set by Apply
        assert add_variable.owner is node_add
        assert add_variable.index == 0

        e = add_variable

        # We have access to x, y and z through pointers
        assert e.owner.inputs[0] is x
        assert e.owner.inputs[1] is mul_variable
        assert e.owner.inputs[1].owner.inputs[0] is y
        assert e.owner.inputs[1].owner.inputs[1] is z
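The owner/index wiring that the assertions above exercise can be mocked in a few lines of plain Python: an Apply node records its inputs and installs itself as the `owner` of each of its outputs. The two classes below are a standalone sketch, not Theano's `Variable`/`Apply` implementations.

```python
class Variable:
    def __init__(self, name=None):
        self.name = name
        self.owner = None   # the Apply node that produced this Variable, if any
        self.index = None   # this Variable's position in owner.outputs

class Apply:
    def __init__(self, op, inputs, outputs):
        self.op = op
        self.inputs = inputs
        self.outputs = outputs
        # Wire each output back to this node, as Theano's Apply does.
        for i, out in enumerate(outputs):
            out.owner = self
            out.index = i

# Build the same x + (y * z) graph as the test above.
x, y, z = Variable('x'), Variable('y'), Variable('z')
mul_variable = Variable()
node_mul = Apply('mul', [y, z], [mul_variable])
add_variable = Variable()
node_add = Apply('add', [x, mul_variable], [add_variable])

e = add_variable
# Walking owner.inputs recovers the operands, exactly as in the test.
print(e.owner.inputs[1].owner.inputs[0] is y)  # True
```

Any graph traversal (toposort, optimization, printing) reduces to following these `owner` and `inputs` pointers.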
class T_scan(unittest.TestCase):
    # All tests here belong to
    # http://deeplearning.net/software/theano/tutorial/loop.html

    ...
    ...