testgroup / pytensor / Commits / 6bdb5854

Commit 6bdb5854, authored Oct 21, 2011 by Olivier Delalleau

    Merge pull request #142 from nouiz/op_doc_merge

    Op doc merge

Parents: ce7593d8, 544edca7

Showing 4 changed files with 44 additions and 15 deletions:

    doc/cifarSC2011/extending_theano.txt   +28  -5
    doc/extending/op.txt                   +13  -7
    theano/compile/sharedvalue.py           +1  -1
    theano/misc/check_blas.py               +2  -2
doc/cifarSC2011/extending_theano.txt

@@ -10,8 +10,8 @@ Theano graphs

 - Theano works with symbolic graphs
 - Those graphs are bi-partite graphs (graph with 2 types of nodes)
-- The 2 types are Apply nodes and Variable nodes
+- The 2 types of nodes are Apply and Variable nodes
-  - Apply nodes have a link to the Op they execute
+  - Each Apply node has a link to the Op that it executes
 - Inputs and Outputs are lists of Theano variables
@@ -28,25 +28,42 @@ Op contract

     class MyOp(theano.Op):
         def make_node(self, *inputs):
+            pass
         def __eq__(self, other):
+            pass
         def __hash__(self):
+            pass
         def __str__(self):
+            pass
         # Python implementation:
         def perform(self, node, inputs_storage, output_storage):
+            pass
         # C implementation: [see theano web site for other functions]
         def c_code(...):
             # ...
+            pass
         # others implementation (pycuda, ...):
         def make_thunk(self, node, storage_map, _, _2):
+            pass
         # optional:
         def __init__(self, ...):
+            pass
         def grad(self, inputs, g):
+            pass
         def R_op(self, inputs, eval_points):
+            pass
         def infer_shape(node, (i0_shapes, ...))
+            pass

 .. ../extending/op.txt
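The contract above leans on ``__eq__`` and ``__hash__``: two Op instances that compare (and hash) equal are treated as interchangeable, which is what lets the graph optimizer merge duplicate applications. A minimal sketch of that part of the contract, in plain Python with no Theano import (the class name is illustrative only):

```python
# Sketch of the Op equality contract, without Theano.
# A parameter-free Op compares equal by type, so duplicate
# applications of the same Op can be de-duplicated (merged).

class MyOp:
    def __eq__(self, other):
        return type(self) == type(other)

    def __hash__(self):
        return hash(type(self))

    def __str__(self):
        return self.__class__.__name__

a, b = MyOp(), MyOp()
print(a == b)        # distinct instances are interchangeable
print(len({a, b}))   # they collapse to a single entry in a set
```

If the Op took parameters, ``__eq__`` and ``__hash__`` would have to take those parameters into account so that only genuinely identical computations are merged.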
@@ -78,9 +95,11 @@ This could be helpful if one only needs the shape of the output instead of the a

 The :func:`grad` method is required if you want to differentiate some cost whose expression
 includes your op.
-The :func:`__str__` is usefull to generate a better name for your op when printing.
+The :func:`__str__` method is useful in order to provide a more meaningful
+string representation of your Op.
-The :func:`R_op` is needed if you want theano.tensor.Rop to work with your op.
+The :func:`R_op` method is needed if you want `theano.tensor.Rop` to
+work with your op.

 Op example
 ----------
@@ -92,13 +111,17 @@ Op example

     class DoubleOp(theano.Op):
         def __eq__(self, other):
             return type(self) == type(other)
         def __hash__(self):
             return hash(type(self))
         def __str__(self):
             return self.__class__.__name__
         def make_node(self, x):
             x = theano.tensor.as_tensor_variable(x)
             return theano.Apply(self, [x], [x.type()])
         def perform(self, node, inputs, output_storage):
             x = inputs[0]
             z = output_storage[0]
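The hunt above cuts off mid-method at ``z = output_storage[0]``. In the ``perform()`` convention, ``inputs`` is a list of computed input values and ``output_storage`` is a list of single-element lists to be filled in, so for a doubling Op the body presumably continues with ``z[0] = x * 2``. A Theano-free sketch of that storage convention:

```python
# Sketch of the perform() storage convention, without Theano:
# inputs holds the computed input values; output_storage holds
# one single-element list per output that perform() must fill.

def perform(node, inputs, output_storage):
    x = inputs[0]
    z = output_storage[0]
    z[0] = x * 2  # DoubleOp: the output is twice the input

storage = [[None]]
perform(None, [21], storage)
print(storage[0][0])  # -> 42
```

Writing into ``z[0]`` rather than returning a value lets Theano reuse (recycle) previously allocated output buffers between calls.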
@@ -123,7 +146,7 @@ Exercises 8

 - Modify and execute to compute: x * y
 - Modify and execute the example to return 2 outputs: x + y and x - y
-- Our current elemwise fusion generate computation with only 1 outputs
+- Our current element-wise fusion generates computation with only 1 output.
doc/extending/op.txt

@@ -152,7 +152,11 @@ following methods:

     of each input as symbolic variables (one per dimension).
     The function should return a list with one tuple for each output.
-    Each tuple should contain the corresponding output's shape.
+    Each tuple should contain the corresponding output's computed shape.
+    Implementing this method will allow Theano to compute the output's
+    shape without computing the output itself, potentially sparing you
+    a costly recomputation.

 .. function:: make_thunk(node, storage_map, compute_map, no_recycling)
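For an elementwise Op such as the DoubleOp above, the output shape simply equals the input shape, so ``infer_shape`` can forward it unchanged. A hedged sketch of the return convention the new text describes (one shape tuple per output), again without importing Theano:

```python
# Sketch of the infer_shape return convention: given a list with
# one shape tuple per input, return a list with one shape tuple
# per output. For an elementwise op, output shape == input shape.

def infer_shape(node, input_shapes):
    (x_shape,) = input_shapes
    return [x_shape]

print(infer_shape(None, [(2000, 2000)]))  # -> [(2000, 2000)]
```

This is what lets Theano answer a query like ``f(x).shape`` symbolically, sparing the full computation of ``f(x)``.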
@@ -208,14 +212,16 @@ following methods:

     *Default:* python default: module_path_to_your_class.CLASSNAME

-    This allows for better printing of the Op. If the Op parameterizable, it is highly
-    recommended to implement this method, showing the value of the different parameters
-    in the current instance's name.
+    This allows you to specify a more informative string representation of your
+    Op. If an Op has parameters, it is highly recommended to have the
+    ``__str__`` method include the name of the op and the Op's parameters'
+    values.

-At a bare minimum, a new Op must define ``make_node`` and ``perform``, which have no defaults.
+At a bare minimum, a new Op must define ``make_node`` and ``perform``, which
+have no defaults.
-Also you can provide a :ref:`C implementation <cop>` of
-``perform()``. For other details refer to the documentation for
-:ref:`op`.
+You can also provide a :ref:`C implementation <cop>` of
+``perform()``. For more details, refer to the documentation for
+:ref:`op`.
theano/compile/sharedvalue.py

@@ -187,7 +187,7 @@ class SharedVariable(Variable):

             msg = ('an object of type: %s. Did you forget to cast it into '
                    'a Numpy array before calling theano.shared()?' %
                    type(value))
         raise TypeError(
             "The generic 'SharedVariable' object is not subscriptable. "
             "This shared variable contains %s" % msg)
theano/misc/check_blas.py

@@ -92,9 +92,9 @@ if __name__ == "__main__":

     if verbose:
         print """
-        Some result that you can compare again. They where 10 executions of gemm in float64 with matrix of shape 2000x2000.
+        Some results that you can compare against. They were 10 executions of gemm in float64 with matrices of shape 2000x2000.
-        Cpu tested: Xeon E5345(2.33Ghz, 8M L2 cache, 1333Mhz FSB), Xeon E5430(2.66Ghz, 12M L2 cache, 1333Mhz FSB),
+        CPU tested: Xeon E5345(2.33Ghz, 8M L2 cache, 1333Mhz FSB), Xeon E5430(2.66Ghz, 12M L2 cache, 1333Mhz FSB),
         Xeon E5450(3Ghz, 12M L2 cache, 1333Mhz FSB), Xeon X5560(2.8Ghz, 12M L2 cache, 6.4GT/s QPI, hyper-threads enabled?)
         Core 2 E8500, Core i7 930(2.8Ghz, hyper-threads enabled), Core i7 950(3.07GHz, hyper-threads enabled)
         Xeon X5550(2.67GHz, 8M l2 cache?, hyper-threads enabled)
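The quoted check_blas output comes from timing repeated gemm calls. The measurement pattern can be sketched in plain Python; the naive triple-loop matmul below is only a stand-in for BLAS dgemm (it is not the script's actual code), and the matrix size is shrunk from 2000x2000 so the sketch runs quickly:

```python
import time

def gemm(a, b):
    """Naive matrix multiply standing in for BLAS dgemm."""
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k))
             for j in range(m)] for i in range(n)]

n = 20  # check_blas uses 2000x2000; shrunk for illustration
a = [[1.0] * n for _ in range(n)]
b = [[2.0] * n for _ in range(n)]

t0 = time.time()
for _ in range(10):  # 10 executions, as in the quoted output
    c = gemm(a, b)
elapsed = time.time() - t0
print("10 gemm calls took %.4f s" % elapsed)
```

Comparing such a wall-clock figure against the reference numbers for known CPUs is how the script helps diagnose a misconfigured or missing BLAS.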