Commit 6bdb5854 authored by Olivier Delalleau

Merge pull request #142 from nouiz/op_doc_merge

Op doc merge
@@ -10,8 +10,8 @@ Theano graphs

- Theano works with symbolic graphs
- Those graphs are bi-partite graphs (graphs with 2 types of nodes)
- The 2 types of nodes are Apply and Variable nodes
- Each Apply node has a link to the Op that it executes

Inputs and Outputs are lists of Theano variables
@@ -28,25 +28,42 @@ Op contract

class MyOp(theano.Op):
    def make_node(self, *inputs):
        pass
    def __eq__(self, other):
        pass
    def __hash__(self):
        pass
    def __str__(self):
        pass
    # Python implementation:
    def perform(self, node, inputs_storage, output_storage):
        pass
    # C implementation: [see theano web site for other functions]
    def c_code(...):
        # ...
        pass
    # other implementations (pycuda, ...):
    def make_thunk(self, node, storage_map, _, _2):
        pass
    # optional:
    def __init__(self, ...):
        pass
    def grad(self, inputs, g):
        pass
    def R_op(self, inputs, eval_points):
        pass
    def infer_shape(node, (i0_shapes, ...)):
        pass

.. ../extending/op.txt
@@ -78,9 +95,11 @@ This could be helpful if one only needs the shape of the output instead of the a

The :func:`grad` method is required if you want to differentiate some cost whose expression
includes your op.

The :func:`__str__` method is useful in order to provide a more meaningful
string representation of your Op.

The :func:`R_op` method is needed if you want `theano.tensor.Rop` to
work with your op.
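As an illustration outside this diff (a sketch, not Theano code): for a hypothetical op computing ``f(x) = 2 * x``, the R-operator applies the Jacobian of ``f`` to an evaluation point ``v``. Since the Jacobian is ``2 * I``, the result is simply ``2 * v``:

```python
import numpy as np

# Hypothetical illustration: for an op computing f(x) = 2 * x,
# R_op(x, v) returns the Jacobian of f at x applied to v.
# The Jacobian of f is 2 * I, so the result is simply 2 * v.
def double_rop(eval_point):
    v = np.asarray(eval_point, dtype=float)
    return 2.0 * v

print(double_rop([1.0, 3.0]))  # -> [2. 6.]
```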
Op example
----------

@@ -92,13 +111,17 @@ Op example

class DoubleOp(theano.Op):
    def __eq__(self, other):
        return type(self) == type(other)

    def __hash__(self):
        return hash(type(self))

    def __str__(self):
        return self.__class__.__name__

    def make_node(self, x):
        x = theano.tensor.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        x = inputs[0]
        z = output_storage[0]
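The body of ``perform`` is cut off by this hunk. A numpy-only sketch of the storage convention it follows (my reading of the intent, not the exact code from the file): ``output_storage[0]`` is a one-element cell, and ``perform`` writes its result into ``z[0]``:

```python
import numpy as np

# Sketch (assumption): mimic DoubleOp.perform outside of Theano.
# inputs is a list of input values; output_storage is a list of
# one-element cells that perform must fill in place.
def perform_double(inputs, output_storage):
    x = inputs[0]
    z = output_storage[0]
    z[0] = np.asarray(x) * 2  # DoubleOp returns twice its input

storage = [[None]]
perform_double([np.array([1.0, 2.0, 3.0])], storage)
print(storage[0][0])  # -> [2. 4. 6.]
```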
@@ -123,7 +146,7 @@ Exercises 8

- Modify and execute to compute: x * y
- Modify and execute the example to return 2 outputs: x + y and x - y
- Our current element-wise fusion generates computation with only 1 output.
@@ -152,7 +152,11 @@ following methods:

of each input as symbolic variables (one per dimension).
The function should return a list with one tuple for each output.
Each tuple should contain the corresponding output's computed shape.
Implementing this method will allow Theano to compute the output's
shape without computing the output itself, potentially sparing you
a costly recomputation.
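For an element-wise op like the ``DoubleOp`` shown earlier, ``infer_shape`` is trivial: the output shape equals the input shape. A minimal sketch (plain Python, outside Theano; the ``node`` argument is unused here):

```python
# Sketch (assumption): infer_shape for an element-wise op.
# input_shapes holds one shape tuple per input; the method
# returns a list with one shape tuple per output.
def infer_shape(node, input_shapes):
    return [input_shapes[0]]  # output has the same shape as the input

print(infer_shape(None, [(2000, 2000)]))  # -> [(2000, 2000)]
```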
.. function:: make_thunk(node, storage_map, compute_map, no_recycling)

@@ -208,14 +212,16 @@ following methods:

*Default:* python default: module_path_to_your_class.CLASSNAME

This allows you to specify a more informative string representation of your
Op. If an Op has parameters, it is highly recommended to have the
``__str__`` method include the name of the op and the Op's parameters'
values.

At a bare minimum, a new Op must define ``make_node`` and ``perform``, which
have no defaults.

You can also provide a :ref:`C implementation <cop>` of
``perform()``. For more details, refer to the documentation for
:ref:`op`.
......
@@ -187,7 +187,7 @@ class SharedVariable(Variable):

msg = ('an object of type: %s. Did you forget to cast it into '
       'a Numpy array before calling theano.shared()?' %
       type(value))
raise TypeError(
    "The generic 'SharedVariable' object is not subscriptable. "
    "This shared variable contains %s" % msg)
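The two-step message construction above can be exercised on its own; a minimal sketch, using a plain Python list as a hypothetical bad value:

```python
value = [1, 2, 3]  # a plain Python list, not a numpy array
msg = ('an object of type: %s. Did you forget to cast it into '
       'a Numpy array before calling theano.shared()?' % type(value))
error_text = ("The generic 'SharedVariable' object is not subscriptable. "
              "This shared variable contains %s" % msg)
print(error_text)
```

The resulting text names the offending type, which is the point of the patch: the error tells the user what they actually stored.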
......
@@ -92,9 +92,9 @@ if __name__ == "__main__":

if verbose:
    print """
Some results that you can compare against. They were 10 executions of gemm in float64 with matrices of shape 2000x2000.
CPU tested: Xeon E5345(2.33Ghz, 8M L2 cache, 1333Mhz FSB), Xeon E5430(2.66Ghz, 12M L2 cache, 1333Mhz FSB),
Xeon E5450(3Ghz, 12M L2 cache, 1333Mhz FSB), Xeon X5560(2.8Ghz, 12M L2 cache, 6.4GT/s QPI, hyper-threads enabled?)
Core 2 E8500, Core i7 930(2.8Ghz, hyper-threads enabled), Core i7 950(3.07GHz, hyper-threads enabled)
Xeon X5550(2.67GHz, 8M l2 cache?, hyper-threads enabled)
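The benchmark loop itself is not shown in this hunk; a numpy-only sketch of the measurement the quoted numbers describe (10 gemm calls in float64; the 200x200 size here is deliberately much smaller than the 2000x2000 used above, so the timings are not comparable):

```python
import time
import numpy as np

n = 200  # the original benchmark used 2000x2000 matrices
a = np.random.rand(n, n)  # float64 by default
b = np.random.rand(n, n)

t0 = time.time()
for _ in range(10):  # 10 executions of gemm, as in the quoted numbers
    c = np.dot(a, b)
elapsed = time.time() - t0
print("10 gemm calls on %dx%d float64 matrices: %.4f s" % (n, n, elapsed))
```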
......