Commit 40a39f61 authored by David Warde-Farley

Spelling/grammar fixes.

Parent 9501ca28
@@ -10,8 +10,8 @@ Theano graphs
 - Theano works with symbolic graphs
 - Those graphs are bi-partite graphs (graph with 2 types of nodes)
-- Those 2 nodes types are Apply and Variable nodes
-- Apply node have a link to the Op that it execute
+- The 2 types of nodes are Apply and Variable nodes
+- Each Apply node has a link to the Op that it executes
 Inputs and Outputs are lists of Theano variables
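A minimal sketch of how these two node types can be inspected (standard Theano API; the variable names are illustrative)::

    import theano.tensor as T

    x = T.dmatrix('x')
    y = x * 2
    # `y` is a Variable node; the Apply node that produced it is `y.owner`
    app = y.owner
    print(app.op)       # the Op this Apply node executes
    print(app.inputs)   # list of input Variable nodes
    print(app.outputs)  # list of output Variable nodes (contains y)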
@@ -50,35 +50,36 @@ Op contract
 .. ../extending/op.txt
-There is 2 mandatory function. The first is :func:`make_node`. The
-second is the one that do/tell the computation to do at run
-time. Currently you have 4 posibility: implement the :func:`perform`
-and/or :func:`c_code <Op.c_code>` (and other related :ref:`c functions
-<cop>`), or the :func:`make_thunk` function. The ``perform`` allow you
-to easily wrap an existing python function in Theano. The ``c_code``
-and related function allow you to have your op generate c code and
-have Theano compile and link to it. The ``make_thunk`` function will
+There are 2 mandatory methods. The first is :func:`make_node`. The
+second is the one that expresses what computation should be done at run
+time. Currently you have 4 possibilities: implement the :func:`perform`
+and/or :func:`c_code <Op.c_code>` (and other related :ref:`C functions
+<cop>`), or the :func:`make_thunk` method. The ``perform`` method allows you
+to easily wrap an existing Python function in Theano. The ``c_code``
+and related methods allow you to have your op generate C code and
+have Theano compile and link to it. The ``make_thunk`` method will
 be called during compilation and should generate a ``thunk``: a
-function that when called will do the wanted computation. This is
-usefull if you want to generate code and compile it yourself. For
-example, this allow you to use PyCUDA to compile gpu code.
-There is 2 mandatory/highly suggested function. They are needed to for a basic
-optimization that merge duplicate computation in a Theano function. So
-if you don't want Theano to do you computation multiple time for no
-good reason, implement them! Those function are :func:`__eq__` and
+method that when called will do the desired computation. This is
+useful if you want to generate code and compile it yourself. For
+example, this allows you to use PyCUDA to compile GPU code.
+There are 2 mandatory/highly recommended methods. They are needed for a basic
+optimization that merges duplicate computations in a Theano function. Thus,
+if you don't want Theano to perform your computations multiple times for no
+good reason, implement these! Those methods are :func:`__eq__` and
 :func:`__hash__`.
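To make this contract concrete, here is a minimal sketch of an Op implementing ``make_node``, ``perform``, ``__eq__`` and ``__hash__``. The Op itself (a ``DoubleOp`` computing ``2 * x``) is hypothetical; the method signatures are Theano's::

    import theano
    import theano.tensor as T
    from theano.gof import Op, Apply

    class DoubleOp(Op):
        """Hypothetical Op that computes 2 * x."""

        def make_node(self, x):
            x = T.as_tensor_variable(x)
            # the Apply node links this Op to its input/output Variables
            return Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            x, = inputs
            # output_storage[i] is a one-element list to write output i into
            output_storage[0][0] = x * 2

        # __eq__ and __hash__ let the merge optimization recognize that
        # two instances of this Op do exactly the same computation
        def __eq__(self, other):
            return type(self) == type(other)

        def __hash__(self):
            return hash(type(self))

It can then be used like any other Op, e.g. ``f = theano.function([x], DoubleOp()(x))`` for a matrix variable ``x``.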
-The :func:`infer_shape` method allow some very interesting
-optimization like don't performing the computation of your op just to
-take the shape your Op's output.
+The :func:`infer_shape` method allows for some very interesting
+optimizations, such as not performing your op's computations simply to
+determine the shape of your Op's output.
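As a concrete illustration (a sketch reusing the hypothetical ``DoubleOp`` above, extended with the ``infer_shape`` method sketched further below)::

    x = T.matrix('x')
    # Only the shape of the result is requested.  With infer_shape
    # defined, the shape optimization answers this symbolically, so
    # DoubleOp.perform() is never executed.
    f = theano.function([x], DoubleOp()(x).shape)
    print(f([[1.0, 2.0], [3.0, 4.0]]))   # -> [2 2]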
-The :func:`grad` method is needed you want want differentiation to
-work with your op.
+The :func:`grad` method is needed if you want symbolic differentiation to
+work with your Op.
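For the hypothetical ``DoubleOp`` above, whose output is ``2 * x``, a minimal sketch is::

    def grad(self, inputs, output_grads):
        # d(2 * x)/dx is 2, so scale the incoming gradient by 2
        return [output_grads[0] * 2]

With this method defined, ``T.grad`` can differentiate through the Op.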
-The :func:`__str__` is usefull to have a better printing of you op.
+The :func:`__str__` is useful in order to provide a more meaningful string
+representation of your Op.
-The :func:`R_op` is needed if you want theano.tensor.Rop to work with your op.
+The :func:`R_op` is needed if you want `theano.tensor.Rop` to work with your op.
 Op example
 ----------
@@ -121,7 +122,7 @@ Exercises 8
 - Modify and execute to compute: x * y
 - Modify and execute the example to return 2 outputs: x + y and x - y
-- Our current elemwise fusion generate computation with only 1 outputs
+- Our current element-wise fusion generates computation with only 1 output.
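One possible sketch of the two-output exercise above (the class name is hypothetical, and the ``Op``/``Apply`` imports from the earlier sketch are reused)::

    class SumDiffOp(Op):
        def make_node(self, x, y):
            x = T.as_tensor_variable(x)
            y = T.as_tensor_variable(y)
            # two output Variables: one for x + y, one for x - y
            return Apply(self, [x, y], [x.type(), x.type()])

        def perform(self, node, inputs, output_storage):
            x, y = inputs
            output_storage[0][0] = x + y
            output_storage[1][0] = x - y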
@@ -141,17 +141,17 @@ following methods:
 Optional.
-This function is needed for shape optimization. ``shapes`` is a
-list with one tuple for each input the Apply node linked to this op
-have. Each tuple contain 1 element for each dimensions of the
-corresponding inputs. The value is the the corresponding
-dimensions shape of the corresponding inputs.
+This method is needed for shape optimization. ``shapes`` is a
+list with one tuple for each input to the Apply node linked to this Op.
+Each tuple contains 1 element for each dimension of the
+corresponding input. The value corresponds to the input's size
+along the given dimension.
-This sound complicated, but this is just the corresponding inputs
-shape in symbolic variable.
+This sounds complicated, but this is just the corresponding input's
+shape in a symbolic variable.
 The function should return a list with one tuple for each output.
-Each tuple should contain the corresponding output's shape.
+Each tuple should contain the corresponding output's computed shape.
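For an Op whose output has the same shape as its single input (such as the hypothetical ``DoubleOp`` above), a minimal sketch is::

    def infer_shape(self, node, shapes):
        # `shapes` holds one tuple per input; each tuple holds one
        # symbolic scalar per dimension of that input
        input_shape, = shapes
        # return one tuple per output; here the output's shape is
        # exactly the input's shape
        return [input_shape]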
 .. function:: make_thunk(node, storage_map, compute_map, no_recycling)
@@ -186,14 +186,16 @@ following methods:
 *Default:* python default: module_path_to_your_class.CLASSNAME
-This allow you to have a better printing of Op. If an Op have parameter
-it is highly recommented that it make the ``__str__`` function
-print the name of the op and the Op's parameters values.
+This allows you to specify a more informative string representation of your
+Op. If an Op has parameters, it is highly recommended to have the
+``__str__`` method include the name of the Op and its parameters'
+values.
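For example, for a hypothetical Op carrying an ``axis`` parameter::

    def __str__(self):
        return "%s{axis=%s}" % (self.__class__.__name__, self.axis)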
-At a bare minimum, a new Op must define ``make_node`` and ``perform``, which have no defaults.
+At a bare minimum, a new Op must define ``make_node`` and ``perform``, which
+have no defaults.
-Also you can provide a :ref:`C implementation <cop>` of
-``perform()``. For other details refer to the documentation for
+You can also provide a :ref:`C implementation <cop>` of
+``perform()``. For more details, refer to the documentation for
 :ref:`op`.
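A heavily hedged sketch of what the C path looks like for the hypothetical ``DoubleOp`` (the ``c_code`` signature and ``sub['fail']`` are Theano's; the C body is illustrative and omits some reference-count care)::

    def c_code(self, node, name, input_names, output_names, sub):
        x, = input_names    # name of the C variable holding the input
        z, = output_names   # name of the C variable for the output
        fail = sub['fail']  # code Theano runs when an error occurred
        # For TensorType variables, %(x)s and %(z)s are PyArrayObject*.
        return """
            Py_XDECREF(%(z)s);
            %(z)s = (PyArrayObject*)PyNumber_Multiply(
                        (PyObject*)%(x)s, PyFloat_FromDouble(2.0));
            if (%(z)s == NULL) %(fail)s;
        """ % locals()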
@@ -92,9 +92,9 @@ if __name__ == "__main__":
 if verbose:
 print """
-Some result that you can compare again. They where 10 executions of gemm in float64 with matrix of shape 2000x2000.
+Some results that you can compare against. They were 10 executions of gemm in float64 with matrices of shape 2000x2000.
-Cpu tested: Xeon E5345(2.33Ghz, 8M L2 cache, 1333Mhz FSB), Xeon E5430(2.66Ghz, 12M L2 cache, 1333Mhz FSB),
+CPU tested: Xeon E5345(2.33Ghz, 8M L2 cache, 1333Mhz FSB), Xeon E5430(2.66Ghz, 12M L2 cache, 1333Mhz FSB),
 Xeon E5450(3Ghz, 12M L2 cache, 1333Mhz FSB), Xeon X5560(2.8Ghz, 12M L2 cache, 6.4GT/s QPI, hyper-threads enabled?)
 Core 2 E8500, Core i7 930(2.8Ghz, hyper-threads enabled), Core i7 950(3.07GHz, hyper-threads enabled)
 Xeon X5550(2.67GHz, 8M l2 cache?, hyper-threads enabled)