Commit c4f320ad authored by Frederic

Fix doc warning/error.

Parent 1a4f4f6a
@@ -13,7 +13,7 @@
 .. toctree::
     :maxdepth: 1

-    fgraph
+    fg
     toolbox
     type
......
@@ -145,32 +145,32 @@ downcast** of the latter.
 .. code-block:: python

     import numpy as np
     import theano
     import theano.tensor as T

     up_to = T.iscalar("up_to")

     # define a named function, rather than using lambda
     def accumulate_by_adding(arange_val, sum_to_date):
         return sum_to_date + arange_val
     seq = T.arange(up_to)

     # An unauthorized implicit downcast from the dtype of 'seq', to that of
     # 'T.as_tensor_variable(0)' which is of dtype 'int8' by default would occur
     # if this instruction were to be used instead of the next one:
     # outputs_info = T.as_tensor_variable(0)

     outputs_info = T.as_tensor_variable(np.asarray(0, seq.dtype))
     scan_result, scan_updates = theano.scan(fn=accumulate_by_adding,
                                             outputs_info=outputs_info,
                                             sequences=seq)
     triangular_sequence = theano.function(inputs=[up_to], outputs=scan_result)

     # test
     some_num = 15
     print triangular_sequence(some_num)
     print [n * (n + 1) // 2 for n in xrange(some_num)]

 Another simple example
......
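The dtype point made in the hunk above can be illustrated without Theano: the accumulator's initial value should carry the sequence's dtype, otherwise (in Theano, where a plain constant 0 defaults to int8) the running sum is silently downcast. A minimal NumPy sketch of the same triangular-number computation (my own illustration, not part of the commit):

```python
import numpy as np

# Mirror seq = T.arange(up_to): an integer sequence with an explicit dtype.
seq = np.arange(15, dtype=np.int64)

# Match the accumulator's dtype to the sequence, as the patched doc recommends.
outputs_info = np.asarray(0, dtype=seq.dtype)

# The scan in the doc computes a running sum of 'seq': the triangular numbers.
triangular = outputs_info + np.cumsum(seq)

# Closed form for the same sequence, as in the doc's final print statement.
expected = np.array([n * (n + 1) // 2 for n in range(15)])
assert triangular.dtype == seq.dtype
assert (triangular == expected).all()
```

In NumPy a bare `np.asarray(0)` already defaults to a wide integer type, so the downcast does not bite here; the explicit `dtype=seq.dtype` reproduces the defensive pattern the documentation fix is describing.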
@@ -186,6 +186,4 @@ Modify and execute this code to compute this expression: a**2 + b**2 + 2*a*b.
 .. TODO: repair this link
-:download:`Solution<../adding_solution_1.py>`
+:download:`Solution<adding_solution_1.py>`
--------------------------------------------
@@ -145,7 +145,7 @@ The ``compute_test_value`` mechanism works as follows:
 "How do I Print an Intermediate Value in a Function/Method?"
-----------------------------------------------------------
+------------------------------------------------------------
 Theano provides a 'Print' op to do this.
......
@@ -314,7 +314,7 @@ Here's a brief example. The setup code is:
 Here, 'rv_u' represents a random stream of 2x2 matrices of draws from a uniform
 distribution. Likewise, 'rv_n' represents a random stream of 2x2 matrices of
 draws from a normal distribution. The distributions that are implemented are
-defined in :class:`RandomStreams` and, at a lower level, in :ref:`raw_random<_libdoc_tensor_raw_random>`.
+defined in :class:`RandomStreams` and, at a lower level, in :ref:`raw_random<libdoc_tensor_raw_random>`.
 .. TODO: repair the latter reference on RandomStreams
......
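The seeded-stream behaviour the hunk above describes can be sketched with NumPy's `Generator` (an analogy I am adding here, not Theano's `RandomStreams` API): each call yields a fresh 2x2 matrix of draws, and reseeding reproduces the stream exactly.

```python
import numpy as np

# Stand-in for a seeded RandomStreams object: a NumPy generator with a fixed seed.
rng = np.random.default_rng(234)

rv_u = rng.uniform(size=(2, 2))  # one 2x2 matrix of uniform draws
rv_n = rng.normal(size=(2, 2))   # one 2x2 matrix of normal draws

# A second generator with the same seed replays the same stream of draws.
rng2 = np.random.default_rng(234)
assert np.allclose(rng2.uniform(size=(2, 2)), rv_u)
```

The seed `234` is arbitrary; the point is that the stream, not any single draw, is what the seed determines.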
@@ -38,7 +38,7 @@ We try to list in this `wiki page <https://github.com/Theano/Theano/wiki/Related
 "What are Theano's Limitations?"
--------------------------------
+--------------------------------
 Theano offers a good amount of flexibility, but has some limitations too.
 You must answer for yourself the following question: How can my algorithm be cleverly written
......
@@ -74,7 +74,7 @@ CudaNdarrays. Here is an example from the file ``theano/misc/tests/test_pycuda_t
 Theano Op using a PyCUDA function
--------------------------------
+---------------------------------
 You can use a GPU function compiled with PyCUDA in a Theano op:
......
@@ -96,7 +96,4 @@ Modify and execute the polynomial example to have the reduction done by ``scan``
 .. TODO: repair this link as well as the code in the target file
-:download:`Solution<../loop_solution_1.py>`
+:download:`Solution<loop_solution_1.py>`
--------------------------------------------
@@ -129,7 +129,7 @@ as it will be useful later on.
 .. TODO: repair this link
-:download:`Solution<../modes_solution_1.py>`
+:download:`Solution<modes_solution_1.py>`
 -------------------------------------------
......
@@ -391,7 +391,7 @@ What can be done to further increase the speed of the GPU version? Put your ideas
 .. TODO: repair this link
-:download:`Solution<../using_gpu_solution_1.py>`
+:download:`Solution<using_gpu_solution_1.py>`
 -------------------------------------------
@@ -608,8 +608,6 @@ have to be jointly optimized explicitly in the code.)
 Modify and execute to support *stride* (i.e. so as not constrain the input to be *C-contiguous*).
--------------------------------------------
......
@@ -530,7 +530,7 @@ def pydotprint(fct, outfile=None,
         blue boxes are outputs variables of the graph
         grey boxes are variables that are not outputs and are not used
         red ellipses are transfers from/to the gpu (ops with names GpuFromHost,
         HostFromGpu)
         """
     if colorCodes is None:
......