testgroup / pytensor · Commits · c4f320ad

Commit c4f320ad, authored Aug 29, 2012 by Frederic
Parent: 1a4f4f6a

Fix doc warning/error.

Showing 11 changed files with 36 additions and 43 deletions (whitespace changes hidden).
doc/library/gof/index.txt          +1  -1
doc/library/scan.txt               +26 -26
doc/tutorial/adding.txt            +1  -3
doc/tutorial/debug_faq.txt         +1  -1
doc/tutorial/examples.txt          +1  -1
doc/tutorial/faq.txt               +1  -1
doc/tutorial/gpu_data_convert.txt  +1  -1
doc/tutorial/loop.txt              +1  -4
doc/tutorial/modes.txt             +1  -1
doc/tutorial/using_gpu.txt         +1  -3
theano/printing.py                 +1  -1
doc/library/gof/index.txt

@@ -13,7 +13,7 @@
 .. toctree::
     :maxdepth: 1

-    fgraph
+    fg
     toolbox
     type
doc/library/scan.txt

@@ -145,32 +145,32 @@ downcast** of the latter.

.. code-block:: python

    import numpy as np
    import theano
    import theano.tensor as T

    up_to = T.iscalar("up_to")

    # define a named function, rather than using lambda
    def accumulate_by_adding(arange_val, sum_to_date):
        return sum_to_date + arange_val
    seq = T.arange(up_to)

    # An unauthorized implicit downcast from the dtype of 'seq', to that of
    # 'T.as_tensor_variable(0)' which is of dtype 'int8' by default would occur
    # if this instruction were to be used instead of the next one:
    # outputs_info = T.as_tensor_variable(0)

    outputs_info = T.as_tensor_variable(np.asarray(0, seq.dtype))
    scan_result, scan_updates = theano.scan(fn=accumulate_by_adding,
                                            outputs_info=outputs_info,
                                            sequences=seq)
    triangular_sequence = theano.function(inputs=[up_to], outputs=scan_result)

    # test
    some_num = 15
    print triangular_sequence(some_num)
    print [n * (n + 1) // 2 for n in xrange(some_num)]

Another simple example
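The quoted tutorial code is Python 2 Theano. For comparison, the same triangular-number computation can be sketched in plain NumPy (an illustration only, not part of this commit; `triangular_sequence` here is an ordinary function rather than a compiled Theano one):

```python
import numpy as np

def triangular_sequence(up_to):
    # Cumulative sums of 0..up_to-1; equivalent to the scan loop above.
    seq = np.arange(up_to)
    # Accumulate in seq's own dtype, mirroring the tutorial's explicit
    # cast that avoids an implicit downcast of the running sum.
    return np.cumsum(seq, dtype=seq.dtype)

some_num = 15
print(triangular_sequence(some_num).tolist())
print([n * (n + 1) // 2 for n in range(some_num)])
# both print [0, 1, 3, 6, 10, 15, ...]
```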
doc/tutorial/adding.txt

@@ -186,6 +186,4 @@ Modify and execute this code to compute this expression: a**2 + b**2 + 2*a*b.

 .. TODO: repair this link
-:download:`Solution<../adding_solution_1.py>`
-
--------------------------------------------
+:download:`Solution<adding_solution_1.py>`
doc/tutorial/debug_faq.txt

@@ -145,7 +145,7 @@ The ``compute_test_value`` mechanism works as follows:

 "How do I Print an Intermediate Value in a Function/Method?"
-----------------------------------------------------------
+------------------------------------------------------------
 Theano provides a 'Print' op to do this.
doc/tutorial/examples.txt

@@ -314,7 +314,7 @@ Here's a brief example. The setup code is:

 Here, 'rv_u' represents a random stream of 2x2 matrices of draws from a uniform
 distribution. Likewise, 'rv_n' represents a random stream of 2x2 matrices of
 draws from a normal distribution. The distributions that are implemented are
-defined in :class:`RandomStreams` and, at a lower level, in :ref:`raw_random<
-libdoc_tensor_raw_random>`.
+defined in :class:`RandomStreams` and, at a lower level, in :ref:`raw_random<libdoc_tensor_raw_random>`.

 .. TODO: repair the latter reference on RandomStreams
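The context above describes seeded random streams. The analogous setup in plain NumPy looks like the following (an illustration only; the variable names and seed are arbitrary, and this is not the Theano `RandomStreams` API):

```python
import numpy as np

# One seeded generator yielding reproducible streams of 2x2 draws,
# analogous to the tutorial's rv_u (uniform) and rv_n (normal).
rng = np.random.default_rng(seed=234)
rv_u = rng.uniform(size=(2, 2))
rv_n = rng.normal(size=(2, 2))
```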
doc/tutorial/faq.txt

@@ -38,7 +38,7 @@ We try to list in this `wiki page <https://github.com/Theano/Theano/wiki/Related

 "What are Theano's Limitations?"
--------------------------------
+--------------------------------
 Theano offers a good amount of flexibility, but has some limitations too.
 You must answer for yourself the following question: How can my algorithm be cleverly written
doc/tutorial/gpu_data_convert.txt

@@ -74,7 +74,7 @@ CudaNdarrays. Here is an example from the file ``theano/misc/tests/test_pycuda_t

 Theano Op using a PyCUDA function
--------------------------------
+---------------------------------
 You can use a GPU function compiled with PyCUDA in a Theano op:
doc/tutorial/loop.txt

@@ -96,7 +96,4 @@ Modify and execute the polynomial example to have the reduction done by ``scan``

 .. TODO: repair this link as well as the code in the target file
-:download:`Solution<../loop_solution_1.py>`
-
--------------------------------------------
-
+:download:`Solution<loop_solution_1.py>`
doc/tutorial/modes.txt

@@ -129,7 +129,7 @@ as it will be useful later on.

 .. TODO: repair this link
-:download:`Solution<../modes_solution_1.py>`
+:download:`Solution<modes_solution_1.py>`
 -------------------------------------------
doc/tutorial/using_gpu.txt

@@ -391,7 +391,7 @@ What can be done to further increase the speed of the GPU version? Put your idea

 .. TODO: repair this link
-:download:`Solution<../using_gpu_solution_1.py>`
+:download:`Solution<using_gpu_solution_1.py>`
 -------------------------------------------

@@ -608,8 +608,6 @@ have to be jointly optimized explicitly in the code.)

 Modify and execute to support *stride* (i.e. so as not constrain the input to be *C-contiguous*).
-
--------------------------------------------
theano/printing.py

@@ -530,7 +530,7 @@ def pydotprint(fct, outfile=None,

        blue boxes are outputs variables of the graph
        grey boxes are variables that are not outputs and are not used
        red ellipses are transfers from/to the gpu (ops with names GpuFromHost,
        HostFromGpu)

    """
    if colorCodes is None:
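The `if colorCodes is None:` line in the context above is the standard idiom for giving a parameter a fresh mutable default on each call. A minimal standalone sketch (the function and color names here are hypothetical, loosely following the docstring's color scheme, and are not Theano's actual implementation):

```python
def pydotprint_sketch(colorCodes=None):
    # Build the default dict inside the call, so callers never share
    # one mutable default object across invocations.
    if colorCodes is None:
        colorCodes = {"output": "blue", "unused": "grey", "gpu_transfer": "red"}
    return colorCodes

print(pydotprint_sketch()["output"])                    # blue
print(pydotprint_sketch({"output": "cyan"})["output"])  # cyan
```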