Commit 36b3036a
authored 24 October 2011 by David Warde-Farley

English spelling/grammar in Theano vision.

Parent: 4583924c

Showing 1 changed file, doc/introduction.txt, with 41 additions and 39 deletions.
@@ -140,66 +140,68 @@ A PDF version of the online documentation may be found `here
Theano Vision
=============

This is the vision we have for Theano. This is to give people an idea of
what to expect in the future of Theano, but we can't promise to implement
all of it. This should also help you to understand where Theano fits in
relation to other computational tools.
* Support tensor and sparse operations
* Support linear algebra operations
* Graph Transformations

  * Differentiation/higher order differentiation
  * 'R' and 'L' differential operators
  * Speed/memory optimizations
  * Numerical stability optimizations

* Have an OpenCL backend (for GPU, SIMD and multi-core)
* Lazy evaluation
* Loop
* Parallel execution (SIMD, multi-core, multi-node on cluster,
  multi-node distributed)
* Support all NumPy/basic SciPy functionality
* Easy wrapping of library functions in Theano
Note: There is no short term plan to enable multi-node computation in one
Theano function.

Theano Vision State
===================

Here is the state of that vision as of 24 October 2011 (after Theano release
0.4.1):
* We support tensors using the `numpy.ndarray` object and we support many
  operations on them.
* We support sparse types by using the `scipy.{csc,csr}_matrix` object and
  support some operations on them (more are coming).
* We have started implementing/wrapping more advanced linear algebra
  operations.
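The containers named in the bullets above are plain NumPy/SciPy objects. As a rough illustration of the kinds of operations meant (this uses only the underlying containers, no Theano):

```python
import numpy as np
from scipy.sparse import csc_matrix

# Dense tensor: elementwise and reduction operations on a numpy.ndarray.
x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = x * 2 + 1          # elementwise arithmetic
total = x.sum()        # reduction: 1 + 2 + 3 + 4 = 10.0

# Sparse matrix: one of SciPy's compressed formats (csc here, csr analogous).
s = csc_matrix(np.array([[0.0, 1.0],
                         [2.0, 0.0]]))
d = s.dot(np.array([1.0, 1.0]))   # sparse matrix-vector product -> [1.0, 2.0]
```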
* We have many graph transformations that cover the 4 categories listed
  above.
* We can improve the graph transformations with better storage optimization
  and instruction selection.

  * Similar to auto-tuning during the optimization phase, but this doesn't
    apply to only 1 op.
  * Example of use: determine if we should move computation to the GPU or
    not depending on the input size.
  * Possible implementation note: allow a Theano Variable in the env to
    have more than 1 owner.
* We have a CUDA backend for tensors of type `float32` only.
* Efforts have begun towards a generic GPU ndarray (GPU tensor), started in
  the `compyte <https://github.com/inducer/compyte/wiki>`_ project.

  * Move the GPU backend outside of Theano (on top of PyCUDA/PyOpenCL).
  * This will allow the GPU backend to work on Windows and to use an OpenCL
    backend on the CPU.
* Loops work, but not all related optimizations are currently done.
* The cvm linker allows lazy evaluation. It works, but some work is still
  needed before enabling it by default.

  * Do all tests pass with linker=cvm?
  * How to have `DEBUG_MODE` check it? Right now, DebugMode checks the
    computation non-lazily.
  * The profiler used by cvm is less complete than `PROFILE_MODE`.
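For context, the linker is chosen through Theano's configuration system; a sketch of the historical `.theanorc` fragment that would select the cvm linker (equivalently, `THEANO_FLAGS=linker=cvm` in the environment):

```ini
[global]
linker = cvm
```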
* SIMD parallelism on the CPU comes from the compiler.
* Multi-core parallelism is only supported for gemv and gemm, and only if
  the external BLAS implementation supports it.
* No multi-node implementation in one Theano experiment.
* Many, but not all, NumPy functions/aliases are implemented.

  * http://trac-hg.assembla.com/theano/ticket/781
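As a reminder of what gemv and gemm are, here is an illustration in plain NumPy, whose `dot` on float arrays also defers to the linked BLAS implementation (this is not Theano code):

```python
import numpy as np

# gemm: general matrix-matrix product, C = alpha * A @ B + beta * C.
# gemv: the matrix-vector special case. Both are BLAS level-3/level-2
# routines, and np.dot on float64 arrays dispatches to them.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

C = A.dot(B)                      # gemm with alpha=1, beta=0
v = A.dot(np.array([1.0, 1.0]))   # gemv: row sums here -> [3.0, 7.0]
```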
* Wrapping an existing Python function is easy, but better documentation of
  it would make it even easier.
* We need to find a way to separate a shared variable's memory storage
  location from its object type (tensor, sparse, dtype, broadcast flags).
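The "wrapping" mentioned above follows Theano's Op pattern: an object whose `perform` method calls the existing Python function and writes into caller-provided output slots. Below is a stand-alone, plain-Python sketch of that pattern only; `WrappedFunctionOp` is a hypothetical name, and the real `theano.Op` base class additionally requires a `make_node` method to build symbolic graph nodes:

```python
import numpy as np

class WrappedFunctionOp:
    """Sketch of the Op pattern (NOT the real theano.Op API): wrap an
    existing Python function so a graph framework can execute it."""

    def __init__(self, fn):
        self.fn = fn

    def perform(self, inputs, output_storage):
        # Call the wrapped function and store the result in the
        # caller-provided output slot, as Theano's perform() does.
        output_storage[0] = self.fn(*inputs)

# Wrap numpy.clip as if it were an external library function.
op = WrappedFunctionOp(np.clip)
out = [None]
op.perform([np.array([-1.0, 0.5, 2.0]), 0.0, 1.0], out)
# out[0] is now array([0.0, 0.5, 1.0])
```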
...