testgroup / pytensor / Commits / 1bb10871

Commit 1bb10871, authored Oct 25, 2012 by lamblin

Merge pull request #1023 from nouiz/doc_python_mem

Doc python mem

Parents: b8e79c60, 1d02cf5b

Showing 8 changed files with 25 additions and 24 deletions.
* NEWS.txt (+8, -10)
* doc/NEWS.txt (+8, -10)
* doc/index.txt (+1, -1)
* doc/install.txt (+1, -1)
* doc/library/tensor/nnet/conv.txt (+1, -1)
* doc/tutorial/index.txt (+1, -0)
* doc/tutorial/python-memory-management.txt (+3, -1)
* doc/tutorial/python.txt (+2, -0)
NEWS.txt

@@ -23,8 +23,6 @@ Highlights:
 Known bugs:
     * A few crash cases that will be fixed by the final release.
-    * CAReduce with NaN in inputs do not return the correct output. (reported by Pascal L.)
-        * This is used in tensor.{all,any,max,mean,prod,sum} and in the grad of PermuteRowElements.
 Bug fixes:
     * Outputs of Scan nodes could contain corrupted values: some parts of the
...
@@ -229,8 +227,8 @@ Speed up:
 Speed up GPU:
     * Convolution on the GPU now checks the generation of the card to make
       it faster in some cases (especially medium/big ouput image) (Frederic B.)
     * We had hardcoded 512 as the maximum number of threads per block. Newer cards
       support up to 1024 threads per block.
     * Faster GpuAdvancedSubtensor1, GpuSubtensor, GpuAlloc (Frederic B.)
     * We now pass the GPU architecture to nvcc when compiling (Frederic B.)
     * Now we use the GPU function async feature by default. (Frederic B.)
...
@@ -242,7 +240,7 @@ Speed up GPU:
 Sparse Sandbox graduate (moved from theano.sparse.sandbox.sp):
     * sparse.remove0 (Frederic B., Nicolas B.)
     * sparse.sp_sum(a, axis=None) (Nicolas B.)
       * bugfix: the not structured grad was returning a structured grad.
     * sparse.{col_scale,row_scale,ensure_sorted_indices,clean} (Nicolas B.)
     * sparse.{diag,square_diagonal} (Nicolas B.)
...
@@ -257,8 +255,8 @@ Sparse:
     * Optimized op: StructuredAddSV, StrucutedAddSVCSR (inserted automatically)
     * New Op: sparse.mul_s_v multiplication of sparse matrix by broadcasted vector (Yann D.)
     * New Op: sparse.Cast() (Yann D., Nicolas B.)
     * Add sparse_variable.astype() and theano.sparse.cast() and
       theano.sparse.{b,w,i,l,f,d,c,z}cast() as their tensor equivalent (Nicolas B.)
     * Op class: SamplingDot (Yann D., Nicolas B.)
       * Optimized version: SamplingDotCsr, StructuredDotCSC
       * Optimizations to insert the optimized version: local_sampling_dot_csr, local_structured_add_s_v
...
@@ -268,9 +266,9 @@ Sparse:
 New flags:
     * `profile=True` flag now prints the sum of all printed profiles. (Frederic B.)
       * It works with the linkers vm/cvm (default).
       * Also print compile time, optimizer time and linker time.
       * Also print a summary by op class.
     * new flag "profile_optimizer" (Frederic B.)
       when profile=True, will also print the time spent in each optimizer.
       Useful to find optimization bottleneck.
...
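The `profile` and `profile_optimizer` flags listed in the hunk above are configuration flags, most commonly set through the `THEANO_FLAGS` environment variable. A minimal sketch of enabling them from Python follows; note this only shows how the flags are passed, and the commented-out import is a placeholder for whatever Theano program you actually profile:

```python
import os

# Theano reads THEANO_FLAGS once, at import time, so the variable must be
# set before `import theano` (which is why the import below stays commented).
os.environ['THEANO_FLAGS'] = 'profile=True,profile_optimizer=True'

# import theano  # with these flags, per-op and per-optimizer timing
#                # summaries are printed when the process exits
print(os.environ['THEANO_FLAGS'])
```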
doc/NEWS.txt

(The page shows a diff for this file identical to the NEWS.txt diff above, with the same +8/-10 hunks.)
doc/index.txt

@@ -50,7 +50,7 @@ installation and configuration, see :ref:`installing Theano <install>`.
 Master Tests Status:
-.. image:: https://secure.travis-ci.org/Theano/Theano.png
+.. image:: https://secure.travis-ci.org/Theano/Theano.png?branch=master
    :target: http://travis-ci.org/Theano/Theano/builds
 .. _available on PyPI: http://pypi.python.org/pypi/Theano
...
doc/install.txt

@@ -206,7 +206,7 @@ Bleeding-edge install instructions
 Master Tests Status:
-.. image:: https://secure.travis-ci.org/Theano/Theano.png
+.. image:: https://secure.travis-ci.org/Theano/Theano.png?branch=master
    :target: http://travis-ci.org/Theano/Theano/builds
 If you are a developer of Theano, then check out the :ref:`dev_start_guide`.
...
doc/library/tensor/nnet/conv.txt

@@ -8,7 +8,7 @@
 Two similar implementation exists for conv2d:
 :func:`signal.conv2d <theano.tensor.signal.conv.conv2d>` and
-:func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>. The former implements a traditional
+:func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`. The former implements a traditional
 2D convolution, while the latter implements the convolutional layers
 present in convolutional neural networks (where filters are 3D and pool
 over several input channels).
...
doc/tutorial/index.txt

@@ -43,3 +43,4 @@ you out.
     debug_faq
     extending_theano
     faq
+    python-memory-management
doc/tutorial/python-memory-management.rst → doc/tutorial/python-memory-management.txt

+.. _python-memory-management:
 Python Memory Management
 ========================
...
@@ -156,7 +158,7 @@ on a 32-bit platform and
 96 [4, 'toaster', 230.1]
 on a 64-bit platform. An empty list eats up 72 bytes. The size of an
-empty, 64-bit C++ ``std::list()``is only 16 bytes, 4-5 times less. What
+empty, 64-bit C++ ``std::list()`` is only 16 bytes, 4-5 times less. What
 about tuples? (and dictionaries?):
 ::
...
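The container sizes quoted in the tutorial hunk above can be reproduced with `sys.getsizeof`. A small sketch follows; note the exact byte counts vary by Python version and platform (the tutorial's 72-byte empty list comes from a 64-bit Python 2 build), so the figures you see will likely differ:

```python
import sys

# Shallow size of empty containers; numbers depend on interpreter and platform.
for obj in ([], (), {}):
    print(type(obj).__name__, sys.getsizeof(obj))

# getsizeof is shallow: it counts the container (header + element pointers),
# not the objects the container references.
items = [4, 'toaster', 230.1]
print(sys.getsizeof(items))
```

This shallow-counting behavior is why the tutorial compares container overheads rather than total payload sizes.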
doc/tutorial/python.txt

@@ -11,3 +11,5 @@ tutorials/exercises if you need to learn it or only need a refresher:
 * `Python Challenge <http://www.pythonchallenge.com/>`__
 * `Dive into Python <http://diveintopython.net/>`__
 * `Google Python Class <http://code.google.com/edu/languages/google-python-class/index.html>`__
+
+We have a tutorial on how :ref:`python manage its memory <python-memory-management>`