testgroup / pytensor · Commits

Commit c14d99ea
authored July 13, 2011 by Pascal Lamblin
Fix syntax.
Parent: 4d5baddf
Showing 1 changed file with 7 additions and 1 deletion
doc/NEWS.txt  +7 −1
@@ -20,9 +20,11 @@ Change in output memory storage for Ops:
In a future version, the content of the output storage, both for Python and C
versions, will either be NULL, or have the following guarantees:
* It will be a Python object of the appropriate Type (for a Tensor variable,
a numpy.ndarray, for a GPU variable, a CudaNdarray, for instance)
* It will have the correct number of dimensions, and correct dtype
However, its shape and memory layout (strides) will not be guaranteed.
When that change is made, the config flag DebugMode.check_preallocated_output
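The storage contract described in this hunk can be illustrated with a small NumPy sketch. This is a hypothetical `perform`-style function, not PyTensor's actual Op API: it only assumes what the entry above guarantees, namely that the output storage cell is either empty (None/NULL) or holds an ndarray with the correct ndim and dtype but an arbitrary shape and strides.

```python
import numpy as np

def perform_add(inputs, output_storage):
    """Hypothetical sketch of an elementwise-add perform() honoring the
    contract above: output_storage[0] is either None or an ndarray of the
    right ndim and dtype, but its shape/strides are NOT guaranteed."""
    a, b = inputs
    out = output_storage[0]
    result_shape = np.broadcast(a, b).shape
    # Reuse the preallocated buffer only when its shape also matches;
    # correct ndim/dtype alone are not enough to write into it safely.
    if out is None or out.shape != result_shape:
        out = np.empty(result_shape, dtype=a.dtype)
    np.add(a, b, out=out)
    output_storage[0] = out

a = np.ones((2, 3), dtype="float32")
b = np.full((2, 3), 2, dtype="float32")
storage = [np.empty((5,), dtype="float32")]  # right dtype, wrong shape
perform_add([a, b], storage)
# storage[0] is now a fresh (2, 3) float32 array of threes
```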
@@ -64,8 +66,10 @@ Optimization:
GPU:
* Move to the gpu fused elemwise that have other dtype then float32 in them
(except float64) if the input and output are float32.
* This allow to move elemwise comparisons to the GPU if we cast it to
float32 after that.
* Implemented CudaNdarray.ndim to have the same interface in ndarray.
* Fixed slowdown caused by multiple chained views on CudaNdarray objects
* CudaNdarray_alloc_contiguous changed so as to never try to free
@@ -83,10 +87,12 @@ New features:
configured by config.DebugMode.check_preallocated_output.
* var[vector of index] now work, (grad work recursively, the direct grad
work inplace, gpu work)
* limitation: work only of the outer most dimensions.
* New way to test the graph as we build it. Allow to easily find the source
of shape mismatch error:
-  `http://deeplearning.net/software/theano/tutorial/debug_faq.html#interactive-debugger`__
+  `<http://deeplearning.net/software/theano/tutorial/debug_faq.html#interactive-debugger>`__
* cuda.root inferred if nvcc is on the path, otherwise defaults to
/usr/local/cuda
* Better graph printing for graphs involving a scan subgraph
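The cuda.root inference rule mentioned above can be sketched in a few lines. This is a simplified illustration, not the actual Theano implementation; the helper name `infer_cuda_root` and the assumption that nvcc lives in `<root>/bin/nvcc` are both mine.

```python
import os
import shutil

def infer_cuda_root(default="/usr/local/cuda"):
    """Sketch of the rule in the entry above (hypothetical helper,
    not Theano code): if nvcc is on PATH, derive the CUDA root from
    its location; otherwise fall back to the stated default."""
    nvcc = shutil.which("nvcc")
    if nvcc is not None:
        # nvcc typically lives in <root>/bin/nvcc
        return os.path.dirname(os.path.dirname(nvcc))
    return default
```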
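The "var[vector of index]" entry above describes indexing a tensor with an integer vector along the outermost dimension. A NumPy sketch (not Theano code) shows the forward semantics this feature mirrors, and uses np.add.at to illustrate why the gradient can accumulate in place even when indices repeat.

```python
import numpy as np

x = np.arange(12.0).reshape(4, 3)
idx = np.array([0, 2, 2])

# Forward: take rows 0, 2, 2 of x along the outermost dimension.
y = x[idx]  # shape (3, 3)

# Gradient of sum(y) w.r.t. x: scatter-add ones back onto the taken
# rows. np.add.at accumulates correctly for the repeated index 2,
# which is what makes an in-place gradient update safe here.
grad_x = np.zeros_like(x)
np.add.at(grad_x, idx, np.ones_like(y))
# row 0 receives 1s once, row 2 receives 1s twice, rows 1 and 3 stay 0
```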