testgroup / pytensor · Commits

Commit 1f3b1f4e, authored June 17, 2011 by Pascal Lamblin
Update NEWS.txt.
Parent: da84e9f2
Showing 1 changed file with 16 additions and 3 deletions

NEWS.txt  +16  −3
@@ -10,6 +10,9 @@ Deprecation:
  * Dividing integers with / is deprecated: use // for integer division, or
    cast one of the integers to a float type if you want a float result (you may
    also change this behavior with config.int_division).
+ * Removed (already deprecated) sandbox/compile module
+ * Removed (already deprecated) incsubtensor and setsubtensor functions,
+   inc_subtensor and set_subtensor are to be used instead.
 
 Bugs fixed:
  * Bugfix in CudaNdarray.__iadd__. When it is not implemented, return the error.
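The deprecation entries above mirror Python's own `/` vs `//` split and the functional (non-mutating) style of `set_subtensor`/`inc_subtensor`. A minimal sketch of both behaviors in plain Python/NumPy (not Theano itself; the copy-then-assign idiom is only an analogue of what the Theano functions express symbolically):

```python
import numpy as np

# Integer division with // floors to an integer result; casting one
# operand to float gives the float result the entry describes.
a, b = 7, 2
assert a // b == 3
assert a / float(b) == 3.5

# The same distinction holds for NumPy integer arrays.
x = np.array([7, 8, 9])
print(x // 2)    # integer division: [3 4 4]
print(x / 2.0)   # float result:     [3.5 4.  4.5]

# NumPy analogue of set_subtensor / inc_subtensor semantics, which
# return a new variable rather than mutating the input in place:
y = np.array([1, 2, 3])
y_set = y.copy()
y_set[0] = 10            # like set_subtensor(y[0], 10)
y_inc = y.copy()
y_inc[0] += 10           # like inc_subtensor(y[0], 10)
```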
@@ -24,11 +27,14 @@ Bugs fixed:
  * The output of random samples computed with uniform(..., dtype=...) is
    guaranteed to be of the specified dtype instead of potentially being of a
    higher-precision dtype.
+ * The perform() method of DownsampleFactorMax did not give the right result
+   when reusing output storage.
  * Python 2.4 syntax fixes.
 
 Crash fixed:
  * Work around a bug in gcc 4.3.0 that makes the compilation of 2d convolution
    crash.
+ * Some optimizations crashed when "ShapeOpt" was disabled.
 
 Optimization:
  * Optimize 4 patterns of subtensor followed by subtensor.
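The uniform(..., dtype=...) guarantee above can be illustrated with a NumPy analogue: draw samples, then cast so the result is exactly the requested dtype rather than NumPy's default float64. The helper name here is hypothetical, for illustration only; Theano's own uniform(..., dtype=...) now enforces this internally.

```python
import numpy as np

def uniform_with_dtype(rng, size, dtype="float32"):
    # Hypothetical helper: casts the float64 draw down to the requested
    # dtype, so callers never receive a higher-precision result.
    return rng.uniform(size=size).astype(dtype)

rng = np.random.default_rng(0)
samples = uniform_with_dtype(rng, size=4, dtype="float32")
assert samples.dtype == np.float32
```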
@@ -52,6 +58,8 @@ New features:
  * profile the scan overhead
  * simple hook system to add profiler
  * reordered the output to be in the order of more general to more specific
+ * DebugMode now checks Ops with different patterns of preallocated memory,
+   configured by config.DebugMode.check_preallocated_output.
  * var[vector of index] now works (grad works recursively, the direct grad
    works inplace, gpu works)
    * limitation: works only on the outermost dimensions.
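The var[vector of index] feature corresponds to NumPy-style advanced indexing with an integer vector, which selects rows along the outermost dimension. A NumPy sketch of the semantics (NumPy itself has no outermost-dimension limitation; that restriction is specific to the Theano feature described above):

```python
import numpy as np

# Indexing with an integer vector selects whole rows along the
# outermost (leading) dimension.
x = np.arange(12).reshape(4, 3)
idx = np.array([0, 2])
print(x[idx])
# [[0 1 2]
#  [6 7 8]]
```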
@@ -59,8 +67,8 @@ New features:
  * cuda.root inferred if nvcc is on the path, otherwise defaults to
    /usr/local/cuda
  * Better graph printing for graphs involving a scan subgraph
- * Casting behavior can be controlled through config.cast_policy,
-   new (experimental) mode.
+ * Casting behavior is closer to numpy by default, and can be controlled
+   through config.cast_policy.
  * Smarter C module cache, avoiding erroneous usage of the wrong C
    implementation when some options change, and avoiding recompiling the
    same module multiple times in some situations.
@@ -69,6 +77,10 @@ New features:
    now available in the sandbox.
  * CUDA devices 4 - 16 should now be available if present.
  * infer_shape support for the View op, better infer_shape support in Scan
+ * tensor.grad now gives an error by default when computing the gradient
+   wrt a node that is disconnected from the cost (not in the graph, or
+   no continuous path from that op to the cost).
+ * New tensor.isnan and isinf functions.
 
 Documentation:
  * Better commenting of cuda_ndarray.cu
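The new tensor.isnan and isinf functions are elementwise predicates in the style of the NumPy functions of the same name; this sketch shows the NumPy behavior they mirror:

```python
import numpy as np

# Elementwise NaN / infinity tests, as in the new tensor.isnan / isinf.
v = np.array([1.0, np.nan, np.inf, -np.inf])
print(np.isnan(v))   # [False  True False False]
print(np.isinf(v))   # [False False  True  True]
```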
@@ -93,5 +105,6 @@ Unit tests:
 Other:
  * Correctly put the broadcast flag to True in the output var of
    a Reshape op when we receive an int 1 in the new shape.
- * pydotprint: high contrast mode is now the default
+ * pydotprint: high contrast mode is now the default, option to print
+   more compact node names.
  * More compact printing (ignore leading "Composite" in op names)