testgroup / pytensor · Commits

Commit 8f954a05, authored August 08, 2017 by notoraptor

    Fix typos, clear news.

Parent: eb31d6f2

Showing 2 changed files, with 37 additions and 34 deletions:
- NEWS.txt (+19, -17)
- NEWS_DEV.txt (+18, -17)
NEWS.txt
@@ -6,14 +6,14 @@ Release Notes
 Theano 0.10.0beta1 (9th of August, 2017)
 ========================================
-This release contains a lot of bug fixes and improvements + new features, to prepare the upcoming release candidate.
+This release contains a lot of bug fixes, improvements and new features to prepare the upcoming release candidate.
 We recommend that every developer updates to this version.
 Highlights:
 - Moved Python 3.* minimum supported version from 3.3 to 3.4
 - Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
-- Make theano more FIPS compliant by using ``sha256`` instead of ``md5`` where needed
+- Theano now internally uses ``sha256`` instead of ``md5`` to work on systems that forbide ``md5`` for security reason
 - Removed old GPU backend ``theano.sandbox.cuda``. New backend ``theano.gpuarray`` is now the official GPU backend
 - Support more debuggers for ``PdbBreakpoint``
@@ -23,7 +23,7 @@ Highlights:
 - Added meaningful message when missing inputs to scan
 - Speed up graph toposort algorithm
-- Faster compilation step by massively using a new interface for op params
+- Faster C compilation by massively using a new interface for op params
 - Faster optimization step
 - Documentation updated and more complete
 - Many bug fixes, crash fixes and warning improvements
@@ -31,10 +31,9 @@ Highlights:
 A total of 65 people contributed to this release since 0.9.0, see list below.
 Interface changes:
-- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
 - Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
   and ``AllocDiag`` (set a vector as a diagonal of an empty array)
-- Replaced ``MultinomialWOReplacementFromUniform`` with ``ChoiceFromUniform``
+- Renamed ``MultinomialWOReplacementFromUniform`` to ``ChoiceFromUniform``
 - Removed or deprecated Theano flags:
@@ -47,6 +46,8 @@ Interface changes:
   - ``nvcc.*`` flags
   - ``pycuda.init``
+- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
 Convolution updates:
 - Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN importation
 - Implemented separable convolutions
@@ -60,32 +61,31 @@ GPU:
 - Added Cholesky op based on ``cusolver`` backend
 - Added GPU ops based on `magma library <http://icl.cs.utk.edu/magma/software/>`_:
   SVD, matrix inverse, QR, cholesky and eigh
-- Added ``GpuAdvancedIncSubtensor``
 - Added ``GpuCublasTriangularSolve``
 - Added atomic addition and exchange for ``long long`` values in ``GpuAdvancedIncSubtensor1_dev20``
-- Fixed C code for log gamma function, now supporting all types except complex types.
+- Support log gamma function for all non-complex types
 - Support GPU SoftMax in both OpenCL and CUDA
 - Support offset parameter ``k`` for ``GpuEye``
 - ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU
 - Better cuDNN support
-  - Official support for versions >= ``v5``
+  - Official support for ``v5.*`` and ``v6.*``
   - Better support and loading on Windows and Mac
   - Support cuDNN v6 dilated convolutions
   - Support cuDNN v6 reductions
-  - Added new theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
+  - Added new Theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
     to help configure Theano when CUDA and cuDNN can not be found automatically.
 - Updated ``float16`` support
   - Added documentation for GPU float16 ops
   - Support ``float16`` for ``GpuGemmBatch``
-  - Started to avoid lifting ``float16`` computations that are not supported on GPU
+  - Started to use ``float32`` precision for computations that don't support ``float16`` on GPU
 New features:
 - Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
-- Added scalar and elemwise ops for modified Bessel function of order 0 and 1 from ``scipy.special``
+- Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from ``scipy.special``.
 - Added Scaled Exponential Linear Unit (SELU) activation
 - Added sigmoid_binary_crossentropy function
 - Added tri-gamma function
@@ -94,11 +94,11 @@ New features:
 - Implemented gradient for matrix pseudoinverse op
 - Added new prop `replace` for ``ChoiceFromUniform`` op
 - Added new prop ``on_error`` for CPU ``Cholesky`` op
-- Added new theano flag ``deterministic`` to help control how Theano optimize certain ops that have deterministic versions.
+- Added new Theano flag ``deterministic`` to help control how Theano optimize certain ops that have deterministic versions.
   Currently used for subtensor Ops only.
-- Added new theano flag ``cycle_detection`` to speed-up optimization step by reducing time spending in inplace insertions
+- Added new Theano flag ``cycle_detection`` to speed-up optimization step by reducing time spending in inplace optimizations
-- Added new theano flag ``check_stack_trace`` to help check the stack trace during optimization process
+- Added new Theano flag ``check_stack_trace`` to help check the stack trace during optimization process
-- Added new theano flag ``cmodule.debug`` to allow a debug mode for theano C code. Currently used for cuDNN convolutions only.
+- Added new Theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
 Others:
 - Added deprecation warning for the softmax and logsoftmax vector case
@@ -108,14 +108,16 @@ Other more detailed changes:
 - Removed useless warning when profile is manually disabled
 - Added tests for abstract conv
 - Added options for `disconnected_outputs` to Rop
+- Insertion of an OutputGuard is now considered as an error
 - Removed ``theano/compat/six.py``
 - Removed ``COp.get_op_params()``
 - Support of list of strings for ``Op.c_support_code()``, to help not duplicate support codes
 - Macro names provided for array properties are now standardized in both CPU and GPU C codes
 - Started to move C code files into separate folder ``c_code`` in every Theano module
 - Many improvements for TRAVIS CI tests (with better splitting for faster testing)
-- Many improvements for Jenkins CI tests: support for Mac and Windows testings, usage of Docker for better tests isolation
+- Many improvements for Jenkins CI tests:
+  - Daily testings on Linux, Mac and Windows
+  - Using Docker for better tests isolation
 Commiters since 0.9.0:
 - Frederic Bastien
...
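The release notes above list a new SELU activation but give no definition. As a hedged sketch: SELU is the standard self-normalizing activation from Klambauer et al. (2017); the constants below are the published alpha/scale values, not read from Theano's source, and the scalar helper is illustrative rather than Theano's actual elemwise op.

```python
import math

# Published SELU constants (Klambauer et al., 2017); assumed here, not
# taken from Theano's implementation.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x: float) -> float:
    """Scaled Exponential Linear Unit for a single float:
    scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise."""
    return SCALE * (x if x > 0 else ALPHA * (math.exp(x) - 1.0))

print(selu(0.0))   # 0.0
print(selu(1.0))   # scale * 1 = 1.0507009873554805
```

For large negative inputs the output saturates near -scale * alpha, which is what gives the activation its self-normalizing fixed point.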
NEWS_DEV.txt
@@ -20,7 +20,7 @@ TODO: better Theano conv doc
 Highlights:
 - Moved Python 3.* minimum supported version from 3.3 to 3.4
 - Replaced deprecated package ``nose-parameterized`` with up-to-date package ``parameterized`` for Theano requirements
-- Make theano more FIPS compliant by using ``sha256`` instead of ``md5`` where needed
+- Theano now internally uses ``sha256`` instead of ``md5`` to work on systems that forbide ``md5`` for security reason
 - Removed old GPU backend ``theano.sandbox.cuda``. New backend ``theano.gpuarray`` is now the official GPU backend
 - Support more debuggers for ``PdbBreakpoint``
@@ -30,16 +30,15 @@ Highlights:
 - Added meaningful message when missing inputs to scan
 - Speed up graph toposort algorithm
-- Faster compilation step by massively using a new interface for op params
+- Faster C compilation by massively using a new interface for op params
 - Faster optimization step
 - Documentation updated and more complete
 - Many bug fixes, crash fixes and warning improvements
 Interface changes:
-- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
 - Merged duplicated diagonal functions into two ops: ``ExtractDiag`` (extract a diagonal to a vector),
   and ``AllocDiag`` (set a vector as a diagonal of an empty array)
-- Replaced ``MultinomialWOReplacementFromUniform`` with ``ChoiceFromUniform``
+- Renamed ``MultinomialWOReplacementFromUniform`` to ``ChoiceFromUniform``
 - Removed or deprecated Theano flags:
@@ -52,6 +51,8 @@ Interface changes:
   - ``nvcc.*`` flags
   - ``pycuda.init``
+- Changed ``grad()`` method to ``L_op()`` in ops that need the outputs to compute gradient
 Convolution updates:
 - Extended Theano flag ``dnn.enabled`` with new option ``no_check`` to help speed up cuDNN importation
 - Implemented separable convolutions
@@ -65,32 +66,31 @@ GPU:
 - Added Cholesky op based on ``cusolver`` backend
 - Added GPU ops based on `magma library <http://icl.cs.utk.edu/magma/software/>`_:
   SVD, matrix inverse, QR, cholesky and eigh
-- Added ``GpuAdvancedIncSubtensor``
 - Added ``GpuCublasTriangularSolve``
 - Added atomic addition and exchange for ``long long`` values in ``GpuAdvancedIncSubtensor1_dev20``
-- Fixed C code for log gamma function, now supporting all types except complex types.
+- Support log gamma function for all non-complex types
 - Support GPU SoftMax in both OpenCL and CUDA
 - Support offset parameter ``k`` for ``GpuEye``
 - ``CrossentropyCategorical1Hot`` and its gradient are now lifted to GPU
 - Better cuDNN support
-  - Official support for versions >= ``v5``
+  - Official support for ``v5.*`` and ``v6.*``
   - Better support and loading on Windows and Mac
   - Support cuDNN v6 dilated convolutions
   - Support cuDNN v6 reductions
-  - Added new theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
+  - Added new Theano flags ``cuda.include_path``, ``dnn.base_path`` and ``dnn.bin_path``
     to help configure Theano when CUDA and cuDNN can not be found automatically.
 - Updated ``float16`` support
   - Added documentation for GPU float16 ops
   - Support ``float16`` for ``GpuGemmBatch``
-  - Started to avoid lifting ``float16`` computations that are not supported on GPU
+  - Started to use ``float32`` precision for computations that don't support ``float16`` on GPU
 New features:
 - Added a wrapper for `Baidu's CTC <https://github.com/baidu-research/warp-ctc>`_ cost and gradient functions
-- Added scalar and elemwise ops for modified Bessel function of order 0 and 1 from ``scipy.special``
+- Added scalar and elemwise CPU ops for modified Bessel function of order 0 and 1 from ``scipy.special``.
 - Added Scaled Exponential Linear Unit (SELU) activation
 - Added sigmoid_binary_crossentropy function
 - Added tri-gamma function
@@ -99,12 +99,11 @@ New features:
 - Implemented gradient for matrix pseudoinverse op
 - Added new prop `replace` for ``ChoiceFromUniform`` op
 - Added new prop ``on_error`` for CPU ``Cholesky`` op
-- Added new theano flag ``deterministic`` to help control how Theano optimize certain ops that have deterministic versions.
+- Added new Theano flag ``deterministic`` to help control how Theano optimize certain ops that have deterministic versions.
   Currently used for subtensor Ops only.
-- Added new theano flag ``cycle_detection`` to speed-up optimization step by reducing time spending in inplace insertions
+- Added new Theano flag ``cycle_detection`` to speed-up optimization step by reducing time spending in inplace optimizations
-- Added new theano flag ``check_stack_trace`` to help check the stack trace during optimization process
+- Added new Theano flag ``check_stack_trace`` to help check the stack trace during optimization process
-- Added new theano flag ``cmodule.debug`` to allow a debug mode for theano C code. Currently used for cuDNN convolutions only.
+- Added new Theano flag ``cmodule.debug`` to allow a debug mode for Theano C code. Currently used for cuDNN convolutions only.
 Others:
 - Added deprecation warning for the softmax and logsoftmax vector case
@@ -114,14 +113,16 @@ Other more detailed changes:
 - Removed useless warning when profile is manually disabled
 - Added tests for abstract conv
 - Added options for `disconnected_outputs` to Rop
+- Insertion of an OutputGuard is now considered as an error
 - Removed ``theano/compat/six.py``
 - Removed ``COp.get_op_params()``
 - Support of list of strings for ``Op.c_support_code()``, to help not duplicate support codes
 - Macro names provided for array properties are now standardized in both CPU and GPU C codes
 - Started to move C code files into separate folder ``c_code`` in every Theano module
 - Many improvements for TRAVIS CI tests (with better splitting for faster testing)
-- Many improvements for Jenkins CI tests: support for Mac and Windows testings, usage of Docker for better tests isolation
+- Many improvements for Jenkins CI tests:
+  - Daily testings on Linux, Mac and Windows
+  - Using Docker for better tests isolation
 ALL THE PR BELLOW HAVE BEEN CHECKED
 * https://github.com/Theano/Theano/pull/6218
...
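Both files' highlights mention replacing ``md5`` with ``sha256`` so Theano can run on FIPS-enabled systems where md5 is disabled. A minimal sketch of the idea, assuming a hash-of-generated-source cache key; the ``fingerprint`` helper name and usage are illustrative, not Theano's actual API:

```python
import hashlib

def fingerprint(source: str) -> str:
    """Hash generated C source to get a stable cache key.
    sha256 is used instead of md5, which FIPS-mode systems may reject."""
    # Hypothetical helper; Theano's real cache-key code differs.
    return hashlib.sha256(source.encode("utf-8")).hexdigest()

key = fingerprint("int main(void) { return 0; }")
print(len(key))  # 64: a sha256 hex digest is 64 chars (md5's was 32)
```

The digest-length change (32 to 64 hex characters) is one visible effect of such a swap: any on-disk cache keyed by the old md5 digests is invalidated.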