Commit d1039d20 authored by notoraptor

Update NEWS.

Fix reST nested lists syntax. Fix typos.
Parent 9177ff60
@@ -14,17 +14,18 @@ Highlights (since 0.9.0beta1):
 - Better compatibility with NumPy 1.12
 - Faster scan optimizations
 - Fixed broadcast checking in scan
-- Bug fixes related to merge optimizer
+- Bug fixes related to merge optimizer and shape inference
 - many other bug fixes and improvements
 - Updated documentation
-- Old GPU back-end:
-  * In MRG, replaced method `multinomial_wo_replacement()` with new method `choice()`
 - New GPU back-end:
-  * Value of a shared variable is now set inplace
+  - Value of a shared variable is now set inplace
 
-A total of 24 people contributed to this release, see the list at the bottom.
+A total of 24 people contributed to this release since 0.9.0beta1 and 116 since 0.8.0, see the list at the bottom.
 
+Interface changes:
+- In MRG, replaced method `multinomial_wo_replacement()` with new method `choice()`
+
 Convolution updates:
 - Implement conv2d_transpose convenience function
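The `multinomial_wo_replacement()` → `choice()` change above is about weighted sampling without replacement. As a rough illustration of those semantics only, here is the analogous NumPy call — a stand-in sketch, not Theano/MRG code:

```python
import numpy as np

# Probabilities over four outcomes.
p = np.array([0.1, 0.2, 0.3, 0.4])

rng = np.random.default_rng(0)
# Weighted sampling *without* replacement: the behavior that
# multinomial_wo_replacement() provided and choice() now covers.
sample = rng.choice(len(p), size=3, replace=False, p=p)

assert len(set(sample.tolist())) == 3        # all drawn indices are distinct
assert all(0 <= s < len(p) for s in sample)  # each index is a valid outcome
```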
@@ -42,9 +43,6 @@ Others:
 - Split op now has C code for CPU and GPU
 - "theano-cache list" now includes compilation times
 
-Other more detailed changes:
-- Changed optdb.max_use_ratio to 6
-
 Committers since 0.9.0beta1:
 - Benjamin Scellier
@@ -94,13 +92,14 @@ Highlights:
 - Added a bool dtype
 - New GPU back-end:
-  * float16 storage
-  * better mapping between theano device number and nvidia-smi number, using the PCI bus ID of graphic cards
-  * More pooling support on GPU when cuDNN isn't there.
-  * ignore_border=False is now implemented for pooling.
+  - float16 storage
+  - better mapping between theano device number and nvidia-smi number, using the PCI bus ID of graphic cards
+  - More pooling support on GPU when cuDNN isn't there
+  - ignore_border=False is now implemented for pooling
 
-A total of 112 people contributed to this release, see the list at the bottom.
+A total of 112 people contributed to this release since 0.8.0, see the list at the bottom.
 
 Interface changes:
@@ -111,7 +110,7 @@ Interface changes:
 - Move softsign out of sandbox to theano.tensor.nnet.softsign
 - Roll makes the shift modulo the size of the axis we roll on
 - Merge CumsumOp/CumprodOp into CumOp
-- round() default to the same as NumPy: half_to_even.
+- round() default to the same as NumPy: half_to_even
 
 Convolution updates:
 - Multi-cores convolution and pooling on CPU
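The roll and round entries above align Theano with NumPy's behavior, which can be checked directly in NumPy (an illustration of the shared semantics, not Theano code):

```python
import numpy as np

# round(): ties round to the nearest even integer (half_to_even),
# matching NumPy's default rounding.
assert np.round(0.5) == 0.0
assert np.round(1.5) == 2.0
assert np.round(2.5) == 2.0

# roll: the shift is taken modulo the axis length, so shifting a
# length-5 array by 7 is the same as shifting it by 2.
a = np.arange(5)
assert np.array_equal(np.roll(a, 7), np.roll(a, 2))
```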
@@ -127,7 +126,7 @@ GPU:
 - Support for solve (using cusolver), erfinv and erfcinv
 - cublas gemv workaround when we reduce on an axis with a dimension size of 0
 - Warn user that some cuDNN algorithms may produce unexpected results in certain environments
-  for convolution backward filter operations.
+  for convolution backward filter operations
 
 New features:
 - Add gradient of solve, tensorinv (CPU), tensorsolve (CPU), searchsorted (CPU)
...
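For the "gradient of solve" entry above, the relevant identity is standard matrix calculus, stated here for reference (not taken from the Theano source): for x = A⁻¹b and a scalar loss L with x̄ = ∂L/∂x,

```latex
% x = A^{-1} b, \quad \bar{x} = \partial L / \partial x
\bar{b} = A^{-\top}\,\bar{x},
\qquad
\bar{A} = -\,\bar{b}\,x^{\top} = -A^{-\top}\,\bar{x}\,x^{\top}
```

so the backward pass needs only one extra triangular/linear solve against Aᵀ, reusing the factorization of A.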
@@ -21,7 +21,7 @@ Highlights:
 - Better compatibility with NumPy 1.12
 - Faster scan optimizations
 - Fixed broadcast checking in scan
-- Bug fixes related to merge optimizer
+- Bug fixes related to merge optimizer and shape inference
 - many other bug fixes and improvements
 - Updated documentation
 - Many computation and compilation speed-ups
@@ -38,17 +38,16 @@ Highlights:
 - scan with checkpoint (trade-off between speed and memory usage, useful for long sequences)
 - Added a bool dtype
-- Old GPU back-end:
-  * In MRG, replaced method `multinomial_wo_replacement()` with new method `choice()`
 - New GPU back-end:
-  * Value of a shared variable is now set inplace
-  * float16 storage
-  * better mapping between theano device number and nvidia-smi number, using the PCI bus ID of graphic cards
-  * More pooling support on GPU when cuDNN isn't there.
-  * ignore_border=False is now implemented for pooling.
+  - Value of a shared variable is now set inplace
+  - float16 storage
+  - better mapping between theano device number and nvidia-smi number, using the PCI bus ID of graphic cards
+  - More pooling support on GPU when cuDNN isn't there
+  - ignore_border=False is now implemented for pooling
 
 Interface changes:
+- In MRG, replaced method `multinomial_wo_replacement()` with new method `choice()`
 - New pooling interface
 - Pooling parameters can change at run time
 - When converting empty list/tuple, now we use floatX dtype
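The ignore_border flag mentioned above controls whether a trailing partial window at the input's edge contributes an extra output position. A pure-Python sketch of the output-length arithmetic, modeled on Theano's documented pooling shape rule (an illustration, not the actual implementation):

```python
import math

def pool_output_length(input_size, window, stride, ignore_border):
    """Output positions along one axis of a pooling op (a sketch)."""
    if ignore_border:
        # only windows that fit entirely inside the input count
        return (input_size - window) // stride + 1
    if stride >= window:
        # non-overlapping windows: every stride step starts a
        # (possibly partial) window
        return math.ceil(input_size / stride)
    # overlapping windows: one extra output for the trailing partial window
    return 1 + math.ceil((input_size - window) / stride)

# length-5 input, window 2, stride 2:
assert pool_output_length(5, 2, 2, ignore_border=True) == 2   # windows [0,1], [2,3]
assert pool_output_length(5, 2, 2, ignore_border=False) == 3  # plus partial [4]
```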
@@ -56,7 +55,7 @@ Interface changes:
 - Move softsign out of sandbox to theano.tensor.nnet.softsign
 - Roll makes the shift modulo the size of the axis we roll on
 - Merge CumsumOp/CumprodOp into CumOp
-- round() default to the same as NumPy: half_to_even.
+- round() default to the same as NumPy: half_to_even
 
 Convolution updates:
 - Implement conv2d_transpose convenience function
@@ -75,7 +74,7 @@ GPU:
 - Support for solve (using cusolver), erfinv and erfcinv
 - cublas gemv workaround when we reduce on an axis with a dimension size of 0
 - Warn user that some cuDNN algorithms may produce unexpected results in certain environments
-  for convolution backward filter operations.
+  for convolution backward filter operations
 
 New features:
 - OpFromGraph now allows gradient overriding for every input
@@ -105,7 +104,6 @@ Others:
 
 Other more detailed changes:
-- Changed optdb.max_use_ratio to 6
 - Allow more than one output to be a destructive inplace
 - Add flag profiling.ignore_first_call, useful to profile the new gpu back-end
 - Doc/error message fixes/updates
...