Commit b7f02a0b authored by nouiz

Merge pull request #403 from lamblin/edit_news_txt

Edit NEWS.txt
UPDATED THIS FILE UP TO: 41103b5d158739e4147428ce776fb5716062d4a8
* fix subtensor bug (reported by RP, fix by PL) TODO BETTER DESCRIPTION! a7be89231eb26f7a39ab5448ef4abf90a6c0d529
If you have updated to 0.5rc1, you are highly encouraged to update to
0.5rc2. There are more bug fixes and speed optimizations! But there is
also a small new interface change about the sum of [u]int* dtypes.
@@ -70,17 +68,17 @@ Scan fix:
* computing grad of a function of grad of scan (reported by ?, fix by Razvan)
  before : crashed most of the time, but could return wrong values with a bad number of dimensions (so a visible bug)
  now : does the right thing.
* gradient with respect to outputs using multiple taps (reported by Timothy, fix by Razvan)
  before : it used to return wrong values
  now : does the right thing.
  Note: The reported case of this bug happened in conjunction with the
  save memory optimization of scan, which gives run-time errors. So if you
  didn't manually disable that memory optimization (number 4 in the list),
  you are fine unless you manually requested multiple taps.
* Rop of gradient of scan (reported by Timothy and Justin Bayer, fix by Razvan)
  before : compilation error when computing the R-op
  now : does the right thing.
* save memory optimization of scan (reported by Timothy and Nicolas BL, fix by Razvan)
  before : certain corner cases used to result in a runtime shape error
  now : does the right thing.
* Scan grad when the input of scan has sequences of different lengths. (Razvan, reported by Michael Forbes)
@@ -154,7 +152,7 @@ Bug fixes (the result changed):
  code was affected by this bug. (Olivier, reported by Sander Dieleman)
* When indexing into a subtensor of negative stride (for instance, x[a:b:-1][c]),
  an optimization replacing it with a direct indexing (x[d]) used an incorrect formula,
  leading to incorrect results. (Pascal, reported by Razvan)
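The fix above concerns rewriting an index into a reversed subtensor as a single direct index. A minimal NumPy sketch of the now-correct equivalence (the names a, b, c are illustrative, not taken from Theano's code):

```python
import numpy as np

x = np.arange(10)
a, b, c = 8, 2, 3

# Indexing into a subtensor of stride -1: x[8:2:-1] is [8, 7, 6, 5, 4, 3],
# so element c of it must equal one direct element of x.
direct = x[a:b:-1][c]

# For step -1 the equivalent single index is a - c
# (start + c * step in general).
assert direct == x[a - c]
```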
Crashes fixed:
@@ -220,7 +218,7 @@ Others:
* Add a warning about a numpy bug with subtensors with more than 2**32 elements (TODO, more explicit)
* Added Scalar.ndim=0 and ScalarSharedVariable.ndim=0 (simplify code) (Razvan)
* New min_informative_str() function to print graphs. (Ian)
* Fix catching of exceptions. (Sometimes we used to catch interrupts.) (Frederic, David, Ian, Olivier)
* Better support for utf strings. (David)
* Fix pydotprint with a function compiled with a ProfileMode (Frederic)
  * It was broken by a change to the profiler.
......
@@ -282,6 +282,13 @@ AddConfigVar('warn.sum_div_dimshuffle_bug',
BoolParam(warn_default('0.3')),
in_c_key=False)
AddConfigVar('warn.subtensor_merge_bug',
"Warn if previous versions of Theano (before 0.5rc2) could have given "
"incorrect results when indexing into a subtensor with negative stride "
"(for instance, x[a:b:-1][c]).",
BoolParam(warn_default('0.5')),
in_c_key=False)
AddConfigVar('compute_test_value',
"If 'True', Theano will run each op at graph build time, using Constants, SharedVariables and the tag 'test_value' as inputs to the function. This helps the user track down problems in the graph before it gets optimized.",
EnumStr('off', 'ignore', 'warn', 'raise'),
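Either flag above can also be changed at runtime once Theano is loaded. A usage sketch (assumes Theano 0.5rc2 or later is installed; settings shown are illustrative, not required):

```python
import theano

# Silence the backward-compatibility warning introduced by this commit.
theano.config.warn.subtensor_merge_bug = False

# Run each op at graph-build time on test values; 'raise' turns
# a missing test value into an error instead of a warning.
theano.config.compute_test_value = 'raise'
```

The same settings can be given via the THEANO_FLAGS environment variable before import.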
......
@@ -1682,6 +1682,12 @@ def merge_two_slices(slice1, len1, slice2, len2):
# the k-th element from sl.start but the k-th element from
# sl.stop backwards
n_val = sl1.stop - 1 - sl2 * sl1.step
if config.warn.subtensor_merge_bug:
_logger.warn((
'Your current code is fine, but Theano versions '
'prior to 0.5rc2 might have given an incorrect result. '
'To disable this warning, set the Theano flag '
'warn.subtensor_merge_bug to False.'))
# we need to pick either n_val or p_val and then follow same
# steps as above for covering the index error cases
val = T.switch(T.lt(reverse1, 0), n_val, p_val)
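For intuition, the merged-index computation that merge_two_slices performs can be mimicked in plain Python and checked against NumPy. merge_slice_index below is a hypothetical illustration, not Theano's actual implementation:

```python
import numpy as np

def merge_slice_index(sl, idx, length):
    # Resolve the slice to concrete (start, stop, step) for this length;
    # then x[sl][idx] == x[start + idx * step] for any valid idx.
    start, stop, step = sl.indices(length)
    return start + idx * step

x = np.arange(10)
# Check positive and negative strides against direct NumPy indexing.
for sl in (slice(2, 8, 2), slice(8, 2, -1), slice(None, None, -1)):
    view = x[sl]
    for idx in range(len(view)):
        assert view[idx] == x[merge_slice_index(sl, idx, len(x))]
```

Theano's symbolic version must additionally guard against out-of-range indices, which is what the switch between n_val and p_val above handles.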
......