Commit d544d6a9 authored by Olivier Delalleau, committed by Frederic

Fixed typos

Parent 1652b64b
@@ -3,7 +3,7 @@
 Since 0.5rc2
 * Fixed a memory leak with shared variable (we kept a pointer to the original value)
-* Alloc, GpuAlloc are not always pre-computed(constant_folding optimization) at compile time if all its inputs are constants
+* Alloc, GpuAlloc are not always pre-computed (constant_folding optimization) at compile time if all their inputs are constant
 * The keys in our cache now store the hash of constants and not the constant values themselves. This is significantly more efficient for big constant arrays.
 * 'theano-cache list' lists key files bigger than 1M
 * 'theano-cache list' prints an histogram of the number of keys per compiled module
@@ -3,6 +3,7 @@
 Since 0.5rc2
 * Fixed a memory leak with shared variable (we kept a pointer to the original value)
+* Alloc, GpuAlloc are not always pre-computed (constant_folding optimization) at compile time if all their inputs are constant
 * The keys in our cache now store the hash of constants and not the constant values themselves. This is significantly more efficient for big constant arrays.
 * 'theano-cache list' lists key files bigger than 1M
 * 'theano-cache list' prints an histogram of the number of keys per compiled module
@@ -222,14 +222,14 @@ following methods:
     *Default:* Return True

     By default when optimizations are enabled, we remove during
-    function compilation apply node that have all their input
-    constants. We replace the Apply node with a Theano constant
-    variable. This way, the apply node is not executed at each function
+    function compilation Apply nodes whose inputs are all constants.
+    We replace the Apply node with a Theano constant variable.
+    This way, the Apply node is not executed at each function
     call. If you want to force the execution of an op during the
     function call, make do_constant_folding return False.

-    As done in the Alloc op, you can return False only in some case by
-    analysing the graph from the node parameter.
+    As done in the Alloc op, you can return False only in some cases by
+    analyzing the graph from the node parameter.

 At a bare minimum, a new Op must define ``make_node`` and ``perform``, which
 have no defaults.
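The opt-out mechanism described in this hunk can be sketched with a toy example. This is plain Python, not the real Theano classes; `ToyOp`, `AddOp`, and `BigAllocOp` are hypothetical stand-ins showing the pattern of overriding `do_constant_folding` the way Alloc does:

```python
class ToyOp:
    """Hypothetical stand-in for a Theano-like op (not the real API)."""

    def do_constant_folding(self, node):
        # Default: allow the node to be replaced by a precomputed constant.
        return True


class AddOp(ToyOp):
    """Cheap op: folding it at compile time is always a win."""

    def perform(self, inputs):
        return inputs[0] + inputs[1]


class BigAllocOp(ToyOp):
    """Like Alloc: folding would bake a potentially large array into the
    graph, so it opts out and trades speed for memory."""

    def do_constant_folding(self, node):
        # Keep the op in the runtime graph instead of storing its output.
        return False

    def perform(self, inputs):
        value, length = inputs
        return [value] * length
```

Here an optimizer would fold `AddOp` applied to constants, but leave `BigAllocOp` to run at each function call.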
@@ -511,11 +511,11 @@ class PureOp(object):
     def do_constant_folding(self, node):
         """
-        This allow each op to dertermine if they want to be constant
-        folded when all there in put are constant. This allow them to
-        choose where they put their memory/speed trade off. Also, it
-        could make thing faster as Constant can't be used for inplace
-        operation(see *IncSubtensor)
+        This allows each op to determine if it wants to be constant
+        folded when all its inputs are constant. This allows it to
+        choose where it puts its memory/speed trade-off. Also, it
+        could make things faster as constants can't be used for inplace
+        operations (see *IncSubtensor).
         """
         return True
@@ -3768,7 +3768,7 @@ def constant_folding(node):
         return False
     # condition: all inputs are constant
     if not node.op.do_constant_folding(node):
-        # The op ask to don't be constant folded.
+        # The op asks not to be constant folded.
         return False
     storage_map = dict([(i, [i.data]) for i in node.inputs])
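Taken together, the optimizer logic above reduces to: fold only when every input is a constant and the op agrees. A minimal self-contained sketch, assuming toy `Constant`/`Node` classes in place of Theano's real graph types:

```python
class Constant:
    """Toy graph input holding a precomputed value."""

    def __init__(self, data):
        self.data = data


class Node:
    """Toy apply node: an op applied to a list of inputs."""

    def __init__(self, op, inputs):
        self.op = op
        self.inputs = inputs


def constant_folding(node):
    """Return a Constant replacing `node`, or None if folding is skipped."""
    # Condition: all inputs must already be constants.
    if not all(isinstance(i, Constant) for i in node.inputs):
        return None
    if not node.op.do_constant_folding(node):
        # The op asks not to be constant folded.
        return None
    # Evaluate the op once, at compile time, on the constant values.
    return Constant(node.op.perform([i.data for i in node.inputs]))


class SumOp:
    """Hypothetical op that sums its inputs and allows folding."""

    def do_constant_folding(self, node):
        return True

    def perform(self, values):
        return sum(values)
```

For example, `constant_folding(Node(SumOp(), [Constant(2), Constant(3)]))` yields a `Constant` holding 5, while a node with a non-constant input is left untouched.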