Commit d0a3178b authored by Frederic Bastien

Don't warn about a missed memory optimization in the view/destroy map when the output is a scalar.

Parent f6107d0e
@@ -531,6 +531,11 @@ def _check_inputs(node, storage_map, r_vals, dr_vals, active_nodes, clobber_dr_v
    for oo, ii in vmap.iteritems():
        out_var = storage_map[node.outputs[oo]][0]
        in_var = storage_map[node.inputs[ii[0]]][0]
        # We don't try to optimize simple scalars, as this is not worth our time.
        # This happens at least in Subtensor when the output is a scalar,
        # but this depends on the version of numpy!
        if getattr(out_var, 'size', 2) == 1:
            continue
        if not _may_share_memory(out_var, in_var):
            # When a subtensor returns a tensor of ndim==0, numpy seems to return a copy.
            # When we have an empty ndarray (happens with output guard) it is not the same. Why?
......
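The skipped check rests on the NumPy behavior the comments allude to: slicing returns a view, but scalar indexing returns a copy, so a memory-sharing check on a 0-d result always reports nothing shared. A minimal sketch in plain NumPy (independent of Theano's `_check_inputs`, and subject to the NumPy-version caveat the patch itself notes):

```python
import numpy as np

a = np.arange(4.0)

s = a[1:]   # slicing returns a view: memory is shared with `a`
x = a[0]    # scalar indexing returns a numpy scalar: a copy

print(np.may_share_memory(a, s))              # True
print(np.may_share_memory(a, np.asarray(x)))  # False

# The guard in the patch: skip the warning for size-1 outputs.
# The default of 2 keeps outputs without a .size attribute checked.
print(getattr(x, 'size', 2) == 1)             # True for a numpy scalar
```

This is why warning about a "missed" view for scalar outputs would be noise: there is no view to miss.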