Commit 19e994e2 authored by Razvan Pascanu

merge; no conflicts

@@ -90,12 +90,17 @@ def traverse(out, x, x_copy, d):
     This happens because initially shared variables are on GPU .. which is
     fine for the main computational graph but confuses things a bit for the
     inner graph of scan '''
+    import theano.sandbox.cuda as cuda
     if out == x:
         d[out] = tensor.as_tensor_variable(x_copy)
         return d
     elif out.owner is None:
         return d
+    elif (cuda.cuda_available and
+          out.owner.op == cuda.host_from_gpu and
+          out.owner.inputs == [x]):
+        d[out] = tensor.as_tensor_variable(x_copy)
+        return d
     else:
         for inp in out.owner.inputs:
             d = traverse(inp, x, x_copy, d)
...
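To make the control flow of the patched `traverse` concrete: it walks an expression graph from an output toward its inputs, and records in the dict `d` that any occurrence of the shared variable `x` should be rewritten to `x_copy`. The new `elif` branch extends this so that a GPU-to-host transfer of `x` (`host_from_gpu(x)`) is rewritten the same way, so scan's inner graph never sees the GPU-resident variable. Below is a minimal, self-contained sketch of that pattern; `Var`, `Apply`, and `HOST_FROM_GPU` are hypothetical stand-ins for Theano's graph objects, not its real API.

```python
class Apply:
    """Stand-in for a Theano Apply node: an op applied to inputs."""
    def __init__(self, op, inputs):
        self.op = op
        self.inputs = inputs

class Var:
    """Stand-in for a Theano Variable; `owner` is the Apply that
    produced it, or None for graph inputs."""
    def __init__(self, name, owner=None):
        self.name = name
        self.owner = owner

# Illustrative marker playing the role of cuda.host_from_gpu.
HOST_FROM_GPU = "host_from_gpu"

def traverse(out, x, x_copy, d):
    if out is x:
        # Direct occurrence of x: replace it with x_copy.
        d[out] = x_copy
        return d
    elif out.owner is None:
        # A graph input other than x: nothing to do.
        return d
    elif out.owner.op == HOST_FROM_GPU and out.owner.inputs == [x]:
        # The case added by the patch: a host transfer of x is
        # also mapped to x_copy, hiding the GPU variable from
        # the inner graph.
        d[out] = x_copy
        return d
    else:
        # Recurse into the inputs of the node that produced `out`.
        for inp in out.owner.inputs:
            d = traverse(inp, x, x_copy, d)
        return d

# Usage: an output graph add(host_from_gpu(x), one).
x = Var("x")
x_copy = Var("x_copy")
host_x = Var("host_x", owner=Apply(HOST_FROM_GPU, [x]))
out = Var("out", owner=Apply("add", [host_x, Var("one")]))
d = traverse(out, x, x_copy, {})
# d now maps the host-transferred view of x to x_copy.
```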