Commit 50567639 authored by James Bergstra

added code comment to elemwise.grad

Parent aaaed1f1
@@ -487,7 +487,8 @@ class Elemwise(Op):
         return self.name

     def grad(self, inputs, ograds):
-        ograds = map(as_tensor_variable, ograds) # this shouldn't be necessary...
+        # Gradients (especially on the final costs) don't have to be symbolic
+        ograds = map(as_tensor_variable, ograds)
         scalar_inputs = [Scalar(dtype = t.type.dtype)() for t in inputs]
         scalar_ograds = [Scalar(dtype = ograd.type.dtype)() for ograd in ograds]
         scalar_igrads = self.scalar_op.grad(scalar_inputs, scalar_ograds)
...
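The new comment explains why the `map(as_tensor_variable, ograds)` line stays: callers may pass output gradients that are not symbolic variables (e.g. a plain `1.0` seeding the gradient of a final cost), so each one must be coerced first. The sketch below is not Theano code; `TensorVariable` and `as_tensor_variable` here are hypothetical stand-ins that only illustrate the normalization pattern.

```python
# Minimal sketch (stand-ins, not Theano) of the normalization in grad():
# raw numbers passed as output gradients are wrapped so that downstream
# code can assume every ograd behaves like a symbolic variable.

class TensorVariable:
    """Hypothetical stand-in for a symbolic tensor variable."""
    def __init__(self, value):
        self.value = value

def as_tensor_variable(x):
    # Pass variable-like objects through untouched; wrap raw numbers.
    if isinstance(x, TensorVariable):
        return x
    return TensorVariable(x)

# A caller differentiating a final scalar cost may pass the constant 1.0:
ograds = [1.0, TensorVariable(2.0)]
ograds = list(map(as_tensor_variable, ograds))
print(all(isinstance(g, TensorVariable) for g in ograds))
```

After the `map`, every element is a `TensorVariable`, regardless of whether the caller supplied a symbolic variable or a bare Python number.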