Commit 169a2461 authored by James Bergstra

added a note to the tutorial examples about how the optimizer simplifies the gradient expression

parent ea093c2c
@@ -98,7 +98,7 @@ Here is code to compute this gradient:
>>> x = T.dscalar('x')
>>> y = x**2
>>> gy = T.grad(y, x)
->>> pp(gy)
+>>> pp(gy) # print out the gradient prior to optimization
'((fill((x ** 2), 1.0) * 2) * (x ** (2 - 1)))'
>>> f = function([x], gy)
>>> f(4)
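The value that ``f(4)`` returns can be sanity-checked without Theano at all: the derivative of ``x ** 2`` is ``2 * x``, so at ``x = 4`` the gradient should be ``8.0``. A minimal sketch using a plain-Python central-difference approximation (``numeric_grad`` is a hypothetical helper name, not part of Theano):

```python
def numeric_grad(f, x, eps=1e-6):
    """Central-difference approximation of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# The symbolic gradient of x**2 is 2*x, so at x = 4 we expect 8.0.
g = numeric_grad(lambda x: x ** 2, 4.0)
print(g)
```

This kind of finite-difference check is the same idea Theano uses internally in its ``verify_grad`` testing utility.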
@@ -112,7 +112,16 @@ the correct symbolic gradient.
``x ** 2`` and fill it with 1.0.
 .. note::
-    The optimizer will simplify the symbolic gradient expression.
+    The optimizer simplifies the symbolic gradient expression. You can see
+    this by digging inside the internal properties of the compiled function.
+
+    .. code-block:: python
+
+       pp(f.maker.env.outputs[0])
+       '(2.0 * x)'
+
+    After optimization there is only one Apply node left in the graph, which
+    doubles the input.
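The simplification the note describes, from ``((fill((x ** 2), 1.0) * 2) * (x ** (2 - 1)))`` down to ``(2.0 * x)``, is ordinary symbolic algebra. As an illustration only (using SymPy rather than Theano's optimizer, purely as a sketch of the same idea):

```python
import sympy as sp

x = sp.symbols('x')

# Differentiating x**2 symbolically yields the already-simplified form 2*x,
# which is what Theano's optimizer reduces its raw gradient expression to.
grad = sp.diff(x ** 2, x)
print(grad)  # 2*x
```

SymPy simplifies eagerly, whereas Theano builds the verbose gradient graph first and only simplifies it when the function is compiled.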
We can also compute the gradient of complex expressions such as the
logistic function defined above. It turns out that the derivative of the
......