Commit cf860fa6 authored by ricardoV94, committed by Ricardo Vieira

Allow building jacobian via vectorization instead of Scan

Also allow arbitrary expression dimensionality
Parent 64dfa93e
@@ -101,9 +101,12 @@ PyTensor implements the :func:`pytensor.gradient.jacobian` macro that does all
that is needed to compute the Jacobian. The following text explains how
to do it manually.

Using Scan
----------
In order to manually compute the Jacobian of some function ``y`` with
respect to some parameter ``x`` we can use `scan`.
In this case, we loop over the entries in ``y`` and compute the gradient of
``y[i]`` with respect to ``x``.

.. note::
@@ -111,8 +114,7 @@ do is to loop over the entries in ``y`` and compute the gradient of
    `scan` is a generic op in PyTensor that allows writing in a symbolic
    manner all kinds of recurrent equations. While creating
    symbolic loops (and optimizing them for performance) is a hard task,
    efforts are being made to improve the performance of `scan`.
>>> import pytensor
>>> import pytensor.tensor as pt
@@ -124,9 +126,9 @@ do is to loop over the entries in ``y`` and compute the gradient of
array([[ 8.,  0.],
       [ 0.,  8.]])
This code generates a sequence of integers from ``0`` to
``y.shape[0]`` using `pt.arange`. Then it loops through this sequence, and
at each step, computes the gradient of element ``y[i]`` with respect to
``x``. `scan` automatically concatenates all these rows, generating a
matrix which corresponds to the Jacobian.
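For reference, the unchanged doctest elided between the hunks above builds the Jacobian with `scan` roughly as follows (a reconstruction from the surrounding context and output, not part of the diff):

>>> x = pt.dvector('x')
>>> y = x ** 2
>>> J, updates = pytensor.scan(lambda i, y, x: pt.grad(y[i], x), sequences=pt.arange(y.shape[0]), non_sequences=[y, x])
>>> f = pytensor.function([x], J, updates=updates)
>>> f([4, 4])
array([[ 8.,  0.],
       [ 0.,  8.]])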
@@ -139,6 +141,31 @@ matrix which corresponds to the Jacobian.
``x`` anymore, while ``y[i]`` still is.
Using automatic vectorization
-----------------------------
An alternative way to build the Jacobian is to vectorize the graph that computes a single row or column of the Jacobian.
We can use `Lop` or `Rop` (described below) to obtain a single row or column of the Jacobian, and `vectorize_graph`
to turn it into the full Jacobian matrix.
>>> import pytensor
>>> import pytensor.tensor as pt
>>> from pytensor.gradient import Lop
>>> from pytensor.graph import vectorize_graph
>>> x = pt.dvector('x')
>>> y = x ** 2
>>> row_cotangent = pt.dvector("row_cotangent")  # helper variable; it will be replaced during vectorization
>>> J_row = Lop(y, x, row_cotangent)
>>> J = vectorize_graph(J_row, replace={row_cotangent: pt.eye(x.size)})
>>> f = pytensor.function([x], J)
>>> f([4, 4])
array([[ 8.,  0.],
       [ 0.,  8.]])
Replacing the single row cotangent with the identity matrix makes `vectorize_graph` compute the vector-Jacobian product for every basis vector at once; stacked together, these products form the full Jacobian.
This avoids the overhead of `scan`, at the cost of higher memory usage if the Jacobian expression has large intermediate operations.
Also, not all graphs are safely vectorizable (e.g., if different rows require intermediate operations of different sizes).
For these reasons `jacobian` uses `scan` by default; the behavior can be changed by setting `vectorize=True`.
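A usage sketch of the new flag (assuming only the `vectorize` parameter added in this commit):

>>> from pytensor.gradient import jacobian
>>> x = pt.dvector("x")
>>> y = x ** 2
>>> J = jacobian(y, x, vectorize=True)
>>> f = pytensor.function([x], J)
>>> f([4, 4])
array([[ 8.,  0.],
       [ 0.,  8.]])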
Computing the Hessian
=====================
@@ -11,7 +11,7 @@ import numpy as np
import pytensor
from pytensor.compile.ops import ViewOp
from pytensor.configdefaults import config
from pytensor.graph import utils, vectorize_graph
from pytensor.graph.basic import Apply, NominalVariable, Variable
from pytensor.graph.null_type import NullType, null_type
from pytensor.graph.op import get_test_values
@@ -703,15 +703,15 @@ def grad(
        grad_dict[var] = g_var

    def handle_disconnected(var):
        if disconnected_inputs == "ignore":
            return
        elif disconnected_inputs == "warn":
            message = (
                "grad method was asked to compute the gradient "
                "with respect to a variable that is not part of "
                "the computational graph of the cost, or is used "
                f"only by a non-differentiable operator: {var}"
            )
            warnings.warn(message, stacklevel=2)
        elif disconnected_inputs == "raise":
            message = utils.get_variable_trace_string(var)
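For context, a minimal sketch of the behavior this helper guards (the variable names here are illustrative, not from the diff):

import pytensor.tensor as pt
from pytensor.gradient import grad

x = pt.dvector("x")
z = pt.dvector("z")  # z never enters the cost, so it is disconnected
cost = (x ** 2).sum()

# With "ignore" the refactored helper now returns early (instead of falling
# through a pre-built message); the gradient w.r.t. z comes back as zeros.
gz = grad(cost, wrt=z, disconnected_inputs="ignore")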
@@ -2021,13 +2021,19 @@ GradientError: numeric gradient and analytic gradient exceed tolerance:
    Exception args: {args_msg}"""


def jacobian(
    expression,
    wrt,
    consider_constant=None,
    disconnected_inputs="raise",
    vectorize=False,
):
""" """
Compute the full Jacobian, row by row. Compute the full Jacobian, row by row.
Parameters Parameters
---------- ----------
expression : Vector (1-dimensional) :class:`~pytensor.graph.basic.Variable` expression :class:`~pytensor.graph.basic.Variable`
Values that we are differentiating (that we want the Jacobian of) Values that we are differentiating (that we want the Jacobian of)
wrt : :class:`~pytensor.graph.basic.Variable` or list of Variables wrt : :class:`~pytensor.graph.basic.Variable` or list of Variables
Term[s] with respect to which we compute the Jacobian Term[s] with respect to which we compute the Jacobian
@@ -2051,18 +2057,18 @@ def jacobian(expression, wrt, consider_constant=None, disconnected_inputs="raise
        output, then a zero variable is returned. The return value is
        of same type as `wrt`: a list/tuple or TensorVariable in all cases.
    """
    from pytensor.tensor.basic import eye
    from pytensor.tensor.extra_ops import broadcast_to

    if not isinstance(expression, Variable):
        raise TypeError("jacobian expects a Variable as `expression`")
    using_list = isinstance(wrt, list)
    using_tuple = isinstance(wrt, tuple)

    grad_kwargs = {
        "consider_constant": consider_constant,
        "disconnected_inputs": disconnected_inputs,
    }

    if isinstance(wrt, list | tuple):
        wrt = list(wrt)
@@ -2070,43 +2076,55 @@ def jacobian(expression, wrt, consider_constant=None, disconnected_inputs="raise
        wrt = [wrt]
    if all(expression.type.broadcastable):
        # expression is just a scalar, use grad
        jacobian_matrices = grad(expression.squeeze(), wrt, **grad_kwargs)

    elif vectorize:
        expression_flat = expression.ravel()
        row_tangent = _float_ones_like(expression_flat).type("row_tangent")
        jacobian_single_rows = Lop(expression.ravel(), wrt, row_tangent, **grad_kwargs)

        n_rows = expression_flat.size
        jacobian_matrices = vectorize_graph(
            jacobian_single_rows,
            replace={row_tangent: eye(n_rows, dtype=row_tangent.dtype)},
        )
        if disconnected_inputs != "raise":
            # If the input is disconnected from the cost, `vectorize_graph` has no effect on the respective jacobian
            # We have to broadcast the zeros explicitly here
            for i, (jacobian_single_row, jacobian_matrix) in enumerate(
                zip(jacobian_single_rows, jacobian_matrices, strict=True)
            ):
                if jacobian_single_row.ndim == jacobian_matrix.ndim:
                    jacobian_matrices[i] = broadcast_to(
                        jacobian_matrix, shape=(n_rows, *jacobian_matrix.shape)
                    )

    else:

        def inner_function(*args):
            idx, expr, *wrt = args
            return grad(expr[idx], wrt, **grad_kwargs)

        jacobian_matrices, updates = pytensor.scan(
            inner_function,
            sequences=pytensor.tensor.arange(expression.size),
            non_sequences=[expression.ravel(), *wrt],
            return_list=True,
        )
        if updates:
            raise ValueError(
                "The scan used to build the jacobian matrices returned a list of updates"
            )

    if jacobian_matrices[0].ndim < (expression.ndim + wrt[0].ndim):
        # There was some raveling or squeezing done prior to getting the jacobians
        # Reshape into original shapes
        jacobian_matrices = [
            jac_matrix.reshape((*expression.shape, *w.shape))
            for jac_matrix, w in zip(jacobian_matrices, wrt, strict=True)
        ]

    return as_list_or_tuple(using_list, using_tuple, jacobian_matrices)
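A quick sketch of the arbitrary-dimensionality support enabled by the final reshape above (values and names are illustrative):

import pytensor
import pytensor.tensor as pt
from pytensor.gradient import jacobian

x = pt.dvector("x")
y = pt.stack([x ** 2, pt.sin(x)])  # 2-d expression of shape (2, n)

# Previously rejected by the removed ndim > 1 check; now the result
# has shape (*y.shape, *x.shape).
J = jacobian(y, x, vectorize=True)
f = pytensor.function([x], J)
print(f([1.0, 2.0]).shape)  # -> (2, 2, 2)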
def hessian(cost, wrt, consider_constant=None, disconnected_inputs="raise"):