Commit 7b9efdc2 authored by Pascal Lamblin

Update documentation to make clearer what verify_grad takes as input

Parent 6b0656ff
@@ -363,7 +363,7 @@ at point ``x`` is approximated as:

Here is the prototype for the verify_grad function.

>>> def verify_grad(fun, pt, n_tests=2, rng=None, eps=1.0e-7, abs_tol=0.0001, rel_tol=0.0001):

``verify_grad`` raises an Exception if the difference between the analytic gradient and
numerical gradient (computed through the Finite Difference Method) exceeds

@@ -371,9 +371,11 @@ both the given absolute and relative tolerances.

The parameters are as follows:
* ``fun``: a Python function that takes Theano variables as inputs,
  and returns a Theano variable. For instance, an Op instance with
  a single output is such a function. It can also be a Python function
  that calls an op with some of its inputs fixed to specific values,
  or that combines multiple ops.
* pt: the list of numpy.ndarrays to use as inputs to the op

@@ -387,13 +389,27 @@ The parameters are as follows:

* rel_tol: relative tolerance used as threshold for gradient comparison
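As background before the examples, here is a minimal sketch of the comparison ``verify_grad`` performs. Everything in it (the ``check_grad_sketch`` helper, its arguments, the simplified relative-error formula) is a hypothetical illustration, not Theano's actual implementation:

>>> import numpy
>>> def check_grad_sketch(f, grad_f, x, eps=1.0e-7, abs_tol=1e-4, rel_tol=1e-4):
>>>     # f maps an ndarray to a scalar; grad_f is its analytic gradient.
>>>     analytic = grad_f(x)
>>>     numeric = numpy.zeros_like(x)
>>>     for i in range(x.size):
>>>         x_plus = x.copy()
>>>         x_plus.flat[i] += eps  # Finite Difference Method: one coordinate at a time
>>>         numeric.flat[i] = (f(x_plus) - f(x)) / eps
>>>     abs_err = numpy.abs(analytic - numeric)
>>>     rel_err = abs_err / numpy.maximum(numpy.abs(analytic) + numpy.abs(numeric), 1e-8)
>>>     # Fail only when *both* tolerances are exceeded, as verify_grad does.
>>>     if ((abs_err > abs_tol) & (rel_err > rel_tol)).any():
>>>         raise Exception('analytic and numerical gradients disagree')
>>> check_grad_sketch(lambda v: (v ** 2).sum(), lambda v: 2 * v, numpy.asarray([0.5, -1.0, 2.0]))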
In the general case, you can define ``fun`` as you want, as long as it
takes as inputs Theano symbolic variables and returns a single Theano
symbolic variable:
>>> def test_verify_exprgrad():
>>>     def fun(x, y, z):
>>>         return (x + tensor.cos(y)) / (4 * z)**2
>>>     x_val = numpy.asarray([[1], [1.1], [1.2]])
>>>     y_val = numpy.asarray([0.1, 0.2])
>>>     z_val = numpy.asarray(2.)
>>>     rng = numpy.random.RandomState(42)
>>>     tensor.verify_grad(fun, [x_val, y_val, z_val], rng=rng)
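These doctest snippets assume ``numpy`` and ``tensor`` are already in scope; a plausible interactive setup (an assumption, not part of the documented example) would be:

>>> import numpy
>>> from theano import tensor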
Here is an example showing how to use ``verify_grad`` on an Op instance:
>>> def test_flatten_outdimNone():
>>>     # Testing gradient w.r.t. all inputs of an op (in this example the op
>>>     # being used is Flatten(), which takes a single input).
>>>     a_val = numpy.asarray([[0,1,2],[3,4,5]], dtype='float64')
>>>     rng = numpy.random.RandomState(42)
>>>     tensor.verify_grad(tensor.Flatten(), [a_val], rng=rng)
Here is another example, showing how to verify the gradient w.r.t. a subset of
an Op's inputs. This is useful in particular when the gradient w.r.t. some of

@@ -410,7 +426,8 @@ which would cause verify_grad to crash.
>>>     return op(x, b, y_idx=numpy.asarray([0, 2]))[0]
>>> x_val = numpy.asarray([[-1, 0, 1], [3, 2, 1]], dtype='float64')
>>> b_val = numpy.asarray([1, 2, 3], dtype='float64')
>>> rng = numpy.random.RandomState(42)
>>> tensor.verify_grad(op_with_fixed_y_idx, [x_val, b_val], rng=rng)
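The wrapper pattern above generalizes: close over the fixed values in a plain Python function, so that ``verify_grad`` only perturbs the arguments you list. Here is a self-contained hypothetical sketch (the op and values are made up, not the elided example above):

>>> import numpy
>>> from theano import tensor
>>> fixed_exponent = 3  # hypothetical input we do not differentiate w.r.t.
>>> def op_with_fixed_exponent(x):
>>>     # only x appears in the argument list, so only x is perturbed
>>>     return x ** fixed_exponent
>>> x_val = numpy.asarray([0.5, 1.0, 2.0], dtype='float64')
>>> rng = numpy.random.RandomState(42)
>>> tensor.verify_grad(op_with_fixed_exponent, [x_val], rng=rng)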
.. note::
...
@@ -4032,21 +4032,24 @@ class numeric_grad:

        return (max_arg, pos[max_arg], abs_errs[max_arg], rel_errs[max_arg])
def verify_grad(fun, pt, n_tests=2, rng=None, eps=None, abs_tol=None, rel_tol=None,
                mode=None, cast_to_output_type=False):
    """ Test a gradient by Finite Difference Method. Raise error on failure.
    Example:

    >>> verify_grad(theano.tensor.tanh,
                    (numpy.asarray([[2,3,4], [-1, 3.3, 9.9]]),),
                    rng=numpy.random)
    Raises an Exception if the difference between the analytic gradient and
    numerical gradient (computed through the Finite Difference Method) exceeds
    the given tolerance.
    :param fun: a Python function that takes Theano variables as inputs,
        and returns a Theano variable. For instance, an Op instance with
        a single output.
    :param pt: the list of numpy.ndarrays to use as input values.
        These arrays must be either float32 or float64 arrays.
    :param n_tests: number of times to run the test
    :param rng: random number generator from which to draw random samples
    :param eps: stepsize used in the Finite Difference Method (Default None is type-dependent)

@@ -4060,8 +4063,7 @@ def verify_grad(op, pt, n_tests=2, rng=None, eps=None, abs_tol=None, rel_tol=Non
    :note: This op does not support multiple outputs. In tests/test_scan.py there is
        an experimental verify_grad that covers that case as well by using random
        projections.
    """
    pt = [numpy.array(p) for p in pt]

@@ -4089,11 +4091,11 @@ def verify_grad(fun, pt, n_tests=2, rng=None, eps=None, abs_tol=None, rel_tol=Non

    tensor_pt = [value(p.copy(), name='input %i'%i) for i,p in enumerate(pt)]
    # fun can be either a function or an actual Op instance
    o_output = fun(*tensor_pt)
    if isinstance(o_output, list):
        raise NotImplementedError("can't (yet) autotest gradient of fun with multiple outputs")
    # we could loop over outputs making random projections R for each,
    # but this doesn't handle the case where not all the outputs are
    # differentiable... so I leave this as TODO for now -JB.
...
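For reference, here is a minimal numpy sketch of the random-projection idea mentioned in the note and the TODO comment above: dotting each output with a fixed random vector and summing yields a single scalar whose gradient exercises every output. The helper name and its use of plain numpy are illustrative assumptions, not the experimental code in tests/test_scan.py.

import numpy

def project_outputs_to_scalar(outputs, rng):
    # Hypothetical helper: reduce several output arrays to one scalar by
    # summing dot products with fixed random projections R, so a single
    # finite-difference check covers all outputs at once.
    total = 0.0
    for out in outputs:
        r = rng.uniform(size=out.shape)  # one fixed projection per output
        total += (out * r).sum()
    return total

# Example: two fake "outputs" reduced to one scalar cost.
rng = numpy.random.RandomState(42)
outputs = [numpy.ones((2, 3)), numpy.arange(4.0)]
cost = project_outputs_to_scalar(outputs, rng)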