Commit f8b377fe authored by Iban Harlouchet, committed by Arnaud Bergeron

testcode for doc/library/scan.txt

Parent 08aa59ef
......@@ -35,7 +35,10 @@ happens automatically.
The equivalent Theano code would be:
.. code-block:: python
.. testcode::
import theano
import theano.tensor as T
k = T.iscalar("k")
A = T.vector("A")
......@@ -57,6 +60,13 @@ The equivalent Theano code would be:
print power(range(10),2)
print power(range(10),4)
.. testoutput::
[ 0. 1. 4. 9. 16. 25. 36. 49. 64. 81.]
[ 0.00000000e+00 1.00000000e+00 1.60000000e+01 8.10000000e+01
2.56000000e+02 6.25000000e+02 1.29600000e+03 2.40100000e+03
4.09600000e+03 6.56100000e+03]
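What the scan loop above computes can be sketched as a plain-Python/NumPy loop. This is a hypothetical equivalent for illustration, not the Theano code itself: it starts from the ones vector supplied as ``outputs_info`` and multiplies by ``A`` at each of the ``k`` steps.

```python
import numpy as np

def power(A, k):
    # Mimic the scan loop: start from ones (the initial state given
    # via outputs_info) and multiply by A at each of the k steps,
    # keeping every intermediate result as scan would.
    A = np.asarray(A, dtype=float)
    prior_result = np.ones_like(A)
    results = []
    for _ in range(k):
        prior_result = prior_result * A
        results.append(prior_result)
    # scan returns all intermediate results; the example keeps
    # only the last one (result[-1]).
    return results[-1]
```

Calling ``power(range(10), 2)`` reproduces the first line of the test output above.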
Let us go through the example line by line. We first constructed a
function (using a lambda expression) that, given ``prior_result`` and
``A``, returns ``prior_result * A``. The order of parameters is fixed by scan:
......@@ -88,7 +98,9 @@ The tensor(s) to be looped over should be provided to scan using the
Here's an example that builds a symbolic calculation of a polynomial
from a list of its coefficients:
.. code-block:: python
.. testcode::
import numpy
coefficients = theano.tensor.vector("coefficients")
x = T.scalar("x")
......@@ -112,6 +124,11 @@ from a list of its coefficients:
print calculate_polynomial(test_coefficients, test_value)
print 1.0 * (3 ** 0) + 0.0 * (3 ** 1) + 2.0 * (3 ** 2)
.. testoutput::
19.0
19.0
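The polynomial evaluation can likewise be sketched in plain NumPy (a hypothetical stand-in for the scan version, with ``np.arange`` playing the role of ``theano.tensor.arange``): one component per coefficient, ``coefficient * x ** power``, then a sum over all components.

```python
import numpy as np

# Test values from the example: 1 + 0*x + 2*x**2 evaluated at x = 3.
coefficients = np.asarray([1, 0, 2], dtype=np.float32)
x = 3.0

# One "scan step" per coefficient: coefficient * x ** power.
powers = np.arange(len(coefficients))
components = coefficients * (x ** powers)

# Summing the per-step components gives the polynomial's value.
result = components.sum()
```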
There are a few things to note here.
First, we calculate the polynomial by first generating each of the coefficients, and
......@@ -142,7 +159,7 @@ pitfall to be careful of: the initial output state that is supplied, that is
generated at each iteration and moreover, it **must not involve an implicit
downcast** of the latter.
.. code-block:: python
.. testcode::
import numpy as np
......@@ -169,9 +186,13 @@ downcast** of the latter.
# test
some_num = 15
print triangular_sequence(some_num)
print [n * (n + 1) // 2 for n in xrange(some_num)]
print(triangular_sequence(some_num))
print([n * (n + 1) // 2 for n in xrange(some_num)])
.. testoutput::
[ 0 1 3 6 10 15 21 28 36 45 55 66 78 91 105]
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91, 105]
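The triangular-sequence computation can be written as a plain-Python loop (a sketch of what the scan version does, not Theano code): each step adds the current index to the running total, starting from the initial state 0.

```python
def triangular_sequence(n_steps):
    # Plain-Python equivalent of the scan loop: the running total
    # starts at 0 (the initial output state) and each step adds
    # the current index n.
    total, out = 0, []
    for n in range(n_steps):
        total = total + n
        out.append(total)
    return out
```

``triangular_sequence(15)`` matches the list comprehension ``[n * (n + 1) // 2 for n in range(15)]`` printed above.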
Another simple example
----------------------
......@@ -183,7 +204,7 @@ and a "model" output array (whose shape and dtype will be mimicked),
and produces a sequence of arrays with the shape and dtype of the model,
with all values set to zero except at the provided array indices.
.. code-block:: python
.. testcode::
location = T.imatrix("location")
values = T.vector("values")
......@@ -205,7 +226,21 @@ with all values set to zero except at the provided array indices.
test_locations = numpy.asarray([[1, 1], [2, 3]], dtype=numpy.int32)
test_values = numpy.asarray([42, 50], dtype=numpy.float32)
test_output_model = numpy.zeros((5, 5), dtype=numpy.float32)
print assign_values_at_positions(test_locations, test_values, test_output_model)
print(assign_values_at_positions(test_locations, test_values, test_output_model))
.. testoutput::
[[[ 0. 0. 0. 0. 0.]
[ 0. 42. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]]
[[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 50. 0.]
[ 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0.]]]
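A plain-NumPy sketch of this example (hypothetical, not the Theano code): each scan step produces a fresh zero array shaped like the model, with one value written at the given ``(row, col)`` location, and the per-step results are stacked.

```python
import numpy as np

def set_value_at_position(a_location, a_value, output_model):
    # One scan step: a fresh zero array shaped like the model,
    # with a single value written at the given (row, col) index.
    zeros = np.zeros_like(output_model)
    zeros[tuple(a_location)] = a_value
    return zeros

test_locations = np.asarray([[1, 1], [2, 3]], dtype=np.int32)
test_values = np.asarray([42, 50], dtype=np.float32)
test_output_model = np.zeros((5, 5), dtype=np.float32)

# Stack one output array per (location, value) pair, as scan does.
out = np.stack([set_value_at_position(loc, val, test_output_model)
                for loc, val in zip(test_locations, test_values)])
```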
This demonstrates that you can introduce new Theano variables into a scan function.
......@@ -219,7 +254,7 @@ Another useful feature of scan is that it can handle shared variables.
For example, if we want to implement a Gibbs chain of length 10 we would do
the following:
.. code-block:: python
.. testcode::
W = theano.shared(W_values) # we assume that ``W_values`` contains the
# initial values of your weight matrix
......@@ -251,7 +286,7 @@ update dictionary to your function, you will always get the same 10
sets of random numbers. You can even use the ``updates`` dictionary
afterwards. Look at this example:
.. code-block:: python
.. testcode::
a = theano.shared(1)
values, updates = theano.scan(lambda: {a: a+1}, n_steps=10)
......@@ -260,7 +295,7 @@ In this case the lambda expression does not require any input parameters
and returns an update dictionary which tells how ``a`` should be updated
after each step of scan. If we write:
.. code-block:: python
.. testcode::
b = a + 1
c = updates[a] + 1
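The updates mechanism for the shared variable ``a`` can be sketched in plain Python (a stand-in for the Theano semantics, not Theano code): ``a`` is a piece of state, and running the compiled function applies the update ``a -> a + 1`` once per scan step.

```python
# Plain-Python sketch of the updates dictionary's effect:
# `a` starts at 1 (theano.shared(1)) and the update a -> a + 1
# is applied once per step, for n_steps=10 steps.
state = {"a": 1}
for _ in range(10):
    state["a"] = state["a"] + 1
```

After the ten steps, the state holds 11, i.e. the value of ``a`` after the updates have been applied.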
......@@ -289,7 +324,7 @@ execution. To pass the shared variables to Scan you need to put them in a list
and give it to the ``non_sequences`` argument. Here is the Gibbs sampling code
updated:
.. code-block:: python
.. testcode::
W = theano.shared(W_values) # we assume that ``W_values`` contains the
# initial values of your weight matrix
......@@ -332,7 +367,7 @@ to be ensured by the user. Otherwise, it will result in an error.
Using the previous Gibbs sampling example:
.. code-block:: python
.. testcode::
# The new scan, using strict=True
values, updates = theano.scan(fn=OneStep,
......@@ -369,7 +404,7 @@ In this case we have a sequence ``u`` over which we need to iterate,
and two outputs ``x`` and ``y``. To implement this with scan we first
construct a function that computes one iteration step:
.. code-block:: python
.. testcode::
def oneStep(u_tm4, u_t, x_tm3, x_tm1, y_tm1, W, W_in_1, W_in_2, W_feedback, W_out):
......@@ -392,7 +427,7 @@ an order, but also variables, since this is how scan figures out what should
be represented by what. Given that we have all
the Theano variables needed, we construct our RNN as follows:
.. code-block:: python
.. testcode::
u = T.matrix() # it is a sequence of vectors
x0 = T.matrix() # initial state of x has to be a matrix, since
......@@ -432,7 +467,7 @@ provided condition evaluates to True.
As an example, we will compute all powers of two smaller than some provided
value ``max_value``.
.. code-block:: python
.. testcode::
def power_of_2(previous_power, max_value):
return previous_power*2, theano.scan_module.until(previous_power*2 > max_value)
......@@ -446,6 +481,10 @@ value ``max_value``.
f = theano.function([max_value], values)
print f(45)
.. testoutput::
[ 2. 4. 8. 16. 32. 64.]
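The termination behaviour can be sketched as a plain-Python loop (a hypothetical equivalent of scan with ``until``, not Theano code): the value is doubled each step, and the step whose result triggers the condition is still included in the output, which is why 64 appears above even though it exceeds 45.

```python
def power_of_2_list(max_value):
    # Plain-Python sketch of scan with an until condition: double
    # the running value each step and stop once previous_power * 2
    # exceeds max_value. The step that triggers the condition is
    # still emitted, matching scan's behaviour.
    out, value = [], 1
    while True:
        value = value * 2
        out.append(value)
        if value > max_value:
            break
    return out
```

``power_of_2_list(45)`` reproduces the list printed above.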
As you can see, in order to terminate on a condition, the only thing required
is for the inner function ``power_of_2`` to also return the condition
......