Unverified commit 50c60b6b authored by Brandon T. Willard, committed by GitHub

Merge pull request #137 from brandonwillard/add-randomvariable-op

Add new RandomVariable Op and optimizations
......@@ -273,7 +273,7 @@ the following:
bvis = theano.shared(bvis_values)
bhid = theano.shared(bhid_values)
-trng = tt.shared_randomstreams.RandomStreams(1234)
+trng = tt.random.utils.RandomStream(1234)
def OneStep(vsample) :
hmean = tt.nnet.sigmoid(theano.dot(vsample, W) + bhid)
......@@ -354,7 +354,7 @@ updated:
bvis = theano.shared(bvis_values)
bhid = theano.shared(bhid_values)
-trng = tt.shared_randomstreams.RandomStreams(1234)
+trng = tt.random.utils.RandomStream(1234)
# OneStep, with explicit use of the shared variables (W, bvis, bhid)
def OneStep(vsample, W, bvis, bhid):
......
......@@ -9,8 +9,8 @@
:synopsis: symbolic types and operations for n-dimensional arrays.
.. moduleauthor:: LISA
Theano's strength is in expressing symbolic calculations involving tensors.
There are many types of symbolic expressions for tensors.
They are grouped into the following sections:
......@@ -19,8 +19,7 @@ They are grouped into the following sections:
basic
nnet/index
-raw_random
-shared_randomstreams
+random/index
signal/index
utils
elemwise
......
.. _libdoc_tensor_random:
=============================================
:mod:`random` -- Low-level random numbers
=============================================
.. module:: theano.tensor.random
:synopsis: symbolic random variables
.. moduleauthor:: pymc-team
The `theano.tensor.random` module provides random-number drawing functionality
that closely resembles the `numpy.random` module.
Reference
=========
.. class:: RandomStream()
A helper class that tracks changes in a shared ``numpy.random.RandomState``
and behaves like ``numpy.random.RandomState`` by managing access
to `RandomVariable`s. For example:
.. testcode:: constructors
from theano.tensor.random.utils import RandomStream
rng = RandomStream()
sample = rng.normal(0, 1, size=(2, 2))
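Numerically, a `RandomStream` draw is backed by a shared ``numpy.random.RandomState``; as a sketch, the draw above corresponds to the following direct NumPy call (the numerical behavior only, not the symbolic graph):

```python
import numpy as np

# A RandomStream manages a numpy.random.RandomState under the hood;
# this is the equivalent direct draw for rng.normal(0, 1, size=(2, 2)).
rng = np.random.RandomState(1234)
sample = rng.normal(0, 1, size=(2, 2))
```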
.. class:: RandomStateType(gof.Type)
A `Type` for variables that will take ``numpy.random.RandomState``
values.
.. function:: random_state_type(name=None)
Return a new Variable whose ``.type`` is ``random_state_type``.
.. class:: RandomVariable(gof.Op)
`Op` that draws random numbers from a `numpy.random.RandomState` object.
This `Op` is parameterized to draw numbers from many possible
distributions.
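The numerical core of such a parameterized `Op` can be pictured with plain NumPy; `draw` below is a hypothetical helper for illustration, not part of the module:

```python
import numpy as np

# Hypothetical sketch (not the Theano Op itself): a RandomVariable pairs a
# distribution name with parameters, and its numerical step amounts to
# dispatching to the matching numpy.random.RandomState method.
def draw(rng, dist_name, *params, size=None):
    return getattr(rng, dist_name)(*params, size=size)

rng = np.random.RandomState(123)
sample = draw(rng, "normal", 0.0, 1.0, size=(2, 2))
```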
-.. _libdoc_tensor_shared_randomstreams:
+.. _libdoc_tensor_random_utils:
======================================================
-:mod:`shared_randomstreams` -- Friendly random numbers
+:mod:`utils` -- Friendly random numbers
======================================================
-.. module:: theano.tensor.shared_randomstreams
+.. module:: theano.tensor.random.utils
:platform: Unix, Windows
:synopsis: symbolic random variables
.. moduleauthor:: LISA
......@@ -27,17 +27,15 @@ For an example of how to use random numbers, see
Reference
=========
-.. class:: RandomStreams(raw_random.RandomStreamsBase)
+.. class:: RandomStream()
-This is a symbolic stand-in for ``numpy.random.RandomState``.
-Random variables of various distributions are instantiated by calls to
-parent class :class:`raw_random.RandomStreamsBase`.
+This is a symbolic stand-in for ``numpy.random.RandomState``.
.. method:: updates()
:returns: a list of all the (state, new_state) update pairs for the
random variables created by this object
This can be a convenient shortcut for enumerating all the random
variables in a large graph when building the ``updates`` parameter of ``function``.
......@@ -60,21 +58,4 @@ Reference
.. method:: uniform, normal, binomial, multinomial, random_integers, ...
See :class:`raw_random.RandomStreamsBase`.
.. class:: RandomVariable(object)
.. attribute:: rng
The shared variable whose ``.value`` is the numpy RandomState
generator feeding this random variable.
.. attribute:: update
A pair
whose first element is a shared variable whose value is a numpy RandomState,
and whose second element is an [symbolic] expression for the next value of that
RandomState after drawing samples.
Including this pair in the ``updates`` list passed to ``function`` will cause the
function to update the random number generator feeding this variable.
See :class:`basic.RandomVariable`.
.. _libdoc_tensor_raw_random:
=============================================
:mod:`raw_random` -- Low-level random numbers
=============================================
.. module:: theano.tensor.raw_random
:synopsis: symbolic random variables
.. moduleauthor:: LISA
The raw_random module provides the random-number drawing functionality that underlies
the friendlier :class:`RandomStreams` interface.
Reference
=========
.. class:: RandomStreamsBase(object)
This is the interface for the
:class:`theano.tensor.shared_randomstreams.RandomStreams` subclass.
.. method:: binomial(self, size=(), n=1, p=0.5, ndim=None):
Sample ``n`` times with probability of success ``p`` for each
trial and return the number of successes.
If ``size`` is ambiguous on the number of dimensions, ``ndim``
may be a plain integer to supplement the missing information.
This wraps the numpy implementation, so it has the same
behavior.
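As a numerical sketch of the wrapped NumPy behavior (plain NumPy, not the symbolic graph):

```python
import numpy as np

# binomial: n trials with success probability p per trial; each entry of
# the result is the number of successes, an integer in [0, n].
rng = np.random.RandomState(42)
counts = rng.binomial(n=10, p=0.5, size=(4,))
```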
.. method:: uniform(self, size=(), low=0.0, high=1.0, ndim=None):
Sample a tensor of the given size whose elements come from a
uniform distribution between low and high.
If ``size`` is ambiguous on the number of dimensions, ``ndim``
may be a plain integer to supplement the missing information.
This wraps the numpy implementation, so it has the same
bounds: [``low``, ``high``\[.
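The half-open interval [``low``, ``high``\[ can be checked directly against NumPy (a sketch of the wrapped numerical behavior):

```python
import numpy as np

# uniform samples land in the half-open interval [low, high):
# low is attainable, high is excluded.
rng = np.random.RandomState(0)
u = rng.uniform(low=-1.0, high=1.0, size=(1000,))
```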
.. method:: normal(self, size=(), avg=0.0, std=1.0, ndim=None):
Sample from a normal distribution centered on ``avg`` with the
specified standard deviation (``std``)
If ``size`` is ambiguous on the number of dimensions, ``ndim``
may be a plain integer to supplement the missing information.
This wraps the numpy implementation, so it has the same behavior.
.. method:: random_integers(self, size=(), low=0, high=1, ndim=None):
Sample a random integer between low and high, both inclusive.
If ``size`` is ambiguous on the number of dimensions, ``ndim``
may be a plain integer to supplement the missing information.
This is a generalization of :py:func:`numpy.random.random_integers`
to the case where low and high are tensors. Otherwise it
behaves the same.
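For scalar bounds this matches NumPy's inclusive draw; note that ``numpy.random.random_integers`` is deprecated in recent NumPy, where ``randint`` with ``high + 1`` is the equivalent (a numerical sketch, not the symbolic op):

```python
import numpy as np

# Inclusive sampling on [low, high]: randint's upper bound is exclusive,
# so high + 1 reproduces random_integers' inclusive behavior.
rng = np.random.RandomState(7)
ints = rng.randint(low=0, high=1 + 1, size=(100,))
```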
.. method:: choice(self, size=(), a=2, replace=True, p=None, ndim=None, dtype='int64'):
Choose values from ``a`` with or without replacement. ``a``
can be a 1-D array or a positive scalar. If ``a`` is a scalar,
the samples are drawn from the range [0, ``a``\[.
If ``size`` is ambiguous on the number of dimensions, ``ndim``
may be a plain integer to supplement the missing information.
This wraps the numpy implementation so it has the same behavior.
.. method:: poisson(self, size=(), lam=None, ndim=None, dtype='int64'):
Draw samples from a Poisson distribution.
The Poisson distribution is the limit of the Binomial
distribution for large N.
If ``size`` is ambiguous on the number of dimensions, ``ndim``
may be a plain integer to supplement the missing information.
This wraps the numpy implementation so it has the same behavior.
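The wrapped NumPy behavior, as a sketch: non-negative integer counts with mean ``lam``.

```python
import numpy as np

# Poisson draws with rate lam: non-negative integer counts whose
# sample mean approaches lam.
rng = np.random.RandomState(3)
k = rng.poisson(lam=4.0, size=(1000,))
```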
.. method:: permutation(self, size=(), n=1, ndim=None):
Returns permutations of the integers between 0 and ``n-1``, as
many times as required by ``size``. For instance, if
``size=(p,q)``, ``p*q`` permutations will be generated, and
the output shape will be ``(p,q,n)``, because each permutation
is of size ``n``.
Theano tries to infer the number of dimensions from the length
of ``size``, but you may always specify it with ``ndim``.
.. note::
The output will have ``ndim+1`` dimensions.
This is a generalization of :py:func:`numpy.random.permutation` to
tensors. Otherwise it behaves the same.
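The output shape rule can be reproduced with plain NumPy by stacking ``p*q`` independent permutations (a sketch of the numerical behavior):

```python
import numpy as np

# With size=(p, q), p*q independent permutations of 0..n-1 are drawn,
# giving an output of shape (p, q, n).
rng = np.random.RandomState(0)
p, q, n = 2, 3, 4
out = np.array([[rng.permutation(n) for _ in range(q)] for _ in range(p)])
```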
.. method:: multinomial(self, size=(), n=1, pvals=[0.5, 0.5], ndim=None):
Sample n times from a multinomial distribution defined by
probabilities ``pvals``, as many times as required by
``size``. For instance, if ``size=(p,q)``, ``p*q`` samples
will be drawn, and the output shape will be
``(p,q,len(pvals))``.
Theano tries to infer the number of dimensions from the length
of ``size``, but you may always specify it with ``ndim``.
.. note::
The output will have ``ndim+1`` dimensions.
This is a generalization of :py:func:`numpy.random.multinomial`
to the case where ``n`` and ``pvals`` are tensors. Otherwise
it behaves the same.
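The shape rule mirrors NumPy: each draw has length ``len(pvals)`` and its entries sum to ``n`` (a numerical sketch):

```python
import numpy as np

# With size=(p, q), p*q multinomial draws are made; each resulting row
# has length len(pvals) and its counts sum to n.
rng = np.random.RandomState(0)
p, q, n = 2, 3, 10
pvals = [0.5, 0.5]
out = np.array([[rng.multinomial(n, pvals) for _ in range(q)] for _ in range(p)])
```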
.. method:: shuffle_row_elements(self, input):
Return a variable with every row (rightmost index) shuffled.
This uses a permutation random variable internally, available
via the ``.permutation`` attribute of the return value.
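Numerically this amounts to applying a fresh permutation to each row; a plain-NumPy sketch:

```python
import numpy as np

# Shuffle the rightmost index of each row independently: every row keeps
# the same multiset of values, in a freshly permuted order.
rng = np.random.RandomState(0)
x = np.arange(12).reshape(3, 4)
shuffled = np.array([row[rng.permutation(row.size)] for row in x])
```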
.. class:: RandomStateType(gof.Type)
A `Type` for variables that will take ``numpy.random.RandomState``
values.
.. function:: random_state_type(name=None)
Return a new Variable whose ``.type`` is ``random_state_type``.
.. class:: RandomFunction(gof.Op)
Op that draws random numbers from a ``numpy.random.RandomState`` object.
This Op is parametrized to draw numbers from many possible
distributions.
.. function:: uniform(random_state, size=None, low=0.0, high=1.0, ndim=None, dtype=None)
Sample from a uniform distribution between low and high.
If ``size`` is ambiguous on the number of dimensions, ``ndim``
may be a plain integer to supplement the missing information.
:returns: :class:`RandomVariable`, NewRandomState
.. function:: binomial(random_state, size=None, n=1, p=0.5, ndim=None, dtype='int64')
Sample ``n`` times with probability of success ``p`` for each
trial and return the number of successes.
If ``size`` is ambiguous on the number of dimensions, ``ndim`` may
be a plain integer to supplement the missing information.
:returns: :class:`RandomVariable`, NewRandomState
.. function:: normal(random_state, size=None, avg=0.0, std=1.0, ndim=None, dtype=None)
Sample from a normal distribution centered on ``avg`` with the
specified standard deviation (``std``).
If ``size`` is ambiguous on the number of dimensions, ``ndim`` may
be a plain integer to supplement the missing information.
:returns: :class:`RandomVariable`, NewRandomState
.. function:: random_integers(random_state, size=None, low=0, high=1, ndim=None, dtype='int64')
Sample random integers in [``low``, ``high``] to fill up ``size``.
If ``size`` is ambiguous on the number of dimensions, ``ndim`` may
be a plain integer to supplement the missing information.
:returns: :class:`RandomVariable`, NewRandomState
.. function:: permutation(random_state, size=None, n=1, ndim=None, dtype='int64')
Returns permutations of the integers in [0, ``n``\[, as many times
as required by ``size``. For instance, if ``size=(p,q)``, ``p*q``
permutations will be generated, and the output shape will be
``(p,q,n)``, because each permutation is of size ``n``.
If ``size`` is ambiguous on the number of dimensions, ``ndim``
may be a plain integer, which should correspond to ``len(size)``.
.. note::
The output will have ``ndim+1`` dimensions.
:returns: :class:`RandomVariable`, NewRandomState
.. function:: multinomial(random_state, size=None, pvals=[0.5, 0.5], ndim=None, dtype='int64')
Sample from a multinomial distribution defined by probabilities
``pvals``, as many times as required by ``size``. For instance, if
``size=(p,q)``, ``p*q`` samples will be drawn, and the output
shape will be ``(p,q,len(pvals))``.
If ``size`` is ambiguous on the number of dimensions, ``ndim``
may be a plain integer, which should correspond to ``len(size)``.
.. note::
The output will have ``ndim+1`` dimensions.
:returns: :class:`RandomVariable`, NewRandomState
......@@ -866,7 +866,7 @@ X = tt.matrix("X")
b_sym = tt.vector("b_sym")
# define shared random stream
-trng = tt.shared_randomstreams.RandomStreams(1234)
+trng = tt.random.utils.RandomStream(1234)
d=trng.binomial(size=W[1].shape)
\end{lstlisting}
\end{frame}
......
......@@ -343,13 +343,13 @@ NumPy, though also not too complicated.
The way to think about putting randomness into Theano's computations is
to put random variables in your graph. Theano will allocate a NumPy
-RandomStream object (a random number generator) for each such
+`RandomStream` object (a random number generator) for each such
variable, and draw from it as necessary. We will call this sort of
sequence of random numbers a *random stream*. *Random streams* are at
their core shared variables, so the observations on shared variables
hold here as well. Theano's random objects are defined and implemented in
-:ref:`RandomStreams<libdoc_tensor_shared_randomstreams>` and, at a lower level,
-in :ref:`RandomStreamsBase<libdoc_tensor_raw_random>`.
+:ref:`RandomStream<libdoc_tensor_random_utils>` and, at a lower level,
+in :ref:`RandomVariable<libdoc_tensor_random_basic>`.
Brief Example
-------------
......@@ -361,11 +361,11 @@ Here's a brief example. The setup code is:
.. testcode::
-from theano.tensor.shared_randomstreams import RandomStreams
+from theano.tensor.random.utils import RandomStream
from theano import function
-srng = RandomStreams(seed=234)
-rv_u = srng.uniform((2,2))
-rv_n = srng.normal((2,2))
+srng = RandomStream(seed=234)
+rv_u = srng.uniform(0, 1, size=(2,2))
+rv_n = srng.normal(0, 1, size=(2,2))
f = function([], rv_u)
g = function([], rv_n, no_default_updates=True) #Not updating rv_n.rng
nearly_zeros = function([], rv_u + rv_u - 2 * rv_u)
......@@ -373,8 +373,8 @@ Here's a brief example. The setup code is:
Here, 'rv_u' represents a random stream of 2x2 matrices of draws from a uniform
distribution. Likewise, 'rv_n' represents a random stream of 2x2 matrices of
draws from a normal distribution. The distributions that are implemented are
-defined in :class:`RandomStreams` and, at a lower level,
-in :ref:`raw_random<libdoc_tensor_raw_random>`. They only work on CPU.
+defined as :class:`RandomVariable`\ s
+in :ref:`basic<libdoc_tensor_random_basic>`. They only work on the CPU.
See `Other Implementations`_ for a GPU version.
......@@ -412,7 +412,7 @@ You can seed just one random variable by seeding or assigning to the
>>> rng_val.seed(89234) # seeds the generator
>>> rv_u.rng.set_value(rng_val, borrow=True) # Assign back seeded rng
-You can also seed *all* of the random variables allocated by a :class:`RandomStreams`
+You can also seed *all* of the random variables allocated by a :class:`RandomStream`
object by calling that object's ``seed`` method. This seed will be used to seed a
temporary random number generator that will in turn generate seeds for each
of the random variables.
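The point of seeding every member stream from one master seed is reproducibility; the underlying NumPy property it relies on can be sketched as:

```python
import numpy as np

# Two generators seeded identically produce identical draws; seeding
# every member stream from one master seed makes the whole graph's
# randomness reproducible in the same way.
a = np.random.RandomState(902340528)
b = np.random.RandomState(902340528)
draw_a = a.uniform(size=(2, 2))
draw_b = b.uniform(size=(2, 2))
```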
......@@ -447,11 +447,11 @@ number generators associated with a given theano graph (e.g. g1, with compiled
function f1 below) to a second graph (e.g. g2, with function f2). This might
arise for example if you are trying to initialize the state of a model, from
the parameters of a pickled version of a previous model. For
-:class:`theano.tensor.shared_randomstreams.RandomStreams` and
-:class:`theano.sandbox.rng_mrg.MRG_RandomStreams`
+:class:`theano.tensor.random.utils.RandomStream` and
+:class:`theano.sandbox.rng_mrg.MRG_RandomStream`
this can be achieved by copying elements of the `state_updates` parameter.
-Each time a random variable is drawn from a RandomStreams object, a tuple is
+Each time a random variable is drawn from a `RandomStream` object, a tuple is
added to the `state_updates` list. The first element is a shared variable,
which represents the state of the random number generator associated with this
*particular* variable, while the second represents the theano graph
......@@ -464,12 +464,12 @@ to another is shown below.
>>> import theano
>>> import numpy
>>> import theano.tensor as tt
->>> from theano.sandbox.rng_mrg import MRG_RandomStreams
->>> from theano.tensor.shared_randomstreams import RandomStreams
+>>> from theano.sandbox.rng_mrg import MRG_RandomStream
+>>> from theano.tensor.random.utils import RandomStream
>>> class Graph():
... def __init__(self, seed=123):
-... self.rng = RandomStreams(seed)
+... self.rng = RandomStream(seed)
... self.y = self.rng.uniform(size=(1,))
>>> g1 = Graph(seed=123)
......@@ -485,7 +485,7 @@ array([ 0.72803009])
array([ 0.55056769])
>>> def copy_random_state(g1, g2):
-... if isinstance(g1.rng, MRG_RandomStreams):
+... if isinstance(g1.rng, MRG_RandomStream):
... g2.rng.rstate = g1.rng.rstate
... for (su1, su2) in zip(g1.rng.state_updates, g2.rng.state_updates):
... su2[0].set_value(su1[0].get_value())
......@@ -501,7 +501,7 @@ array([ 0.59044123])
Other Random Distributions
--------------------------
-There are :ref:`other distributions implemented <libdoc_tensor_raw_random>`.
+There are :ref:`other distributions implemented <libdoc_tensor_random_basic>`.
.. _example_other_random:
......@@ -510,7 +510,7 @@ Other Implementations
There is another implementation based on :ref:`MRG31k3p
<libdoc_rng_mrg>`.
-The RandomStream only work on the CPU, MRG31k3p work on the CPU and GPU.
+The `RandomStream` only works on the CPU; MRG31k3p works on both the CPU and GPU.
.. note::
......@@ -518,7 +518,7 @@ The RandomStream only work on the CPU, MRG31k3p work on the CPU and GPU.
.. code-block:: python
-from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
+from theano.sandbox.rng_mrg import MRG_RandomStream as RandomStream
.. _logistic_regression:
......
......@@ -329,7 +329,7 @@ Note that we need to iterate over the indices of ``y`` and not over the elements
b_sym = tt.vector("b_sym")
# define shared random stream
-trng = tt.shared_randomstreams.RandomStreams(1234)
+trng = tt.random.utils.RandomStream(1234)
d=trng.binomial(size=W[1].shape)
results, updates = theano.scan(lambda v: tt.tanh(tt.dot(v, W) + b_sym) * d, sequences=X)
......
......@@ -15,3 +15,4 @@ jaxlib; python_version > '3.6'
diff-cover
pre-commit
isort
pypolyagamma
......@@ -11,7 +11,7 @@ from theano.compile.builders import OpFromGraph
from theano.compile.function import function
from theano.gof.null_type import NullType
from theano.gradient import DisconnectedType
-from theano.tensor.shared_randomstreams import RandomStreams
+from theano.tensor.random.utils import RandomStream
class TestOpFromGraph(unittest_tools.InferShapeTester):
......@@ -359,7 +359,7 @@ class TestOpFromGraph(unittest_tools.InferShapeTester):
assert results == expect_result
# Inner graph where some computation doesn't rely on explicit inputs
-srng = RandomStreams(seed=234)
+srng = RandomStream(seed=234)
rv_u = srng.uniform((2, 2))
x, y = tt.matrices("xy")
out1 = x + rv_u
......
......@@ -4,7 +4,7 @@ from itertools import count
import numpy as np
import pytest
-from theano import shared, sparse, tensor
+from theano import shared, tensor
from theano.gof.graph import (
Apply,
Variable,
......@@ -287,34 +287,6 @@ class TestAutoName:
assert r2.auto_name == "auto_" + str(autoname_id + 1)
assert r3.auto_name == "auto_" + str(autoname_id + 2)
@pytest.mark.skipif(
not sparse.enable_sparse, reason="Optional package SciPy not installed"
)
def test_sparsevariable(self):
# Get counter value
autoname_id = next(Variable.__count__)
Variable.__count__ = count(autoname_id)
r1 = sparse.csc_matrix(name="x", dtype="float32")
r2 = sparse.dense_from_sparse(r1)
r3 = sparse.csc_from_dense(r2)
assert r1.auto_name == "auto_" + str(autoname_id)
assert r2.auto_name == "auto_" + str(autoname_id + 1)
assert r3.auto_name == "auto_" + str(autoname_id + 2)
def test_randomvariable(self):
# Get counter value
autoname_id = next(Variable.__count__)
Variable.__count__ = count(autoname_id)
mytype = tensor.TensorType(dtype="int32", broadcastable=())
r1 = tensor.shared_randomstreams.RandomStateSharedVariable(
name="x", type=mytype, value=1, strict=False
)
r2 = tensor.shared_randomstreams.RandomStateSharedVariable(
name="x", type=mytype, value=1, strict=False
)
assert r1.auto_name == "auto_" + str(autoname_id)
assert r2.auto_name == "auto_" + str(autoname_id + 1)
def test_clone(self):
# Get counter value
autoname_id = next(Variable.__count__)
......@@ -326,9 +298,30 @@ class TestAutoName:
def test_equal_computations():
# This was a bug report by a Theano user.
a, b = tensor.iscalars(2)
with pytest.raises(ValueError):
equal_computations([a], [a, b])
assert equal_computations([a], [a])
assert equal_computations([tensor.as_tensor(1)], [tensor.as_tensor(1)])
assert not equal_computations([b], [a])
assert not equal_computations([tensor.as_tensor(1)], [tensor.as_tensor(2)])
assert equal_computations([2], [2])
assert equal_computations([np.r_[2, 1]], [np.r_[2, 1]])
assert equal_computations([np.r_[2, 1]], [tensor.as_tensor(np.r_[2, 1])])
assert equal_computations([tensor.as_tensor(np.r_[2, 1])], [np.r_[2, 1]])
assert not equal_computations([2], [a])
assert not equal_computations([np.r_[2, 1]], [a])
assert not equal_computations([a], [2])
assert not equal_computations([a], [np.r_[2, 1]])
c = tensor.type_other.NoneConst
assert equal_computations([c], [c])
m = tensor.matrix()
max_argmax1 = tensor.max_and_argmax(m)
max_argmax2 = tensor.max_and_argmax(m)
......
......@@ -10,7 +10,7 @@ from theano.gpuarray.multinomial import (
GPUAMultinomialFromUniform,
)
from theano.sandbox import multinomial
-from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
+from theano.sandbox.rng_mrg import MRG_RandomStream as RandomStream
def test_multinomial_output_dtype():
......@@ -262,7 +262,7 @@ class TestFunctionWor:
def test_select_distinct(self):
# Tests that multinomial_wo_replacement always selects distinct elements
-th_rng = RandomStreams(12345)
+th_rng = RandomStream(12345)
p = tensor.fmatrix()
n = tensor.iscalar()
......@@ -285,7 +285,7 @@ class TestFunctionWor:
# Tests that multinomial_wo_replacement fails when asked to sample more
# elements than the actual number of elements
-th_rng = RandomStreams(12345)
+th_rng = RandomStream(12345)
p = tensor.fmatrix()
n = tensor.iscalar()
......@@ -305,7 +305,7 @@ class TestFunctionWor:
# Tests that multinomial_wo_replacement selects elements, on average,
# proportional to their probabilities
-th_rng = RandomStreams(12345)
+th_rng = RandomStream(12345)
p = tensor.fmatrix()
n = tensor.iscalar()
......
......@@ -11,7 +11,7 @@ from theano import config, tensor
from theano.gpuarray.rng_mrg import GPUA_mrg_uniform
from theano.gpuarray.type import gpuarray_shared_constructor
from theano.sandbox import rng_mrg
-from theano.sandbox.rng_mrg import MRG_RandomStreams
+from theano.sandbox.rng_mrg import MRG_RandomStream
utt.seed_rng()
......@@ -42,7 +42,7 @@ def test_consistency_GPUA_serial():
rstate.default_update = new_rstate
# Not really necessary, just mimicking
-# rng_mrg.MRG_RandomStreams' behavior
+# rng_mrg.MRG_RandomStream's behavior
sample.rstate = rstate
sample.update = (rstate, new_rstate)
......@@ -89,7 +89,7 @@ def test_consistency_GPUA_parallel():
rstate.default_update = new_rstate
# Not really necessary, just mimicking
-# rng_mrg.MRG_RandomStreams' behavior
+# rng_mrg.MRG_RandomStream's behavior
sample.rstate = rstate
sample.update = (rstate, new_rstate)
......@@ -117,7 +117,7 @@ def test_GPUA_full_fill():
# This needs to be large to trigger the problem on GPU
size = (10, 1000)
-R = MRG_RandomStreams(234)
+R = MRG_RandomStream(234)
uni = R.uniform(size, nstreams=60 * 256)
f_cpu = theano.function([], uni)
......@@ -177,7 +177,7 @@ def test_f16_nonzero():
def test_cpu_target_with_shared_variable():
-srng = MRG_RandomStreams()
+srng = MRG_RandomStream()
s = np.random.rand(2, 3).astype("float32")
x = gpuarray_shared_constructor(s, name="x")
try:
......
......@@ -231,7 +231,7 @@ class TestScan:
dtype="float32",
)
vsample = theano.shared(v_vsample)
-trng = theano.sandbox.rng_mrg.MRG_RandomStreams(utt.fetch_seed())
+trng = theano.sandbox.rng_mrg.MRG_RandomStream(utt.fetch_seed())
def f(vsample_tm1):
return (
......@@ -513,7 +513,7 @@ class ScanGpuTests:
dtype="float32",
)
vsample = theano.shared(v_vsample)
-trng = theano.sandbox.rng_mrg.MRG_RandomStreams(utt.fetch_seed())
+trng = theano.sandbox.rng_mrg.MRG_RandomStream(utt.fetch_seed())
def f(vsample_tm1):
return (
......
......@@ -6,7 +6,7 @@ import numpy as np
import theano
from theano.misc.pkl_utils import StripPickler, dump, load
-from theano.sandbox.rng_mrg import MRG_RandomStreams
+from theano.sandbox.rng_mrg import MRG_RandomStream
class TestDumpLoad:
......@@ -23,7 +23,7 @@ class TestDumpLoad:
shutil.rmtree(self.tmpdir)
def test_dump_load_mrg(self):
-rng = MRG_RandomStreams()
+rng = MRG_RandomStream()
with open("test", "wb") as f:
dump(rng, f)
......@@ -31,7 +31,7 @@ class TestDumpLoad:
with open("test", "rb") as f:
rng = load(f)
-assert type(rng) == MRG_RandomStreams
+assert type(rng) == MRG_RandomStream
def test_dump_zip_names(self):
foo_1 = theano.shared(0, name="foo")
......
......@@ -330,7 +330,7 @@ def test_jax_scan_multiple_output():
delta = tt.scalar("delta")
# TODO: Use random streams when their JAX conversions are implemented.
-# trng = tt.shared_randomstreams.RandomStreams(1234)
+# trng = tt.random.RandomStream(1234)
def seir_one_step(ct0, dt0, st0, et0, it0, logp_c, logp_d, beta, gamma, delta):
# bt0 = trng.binomial(n=st0, p=beta)
......
......@@ -3,7 +3,7 @@ import pytest
from theano import config, function, tensor
from theano.sandbox import multinomial
-from theano.sandbox.rng_mrg import MRG_RandomStreams as RandomStreams
+from theano.sandbox.rng_mrg import MRG_RandomStream as RandomStream
class TestOP:
......@@ -146,7 +146,7 @@ class TestFunction:
def test_select_distinct(self):
# Tests that multinomial_wo_replacement always selects distinct elements
-th_rng = RandomStreams(12345)
+th_rng = RandomStream(12345)
p = tensor.fmatrix()
n = tensor.iscalar()
......@@ -169,7 +169,7 @@ class TestFunction:
# Tests that multinomial_wo_replacement fails when asked to sample more
# elements than the actual number of elements
-th_rng = RandomStreams(12345)
+th_rng = RandomStream(12345)
p = tensor.fmatrix()
n = tensor.iscalar()
......@@ -189,7 +189,7 @@ class TestFunction:
# Tests that multinomial_wo_replacement selects elements, on average,
# proportional to their probabilities
-th_rng = RandomStreams(12345)
+th_rng = RandomStream(12345)
p = tensor.fmatrix()
n = tensor.iscalar()
......
Diff collapsed.
Diff collapsed.
import numpy as np
from pytest import fixture, raises
import theano.tensor as tt
from theano import change_flags, config
from theano.gradient import NullTypeGradError
from theano.tensor.opt import Assert
from theano.tensor.random.basic import normal
from theano.tensor.random.op import RandomVariable, default_shape_from_params, observed
from theano.tensor.type_other import NoneTypeT
@fixture(scope="module", autouse=True)
def set_theano_flags():
with change_flags(cxx="", compute_test_value="raise"):
yield
def test_default_shape_from_params():
with raises(ValueError, match="^ndim_supp*"):
default_shape_from_params(0, (np.array([1, 2]), 0))
res = default_shape_from_params(1, (np.array([1, 2]), np.eye(2)), rep_param_idx=0)
assert res == (2,)
res = default_shape_from_params(1, (np.array([1, 2]), 0), param_shapes=((2,), ()))
assert res == (2,)
with raises(ValueError, match="^Reference parameter*"):
default_shape_from_params(1, (np.array(1),), rep_param_idx=0)
res = default_shape_from_params(
2, (np.array([1, 2]), np.ones((2, 3, 4))), rep_param_idx=1
)
assert res == (3, 4)
def test_RandomVariable():
str_res = str(
RandomVariable(
"normal",
0,
[0, 0],
"normal",
inplace=True,
)
)
assert str_res == "normal_rv"
# `ndims_params` should be a `Sequence` type
with raises(TypeError, match="^Parameter ndims_params*"):
RandomVariable(
"normal",
0,
0,
config.floatX,
inplace=True,
)
# `size` should be a `Sequence` type
with raises(TypeError, match="^Parameter size*"):
RandomVariable(
"normal",
0,
[0, 0],
config.floatX,
inplace=True,
)(0, 1, size={1, 2})
# No dtype
with raises(TypeError, match="^dtype*"):
RandomVariable(
"normal",
0,
[0, 0],
inplace=True,
)(0, 1)
# Confirm that `inplace` works
rv = RandomVariable(
"normal",
0,
[0, 0],
"normal",
inplace=True,
)
assert rv.inplace
assert rv.destroy_map == {0: [3]}
# A no-params `RandomVariable`
rv = RandomVariable(name="test_rv", ndim_supp=0, ndims_params=())
with raises(TypeError):
rv.make_node(rng=1)
# `RandomVariable._infer_shape` should handle no parameters
rv_shape = rv._infer_shape(tt.constant([]), (), [])
assert rv_shape.equals(tt.constant([], dtype="int64"))
# Integer-specified `dtype`
dtype_1 = tt.all_dtypes[1]
rv_node = rv.make_node(None, None, 1)
rv_out = rv_node.outputs[1]
rv_out.tag.test_value = 1
assert rv_out.dtype == dtype_1
with raises(NullTypeGradError):
tt.grad(rv_out, [rv_node.inputs[0]])
rv = RandomVariable("normal", 0, [0, 0], config.floatX, inplace=True)
mu = tt.tensor(config.floatX, [True, False, False])
mu.tag.test_value = np.zeros((1, 2, 3)).astype(config.floatX)
sd = tt.tensor(config.floatX, [False, False])
sd.tag.test_value = np.ones((2, 3)).astype(config.floatX)
s1 = tt.iscalar()
s1.tag.test_value = 1
s2 = tt.iscalar()
s2.tag.test_value = 2
s3 = tt.iscalar()
s3.tag.test_value = 3
s3 = Assert("testing")(s3, tt.eq(s1, 1))
res = rv.compute_bcast([mu, sd], (s1, s2, s3))
assert res == [False] * 3
def test_observed():
rv_var = normal(0, 1, size=3)
obs_var = observed(rv_var, np.array([0.2, 0.1, -2.4], dtype=config.floatX))
assert obs_var.owner.inputs[0] is rv_var
with raises(TypeError):
observed(rv_var, np.array([1, 2], dtype=int))
with raises(TypeError):
observed(rv_var, np.array([[1.0, 2.0]], dtype=rv_var.dtype))
obs_rv = observed(None, np.array([0.2, 0.1, -2.4], dtype=config.floatX))
assert isinstance(obs_rv.owner.inputs[0].type, NoneTypeT)
rv_val = tt.vector()
rv_val.tag.test_value = np.array([0.2, 0.1, -2.4], dtype=config.floatX)
obs_var = observed(rv_var, rv_val)
with raises(NullTypeGradError):
tt.grad(obs_var.sum(), [rv_val])
Diff collapsed.
import pickle
import sys
import numpy as np
import pytest
from theano import shared
from theano.compile.ops import ViewOp
from theano.tensor.random.type import RandomStateType, random_state_type
# @pytest.mark.skipif(
# not config.cxx, reason="G++ not available, so we need to skip this test."
# )
def test_view_op_c_code():
# TODO: It might be good to make sure that the registered C code works
# (even though it's basically copy-paste from other registered `Op`s).
# from theano.compile.ops import view_op
# from theano.gof.cc import CLinker
# rng_var = random_state_type()
# rng_view = view_op(rng_var)
# function(
# [rng_var],
# rng_view,
# mode=Mode(optimizer=None, linker=CLinker()),
# )
assert ViewOp.c_code_and_version[RandomStateType]
class TestRandomStateType:
def test_pickle(self):
rng_r = random_state_type()
rng_pkl = pickle.dumps(rng_r)
rng_unpkl = pickle.loads(rng_pkl)
assert isinstance(rng_unpkl, type(rng_r))
assert isinstance(rng_unpkl.type, type(rng_r.type))
def test_repr(self):
assert repr(random_state_type) == "RandomStateType"
def test_filter(self):
rng_type = random_state_type
rng = np.random.RandomState()
assert rng_type.filter(rng) is rng
with pytest.raises(TypeError):
rng_type.filter(1)
def test_values_eq(self):
rng_type = random_state_type
rng_a = np.random.RandomState(12)
rng_b = np.random.RandomState(12)
rng_c = np.random.RandomState(123)
bg = np.random.PCG64()
rng_d = np.random.RandomState(bg)
rng_e = np.random.RandomState(bg)
bg_2 = np.random.Philox()
rng_f = np.random.RandomState(bg_2)
rng_g = np.random.RandomState(bg_2)
assert rng_type.values_eq(rng_a, rng_b)
assert not rng_type.values_eq(rng_a, rng_c)
assert not rng_type.values_eq(rng_a, rng_d)
assert not rng_type.values_eq(rng_d, rng_a)
assert not rng_type.values_eq(rng_a, rng_d)
assert rng_type.values_eq(rng_d, rng_e)
assert rng_type.values_eq(rng_f, rng_g)
assert not rng_type.values_eq(rng_g, rng_a)
assert not rng_type.values_eq(rng_e, rng_g)
def test_get_shape_info(self):
rng = np.random.RandomState(12)
rng_a = shared(rng)
assert isinstance(
random_state_type.get_shape_info(rng_a), np.random.RandomState
)
def test_get_size(self):
rng = np.random.RandomState(12)
rng_a = shared(rng)
shape_info = random_state_type.get_shape_info(rng_a)
size = random_state_type.get_size(shape_info)
assert size == sys.getsizeof(rng.get_state(legacy=False))
def test_may_share_memory(self):
rng_a = np.random.RandomState(12)
bg = np.random.PCG64()
rng_b = np.random.RandomState(bg)
rng_var_a = shared(rng_a, borrow=True)
rng_var_b = shared(rng_b, borrow=True)
shape_info_a = random_state_type.get_shape_info(rng_var_a)
shape_info_b = random_state_type.get_shape_info(rng_var_b)
assert random_state_type.may_share_memory(shape_info_a, shape_info_b) is False
rng_c = np.random.RandomState(bg)
rng_var_c = shared(rng_c, borrow=True)
shape_info_c = random_state_type.get_shape_info(rng_var_c)
assert random_state_type.may_share_memory(shape_info_b, shape_info_c) is True
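The equality semantics exercised in `test_values_eq` can be mimicked in plain NumPy by comparing the generators' full state dicts. This is only a sketch of what `RandomStateType.values_eq` presumably checks; the real implementation may compare differently:

```python
import numpy as np

def states_equal(rng_a, rng_b):
    # Compare two RandomState objects by bit generator and full state.
    sa = rng_a.get_state(legacy=False)
    sb = rng_b.get_state(legacy=False)
    if sa["bit_generator"] != sb["bit_generator"]:
        return False
    return all(np.array_equal(sa["state"][k], sb["state"][k]) for k in sa["state"])

rng_a = np.random.RandomState(12)
rng_b = np.random.RandomState(12)
rng_c = np.random.RandomState(123)
assert states_equal(rng_a, rng_b)      # same seed, same state
assert not states_equal(rng_a, rng_c)  # different seed, different state
```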
import numpy as np
import pytest
import theano.tensor as tt
from tests import unittest_tools as utt
from theano import change_flags, config, function
from theano.compile.mode import Mode
from theano.gof.optdb import Query
from theano.tensor.random.utils import RandomStream, broadcast_params
@pytest.fixture(scope="module", autouse=True)
def set_theano_flags():
opts = Query(include=[None], exclude=[])
py_mode = Mode("py", opts)
with change_flags(mode=py_mode, compute_test_value="warn"):
yield
def test_broadcast_params():
ndims_params = [0, 0]
mean = np.array([0, 1, 2])
cov = np.array(1e-6)
params = [mean, cov]
res = broadcast_params(params, ndims_params)
assert np.array_equal(res[0], mean)
assert np.array_equal(res[1], np.broadcast_to(cov, (3,)))
ndims_params = [1, 2]
mean = np.r_[1, 2, 3]
cov = np.stack([np.eye(3) * 1e-5, np.eye(3) * 1e-4])
params = [mean, cov]
res = broadcast_params(params, ndims_params)
assert np.array_equal(res[0], np.broadcast_to(mean, (2, 3)))
assert np.array_equal(res[1], cov)
mean = np.stack([np.r_[0, 0, 0], np.r_[1, 1, 1]])
cov = np.arange(3 * 3).reshape((3, 3))
params = [mean, cov]
res = broadcast_params(params, ndims_params)
assert np.array_equal(res[0], mean)
assert np.array_equal(res[1], np.broadcast_to(cov, (2, 3, 3)))
mean = np.stack([np.r_[0, 0, 0], np.r_[1, 1, 1]])
cov = np.stack(
[np.arange(3 * 3).reshape((3, 3)), np.arange(3 * 3).reshape((3, 3)) * 10]
)
params = [mean, cov]
res = broadcast_params(params, ndims_params)
assert np.array_equal(res[0], mean)
assert np.array_equal(res[1], cov)
mean = np.array([[1, 2, 3]])
cov = np.stack([np.eye(3) * 1e-5, np.eye(3) * 1e-4])
params = [mean, cov]
res = broadcast_params(params, ndims_params)
assert np.array_equal(res[0], np.array([[1, 2, 3], [1, 2, 3]]))
assert np.array_equal(res[1], cov)
mean = np.array([[0], [10], [100]])
cov = np.diag(np.array([1e-6]))
params = [mean, cov]
res = broadcast_params(params, ndims_params)
assert np.array_equal(res[0], mean)
assert np.array_equal(res[1], np.broadcast_to(cov, (3, 1, 1)))
# Try it in Theano
with change_flags(compute_test_value="raise"):
mean = tt.tensor(config.floatX, [False, True])
mean.tag.test_value = np.array([[0], [10], [100]], dtype=config.floatX)
cov = tt.matrix()
cov.tag.test_value = np.diag(np.array([1e-6], dtype=config.floatX))
params = [mean, cov]
res = broadcast_params(params, ndims_params)
assert np.array_equal(res[0].get_test_value(), mean.get_test_value())
assert np.array_equal(
res[1].get_test_value(), np.broadcast_to(cov.get_test_value(), (3, 1, 1))
)
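The broadcasting rule these cases exercise, align the trailing `ndims_params[i]` core dimensions and broadcast only the leading batch dimensions, can be sketched with NumPy alone. `broadcast_batch_dims` below is a simplified stand-in for `broadcast_params`, not its actual implementation:

```python
import numpy as np

def broadcast_batch_dims(params, ndims_params):
    # Split each parameter's shape into batch dims and core dims,
    # broadcast the batch dims jointly, and leave the core dims alone.
    batch_shapes = [p.shape[: p.ndim - nd] for p, nd in zip(params, ndims_params)]
    batch_shape = np.broadcast_shapes(*batch_shapes)
    return [
        np.broadcast_to(p, batch_shape + p.shape[p.ndim - nd :])
        for p, nd in zip(params, ndims_params)
    ]

mean = np.r_[1, 2, 3]                                 # core ndim 1
cov = np.stack([np.eye(3) * 1e-5, np.eye(3) * 1e-4])  # batch dim 2, core ndim 2
res = broadcast_batch_dims([mean, cov], [1, 2])
assert res[0].shape == (2, 3)    # mean gains the batch dimension
assert res[1].shape == (2, 3, 3)
```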
class TestSharedRandomStream:
def setup_method(self):
utt.seed_rng()
def test_tutorial(self):
srng = RandomStream(seed=234)
rv_u = srng.uniform(0, 1, size=(2, 2))
rv_n = srng.normal(0, 1, size=(2, 2))
f = function([], rv_u)
# Disabling `default_updates` means that we have to pass
# `srng.state_updates` to `function` manually, if we want the shared
# state to change
g = function([], rv_n, no_default_updates=True)
nearly_zeros = function([], rv_u + rv_u - 2 * rv_u)
assert np.all(f() != f())
assert np.all(g() == g())
assert np.all(abs(nearly_zeros()) < 1e-5)
assert isinstance(rv_u.rng.get_value(borrow=True), np.random.RandomState)
def test_basics(self):
random = RandomStream(seed=utt.fetch_seed())
with pytest.raises(TypeError):
random.uniform(0, 1, size=(2, 2), rng=np.random.RandomState(23))
with pytest.raises(AttributeError):
random.blah
with pytest.raises(AttributeError):
np_random = RandomStream(namespace=np)
np_random.ndarray
fn = function([], random.uniform(0, 1, size=(2, 2)), updates=random.updates())
fn_val0 = fn()
fn_val1 = fn()
rng_seed = np.random.RandomState(utt.fetch_seed()).randint(2 ** 30)
rng = np.random.RandomState(int(rng_seed)) # int() is for 32bit
numpy_val0 = rng.uniform(0, 1, size=(2, 2))
numpy_val1 = rng.uniform(0, 1, size=(2, 2))
assert np.allclose(fn_val0, numpy_val0)
assert np.allclose(fn_val1, numpy_val1)
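The NumPy reference values in `test_basics` come from the two-level seeding chain the stream appears to use: a "seed generator" `RandomState`, seeded with the master seed, draws a 30-bit integer that seeds the per-variable `RandomState`. Reproduced standalone (the master seed here stands in for `utt.fetch_seed()`):

```python
import numpy as np

master_seed = 234  # hypothetical value standing in for utt.fetch_seed()

seedgen = np.random.RandomState(master_seed)
rv_seed = int(seedgen.randint(2 ** 30))  # int() avoids overflow issues on 32-bit
rng = np.random.RandomState(rv_seed)

val0 = rng.uniform(0, 1, size=(2, 2))
val1 = rng.uniform(0, 1, size=(2, 2))
# The state advances in place, so successive draws differ.
assert not np.allclose(val0, val1)
```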
def test_seed(self):
init_seed = 234
random = RandomStream(init_seed)
ref_state = np.random.RandomState(init_seed).get_state()
random_state = random.gen_seedgen.get_state()
assert random.default_instance_seed == init_seed
assert np.array_equal(random_state[1], ref_state[1])
assert random_state[0] == ref_state[0]
assert random_state[2:] == ref_state[2:]
new_seed = 43298
random.seed(new_seed)
ref_state = np.random.RandomState(new_seed).get_state()
random_state = random.gen_seedgen.get_state()
assert np.array_equal(random_state[1], ref_state[1])
assert random_state[0] == ref_state[0]
assert random_state[2:] == ref_state[2:]
random.seed()
ref_state = np.random.RandomState(init_seed).get_state()
random_state = random.gen_seedgen.get_state()
assert random.default_instance_seed == init_seed
assert np.array_equal(random_state[1], ref_state[1])
assert random_state[0] == ref_state[0]
assert random_state[2:] == ref_state[2:]
# Reset the seed
random.seed(new_seed)
# Check state updates
_ = random.normal()
# Now, change the seed when there are state updates
random.seed(new_seed)
rng = np.random.RandomState(new_seed)
update_seed = rng.randint(2 ** 30)
ref_state = np.random.RandomState(update_seed).get_state()
random_state = random.state_updates[0][0].get_value(borrow=True).get_state()
assert np.array_equal(random_state[1], ref_state[1])
assert random_state[0] == ref_state[0]
assert random_state[2:] == ref_state[2:]
def test_uniform(self):
# Test that RandomStream.uniform generates the same results as numpy
# Check over two calls to see if the random state is correctly updated.
random = RandomStream(utt.fetch_seed())
fn = function([], random.uniform(-1, 1, size=(2, 2)))
fn_val0 = fn()
fn_val1 = fn()
rng_seed = np.random.RandomState(utt.fetch_seed()).randint(2 ** 30)
rng = np.random.RandomState(int(rng_seed)) # int() is for 32bit
numpy_val0 = rng.uniform(-1, 1, size=(2, 2))
numpy_val1 = rng.uniform(-1, 1, size=(2, 2))
assert np.allclose(fn_val0, numpy_val0)
assert np.allclose(fn_val1, numpy_val1)
def test_default_updates(self):
# Basic case: default_updates
random_a = RandomStream(utt.fetch_seed())
out_a = random_a.uniform(0, 1, size=(2, 2))
fn_a = function([], out_a)
fn_a_val0 = fn_a()
fn_a_val1 = fn_a()
assert not np.all(fn_a_val0 == fn_a_val1)
nearly_zeros = function([], out_a + out_a - 2 * out_a)
assert np.all(abs(nearly_zeros()) < 1e-5)
# Explicit updates #1
random_b = RandomStream(utt.fetch_seed())
out_b = random_b.uniform(0, 1, size=(2, 2))
fn_b = function([], out_b, updates=random_b.updates())
fn_b_val0 = fn_b()
fn_b_val1 = fn_b()
assert np.all(fn_b_val0 == fn_a_val0)
assert np.all(fn_b_val1 == fn_a_val1)
# Explicit updates #2
random_c = RandomStream(utt.fetch_seed())
out_c = random_c.uniform(0, 1, size=(2, 2))
fn_c = function([], out_c, updates=[out_c.update])
fn_c_val0 = fn_c()
fn_c_val1 = fn_c()
assert np.all(fn_c_val0 == fn_a_val0)
assert np.all(fn_c_val1 == fn_a_val1)
# No updates at all
random_d = RandomStream(utt.fetch_seed())
out_d = random_d.uniform(0, 1, size=(2, 2))
fn_d = function([], out_d, no_default_updates=True)
fn_d_val0 = fn_d()
fn_d_val1 = fn_d()
assert np.all(fn_d_val0 == fn_a_val0)
assert np.all(fn_d_val1 == fn_d_val0)
# No updates for out
random_e = RandomStream(utt.fetch_seed())
out_e = random_e.uniform(0, 1, size=(2, 2))
fn_e = function([], out_e, no_default_updates=[out_e.rng])
fn_e_val0 = fn_e()
fn_e_val1 = fn_e()
assert np.all(fn_e_val0 == fn_a_val0)
assert np.all(fn_e_val1 == fn_e_val0)
def test_multiple_rng_aliasing(self):
# Test that when we have multiple random number generators, we do not alias
# the state_updates member. `state_updates` can be useful when attempting to
# copy the (random) state between two similar theano graphs. The test is
# meant to detect a previous bug where state_updates was initialized as a
# class-attribute, instead of the __init__ function.
rng1 = RandomStream(1234)
rng2 = RandomStream(2392)
assert rng1.state_updates is not rng2.state_updates
assert rng1.gen_seedgen is not rng2.gen_seedgen
def test_random_state_transfer(self):
# Test that random state can be transferred from one theano graph to another.
class Graph:
def __init__(self, seed=123):
self.rng = RandomStream(seed)
self.y = self.rng.uniform(0, 1, size=(1,))
g1 = Graph(seed=123)
f1 = function([], g1.y)
g2 = Graph(seed=987)
f2 = function([], g2.y)
for (su1, su2) in zip(g1.rng.state_updates, g2.rng.state_updates):
su2[0].set_value(su1[0].get_value())
np.testing.assert_array_almost_equal(f1(), f2(), decimal=6)
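Underneath the shared-variable plumbing, the state transfer amounts to NumPy's `get_state`/`set_state` round trip:

```python
import numpy as np

rng_src = np.random.RandomState(123)
rng_dst = np.random.RandomState(987)

rng_dst.set_state(rng_src.get_state())  # copy the source state over

# Both generators now produce identical streams.
assert np.array_equal(rng_src.uniform(size=(1,)), rng_dst.uniform(size=(1,)))
```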
import numpy as np
from theano import shared
def test_RandomStateSharedVariable():
rng = np.random.RandomState(123)
s_rng_default = shared(rng)
s_rng_True = shared(rng, borrow=True)
s_rng_False = shared(rng, borrow=False)
# test borrow contract: that False means a copy must have been made
assert s_rng_default.container.storage[0] is not rng
assert s_rng_False.container.storage[0] is not rng
# test current implementation: that True means a copy was not made
assert s_rng_True.container.storage[0] is rng
# ensure that all the random number generators are in the same state
v = rng.randn()
v0 = s_rng_default.container.storage[0].randn()
v1 = s_rng_False.container.storage[0].randn()
assert v == v0 == v1
def test_get_value_borrow():
rng = np.random.RandomState(123)
s_rng = shared(rng)
r_ = s_rng.container.storage[0]
r_T = s_rng.get_value(borrow=True)
r_F = s_rng.get_value(borrow=False)
# the contract requires that borrow=False returns a copy
assert r_ is not r_F
# the current implementation allows for True to return the real thing
assert r_ is r_T
# either way, the rngs should all be in the same state
assert r_.rand() == r_F.rand()
def test_get_value_internal_type():
rng = np.random.RandomState(123)
s_rng = shared(rng)
# there is no special behaviour required of return_internal_type
# this test just ensures that the flag doesn't screw anything up
# by repeating the get_value_borrow test.
r_ = s_rng.container.storage[0]
r_T = s_rng.get_value(borrow=True, return_internal_type=True)
r_F = s_rng.get_value(borrow=False, return_internal_type=True)
# the contract requires that borrow=False returns a copy
assert r_ is not r_F
# the current implementation allows for True to return the real thing
assert r_ is r_T
# either way, the rngs should all be in the same state
assert r_.rand() == r_F.rand()
def test_set_value_borrow():
rng = np.random.RandomState(123)
s_rng = shared(rng)
new_rng = np.random.RandomState(234234)
# Test the borrow contract is respected:
# assigning with borrow=False makes a copy
s_rng.set_value(new_rng, borrow=False)
assert new_rng is not s_rng.container.storage[0]
assert new_rng.randn() == s_rng.container.storage[0].randn()
# Test that the current implementation is actually borrowing when it can.
rr = np.random.RandomState(33)
s_rng.set_value(rr, borrow=True)
assert rr is s_rng.container.storage[0]
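The `borrow=False` copies these tests assert can be pictured as a `deepcopy` of the generator: the copy starts in the same state but advances independently afterwards:

```python
import copy
import numpy as np

rng = np.random.RandomState(123)
rng_copy = copy.deepcopy(rng)  # analogous to what borrow=False guarantees

assert rng_copy is not rng
assert rng.randn() == rng_copy.randn()  # same starting state, same first draw

rng.randn()  # advance only the original
assert rng.randn() != rng_copy.randn()  # the streams have diverged
```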
......@@ -108,6 +108,7 @@ from theano.tensor import (
as_tensor_variable,
batched_dot,
bvector,
cast,
choose,
clip,
constant,
......@@ -1110,6 +1111,18 @@ class TestAsTensorVariable:
a_vector = as_tensor_variable(x_vector)
assert x_vector is a_vector
def test_make_vector(self):
a = tt.iscalar()
x = tt.tile(a, (1, 1, 1))
y = (tt.constant(1, dtype="int64"), x.shape[2])
res = tt.as_tensor(y, ndim=1)
assert isinstance(res.owner.op, tt.opt.MakeVector)
assert tuple(res.owner.inputs) == y
y = (1, x.shape[2])
res = tt.as_tensor(y)
assert isinstance(res.owner.op, tt.opt.MakeVector)
class TestAlloc:
dtype = config.floatX
......@@ -2368,24 +2381,53 @@ class TestOuter:
utt.verify_grad(tt.outer, [data0, data1])
class TestGetVectorLength:
def test_get_vector_length(self):
x = theano.shared(np.zeros((2, 3, 4, 5)))
assert len(list(x.shape)) == 4
assert len(list(x.shape[2:4])) == 2
assert len(list(x.shape[2:])) == 2
assert len(list(x.shape[1:4])) == 3
assert len(list(x.shape[2:2])) == 0
assert len(list(x.shape[1:5])) == 3
assert len(list(x.shape[1:10])) == 3
# Test step
assert len(list(x.shape[1:10:2])) == 2
# Test neg start
assert len(list(x.shape[-1:4])) == 1
assert len(list(x.shape[-6:4])) == 4
# test neg stop
assert len(list(x.shape[1:-2])) == 1
assert len(list(x.shape[1:-1])) == 2
def test_get_vector_length():
x = theano.shared(np.zeros((2, 3, 4, 5)))
assert len(list(x.shape)) == 4
assert len(list(x.shape[2:4])) == 2
assert len(list(x.shape[2:])) == 2
assert len(list(x.shape[1:4])) == 3
assert len(list(x.shape[2:2])) == 0
assert len(list(x.shape[1:5])) == 3
assert len(list(x.shape[1:10])) == 3
# Test step
assert len(list(x.shape[1:10:2])) == 2
# Test neg start
assert len(list(x.shape[-1:4])) == 1
assert len(list(x.shape[-6:4])) == 4
# test neg stop
assert len(list(x.shape[1:-2])) == 1
assert len(list(x.shape[1:-1])) == 2
z = join(0, as_tensor_variable(1, ndim=1), as_tensor_variable(x.shape[0], ndim=1))
assert isinstance(z.owner.op, Join)
assert get_vector_length(z) == 2
z = join(
0, as_tensor_variable([1, 2], ndim=1), as_tensor_variable(x.shape[0], ndim=1)
)
assert isinstance(z.owner.op, Join)
assert get_vector_length(z) == 3
empty_tuple = as_tensor_variable(())
assert 0 == get_vector_length(empty_tuple)
x = lscalar("x")
y = dscalar("y")
triple = as_tensor_variable((x, y, 9.0))
assert 3 == get_vector_length(triple)
triple = cast(as_tensor_variable((x, y, 9.0)), "int64")
assert 3 == get_vector_length(triple)
a, b, c = triple
mode = theano.compile.get_default_mode().excluding("constant_folding")
f = function([x, y], [b, c, a], mode=mode)
topo = f.maker.fgraph.toposort()
assert [True for node in topo if isinstance(node.op, opt.MakeVector)]
assert np.allclose(f(4, 5), [5, 9, 4])
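The slice lengths asserted on `x.shape[...]` above follow ordinary Python slice semantics for a 4-dimensional shape; `slice.indices` reproduces every case:

```python
ndim = 4  # x has shape (2, 3, 4, 5)

def sliced_len(sl, n=ndim):
    # Length of x.shape[sl] for a shape vector of length n.
    start, stop, step = sl.indices(n)
    return len(range(start, stop, step))

assert sliced_len(slice(2, 4)) == 2
assert sliced_len(slice(2, None)) == 2
assert sliced_len(slice(1, 10, 2)) == 2  # step
assert sliced_len(slice(-1, 4)) == 1     # negative start
assert sliced_len(slice(1, -2)) == 1     # negative stop
```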
class TestJoinAndSplit:
......@@ -2865,20 +2907,6 @@ class TestJoinAndSplit:
utt.verify_grad(lambda a, b: join(-1, a, b), [v, 2 * v], mode=self.mode)
def test_vector_len(self):
x = lscalar("x")
y = dscalar("y")
triple = as_tensor_variable((x, y, 9.0))
assert 3 == get_vector_length(triple)
a, b, c = triple
f = function([x, y], [b, c, a], mode=self.mode)
topo = f.maker.fgraph.toposort()
assert [True for node in topo if isinstance(node.op, opt.MakeVector)]
assert np.allclose(f(4, 5), [5, 9, 4])
def test_broadcastable_flag_assignment_mixed_otheraxes(self):
# Test that the broadcastable flags for the output of
# a join operation on non-join axes are True if one or
......@@ -5841,6 +5869,19 @@ class TestGetScalarConstantValue:
v = tt.row()
assert get_scalar_constant_value(v.shape[0]) == 1
res = tt.get_scalar_constant_value(tt.as_tensor([10, 20]).shape[0])
assert isinstance(res, np.ndarray)
assert 2 == res
res = tt.get_scalar_constant_value(
9 + tt.as_tensor([1.0]).shape[0],
elemwise=True,
only_process_constants=False,
max_recur=9,
)
assert isinstance(res, np.ndarray)
assert 10 == res
def test_subtensor_of_constant(self):
c = constant(rand(5))
for i in range(c.value.shape[0]):
......