testgroup / pytensor / Commits

Commit d1eba87d, authored Aug 19, 2015 by abergeron
Merge pull request #3294 from harlouci/numpydoc_sandbox_1
Numpydoc sandbox 1
Parents: 477fd7cf, 88716ac9

Showing 12 changed files with 273 additions and 197 deletions
theano/sandbox/cuda/basic_ops.py      +0    -0
theano/sandbox/cuda/blas.py           +0    -0
theano/sandbox/cuda/blocksparse.py    +27   -15
theano/sandbox/cuda/cula.py           +6    -2
theano/sandbox/fourier.py             +15   -10
theano/sandbox/multinomial.py         +7    -2
theano/sandbox/neighbourhoods.py      +54   -52
theano/sandbox/rng_mrg.py             +88   -68
theano/sandbox/scan.py                +42   -31
theano/sandbox/solve.py               +4    -2
theano/sandbox/test_rng_mrg.py        +30   -15
theano/sandbox/theano_object.py       +0    -0
theano/sandbox/cuda/basic_ops.py (diff collapsed)

theano/sandbox/cuda/blas.py (diff collapsed)
theano/sandbox/cuda/blocksparse.py
...
...
@@ -30,7 +30,9 @@ class SparseBlockGemvSS(GpuOp):
     This should not be directly called since the interface is subject
     to change without notice. Use the sparse_block_dot_SS() function
     for a stable interface.
     """
     def __init__(self, inplace=False):
         self.inplace = inplace
         if self.inplace:
...
...
@@ -367,9 +369,11 @@ class SparseBlockOuterSS(GpuOp):
     The i and j are taken from the xIdx and yIdx lists respectively.
     This op should not be called directly since its interface is
-    subject to change without notice.
-    It is involved in the gradient
+    subject to change without notice. It is involved in the gradient
     of SparseBlockGemvSS.
     """
     def __init__(self, inplace=False):
         self.inplace = inplace
         if self.inplace:
...
...
@@ -680,28 +684,36 @@ def sparse_block_dot_SS(W, h, inputIdx, b, outputIdx):
-    Parameters
-    ----------
-    var: shape, comment
-    W: (iBlocks, oBlocks, iSize, oSize), weight matrix
-    h: (batch, iWin, iSize), input from lower layer (sparse)
-    inputIdx: (batch, iWin), indexes of the input blocks
-    b: (oBlocks, oSize), bias vector
-    outputIdx: (batch, oWin), indexes of the output blocks
-    returns (batch, oWin, oSize), dot(W[i, j], h[i]) + b[j]
-    but b[j] is only added once
+    Notation
+    --------
+    W : (iBlocks, oBlocks, iSize, oSize)
+        Weight matrix.
+    h : (batch, iWin, iSize)
+        Input from lower layer (sparse).
+    inputIdx : (batch, iWin)
+        Indexes of the input blocks.
+    b : (oBlocks, oSize)
+        Bias vector.
+    outputIdx : (batch, oWin)
+        Indexes of the output blocks.
+
+    Returns
+    -------
+    (batch, oWin, oSize)
+        dot(W[i, j], h[i]) + b[j], but b[j] is only added once.

     Notes
     -----
     - `batch` is the number of examples in a minibatch (batch size).
     - `iBlocks` is the total number of blocks in the input (from lower layer).
     - `iSize` is the size of each of these input blocks.
     - `iWin` is the number of blocks that will be used as inputs. Which blocks
-    will be used is specified in `inputIdx`.
+      will be used is specified in `inputIdx`.
     - `oBlocks` is the number or possible output blocks.
     - `oSize` is the size of each of these output blocks.
     - `oWin` is the number of output blocks that will actually be computed.
-    Which blocks will be computed is specified in `outputIdx`.
+      Which blocks will be computed is specified in `outputIdx`.
     """
     assert inputIdx.ndim == h.ndim - 1
     assert outputIdx.ndim == inputIdx.ndim
     if h.ndim == 2:
...
...
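The `sparse_block_dot_SS` semantics documented above can be sketched in pure Python. `sparse_block_dot_ref` and its list-of-lists layout are a hypothetical illustration, not part of the commit; the real op works on GPU arrays.

```python
def sparse_block_dot_ref(W, h, inputIdx, b, outputIdx):
    # W: (iBlocks, oBlocks, iSize, oSize), h: (batch, iWin, iSize),
    # b: (oBlocks, oSize), inputIdx: (batch, iWin), outputIdx: (batch, oWin),
    # all as nested lists. Returns (batch, oWin, oSize).
    out = []
    for n in range(len(h)):
        rows = []
        for o in outputIdx[n]:
            acc = list(b[o])  # b[j] is only added once per output block
            for j, i in enumerate(inputIdx[n]):
                block = W[i][o]  # the (iSize, oSize) block for dot(W[i, j], h[i])
                for s, hv in enumerate(h[n][j]):
                    for k in range(len(acc)):
                        acc[k] += hv * block[s][k]
            rows.append(acc)
        out.append(rows)
    return out
```

With one block of each kind, the result is just `h . W + b` for that block pair.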
theano/sandbox/cuda/cula.py
...
...
@@ -26,9 +26,13 @@ class GpuSolve(GpuOp):
     """
     CULA GPU solver OP.

-    :param trans: Whether to take the transpose of the input matrix
-    or not.
+    Parameters
+    ----------
+    trans
+        Whether to take the transpose of the input matrix or not.
     """
     __props__ = ('trans',)
     def __init__(self, trans='N'):
...
...
theano/sandbox/fourier.py
"""Provides Ops for FFT and DCT.
"""
Provides Ops for FFT and DCT.
"""
import
numpy
...
...
@@ -23,18 +24,19 @@ grad_todo = GradTodo()
 class FFT(Op):
-    """Fast Fourier Transform
+    """
+    Fast Fourier Transform.

     .. TODO:
-        The current implementation just works for matrix inputs, and permits
-        taking a 1D FFT over
-        either rows or columns. Add support for N-D FFTs as provided by either numpy or FFTW
-        directly.
+        The current implementation just works for matrix inputs, and permits
+        taking a 1D FFT over either rows or columns. Add support for N-D FFTs
+        as provided by either numpy or FFTW directly.
     .. TODO:
         Give the C code that uses FFTW.
     .. TODO:
-        unit tests.
+        Unit tests.
     """
...
...
@@ -42,7 +44,7 @@ class FFT(Op):
     # don't return the plan object in the 'buf' output
     half = False
-    """Only return the first half (positive-valued) of the frequency components"""
+    """Only return the first half (positive-valued) of the frequency components."""
     __props__ = ("half", "inverse")
     def __init__(self, half=False, inverse=False):
...
...
@@ -50,7 +52,10 @@ class FFT(Op):
         self.inverse = inverse

     def make_node(self, frames, n, axis):
-        """ compute an n-point fft of frames along given axis """
+        """
+        Compute an n-point fft of frames along given axis.
+        """
         _frames = tensor.as_tensor(frames, ndim=2)
         _n = tensor.as_tensor(n, ndim=0)
         _axis = tensor.as_tensor(axis, ndim=0)
...
...
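For reference, the transform the FFT op evaluates is the plain discrete Fourier transform; a naive O(n^2) sketch follows (the helper `dft` is hypothetical, not from the commit — the op uses a fast algorithm in practice).

```python
import cmath

def dft(xs):
    # X[k] = sum_t x[t] * exp(-2*pi*i*k*t/n): the discrete Fourier
    # transform, computed directly from the definition.
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(xs))
            for k in range(n)]
```

A constant signal puts all energy in the zero-frequency bin, which makes a quick sanity check.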
@@ -103,8 +108,8 @@ def dct_matrix(rows, cols, unitary=True):
     """
     Return a (rows x cols) matrix implementing a discrete cosine transform.

-    This algorithm is adapted from Dan Ellis' Rastmat
-    spec2cep.m, lines 15 - 20.
+    This algorithm is adapted from Dan Ellis' Rastmat
+    spec2cep.m, lines 15-20.
     """
     rval = numpy.zeros((rows, cols))
     col_range = numpy.arange(cols)
...
...
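The DCT matrix above can be sketched with the standard orthonormal DCT-II basis. `dct_matrix_sketch` is an illustrative stand-in; that it matches Rastmat's exact scaling is an assumption.

```python
import math

def dct_matrix_sketch(rows, cols, unitary=True):
    # Basis: rval[i][j] = cos(pi * i * (2*j + 1) / (2 * cols));
    # per-row scaling makes the transform orthonormal (DCT-II).
    rval = [[math.cos(math.pi * i * (2 * j + 1) / (2.0 * cols))
             for j in range(cols)] for i in range(rows)]
    if unitary:
        for i in range(rows):
            s = math.sqrt((1.0 if i == 0 else 2.0) / cols)
            rval[i] = [s * v for v in rval[i]]
    return rval
```

With `unitary=True` the rows are orthonormal, so the transform preserves energy.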
theano/sandbox/multinomial.py
...
...
@@ -13,7 +13,11 @@ if cuda_available:
 class MultinomialFromUniform(Op):
-    '''Converts samples from a uniform into sample from a multinomial.'''
+    """
+    Converts samples from a uniform into sample from a multinomial.
+    """
    __props__ = ("odtype",)
    def __init__(self, odtype):
...
...
@@ -164,7 +168,8 @@ class GpuMultinomialFromUniform(MultinomialFromUniform, GpuOp):
     The output is transposed compared to MultinomialFromUniform.
     We must insert a Transpose op after it.
-    The optimization that move it to the gpu do it.
+    The optimization that moves it to the gpu does it.
     """
     def make_node(self, pvals, unis):
...
...
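The conversion the docstring describes amounts to inverse-CDF sampling: walk the cumulative sum of each `pvals` row until it passes the uniform sample, then one-hot encode that index. A pure-Python sketch (`multinomial_from_uniform_ref` is a hypothetical name, not the op's API):

```python
def multinomial_from_uniform_ref(pvals, unis):
    # One uniform sample per row of pvals; the selected category
    # is one-hot encoded in the corresponding output row.
    out = []
    for row, u in zip(pvals, unis):
        onehot = [0] * len(row)
        acc = 0.0
        for i, p in enumerate(row):
            acc += p
            if u < acc:
                onehot[i] = 1
                break
        out.append(onehot)
    return out
```

Note that, like the op, this does no value checking on `pvals`.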
theano/sandbox/neighbourhoods.py
"""WARNING: This code is not recommanded. It is not finished, it is
slower then the version in sandbox/neighbours.py, and it do not work
"""
.. warning:: This code is not recommanded. It is not finished, it is
slower than the version in sandbox/neighbours.py, and it does not work
on the GPU.
We only keep this version here as it is a little bit more generic, so
...
...
@@ -16,66 +17,67 @@ from theano import gof, Op
 class NeighbourhoodsFromImages(Op):
+    """
+    This extracts neighbourhoods from "images", but in a dimension-generic
+    manner.
+
+    In the 2D case, this is similar to downsampling, but instead of reducing
+    a group of 2x2 pixels (for example) to a single new pixel in the output,
+    you place those 4 pixels in a row.
+
+    For example, say you have this 2x4 image::
+
+        [ [ 0.5, 0.6, 0.7, 0.8 ],
+          [ 0.1, 0.2, 0.3, 0.4 ] ]
+
+    and you want to extract 2x2 neighbourhoods. This op would then produce::
+
+        [ [ [ 0.5, 0.6, 0.1, 0.2 ] ],   # the first 2x2 group of pixels
+          [ [ 0.7, 0.8, 0.3, 0.4 ] ] ]  # the second one
+
+    So think of a 2D downsampling where each pixel of the resulting array
+    is replaced by an array containing the (flattened) pixels of the
+    corresponding neighbourhood.
+
+    If you provide a stack of 2D images, or multiple stacks, each image
+    will be treated independently, and the first dimensions of the array
+    will be preserved as such.
+
+    This also makes sense in the 1D or 3D case. Below I'll still be calling
+    those "images", by analogy.
+    In the 1D case, you're extracting subsequences from the original sequence.
+    In the 3D case, you're extracting cuboids.
+    If you ever find a 4D use, tell me! It should be possible, anyhow.
+
+    Parameters
+    ----------
+    n_dims_before : int
+        Number of dimensions preceding the "images".
+    dims_neighbourhoods : tuple of ints
+        Exact shape of windows to be extracted (e.g. (2,2) in the case above).
+        n_dims_before + len(dims_neighbourhoods) should be equal to the
+        number of dimensions in the input given to the op.
+    strides : tuple of int
+        Number of elements to skip when moving to the next neighbourhood,
+        for each dimension of dims_neighbourhoods. There can be overlap
+        between neighbourhoods, or gaps.
+    ignore_border : bool
+        If the dimensions of the neighbourhoods don't exactly divide the
+        dimensions of the "images", you can either fill the last
+        neighbourhood with zeros (False) or drop it entirely (True).
+    inverse : bool
+        You shouldn't have to use this. Only used by child class
+        ImagesFromNeighbourhoods which simply reverses the assignment.
+    """
     __props__ = ("n_dims_before", "dims_neighbourhoods", "strides",
                  "ignore_border", "inverse")

     def __init__(self, n_dims_before, dims_neighbourhoods, strides=None,
                  ignore_border=False, inverse=False):
-        """
-        [the same docstring, previously attached to __init__ with the old
-        line wrapping]
-        """
         self.n_dims_before = n_dims_before
         self.dims_neighbourhoods = dims_neighbourhoods
         if strides is not None:
...
...
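The 2x4 example in the docstring above can be reproduced with a small pure-Python sketch (`neighbourhoods_2d` is hypothetical; it handles only the 2D, non-overlapping case, i.e. stride equal to the window shape).

```python
def neighbourhoods_2d(img, nh, nw):
    # Extract nh x nw windows from a list-of-rows image and flatten
    # each window into a row, as the op does in the 2D case.
    H, W = len(img), len(img[0])
    out = []
    for i in range(0, H - nh + 1, nh):
        for j in range(0, W - nw + 1, nw):
            win = [img[i + a][j + b] for a in range(nh) for b in range(nw)]
            out.append([win])
    return out
```

Running it on the docstring's image yields exactly the two flattened 2x2 groups shown there.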
theano/sandbox/rng_mrg.py
"""
Implementation of MRG31k3p random number generator for Theano
Implementation of MRG31k3p random number generator for Theano
.
Generator code in SSJ package (L'Ecuyer & Simard)
Generator code in SSJ package (L'Ecuyer & Simard)
.
http://www.iro.umontreal.ca/~simardr/ssj/indexe.html
"""
...
...
@@ -39,11 +39,14 @@ def matVecModM(A, s, m):
 def multMatVect(v, A, m1, B, m2):
     """
-    multiply the first half of v by A with a modulo of m1
-    and the second half by B with a modulo of m2
+    Multiply the first half of v by A with a modulo of m1 and the second half
+    by B with a modulo of m2.

-    Note: The parameters of dot_modulo are passed implicitly because passing
-    them explicitly takes more time then running the function's C-code.
+    Notes
+    -----
+    The parameters of dot_modulo are passed implicitly because passing them
+    explicitly takes more time than running the function's C-code.
     """
     if multMatVect.dot_modulo is None:
         A_sym = tensor.lmatrix('A')
...
...
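What `multMatVect` computes can be written out directly. `mult_mat_vect_ref` is an illustrative pure-Python version under that reading of the docstring; the real code compiles a Theano function (`dot_modulo`) for speed.

```python
def mat_vec_mod(A, s, m):
    # v[i] = (sum_j A[i][j] * s[j]) mod m, the matVecModM operation
    return [sum(a * x for a, x in zip(row, s)) % m for row in A]

def mult_mat_vect_ref(v, A, m1, B, m2):
    # First half of v is multiplied by A modulo m1,
    # the second half by B modulo m2.
    h = len(v) // 2
    return mat_vec_mod(A, v[:h], m1) + mat_vec_mod(B, v[h:], m2)
```

With identity-like matrices the two halves are transformed independently, which is easy to check by hand.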
@@ -76,7 +79,8 @@ class DotModulo(Op):
     Efficient and numerically stable implementation of a dot product followed
     by a modulo operation. This performs the same function as matVecModM.
-    We do this 2 times on 2 triple inputs and concatenating the output
+
+    We do this 2 times on 2 triple inputs and concatenating the output.
     """
     __props__ = ()
...
...
@@ -1014,9 +1018,12 @@ def guess_n_streams(size, warn=False):
     """
     Return a guess at a good number of streams.

-    :param warn:
-        If True, warn when a guess cannot be made (in which case we
-        return 60 * 256).
+    Parameters
+    ----------
+    warn : bool, optional
+        If True, warn when a guess cannot be made (in which case we
+        return 60 * 256).
     """
     # TODO: a smart way of choosing the number of streams, see #612.
     # Note that this code was moved out of `MRG_RandomStreams` so that it can
...
...
@@ -1048,22 +1055,25 @@ def guess_n_streams(size, warn=False):
 class MRG_RandomStreams(object):
-    """Module component with similar interface to numpy.random (numpy.random.RandomState)"""
+    """
+    Module component with similar interface to numpy.random
+    (numpy.random.RandomState).
+
+    Parameters
+    ----------
+    seed : int or list of 6 int
+        A default seed to initialize the random state.
+        If a single int is given, it will be replicated 6 times.
+        The first 3 values of the seed must all be less than M1 = 2147483647,
+        and not all 0; and the last 3 values must all be less than
+        M2 = 2147462579, and not all 0.
+    """

     def updates(self):
         return list(self.state_updates)

     def __init__(self, seed=12345, use_cuda=None):
-        """
-        :type seed: int or list of 6 int.
-        :param seed: a default seed to initialize the random state.
-            If a single int is given, it will be replicated 6 times.
-            The first 3 values of the seed must all be less than M1 = 2147483647,
-            and not all 0; and the last 3 values must all be less than
-            M2 = 2147462579, and not all 0.
-        """
         # A list of pairs of the form (input_r, output_r), representing the
         # update rules of all the random states generated by this RandomStreams.
         self.state_updates = []
...
...
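The seed rules quoted in the docstring above can be sketched as a small validator. `normalize_seed` is a hypothetical helper, not the class's API; `M1` and `M2` are the moduli stated in the docstring.

```python
M1 = 2147483647  # 2**31 - 1, bound on the first 3 seed values
M2 = 2147462579  # bound on the last 3 seed values

def normalize_seed(seed):
    # A single int is replicated 6 times; otherwise exactly 6 ints
    # are required, constrained as documented above.
    if isinstance(seed, int):
        seed = [seed] * 6
    if len(seed) != 6:
        raise TypeError("seed should be 1 integer or 6 integers")
    if not all(0 <= s < M1 for s in seed[:3]) or not any(seed[:3]):
        raise ValueError("first 3 seed values must be < M1 and not all 0")
    if not all(0 <= s < M2 for s in seed[3:]) or not any(seed[3:]):
        raise ValueError("last 3 seed values must be < M2 and not all 0")
    return seed
```

The "not all 0" constraint exists because an all-zero half-state would make the recurrence degenerate.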
@@ -1107,14 +1117,18 @@ class MRG_RandomStreams(object):
             raise TypeError("seed should be 1 integer or 6 integers")

     def seed(self, seed=None):
-        """Re-initialize each random stream
-        :param seed: each random stream will be assigned a unique
-        state that depends deterministically on this value.
-        :type seed: None or integer in range 0 to 2**30
-        :rtype: None
-        """
+        """
+        Re-initialize each random stream.
+
+        Parameters
+        ----------
+        seed : None or integer in range 0 to 2**30
+            Each random stream will be assigned a unique state that depends
+            deterministically on this value.
+
+        Returns
+        -------
+        None
+        """
         if seed is None:
...
...
@@ -1133,14 +1147,20 @@ class MRG_RandomStreams(object):
         old_r.set_value(rstates, borrow=True)

     def inc_rstate(self):
-        """Update self.rstate to be skipped 2^134 steps forward to the next stream start"""
+        """
+        Update self.rstate to be skipped 2^134 steps forward to the next stream
+        start.
+        """
         # self.rstate = ff_2p134(self.rstate)
         self.rstate = multMatVect(self.rstate, A1p134, M1, A2p134, M2)
         assert self.rstate.dtype == numpy.int32

     def get_substream_rstates(self, n_streams, dtype, inc_rstate=True):
-        """Initialize a matrix in which each row is a MRG stream state,
+        """
+        Initialize a matrix in which each row is a MRG stream state,
         and they are spaced by 2**72 samples.
         """
         assert isinstance(dtype, str)
         assert n_streams < 2**72
...
...
@@ -1198,27 +1218,25 @@ class MRG_RandomStreams(object):
     distribution between low and high.

     If the size argument is ambiguous on the number of dimensions,
-    ndim may be a plain integer to supplement the missing
-    information.
-    :param low:
-        Lower bound of the interval on which values are sampled. If
-        the ``dtype`` arg is provided, ``low`` will be cast into
-        dtype. This bound is excluded.
-    :param high:
-        Higher bound of the interval on which values are sampled.
-        If the ``dtype`` arg is provided, ``high`` will be cast into
-        dtype. This bound is excluded.
-    :param size:
-        Can be a list of integer or Theano variable (ex: the shape
-        of other Theano Variable)
-    :param dtype:
-        The output data type. If dtype is not specified, it will be
-        inferred from the dtype of low and high, but will be at
-        least as precise as floatX.
+    ndim may be a plain integer to supplement the missing information.
+
+    Parameters
+    ----------
+    low
+        Lower bound of the interval on which values are sampled.
+        If the ``dtype`` arg is provided, ``low`` will be cast into
+        dtype. This bound is excluded.
+    high
+        Higher bound of the interval on which values are sampled.
+        If the ``dtype`` arg is provided, ``high`` will be cast into
+        dtype. This bound is excluded.
+    size
+        Can be a list of integer or Theano variable (ex: the shape
+        of other Theano Variable).
+    dtype
+        The output data type. If dtype is not specified, it will be
+        inferred from the dtype of low and high, but will be at
+        least as precise as floatX.
     """
     low = as_tensor_variable(low)
...
...
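The role of `low` and `high` reduces to an affine rescaling of the raw ]0,1[ samples; a one-line sketch (the helper name is hypothetical):

```python
def scale_uniform(u, low, high):
    # Map a raw ]0,1[ sample into ]low, high[; since u never hits
    # 0 or 1, both bounds are excluded, as the docstring states.
    return low + u * (high - low)
```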
@@ -1300,15 +1318,17 @@ class MRG_RandomStreams(object):
     Example : pvals = [[.98, .01, .01], [.01, .98, .01]] will
     probably result in [[1,0,0],[0,1,0]].

-    .. note::
-        -`size` and `ndim` are only there keep the same signature as other
-        uniform, binomial, normal, etc.
-        todo : adapt multinomial to take that into account
-        -Does not do any value checking on pvals, i.e. there is no
-        check that the elements are non-negative, less than 1, or
-        sum to 1. passing pvals = [[-2., 2.]] will result in
-        sampling [[0, 0]]
+    Notes
+    -----
+    -`size` and `ndim` are only there keep the same signature as other
+    uniform, binomial, normal, etc.
+    TODO : adapt multinomial to take that into account
+
+    -Does not do any value checking on pvals, i.e. there is no
+    check that the elements are non-negative, less than 1, or
+    sum to 1. passing pvals = [[-2., 2.]] will result in
+    sampling [[0, 0]]
     """
     if pvals is None:
         raise TypeError("You have to specify pvals")
...
...
@@ -1342,17 +1362,17 @@ class MRG_RandomStreams(object):
     def normal(self, size, avg=0.0, std=1.0, ndim=None, dtype=None,
                nstreams=None):
         """
-        :param size:
-            Can be a list of integers or Theano variables (ex: the shape
-            of another Theano Variable)
-        :param dtype:
-            The output data type. If dtype is not specified, it will be
-            inferred from the dtype of low and high, but will be at
-            least as precise as floatX.
-        :param nstreams:
-            Number of streams.
+        Parameters
+        ----------
+        size
+            Can be a list of integers or Theano variables (ex: the shape
+            of another Theano Variable).
+        dtype
+            The output data type. If dtype is not specified, it will be
+            inferred from the dtype of low and high, but will be at
+            least as precise as floatX.
+        nstreams
+            Number of streams.
         """
         # We need an even number of ]0,1[ samples. Then we split them
...
...
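The comment about needing an even number of ]0,1[ samples points at a pairwise Gaussian transform such as Box-Muller, which consumes uniforms two at a time; a sketch follows (that rng_mrg uses exactly this variant is an assumption, and the helper name is hypothetical).

```python
import math

def box_muller_pair(u1, u2, avg=0.0, std=1.0):
    # Two independent ]0,1[ uniforms in, two independent normal
    # samples out, which is why an even sample count is required.
    r = math.sqrt(-2.0 * math.log(u1))
    z0 = r * math.cos(2.0 * math.pi * u2)
    z1 = r * math.sin(2.0 * math.pi * u2)
    return avg + std * z0, avg + std * z1
```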
theano/sandbox/scan.py
...
...
@@ -49,13 +49,18 @@ def scan(fn,
     control over the scan op, avoiding certain difficulties that arose from
     missing optimizations.

-    :param fn: lambda function that describes one step of scan (see the
+    Parameters
+    ----------
+    fn
+        Lambda function that describes one step of scan (see the
         official Theano scan function)
-    :param sequences: similar to the official Theano's scan. This version
+    sequences
+        Similar to the official Theano's scan. This version
         of scan does not support taps for the sequences (it can only be a
         list of tensor). Scan assumes that sequences have the right length
         and it does not check for this.
-    :param states: similar to outputs_info of the official scan function.
+    states
+        Similar to outputs_info of the official scan function.
         There is one crucial difference though, namely that the `initial`
         key in the dictionary has been replace by 'membuf' key. This
         reflects the change of meaning. Instead of passing to scan just
...
@@ -72,37 +77,43 @@ def scan(fn,
     For states that do not require a initial state, one has to provide a
     dictionary with a single key 'steps' that says how many intermediate
     results to store. See examples below for more insight.
-    :param n_steps: This parameter is mandatory and it will represent the
+    n_steps
+        This parameter is mandatory and it will represent the
         number of steps scan will do (scan will not check sequences or any
         other source of information to figure out how many steps it needs
         to do).
-    :param mode: Same as for the official scan
-    :param name: Same as for the official scan
-    :param profile: Same as for the official scan
+    mode
+        Same as for the official scan.
+    name
+        Same as for the official scan.
+    profile
+        Same as for the official scan.

-    Note:
-    - there is no truncate / go_backwards anymore !
-    - the outputs returned by scan contain the initial states as well (i.e.
-      if I loop over k steps, with my smallest tap for an output -3 and keep
-      al intermediate results, my output will be of length k+3
+    Notes
+    -----
+    - There is no truncate / go_backwards anymore !
+    - The outputs returned by scan contain the initial states as well (i.e.
+      if I loop over k steps, with my smallest tap for an output -3 and keep
+      al intermediate results, my output will be of length k+3.

-    Examples:
+    Examples
+    --------
     (a) if you do not want to store any intermediate results (just the
     last one)

     # The memory buffer can be the initial state, just that we need to
     # add one extra dimension in front of it
     state = TT.unbroadcast(TT.shape_padleft(x0),0)
     out,_ = scan(lambda x:x+1, states = state, n_steps = 5)
     # Once we got our result we need to remove the extra dimension
     out = out[0]

     (b) if you want to keep every intermediate results
     state = TT.alloc(TT.constant(0), 6, x0.shape[0])
     state = TT.set_subtensor(state[0], x0)
     out,_ = scan(lambda x:x+1, states = state, n_steps = 5)
     out = out[1:]
     """
     def wrap_into_list(x):
...
...
theano/sandbox/solve.py
...
...
@@ -13,9 +13,11 @@ from theano.tests import unittest_tools as utt
 class Solve(gof.Op):
     """
-    Find the solution to the linear equation Ax=b,
-    where A is a 2d matrix and b is a 1d or 2d matrix.
+    Find the solution to the linear equation Ax=b.
+    A is a 2d matrix and b is a 1d or 2d matrix.
     It use numpy.solve to find the solution.
     """
     # TODO: Add class options to use the performance-enhancing flags
...
...
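For intuition on what the Solve op delegates to numpy's solver, the 2x2 case of Ax=b can be solved by hand with Cramer's rule (illustration only; `solve2x2` is a hypothetical helper, not part of the commit):

```python
def solve2x2(A, b):
    # x = A^-1 b for a 2x2 system Ax = b, via Cramer's rule:
    # each unknown is a ratio of determinants.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]
```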
theano/sandbox/test_rng_mrg.py
...
...
@@ -75,10 +75,11 @@ def test_deterministic():
 def test_consistency_randomstreams():
-    '''Verify that the random numbers generated by MRG_RandomStreams
+    """
+    Verify that the random numbers generated by MRG_RandomStreams
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     seed = 12345
     n_samples = 5
     n_streams = 12
...
...
@@ -108,9 +109,11 @@ def test_consistency_randomstreams():
 def test_consistency_cpu_serial():
-    '''Verify that the random numbers generated by mrg_uniform, serially,
+    """
+    Verify that the random numbers generated by mrg_uniform, serially,
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     seed = 12345
     n_samples = 5
     n_streams = 12
...
...
@@ -149,9 +152,11 @@ def test_consistency_cpu_serial():
 def test_consistency_cpu_parallel():
-    '''Verify that the random numbers generated by mrg_uniform, in parallel,
+    """
+    Verify that the random numbers generated by mrg_uniform, in parallel,
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     seed = 12345
     n_samples = 5
     n_streams = 12
...
...
@@ -193,9 +198,11 @@ def test_consistency_cpu_parallel():
 def test_consistency_GPU_serial():
-    '''Verify that the random numbers generated by GPU_mrg_uniform, serially,
+    """
+    Verify that the random numbers generated by GPU_mrg_uniform, serially,
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     if not cuda_available:
         raise SkipTest('Optional package cuda not available')
     if config.mode == 'FAST_COMPILE':
...
...
@@ -250,11 +257,12 @@ def test_consistency_GPU_serial():
 def test_consistency_GPU_parallel():
-    '''Verify that the random numbers generated by GPU_mrg_uniform, in
+    """
+    Verify that the random numbers generated by GPU_mrg_uniform, in
     parallel, are the same as the reference (Java) implementation by
     L'Ecuyer et al.
-    '''
+    """
     if not cuda_available:
         raise SkipTest('Optional package cuda not available')
     if config.mode == 'FAST_COMPILE':
...
...
@@ -310,9 +318,11 @@ def test_consistency_GPU_parallel():
 def test_GPU_nstreams_limit():
-    """Verify that a ValueError is raised when n_streams
+    """
+    Verify that a ValueError is raised when n_streams
     is greater than 2**20 on GPU. This is the value of
     (NUM_VECTOR_OP_THREADS_PER_BLOCK * NUM_VECTOR_OP_BLOCKS).
     """
     if not cuda_available:
         raise SkipTest('Optional package cuda not available')
...
...
@@ -335,9 +345,11 @@ def test_GPU_nstreams_limit():
 def test_consistency_GPUA_serial():
-    '''Verify that the random numbers generated by GPUA_mrg_uniform, serially,
+    """
+    Verify that the random numbers generated by GPUA_mrg_uniform, serially,
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     from theano.sandbox.gpuarray.tests.test_basic_ops import \
         mode_with_gpu as mode
     from theano.sandbox.gpuarray.type import gpuarray_shared_constructor
...
...
@@ -387,11 +399,12 @@ def test_consistency_GPUA_serial():
 def test_consistency_GPUA_parallel():
-    '''Verify that the random numbers generated by GPUA_mrg_uniform, in
+    """
+    Verify that the random numbers generated by GPUA_mrg_uniform, in
     parallel, are the same as the reference (Java) implementation by
     L'Ecuyer et al.
-    '''
+    """
     from theano.sandbox.gpuarray.tests.test_basic_ops import \
         mode_with_gpu as mode
     from theano.sandbox.gpuarray.type import gpuarray_shared_constructor
...
...
@@ -855,6 +868,7 @@ def test_multiple_rng_aliasing():
     copy the (random) state between two similar theano graphs. The test is
     meant to detect a previous bug where state_updates was initialized as a
     class-attribute, instead of the __init__ function.
+    """
     rng1 = MRG_RandomStreams(1234)
     rng2 = MRG_RandomStreams(2392)
...
...
@@ -864,6 +878,7 @@ def test_multiple_rng_aliasing():
 def test_random_state_transfer():
     """
     Test that random state can be transferred from one theano graph to another.
+    """
     class Graph:
         def __init__(self, seed=123):
...
...
theano/sandbox/theano_object.py (diff collapsed)