testgroup / pytensor · Commits

Commit d1eba87d
authored Aug 19, 2015 by abergeron

Merge pull request #3294 from harlouci/numpydoc_sandbox_1

Numpydoc sandbox 1

Parents: 477fd7cf 88716ac9
Showing 12 changed files with 192 additions and 116 deletions.
theano/sandbox/cuda/basic_ops.py      +0   -0
theano/sandbox/cuda/blas.py           +0   -0
theano/sandbox/cuda/blocksparse.py    +24  -12
theano/sandbox/cuda/cula.py           +6   -2
theano/sandbox/fourier.py             +15  -10
theano/sandbox/multinomial.py         +7   -2
theano/sandbox/neighbourhoods.py      +18  -16
theano/sandbox/rng_mrg.py             +64  -44
theano/sandbox/scan.py                +24  -13
theano/sandbox/solve.py               +4   -2
theano/sandbox/test_rng_mrg.py        +30  -15
theano/sandbox/theano_object.py       +0   -0
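The theme of this commit is converting old reST-style (`:param:`/`:type:`) docstrings across `theano/sandbox` to numpydoc sections. A minimal illustration of the target style; the `scale` function here is a made-up example, not code from the commit:

```python
def scale(x, factor=2.0):
    """
    Multiply `x` by `factor`.

    Parameters
    ----------
    x : float
        The value to scale.
    factor : float, optional
        The scaling factor.

    Returns
    -------
    float
        The scaled value.
    """
    return x * factor
```

The numpydoc form replaces `:param name:` fields with an underlined `Parameters` section, one entry per argument, with the description indented under the name.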
theano/sandbox/cuda/basic_ops.py
(Diff collapsed.)
theano/sandbox/cuda/blas.py
浏览文件 @
d1eba87d
差异被折叠。
点击展开。
theano/sandbox/cuda/blocksparse.py

...
@@ -30,7 +30,9 @@ class SparseBlockGemvSS(GpuOp):
     This should not be directly called since the interface is subject
     to change without notice. Use the sparse_block_dot_SS() function
     for a stable interface.
     """
     def __init__(self, inplace=False):
         self.inplace = inplace
         if self.inplace:
...
@@ -369,7 +371,9 @@ class SparseBlockOuterSS(GpuOp):
     This op should not be called directly since its interface is
     subject to change without notice. It is involved in the gradient
     of SparseBlockGemvSS.
     """
     def __init__(self, inplace=False):
         self.inplace = inplace
         if self.inplace:
...
@@ -680,18 +684,24 @@ def sparse_block_dot_SS(W, h, inputIdx, b, outputIdx):
     Parameters
     ----------
-    var: shape, comment
-    W: (iBlocks, oBlocks, iSize, oSize), weight matrix
-    h: (batch, iWin, iSize), input from lower layer (sparse)
-    inputIdx: (batch, iWin), indexes of the input blocks
-    b: (oBlocks, oSize), bias vector
-    outputIdx: (batch, oWin), indexes of the output blocks
-    returns (batch, oWin, oSize), dot(W[i, j], h[i]) + b[j]
-    but b[j] is only added once
-    Notation
-    --------
+    W : (iBlocks, oBlocks, iSize, oSize)
+        Weight matrix.
+    h : (batch, iWin, iSize)
+        Input from lower layer (sparse).
+    inputIdx : (batch, iWin)
+        Indexes of the input blocks.
+    b : (oBlocks, oSize)
+        Bias vector.
+    outputIdx : (batch, oWin)
+        Indexes of the output blocks.
+
+    Returns
+    -------
+    (batch, oWin, oSize)
+        dot(W[i, j], h[i]) + b[j], but b[j] is only added once.
+
+    Notes
+    -----
     - `batch` is the number of examples in a minibatch (batch size).
     - `iBlocks` is the total number of blocks in the input (from lower layer).
     - `iSize` is the size of each of these input blocks.
...
@@ -701,7 +711,9 @@ def sparse_block_dot_SS(W, h, inputIdx, b, outputIdx):
     - `oSize` is the size of each of these output blocks.
     - `oWin` is the number of output blocks that will actually be computed.
       Which blocks will be computed is specified in `outputIdx`.
     """
     assert inputIdx.ndim == h.ndim - 1
     assert outputIdx.ndim == inputIdx.ndim
     if h.ndim == 2:
...
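The `sparse_block_dot_SS` docstring above fully specifies the op's semantics. As a sketch only: the real op is a GPU kernel, and this hypothetical NumPy re-implementation merely mirrors the documented shapes and formula:

```python
import numpy

def sparse_block_dot_ss(W, h, inputIdx, b, outputIdx):
    # Hypothetical NumPy mirror of the documented semantics.
    # Shapes follow the Parameters section:
    #   W: (iBlocks, oBlocks, iSize, oSize), h: (batch, iWin, iSize),
    #   inputIdx: (batch, iWin), b: (oBlocks, oSize), outputIdx: (batch, oWin).
    batch = h.shape[0]
    oWin = outputIdx.shape[1]
    oSize = W.shape[3]
    out = numpy.zeros((batch, oWin, oSize))
    for n in range(batch):
        for jj, j in enumerate(outputIdx[n]):
            out[n, jj] = b[j]  # b[j] is added only once per output block
            for ii, i in enumerate(inputIdx[n]):
                # dot(W[i, j], h[i]) accumulated over the input window
                out[n, jj] += numpy.dot(h[n, ii], W[i, j])
    return out
```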
theano/sandbox/cuda/cula.py

...
@@ -26,9 +26,13 @@ class GpuSolve(GpuOp):
     """
     CULA GPU solver OP.

-    :param trans: Whether to take the transpose of the input matrix
-        or not.
+    Parameters
+    ----------
+    trans
+        Whether to take the transpose of the input matrix or not.
     """
     __props__ = ('trans',)

     def __init__(self, trans='N'):
...
theano/sandbox/fourier.py

-"""Provides Ops for FFT and DCT.
+"""
+Provides Ops for FFT and DCT.
 """

 import numpy
...
@@ -23,18 +24,19 @@ grad_todo = GradTodo()

 class FFT(Op):
-    """Fast Fourier Transform
+    """
+    Fast Fourier Transform.

     .. TODO:
-        The current implementation just works for matrix inputs, and permits taking a 1D FFT over
-        either rows or columns. Add support for N-D FFTs as provided by either numpy or FFTW directly.
+        The current implementation just works for matrix inputs, and permits
+        taking a 1D FFT over either rows or columns. Add support for N-D FFTs
+        as provided by either numpy or FFTW directly.

     .. TODO:
         Give the C code that uses FFTW.

     .. TODO:
-        unit tests.
+        Unit tests.
     """
...
@@ -42,7 +44,7 @@ class FFT(Op):
     # don't return the plan object in the 'buf' output
     half = False
-    """Only return the first half (positive-valued) of the frequency components"""
+    """Only return the first half (positive-valued) of the frequency components."""

     __props__ = ("half", "inverse")

     def __init__(self, half=False, inverse=False):
...
@@ -50,7 +52,10 @@ class FFT(Op):
         self.inverse = inverse

     def make_node(self, frames, n, axis):
-        """ compute an n-point fft of frames along given axis """
+        """
+        Compute an n-point fft of frames along given axis.
+        """
         _frames = tensor.as_tensor(frames, ndim=2)
         _n = tensor.as_tensor(n, ndim=0)
         _axis = tensor.as_tensor(axis, ndim=0)
...
@@ -103,8 +108,8 @@ def dct_matrix(rows, cols, unitary=True):
     """
     Return a (rows x cols) matrix implementing a discrete cosine transform.

     This algorithm is adapted from Dan Ellis' Rastmat
-    spec2cep.m, lines 15-20.
+    spec2cep.m, lines 15 - 20.
     """
     rval = numpy.zeros((rows, cols))
     col_range = numpy.arange(cols)
...
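The `dct_matrix` docstring above describes a standard discrete cosine transform matrix. A hypothetical NumPy sketch of such a construction, using the orthonormal DCT-II (an assumption; this is not the verbatim Theano source):

```python
import numpy

def dct_matrix(rows, cols, unitary=True):
    # Orthonormal DCT-II matrix sketch: rval[i, j] is
    # sqrt(2/cols) * cos(pi * i * (2j + 1) / (2 * cols)),
    # with the first row rescaled so the matrix is orthogonal.
    rval = numpy.zeros((rows, cols))
    col_range = numpy.arange(cols)
    scale = numpy.sqrt(2.0 / cols)
    for i in range(rows):
        rval[i] = numpy.cos(i * (col_range * 2 + 1) / (2.0 * cols) * numpy.pi) * scale
    if unitary:
        rval[0] *= numpy.sqrt(0.5)
    return rval
```

With `unitary=True` and `rows == cols`, the result is an orthogonal matrix, so its transpose is its inverse.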
theano/sandbox/multinomial.py

...
@@ -13,7 +13,11 @@ if cuda_available:

 class MultinomialFromUniform(Op):
-    '''Converts samples from a uniform into sample from a multinomial.'''
+    """
+    Converts samples from a uniform into sample from a multinomial.
+    """

     __props__ = ("odtype",)

     def __init__(self, odtype):
...
@@ -164,7 +168,8 @@ class GpuMultinomialFromUniform(MultinomialFromUniform, GpuOp):
     The output is transposed compared to MultinomialFromUniform.
     We must insert a Transpose op after it.

-    The optimization that move it to the gpu do it.
+    The optimization that moves it to the gpu does it.
     """

     def make_node(self, pvals, unis):
...
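`MultinomialFromUniform` is documented as converting uniform samples into multinomial samples. A plain-NumPy sketch of that behaviour (a hypothetical helper, not the op's actual C implementation): walk the cumulative probabilities of each row until the uniform sample is exceeded, and emit a one-hot row at that position.

```python
import numpy

def multinomial_from_uniform(pvals, unis):
    # pvals: (n, k) rows of probabilities; unis: (n,) uniform samples.
    # For each row, select the first index whose cumulative probability
    # exceeds the uniform sample, and set that position to 1.
    out = numpy.zeros_like(pvals)
    for n, (row, u) in enumerate(zip(pvals, unis)):
        acc = 0.0
        for i, p in enumerate(row):
            acc += p
            if u < acc:
                out[n, i] = 1.0
                break
    return out
```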
theano/sandbox/neighbourhoods.py

-"""WARNING: This code is not recommanded. It is not finished, it is
-slower then the version in sandbox/neighbours.py, and it do not work
+"""
+.. warning:: This code is not recommanded. It is not finished, it is
+slower than the version in sandbox/neighbours.py, and it does not work
 on the GPU.

 We only keep this version here as it is a little bit more generic, so
...
@@ -16,15 +17,9 @@ from theano import gof, Op

 class NeighbourhoodsFromImages(Op):
-    __props__ = ("n_dims_before", "dims_neighbourhoods", "strides",
-                 "ignore_border", "inverse")
-
-    def __init__(self, n_dims_before, dims_neighbourhoods,
-                 strides=None, ignore_border=False, inverse=False):
-        """
-        This extracts neighbourhoods from "images", but in a
-        dimension-generic
-        manner.
+    """
+    This extracts neighbourhoods from "images", but in a dimension-generic
+    manner.

     In the 2D case, this is similar to downsampling, but instead of reducing
     a group of 2x2 pixels (for example) to a single new pixel in the output,
...
@@ -40,21 +35,20 @@ class NeighbourhoodsFromImages(Op):
     [ [ [ 0.5, 0.6, 0.1, 0.2 ] ],  # the first 2x2 group of pixels
       [ [ 0.7, 0.8, 0.3, 0.4 ] ] ]  # the second one

-    so think of a 2D downsampling where each pixel of the resulting array
+    So think of a 2D downsampling where each pixel of the resulting array
     is replaced by an array containing the (flattened) pixels of the
     corresponding neighbourhood.

-    If you provide a stack of 2D image, or multiple stacks, each image
+    If you provide a stack of 2D images, or multiple stacks, each image
     will be treated independently, and the first dimensions of the array
     will be preserved as such.

     This also makes sense in the 1D or 3D case. Below I'll still be calling
     those "images", by analogy.

-    In the 1D case, you're
-    extracting subsequences from the original sequence. In the 3D case,
-    you're extracting cuboids. If you ever find a 4D use, tell me! It
-    should be possible, anyhow.
+    In the 1D case, you're extracting subsequences from the original sequence.
+    In the 3D case, you're extracting cuboids.
+    If you ever find a 4D use, tell me! It should be possible, anyhow.

     Parameters
     ----------
...
@@ -75,7 +69,15 @@ class NeighbourhoodsFromImages(Op):
     inverse : bool
         You shouldn't have to use this. Only used by child class
         ImagesFromNeighbourhoods which simply reverses the assignment.
     """
+    __props__ = ("n_dims_before", "dims_neighbourhoods", "strides",
+                 "ignore_border", "inverse")
+
+    def __init__(self, n_dims_before, dims_neighbourhoods,
+                 strides=None, ignore_border=False, inverse=False):
         self.n_dims_before = n_dims_before
         self.dims_neighbourhoods = dims_neighbourhoods
         if strides is not None:
...
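The 2D case described in the docstring above can be sketched in NumPy. This is a hypothetical helper covering only the simplest case (a single 2D image, non-overlapping neighbourhoods, no strides or border handling), not the op itself:

```python
import numpy

def neighbourhoods_2d(image, nh=(2, 2)):
    # Replace each nh[0] x nh[1] patch of `image` with the flattened
    # patch along a new last axis, as the docstring describes.
    rows, cols = image.shape
    out_r, out_c = rows // nh[0], cols // nh[1]
    out = numpy.empty((out_r, out_c, nh[0] * nh[1]))
    for i in range(out_r):
        for j in range(out_c):
            patch = image[i * nh[0]:(i + 1) * nh[0],
                          j * nh[1]:(j + 1) * nh[1]]
            out[i, j] = patch.ravel()
    return out
```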
theano/sandbox/rng_mrg.py

 """
-Implementation of MRG31k3p random number generator for Theano
+Implementation of MRG31k3p random number generator for Theano.

-Generator code in SSJ package (L'Ecuyer & Simard)
+Generator code in SSJ package (L'Ecuyer & Simard).
 http://www.iro.umontreal.ca/~simardr/ssj/indexe.html
 """
...
@@ -39,11 +39,14 @@ def matVecModM(A, s, m):

 def multMatVect(v, A, m1, B, m2):
     """
-    multiply the first half of v by A with a modulo of m1
-    and the second half by B with a modulo of m2
+    Multiply the first half of v by A with a modulo of m1 and the second half
+    by B with a modulo of m2.

-    Note: The parameters of dot_modulo are passed implicitly because passing
-    them explicitly takes more time then running the function's C-code.
+    Notes
+    -----
+    The parameters of dot_modulo are passed implicitly because passing them
+    explicitly takes more time than running the function's C-code.
     """
     if multMatVect.dot_modulo is None:
         A_sym = tensor.lmatrix('A')
...
@@ -76,7 +79,8 @@ class DotModulo(Op):
     Efficient and numerically stable implementation of a dot product followed
     by a modulo operation. This performs the same function as matVecModM.

-    We do this 2 times on 2 triple inputs and concatenating the output
+    We do this 2 times on 2 triple inputs and concatenating the output.
     """
     __props__ = ()
...
@@ -1014,9 +1018,12 @@ def guess_n_streams(size, warn=False):
     """
     Return a guess at a good number of streams.

-    :param warn:
+    Parameters
+    ----------
+    warn : bool, optional
         If True, warn when a guess cannot be made (in which case we
         return 60 * 256).
     """
     # TODO: a smart way of choosing the number of streams, see #612.
     # Note that this code was moved out of `MRG_RandomStreams` so that it can
...
@@ -1048,22 +1055,25 @@ def guess_n_streams(size, warn=False):

 class MRG_RandomStreams(object):
-    """Module component with similar interface to numpy.random (numpy.random.RandomState)"""
-
-    def updates(self):
-        return list(self.state_updates)
-
-    def __init__(self, seed=12345, use_cuda=None):
-        """
-        :type seed: int or list of 6 int.
-
-        :param seed: a default seed to initialize the random state.
+    """
+    Module component with similar interface to numpy.random
+    (numpy.random.RandomState).
+
+    Parameters
+    ----------
+    seed : int or list of 6 int
+        A default seed to initialize the random state.
         If a single int is given, it will be replicated 6 times.
         The first 3 values of the seed must all be less than M1 = 2147483647,
         and not all 0; and the last 3 values must all be less than
         M2 = 2147462579, and not all 0.
-        """
+    """
+
+    def updates(self):
+        return list(self.state_updates)
+
+    def __init__(self, seed=12345, use_cuda=None):
         # A list of pairs of the form (input_r, output_r), representing the
         # update rules of all the random states generated by this RandomStreams.
         self.state_updates = []
...
@@ -1107,14 +1117,18 @@ class MRG_RandomStreams(object):
             raise TypeError("seed should be 1 integer or 6 integers")

     def seed(self, seed=None):
-        """Re-initialize each random stream
+        """
+        Re-initialize each random stream.

-        :param seed: each random stream will be assigned a unique
-            state that depends deterministically on this value.
-
-        :type seed: None or integer in range 0 to 2**30
+        Parameters
+        ----------
+        seed : None or integer in range 0 to 2**30
+            Each random stream will be assigned a unique state that depends
+            deterministically on this value.

-        :rtype: None
+        Returns
+        -------
+        None
         """
         if seed is None:
...
@@ -1133,14 +1147,20 @@ class MRG_RandomStreams(object):
         old_r.set_value(rstates, borrow=True)

     def inc_rstate(self):
-        """Update self.rstate to be skipped 2^134 steps forward to the next stream start"""
+        """
+        Update self.rstate to be skipped 2^134 steps forward to the next stream
+        start.
+        """
         #self.rstate = ff_2p134(self.rstate)
         self.rstate = multMatVect(self.rstate, A1p134, M1, A2p134, M2)
         assert self.rstate.dtype == numpy.int32

     def get_substream_rstates(self, n_streams, dtype, inc_rstate=True):
-        """Initialize a matrix in which each row is a MRG stream state,
+        """
+        Initialize a matrix in which each row is a MRG stream state,
         and they are spaced by 2**72 samples.
         """
         assert isinstance(dtype, str)
         assert n_streams < 2**72
...
@@ -1198,24 +1218,22 @@ class MRG_RandomStreams(object):
         distribution between low and high.

         If the size argument is ambiguous on the number of dimensions,
-        ndim may be a plain integer to supplement the missing
-        information.
+        ndim may be a plain integer to supplement the missing information.

-        :param low:
-            Lower bound of the interval on which values are sampled. If
-            the ``dtype`` arg is provided, ``low`` will be cast into
-            dtype. This bound is excluded.
-        :param high:
+        Parameters
+        ----------
+        low
+            Lower bound of the interval on which values are sampled.
+            If the ``dtype`` arg is provided, ``low`` will be cast into
+            dtype. This bound is excluded.
+        high
             Higher bound of the interval on which values are sampled.
             If the ``dtype`` arg is provided, ``high`` will be cast into
             dtype. This bound is excluded.
-        :param size:
+        size
             Can be a list of integer or Theano variable (ex: the shape
-            of other Theano Variable)
+            of other Theano Variable).
-        :param dtype:
+        dtype
             The output data type. If dtype is not specified, it will be
             inferred from the dtype of low and high, but will be at
             least as precise as floatX.
...
@@ -1300,15 +1318,17 @@ class MRG_RandomStreams(object):
         Example : pvals = [[.98, .01, .01], [.01, .98, .01]] will
         probably result in [[1,0,0],[0,1,0]].

-        .. note::
+        Notes
+        -----
         -`size` and `ndim` are only there keep the same signature as other
         uniform, binomial, normal, etc.
-        todo : adapt multinomial to take that into account
+        TODO : adapt multinomial to take that into account

         -Does not do any value checking on pvals, i.e. there is no
         check that the elements are non-negative, less than 1, or
         sum to 1. passing pvals = [[-2., 2.]] will result in
         sampling [[0, 0]]
         """
         if pvals is None:
             raise TypeError("You have to specify pvals")
...
@@ -1342,16 +1362,16 @@ class MRG_RandomStreams(object):
     def normal(self, size, avg=0.0, std=1.0, ndim=None,
                dtype=None, nstreams=None):
         """
-        :param size:
+        Parameters
+        ----------
+        size
             Can be a list of integers or Theano variables (ex: the shape
-            of another Theano Variable)
+            of another Theano Variable).
-        :param dtype:
+        dtype
             The output data type. If dtype is not specified, it will be
             inferred from the dtype of low and high, but will be at
             least as precise as floatX.
-        :param nstreams:
+        nstreams
             Number of streams.
         """
...
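The behaviour documented for `multMatVect` can be sketched directly in NumPy. This is a hypothetical helper illustrating the split-and-multiply-mod semantics; the real function compiles a Theano graph through `DotModulo` for speed:

```python
import numpy

def mult_mat_vect(v, A, m1, B, m2):
    # Multiply the first half of v by A modulo m1,
    # and the second half by B modulo m2.
    n = len(v) // 2
    out = numpy.empty_like(v)
    out[:n] = A.dot(v[:n]) % m1
    out[n:] = B.dot(v[n:]) % m2
    return out
```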
theano/sandbox/scan.py

...
@@ -49,13 +49,18 @@ def scan(fn,
     control over the scan op, avoiding certain difficulties that arose from
     missing optimizations.

-    :param fn: lambda function that describes one step of scan (see the
+    Parameters
+    ----------
+    fn
+        Lambda function that describes one step of scan (see the
         official Theano scan function)
-    :param sequences: similar to the official Theano's scan. This version
+    sequences
+        Similar to the official Theano's scan. This version
         of scan does not support taps for the sequences (it can only be a
         list of tensor). Scan assumes that sequences have the right length
         and it does not check for this.
-    :param states: similar to outputs_info of the official scan function.
+    states
+        Similar to outputs_info of the official scan function.
         There is one crucial difference though, namely that the `initial`
         key in the dictionary has been replace by 'membuf' key. This
         reflects the change of meaning. Instead of passing to scan just
...
@@ -72,21 +77,27 @@ def scan(fn,
     For states that do not require a initial state, one has to provide a
     dictionary with a single key 'steps' that says how many intermediate
     results to store. See examples below for more insight.

-    :param n_steps: This parameter is mandatory and it will represent the
+    n_steps
+        This parameter is mandatory and it will represent the
         number of steps scan will do (scan will not check sequences or any
         other source of information to figure out how many steps it needs
         to do).
-    :param mode: Same as for the official scan
-    :param name: Same as for the official scan
-    :param profile: Same as for the official scan
+    mode
+        Same as for the official scan.
+    name
+        Same as for the official scan.
+    profile
+        Same as for the official scan.

-    Note:
-    - there is no truncate / go_backwards anymore !
-    - the outputs returned by scan contain the initial states as well (i.e.
+    Notes
+    -----
+    - There is no truncate / go_backwards anymore !
+    - The outputs returned by scan contain the initial states as well (i.e.
       if I loop over k steps, with my smallest tap for an output -3 and keep
-      al intermediate results, my output will be of length k+3
+      al intermediate results, my output will be of length k+3.

-    Examples:
+    Examples
+    --------
     (a) if you do not want to store any intermediate results (just the
         last one)
...
theano/sandbox/solve.py

...
@@ -13,9 +13,11 @@ from theano.tests import unittest_tools as utt

 class Solve(gof.Op):
     """
-    Find the solution to the linear equation Ax=b,
-    where A is a 2d matrix and b is a 1d or 2d matrix.
+    Find the solution to the linear equation Ax=b.
+
+    A is a 2d matrix and b is a 1d or 2d matrix.
     It use numpy.solve to find the solution.
     """
     # TODO: Add class options to use the performance-enhancing flags
...
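The `Solve` op's docstring says it wraps NumPy's solver for Ax=b. A plain NumPy illustration of the documented behaviour, with a made-up 2x2 system:

```python
import numpy

# Solve Ax = b, where A is a 2d matrix and b is 1d (or 2d).
A = numpy.array([[3.0, 1.0],
                 [1.0, 2.0]])
b = numpy.array([9.0, 8.0])
x = numpy.linalg.solve(A, b)
```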
theano/sandbox/test_rng_mrg.py

...
@@ -75,10 +75,11 @@ def test_deterministic():

 def test_consistency_randomstreams():
-    '''Verify that the random numbers generated by MRG_RandomStreams
+    """
+    Verify that the random numbers generated by MRG_RandomStreams
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     seed = 12345
     n_samples = 5
     n_streams = 12
...
@@ -108,9 +109,11 @@ def test_consistency_randomstreams():

 def test_consistency_cpu_serial():
-    '''Verify that the random numbers generated by mrg_uniform, serially,
+    """
+    Verify that the random numbers generated by mrg_uniform, serially,
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     seed = 12345
     n_samples = 5
     n_streams = 12
...
@@ -149,9 +152,11 @@ def test_consistency_cpu_serial():

 def test_consistency_cpu_parallel():
-    '''Verify that the random numbers generated by mrg_uniform, in parallel,
+    """
+    Verify that the random numbers generated by mrg_uniform, in parallel,
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     seed = 12345
     n_samples = 5
     n_streams = 12
...
@@ -193,9 +198,11 @@ def test_consistency_cpu_parallel():

 def test_consistency_GPU_serial():
-    '''Verify that the random numbers generated by GPU_mrg_uniform, serially,
+    """
+    Verify that the random numbers generated by GPU_mrg_uniform, serially,
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     if not cuda_available:
         raise SkipTest('Optional package cuda not available')
     if config.mode == 'FAST_COMPILE':
...
@@ -250,11 +257,12 @@ def test_consistency_GPU_serial():

 def test_consistency_GPU_parallel():
-    '''Verify that the random numbers generated by GPU_mrg_uniform, in
+    """
+    Verify that the random numbers generated by GPU_mrg_uniform, in
     parallel, are the same as the reference (Java) implementation by
     L'Ecuyer et al.
-    '''
+    """
     if not cuda_available:
         raise SkipTest('Optional package cuda not available')
     if config.mode == 'FAST_COMPILE':
...
@@ -310,9 +318,11 @@ def test_consistency_GPU_parallel():

 def test_GPU_nstreams_limit():
-    """Verify that a ValueError is raised when n_streams
+    """
+    Verify that a ValueError is raised when n_streams
     is greater than 2**20 on GPU. This is the value of
     (NUM_VECTOR_OP_THREADS_PER_BLOCK * NUM_VECTOR_OP_BLOCKS).
     """
     if not cuda_available:
         raise SkipTest('Optional package cuda not available')
...
@@ -335,9 +345,11 @@ def test_GPU_nstreams_limit():

 def test_consistency_GPUA_serial():
-    '''Verify that the random numbers generated by GPUA_mrg_uniform, serially,
+    """
+    Verify that the random numbers generated by GPUA_mrg_uniform, serially,
     are the same as the reference (Java) implementation by L'Ecuyer et al.
-    '''
+    """
     from theano.sandbox.gpuarray.tests.test_basic_ops import \
         mode_with_gpu as mode
     from theano.sandbox.gpuarray.type import gpuarray_shared_constructor
...
@@ -387,11 +399,12 @@ def test_consistency_GPUA_serial():

 def test_consistency_GPUA_parallel():
-    '''Verify that the random numbers generated by GPUA_mrg_uniform, in
+    """
+    Verify that the random numbers generated by GPUA_mrg_uniform, in
     parallel, are the same as the reference (Java) implementation by
     L'Ecuyer et al.
-    '''
+    """
     from theano.sandbox.gpuarray.tests.test_basic_ops import \
         mode_with_gpu as mode
     from theano.sandbox.gpuarray.type import gpuarray_shared_constructor
...
@@ -855,6 +868,7 @@ def test_multiple_rng_aliasing():
     copy the (random) state between two similar theano graphs. The test is
     meant to detect a previous bug where state_updates was initialized as a
     class-attribute, instead of the __init__ function.
     """
     rng1 = MRG_RandomStreams(1234)
     rng2 = MRG_RandomStreams(2392)
...
@@ -864,6 +878,7 @@ def test_multiple_rng_aliasing():

 def test_random_state_transfer():
     """
     Test that random state can be transferred from one theano graph to another.
     """
     class Graph:
         def __init__(self, seed=123):
...
theano/sandbox/theano_object.py
(Diff collapsed.)