testgroup / pytensor

Commit c326cc22
Authored May 16, 2011 by Olivier Delalleau

Merged. Parents: 1bee17f9, 75c9c6c0

Showing 20 changed files with 423 additions and 92 deletions (+423 -92)
doc/library/config.txt                      +36   -1
theano/configdefaults.py                    +10   -5
theano/configparser.py                      +14   -0
theano/gof/apply_shape.py                    +2   -1
theano/gof/cc.py                             +5   -2
theano/sandbox/neighbours.py                 +3   -3
theano/sandbox/rng_mrg.py                   +65  -29
theano/sandbox/test_rng_mrg.py              +18   -7
theano/scalar/basic.py                     +130  -20
theano/scalar/tests/test_basic.py           +17   -3
theano/tensor/basic.py                       +0   -0
theano/tensor/elemwise.py                   +67   -2
theano/tensor/nnet/Conv3D.py                 +3   -3
theano/tensor/nnet/conv.py                   +3   -2
theano/tensor/nnet/tests/test_nnet.py        +2   -0
theano/tensor/opt.py                        +19   -8
theano/tensor/tests/test_basic.py            +0   -0
theano/tensor/tests/test_incsubtensor.py     +5   -3
theano/tensor/tests/test_opt.py             +11   -2
theano/tests/test_tutorial.py               +13   -1
doc/library/config.txt  +36 -1

@@ -144,7 +144,7 @@ import theano and print the config variable, as in:

 .. attribute:: floatX

-    String value: either 'float64' or 'float32'.
+    String value: either 'float64' or 'float32'

     Default: 'float64'

@@ -152,6 +152,41 @@ import theano and print the config variable, as in:

     and similar functions. It also sets the default theano bit width for
     arguments passed as Python floating-point numbers.

+.. attribute:: cast_policy
+
+    String value: either 'numpy+floatX', 'numpy' or 'custom'
+
+    Default: 'custom'
+
+    This specifies how data types are implicitly figured out in Theano, e.g.
+    for constants or in the result of arithmetic operations. The recommended
+    value is 'numpy+floatX', which mimics numpy's behavior, except for floats
+    when ``config.floatX`` is set to 'float32': in that case we use float32
+    instead of float64 unless the user is explicitly using data typed as
+    float64. When 'numpy' is used, this specific floatX behavior is discarded.
+    The current default value is 'custom' for backward compatibility reasons,
+    and corresponds to a set of custom rules originally used in Theano (which
+    can be partially customized, see e.g. the in-code help of
+    ``tensor.NumpyAutocaster``). The 'custom' option will be deprecated in a
+    future release of Theano.
+
+    **Until further notice, it is strongly advised to never change this
+    option within a script, and to always clean your Theano cache whenever
+    you modify its value.**
+
+.. attribute:: int_division
+
+    String value: either 'int', 'floatX' or 'raise'
+
+    Default: 'int'
+
+    Specifies what to do when one tries to compute `x / y`, where both `x`
+    and `y` are of integer types (possibly unsigned). 'int' means an integer
+    is returned (as in Python 2.X), but this behavior is deprecated. 'floatX'
+    returns a number of the type given by ``config.floatX``. 'raise' is the
+    safest choice (and will become the default in a future release of Theano):
+    it raises an error when one tries such an operation, enforcing the use of
+    the integer division operator (``//``). If a float result is intended,
+    either cast one of the arguments to a float, or use `x.__truediv__(y)`.
+
 .. attribute:: mode

     String value: 'Mode', 'ProfileMode', 'DebugMode', 'FAST_RUN', 'FAST_COMPILE'
...
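The three `int_division` policies documented above can be summarized with a small sketch. This is illustrative only (the `divide` helper is not Theano code; it just mirrors the documented behavior on plain Python numbers):

```python
# Sketch of the documented `int_division` policies (hypothetical helper,
# not Theano's implementation).
def divide(x, y, int_division='int'):
    """Divide x by y following a Theano-style `int_division` policy."""
    both_int = isinstance(x, int) and isinstance(y, int)
    if not both_int:
        return x / y  # true division when either argument is a float
    if int_division == 'int':
        return x // y  # Python-2-style integer division (deprecated)
    elif int_division == 'floatX':
        return float(x) / y  # stand-in for a result typed as config.floatX
    elif int_division == 'raise':
        raise TypeError("use // for integer division, or cast to float, "
                        "or call x.__truediv__(y)")
    raise NotImplementedError(int_division)

divide(7, 2, 'int')     # -> 3
divide(7, 2, 'floatX')  # -> 3.5
```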
theano/configdefaults.py  +10 -5

@@ -15,11 +15,16 @@ AddConfigVar('floatX',
         EnumStr('float64', 'float32'),
         )

+# TODO Work-in-progress
+AddConfigVar('cast_policy',
+#AddConfigVar('casting_policy',
+        "Rules for implicit type casting (until further notice, do not modify within a script, and clear your Theano cache whenever it is modified)",
+#        "Rules for implicit casts of constants in arithmetic operations",
+        EnumStr('custom', 'numpy+floatX', 'numpy'),
+#        EnumStr('theano_0.3', 'numpy'),
+        )
+#        )
+
+AddConfigVar('int_division',
+        "What to do when one computes x / y, where both x and y are of "
+        "integer types",
+        EnumStr('int', 'raise', 'floatX'),
+        )
+
 #gpu mean let the driver select the gpu. Needed in case of gpu in exclusive mode.
 #gpuX mean use the gpu number X.
...
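The `AddConfigVar(..., EnumStr(...))` calls above register string options whose first value is the default and whose other values are the only legal alternatives. A minimal sketch of what such an enum-validated variable enforces (illustrative class, not Theano's `EnumStr` API):

```python
# Hypothetical sketch of an EnumStr-style config variable: the first
# value is the default, and assignments outside the allowed set raise.
class EnumStr:
    def __init__(self, default, *options):
        self.all = (default,) + options
        self.val = default

    def set(self, value):
        if value not in self.all:
            raise ValueError("%r is not one of %r" % (value, self.all))
        self.val = value

int_division = EnumStr('int', 'raise', 'floatX')
int_division.set('raise')    # accepted: 'raise' is a declared option
# int_division.set('float')  # would raise ValueError
```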
theano/configparser.py  +14 -0

@@ -7,6 +7,8 @@ import ConfigParser
 import logging
 import warnings

+import theano
+
 _logger = logging.getLogger('theano.config')

 class TheanoConfigWarning(Warning):
...
@@ -103,6 +105,17 @@ def _config_print(thing, buf):
         print >> buf, "    Value: ", cv.val
         print >> buf, ""

+
+def get_config_md5():
+    """
+    Return a string md5 of the current config options. It should be such that
+    we can safely assume that two different config setups will lead to two
+    different strings.
+    """
+    all_opts = sorted(_config_var_list, key=lambda cv: cv.fullname)
+    return theano.gof.cc.hash_from_code('\n'.join(
+        ['%s = %s' % (cv.fullname, cv.val) for cv in all_opts]))
+
 class TheanoConfigParser(object):
     #properties are installed by AddConfigVar
     _i_am_a_config_class = True
...
@@ -110,6 +123,7 @@ class TheanoConfigParser(object):
         sio = StringIO.StringIO()
         _config_print(self.__class__, sio)
         return sio.getvalue()

 # N.B. all instances of TheanoConfigParser give access to the same properties.
 config = TheanoConfigParser()
...
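The idea behind `get_config_md5` is to hash a canonical, sorted rendering of all config options, so that two different setups reliably produce different digests. A standalone sketch of the same idea using `hashlib` directly (the diff routes through `theano.gof.cc.hash_from_code` instead; the `config_md5` helper here is hypothetical):

```python
import hashlib

def config_md5(options):
    """Return an md5 digest of a dict of config options (fullname -> value).

    Sorting the names makes the digest independent of dict ordering, so
    equal configurations always hash equal.
    """
    text = '\n'.join('%s = %s' % (k, options[k]) for k in sorted(options))
    return hashlib.md5(text.encode('utf-8')).hexdigest()

a = config_md5({'floatX': 'float64', 'cast_policy': 'custom'})
b = config_md5({'floatX': 'float32', 'cast_policy': 'custom'})
assert a != b  # different config values -> different digests
```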
theano/gof/apply_shape.py  +2 -1

@@ -4,6 +4,7 @@ This is not used currently very used. It appear in some case, but I'm not sure i
 It could help the current system to make it detect problem earlier when contructing the graph instead of during optimization.
 """
 import sys

+import theano
 from theano import gof

 def ishape(v):
...
@@ -35,7 +36,7 @@ class Apply(gof.Apply):
         try:
             oshapes = infer_shape(self, ishapes)
-        except NotImplementedError:
+        except theano.tensor.ShapeError:
             return
         for o, oshp in zip(outputs, oshapes):
...
theano/gof/cc.py  +5 -2

@@ -16,6 +16,7 @@ else:
     def hash_from_code(msg):
         return md5.new(msg).hexdigest()

+import theano
 from theano.gof.python25 import all
 from theano import config
...
@@ -791,7 +792,7 @@ class CLinker(link.Linker):
         The key returned by this function is of the form (version, signature)
         The signature has the following form:
         {{{
             'CLinker.cmodule_key', compilation args, libraries,
+            config md5,
             (op0, input_signature0, output_signature0),
             (op1, input_signature1, output_signature1),
             ...
...
@@ -858,10 +859,12 @@ class CLinker(link.Linker):
         constant_ids = dict()
         op_pos = {} # Apply -> topological position

-        # first we put the header, compile_args, library names into the signature
+        # First we put the header, compile_args, library names and config md5
+        # into the signature.
         sig = ['CLinker.cmodule_key'] # will be cast to tuple on return
         if compile_args is not None:
             sig.append(tuple(compile_args))
         if libraries is not None:
             sig.append(tuple(libraries))
+        sig.append(theano.configparser.get_config_md5())

         # technically this should only be appended for gcc-compiled Ops
         # and the flags of other compilers should be inserted here... but it's not clear how to
...
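The point of appending the config md5 to the module key is that cached compiled modules must not be reused across runs with different config values. A minimal sketch of building such a key (the `cmodule_key` helper and its arguments are illustrative, not the `CLinker` method itself):

```python
import hashlib

def cmodule_key(compile_args, libraries, config_options):
    """Build a cache-key tuple that varies with compile flags, libraries,
    and the md5 of the configuration."""
    sig = ['CLinker.cmodule_key']  # will be cast to tuple on return
    if compile_args is not None:
        sig.append(tuple(compile_args))
    if libraries is not None:
        sig.append(tuple(libraries))
    cfg = '\n'.join('%s = %s' % kv for kv in sorted(config_options.items()))
    sig.append(hashlib.md5(cfg.encode('utf-8')).hexdigest())
    return tuple(sig)

k1 = cmodule_key(['-O3'], [], {'floatX': 'float64'})
k2 = cmodule_key(['-O3'], [], {'floatX': 'float32'})
assert k1 != k2  # different config -> different cache key, no stale reuse
```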
theano/sandbox/neighbours.py  +3 -3

@@ -246,13 +246,13 @@ def neibs2images(neibs, neib_shape, original_shape, mode='valid'):
     neib_shape = T.as_tensor_variable(neib_shape)
     original_shape = T.as_tensor_variable(original_shape)

-    new_neib_shape = T.stack(original_shape[-1] / neib_shape[1], neib_shape[1])
+    new_neib_shape = T.stack(original_shape[-1] // neib_shape[1], neib_shape[1])
     output_2d = images2neibs(neibs.dimshuffle('x', 'x', 0, 1), new_neib_shape, mode=mode)

     if mode == 'ignore_borders':
         valid_shape = list(original_shape)
-        valid_shape[2] = valid_shape[2] / neib_shape[0] * neib_shape[0]
-        valid_shape[3] = valid_shape[3] / neib_shape[1] * neib_shape[1]
+        valid_shape[2] = (valid_shape[2] // neib_shape[0]) * neib_shape[0]
+        valid_shape[3] = (valid_shape[3] // neib_shape[1]) * neib_shape[1]
         output_4d = output_2d.reshape(valid_shape)
         #padding the borders with zeros
         for d in [2, 3]:
...
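The change above replaces `/` with `//` so the computed shapes stay integers: with true division (Python 3, or Python 2 under `from __future__ import division`, or the 'floatX' `int_division` policy), `/` on integers produces a float, which is not a valid shape. A quick illustration of the `(x // n) * n` pattern used for `valid_shape`:

```python
# (original // neib) * neib rounds original down to the largest
# multiple of neib -- the shape of the region that neighbourhoods
# of size `neib` fully cover.
original, neib = 7, 2
valid = (original // neib) * neib
assert valid == 6
assert isinstance(original // neib, int)   # // keeps integer type
assert original / neib == 3.5              # / would yield a float shape
```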
theano/sandbox/rng_mrg.py  +65 -29

@@ -263,7 +263,7 @@ class mrg_uniform(mrg_uniform_base):
         if (%(size)s->dimensions[0] != %(ndim)s)
         {
             PyErr_Format(PyExc_ValueError, "size must have length %%i (not %%i)",
-                %(ndim)s, %(size)s->dimensions[0]);
+                %(ndim)s, int(%(size)s->dimensions[0]));
             %(fail)s
         }
         if (%(size)s->descr->type_num != PyArray_INT32)
...
@@ -589,6 +589,35 @@ class GPU_mrg_uniform(mrg_uniform_base):
     def c_code_cache_version(self):
         return (4,)

+
+def guess_n_streams(size, warn=True):
+    """
+    Return a guess at a good number of streams.
+
+    :param warn: If True, warn when a guess cannot be made (in which case
+    we return 30 * 256).
+    """
+    # TODO: a smart way of choosing the number of streams, see #612.
+    # Note that this code was moved out of `MRG_RandomStreams` so that it can
+    # be easily accessed from tests, where we want to disable the warning.
+    if (isinstance(size, (tuple, list)) and
+            all([isinstance(i, int) for i in size])):
+        # We can make a guess.
+        r = 1
+        for s in size:
+            r *= s
+        if r > 6:
+            r = r / 6 # chosen as fastest for rbm_benchmark
+        return r
+    else:
+        if warn:
+            print >> sys.stderr, ("MRG_RandomStreams Can't determine "
+                    "#streams from size (%s), guessing 30*256") % str(size)
+        return 30 * 256
+
+
 class MRG_RandomStreams(object):
     """Module component with similar interface to numpy.random (numpy.random.RandomState)"""
...
@@ -654,18 +683,7 @@ class MRG_RandomStreams(object):
         return rval

     def n_streams(self, size):
-        # TODO: a smart way of choosing the number of streams, see #612.
-        if isinstance(size, (tuple, list)) and all([isinstance(i, int) for i in size]):
-            r = 1
-            for s in size:
-                r *= s
-            if r > 6:
-                r = r / 6 # chosen as fastest for rbm_benchmark
-            return r
-        print >> sys.stderr, ("MRG_RandomStreams Can't determine #streams "
-                "from size (%s), guessing 30*256") % str(size)
-        return 30 * 256
+        return guess_n_streams(size, warn=True)

     def pretty_return(self, node_rstate, new_rstate, sample):
         sample.rstate = node_rstate
...
@@ -674,7 +692,8 @@ class MRG_RandomStreams(object):
         node_rstate.default_update = new_rstate
         return sample

-    def uniform(self, size=None, low=0.0, high=1.0, ndim=None,
-                dtype=config.floatX, nstreams=None):
+    def uniform(self, size, low=0.0, high=1.0, ndim=None, dtype='floatX',
+                nstreams=None):
         """
         Sample a tensor of given size whose element from a uniform
         distribution between low and high.
...
@@ -683,10 +702,14 @@ class MRG_RandomStreams(object):
         ndim may be a plain integer to supplement the missing
         information.

-        :param: size: Can be a list of integer or Theano variable
+        :param size: Can be a list of integer or Theano variable
                 (ex: the shape of other Theano Variable)
+        TODO: can size be None?
+
+        :param dtype: The output data type.
         """
+        if dtype == 'floatX':
+            dtype = config.floatX
         if isinstance(size, tuple):
             msg = "size must be a tuple of int or a Theano variable"
             assert all([isinstance(i, int) or isinstance(i, Variable)
...
@@ -728,16 +751,19 @@ class MRG_RandomStreams(object):
             raise NotImplementedError('Increase the size to match the broadcasting pattern of `low` and `high` arguments')
         return r

-    def binomial(self, size=None, n=1, p=0.5, ndim=None, dtype='int64'):
+    def binomial(self, size=None, n=1, p=0.5, ndim=None, dtype='int64',
+                 nstreams=None):
         if n == 1:
             if dtype == 'float32' and self.use_cuda:
-                return cast(self.uniform(size=size, dtype=dtype) < p, dtype)
+                x = self.uniform(size=size, dtype=dtype, nstreams=nstreams)
             else:
-                return cast(self.uniform(size=size) < p, dtype)
+                x = self.uniform(size=size, nstreams=nstreams)
+            return cast(x < p, dtype)
         else:
             raise NotImplementedError("MRG_RandomStreams.binomial with n > 1")

-    def multinomial(self, size=None, n=1, pvals=None, ndim=None, dtype='int64'):
+    def multinomial(self, size=None, n=1, pvals=None, ndim=None, dtype='int64',
+                    nstreams=None):
         """
         Sample `n` (currently `n` needs to be 1) times from a multinomial
         distribution defined by probabilities pvals.
...
@@ -758,22 +784,31 @@ class MRG_RandomStreams(object):
                     ndim, size, pvals[:, 0])
             assert ndim == 1
             bcast = bcast + (pvals.type.broadcastable[-1],)
-            unis = self.uniform(size=size, ndim=1)
+            unis = self.uniform(size=size, ndim=1, nstreams=nstreams)
             op = multinomial.MultinomialFromUniform(dtype)
             return op(pvals, unis)
         else:
             raise NotImplementedError(("MRG_RandomStreams.multinomial only"
                 " implemented with n == 1 and pvals.ndim = 2"))

-    def normal(self, size=None, avg=0.0, std=1.0, ndim=None,
-               dtype=config.floatX):
+    def normal(self, size=None, avg=0.0, std=1.0, ndim=None, dtype='floatX',
+               nstreams=None):
         """
-        :param: size: Can be a list of integer or Theano variable(ex: the shape of other Theano Variable)
+        :param size: Can be a list of integers or Theano variables (ex: the
+        shape of another Theano Variable)
+
+        :param dtype: The output data type.
+
+        :param nstreams: Number of streams.
         """
         # We need an even number of ]0,1[ samples. Then we split them
         # in two halves. First half becomes our U1's for Box-Muller,
         # second half our U2's. See Wikipedia page:
         # http://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform
+        if dtype == 'floatX':
+            dtype = config.floatX
         evened = False
         constant = False
         if isinstance(size, tuple) and all([isinstance(i, int) for i in size]):
...
@@ -786,14 +821,15 @@ class MRG_RandomStreams(object):
         else:
             #if even, don't change, if odd, +1
             n_samples = prod(size) + (prod(size) % 2)
-        flattened = self.uniform(size=(n_samples,), dtype=dtype)
+        flattened = self.uniform(size=(n_samples,), dtype=dtype,
+                nstreams=nstreams)

         if constant:
-            U1 = flattened[:n_samples / 2]
-            U2 = flattened[n_samples / 2:]
+            U1 = flattened[:n_samples // 2]
+            U2 = flattened[n_samples // 2:]
         else:
-            U1 = flattened[:prod(flattened.shape) / 2]
-            U2 = flattened[prod(flattened.shape) / 2:]
+            U1 = flattened[:prod(flattened.shape) // 2]
+            U2 = flattened[prod(flattened.shape) // 2:]

         #normal_samples = zeros_like(flattened)
         sqrt_ln_U1 = sqrt(-2.0 * log(U1))
...
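The `guess_n_streams` heuristic added above reads naturally in modern Python as follows. This is a Python 3 sketch of the same logic (the original is Python 2 and uses `r / 6`, which floors for ints; `//` makes that explicit):

```python
import sys

def guess_n_streams(size, warn=True):
    """Guess a good number of MRG streams for a requested sample shape.

    If `size` is a concrete tuple/list of ints, use prod(size) // 6
    streams (chosen as fastest for rbm_benchmark in the original code);
    otherwise fall back to 30 * 256, optionally warning.
    """
    if isinstance(size, (tuple, list)) and all(isinstance(i, int) for i in size):
        r = 1
        for s in size:
            r *= s
        if r > 6:
            r = r // 6
        return r
    if warn:
        print("MRG_RandomStreams Can't determine #streams from "
              "size (%s), guessing 30*256" % str(size), file=sys.stderr)
    return 30 * 256

guess_n_streams((10, 6))          # -> 10
guess_n_streams(None, warn=False) # -> 7680
```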
theano/sandbox/test_rng_mrg.py  +18 -7

@@ -350,7 +350,9 @@ def test_uniform():
             print 'ON CPU with size=(%s):' % str(size)
             x = tensor.matrix()
             R = MRG_RandomStreams(234, use_cuda=False)
-            u = R.uniform(size=size)
+            # Note: we specify `nstreams` to avoid a warning.
+            u = R.uniform(size=size,
+                          nstreams=rng_mrg.guess_n_streams(size, warn=False))
             f = theano.function(var_input, u, mode=mode)
             assert any([isinstance(node.op, theano.sandbox.rng_mrg.mrg_uniform)
                         for node in f.maker.env.toposort()])
...
@@ -366,7 +368,8 @@ def test_uniform():
             print ''
             print 'ON GPU with size=(%s):' % str(size)
             R = MRG_RandomStreams(234, use_cuda=True)
-            u = R.uniform(size=size, dtype='float32')
+            u = R.uniform(size=size, dtype='float32',
+                          nstreams=rng_mrg.guess_n_streams(size, warn=False))
             assert u.dtype == 'float32' #well, it's really that this test w GPU doesn't make sense otw
             f = theano.function(var_input, theano.Out(
                     theano.sandbox.cuda.basic_ops.gpu_from_host(u),
...
@@ -421,7 +424,9 @@ def test_binomial():
             print ''
             print 'ON CPU with size=(%s) and mean(%d):' % (str(size), mean)
             R = MRG_RandomStreams(234, use_cuda=False)
-            u = R.binomial(size=size, p=mean)
+            # Note: we specify `nstreams` to avoid a warning.
+            u = R.binomial(size=size, p=mean,
+                           nstreams=rng_mrg.guess_n_streams(size, warn=False))
             f = theano.function(var_input, u, mode=mode)
             theano.printing.debugprint(f)
             out = f(*input)
...
@@ -433,7 +438,9 @@ def test_binomial():
             print ''
             print 'ON GPU with size=(%s) and mean(%d):' % (str(size), mean)
             R = MRG_RandomStreams(234, use_cuda=True)
-            u = R.binomial(size=size, p=mean, dtype='float32')
+            u = R.binomial(size=size, p=mean, dtype='float32',
+                           nstreams=rng_mrg.guess_n_streams(size, warn=False))
             assert u.dtype == 'float32' #well, it's really that this test w GPU doesn't make sense otw
             f = theano.function(var_input, theano.Out(
                     theano.sandbox.cuda.basic_ops.gpu_from_host(u),
...
@@ -478,7 +485,9 @@ def test_normal0():
         print 'ON CPU:'
         R = MRG_RandomStreams(234, use_cuda=False)
-        n = R.normal(size=size, avg=avg, std=std)
+        # Note: we specify `nstreams` to avoid a warning.
+        n = R.normal(size=size, avg=avg, std=std,
+                     nstreams=rng_mrg.guess_n_streams(size, warn=False))
         f = theano.function(var_input, n, mode=mode)
         theano.printing.debugprint(f)
         out = f(*input)
...
@@ -491,7 +500,8 @@ def test_normal0():
         print ''
         print 'ON GPU:'
         R = MRG_RandomStreams(234, use_cuda=True)
-        n = R.normal(size=size, avg=avg, std=std, dtype='float32')
+        n = R.normal(size=size, avg=avg, std=std, dtype='float32',
+                     nstreams=rng_mrg.guess_n_streams(size, warn=False))
         assert n.dtype == 'float32' #well, it's really that this test w GPU doesn't make sense otw
         f = theano.function(var_input, theano.Out(
                 theano.sandbox.cuda.basic_ops.gpu_from_host(n),
...
@@ -557,7 +567,8 @@ def test_multinomial():
     pvals = numpy.asarray(numpy.random.uniform(size=sample_size))
     pvals = numpy.apply_along_axis(lambda row: row / numpy.sum(row), 1, pvals)
     R = MRG_RandomStreams(234, use_cuda=False)
-    m = R.multinomial(pvals=pvals, dtype=config.floatX)
+    # Note: we specify `nstreams` to avoid a warning.
+    m = R.multinomial(pvals=pvals, dtype=config.floatX, nstreams=30 * 256)
     f = theano.function([], m, mode=mode_)
     theano.printing.debugprint(f)
     out = f()
...
theano/scalar/basic.py
浏览文件 @
c326cc22
...
@@ -12,8 +12,9 @@ If you want to use a scalar variable in a Theano graph,
...
@@ -12,8 +12,9 @@ If you want to use a scalar variable in a Theano graph,
you probably want to use theano.tensor.[c,z,f,d,b,w,i,l,]scalar!
you probably want to use theano.tensor.[c,z,f,d,b,w,i,l,]scalar!
"""
"""
import
math
import
math
,
warnings
from
copy
import
copy
from
copy
import
copy
from
itertools
import
imap
import
numpy
,
theano
import
numpy
,
theano
...
@@ -26,11 +27,37 @@ builtin_complex = complex
...
@@ -26,11 +27,37 @@ builtin_complex = complex
builtin_int
=
int
builtin_int
=
int
builtin_float
=
float
builtin_float
=
float
class
ComplexError
(
Exception
):
"""Raised if complex numbers are used in an unsupported operation."""
pass
class
IntegerDivisionError
(
Exception
):
"""Raised if someone tries to divide integers with '/' instead of '//'."""
pass
def
upcast
(
dtype
,
*
dtypes
):
def
upcast
(
dtype
,
*
dtypes
):
z
=
numpy
.
zeros
((),
dtype
=
dtype
)
# Should we try to keep float32 instead of float64? This is used so that
for
dtype
in
dtypes
:
# for instance mixing int64 with float32 yields float32 instead of float64.
z
=
z
+
numpy
.
zeros
((),
dtype
=
dtype
)
# Note that we store this boolean as a one-element list so that it can be
return
str
(
z
.
dtype
)
# modified within `make_array`.
keep_float32
=
[(
config
.
cast_policy
==
'numpy+floatX'
and
config
.
floatX
==
'float32'
)]
def
make_array
(
dt
):
if
dt
==
'float64'
:
# There is an explicit float64 dtype: we cannot keep float32.
keep_float32
[
0
]
=
False
return
numpy
.
zeros
((),
dtype
=
dt
)
z
=
make_array
(
dtype
)
for
dt
in
dtypes
:
z
=
z
+
make_array
(
dt
=
dt
)
rval
=
str
(
z
.
dtype
)
if
rval
==
'float64'
and
keep_float32
[
0
]:
return
'float32'
else
:
return
rval
def
as_scalar
(
x
,
name
=
None
):
def
as_scalar
(
x
,
name
=
None
):
if
isinstance
(
x
,
gof
.
Apply
):
if
isinstance
(
x
,
gof
.
Apply
):
...
@@ -47,6 +74,7 @@ def as_scalar(x, name = None):
...
@@ -47,6 +74,7 @@ def as_scalar(x, name = None):
except
TypeError
:
except
TypeError
:
raise
TypeError
(
"Cannot convert
%
s to Scalar"
%
x
,
type
(
x
))
raise
TypeError
(
"Cannot convert
%
s to Scalar"
%
x
,
type
(
x
))
def
constant
(
x
):
def
constant
(
x
):
# pass through numpy scalars, since they are already typed on purpose typically.
# pass through numpy scalars, since they are already typed on purpose typically.
if
hasattr
(
x
,
'dtype'
):
if
hasattr
(
x
,
'dtype'
):
...
@@ -383,6 +411,7 @@ uint_types = uint8, uint16, uint32, uint64
...
@@ -383,6 +411,7 @@ uint_types = uint8, uint16, uint32, uint64
float_types
=
float32
,
float64
float_types
=
float32
,
float64
complex_types
=
complex64
,
complex128
complex_types
=
complex64
,
complex128
discrete_types
=
int_types
+
uint_types
continuous_types
=
float_types
+
complex_types
continuous_types
=
float_types
+
complex_types
class
_scalar_py_operators
:
class
_scalar_py_operators
:
...
@@ -416,7 +445,8 @@ class _scalar_py_operators:
...
@@ -416,7 +445,8 @@ class _scalar_py_operators:
def
__sub__
(
self
,
other
):
return
sub
(
self
,
other
)
def
__sub__
(
self
,
other
):
return
sub
(
self
,
other
)
def
__mul__
(
self
,
other
):
return
mul
(
self
,
other
)
def
__mul__
(
self
,
other
):
return
mul
(
self
,
other
)
def
__div__
(
self
,
other
):
return
div_proxy
(
self
,
other
)
def
__div__
(
self
,
other
):
return
div_proxy
(
self
,
other
)
def
__mod__
(
self
,
other
):
return
mod
(
self
,
other
)
def
__floordiv__
(
self
,
other
):
return
int_div
(
self
,
other
)
def
__mod__
(
self
,
other
):
return
mod_check
(
self
,
other
)
def
__pow__
(
self
,
other
):
return
pow
(
self
,
other
)
def
__pow__
(
self
,
other
):
return
pow
(
self
,
other
)
#ARITHMETIC - RIGHT-OPERAND
#ARITHMETIC - RIGHT-OPERAND
...
@@ -995,32 +1025,74 @@ class Sub(BinaryScalarOp):
         return first_part, second_part
 sub = Sub(upcast_out, name = 'sub')

-def div_proxy(x, y):
-    """Proxy for either true_div or int_div, depending on types of x, y.
-    """
-    if as_scalar(x).type.dtype.startswith('int') and as_scalar(y).type.dtype.startswith('int'):
-        return int_div(x, y)
-    else:
-        return true_div(x, y)
+def int_or_true_div(x_discrete, y_discrete):
+    """
+    Return 'int' or 'true' depending on the type of division used for x / y.
+
+    :param x_discrete: True if `x` is discrete ([unsigned] integer).
+    :param y_discrete: True if `y` is discrete ([unsigned] integer).
+
+    :returns: 'int' if `x / y` should be an integer division, or 'true' if it
+    should be a true division.
+
+    Raises an IntegerDivisionError if both `x_discrete` and `y_discrete` are
+    True and `config.int_division` is set to 'raise'.
+
+    This function is used by both scalar/basic.py and tensor/basic.py.
+    """
+    if x_discrete and y_discrete:
+        if config.int_division == 'raise':
+            raise IntegerDivisionError(
+                "With `config.int_division` set to 'raise', dividing two "
+                "integer types with '/' is forbidden to avoid confusion "
+                "between integer and floating point divisions. Please "
+                "use // for integer division, or if you want a float result "
+                "either cast one of the arguments to a float or directly call "
+                "`x.__truediv__(y)`.")
+        elif config.int_division == 'int':
+            warnings.warn(
+                "Division of two integer types with x / y is deprecated, "
+                "please use x // y for an integer division "
+                "(set `config.int_division = raise` to track the origin "
+                "of this warning)",
+                DeprecationWarning)
+            return 'int'
+        elif config.int_division == 'floatX':
+            return 'true'
+        else:
+            raise NotImplementedError(config.int_division)
+    else:
+        return 'true'
+
+def div_proxy(x, y):
+    """Proxy for either true_div or int_div, depending on types of x, y."""
+    f = eval('%s_div' % int_or_true_div(as_scalar(x).type in discrete_types,
+                                        as_scalar(y).type in discrete_types))
+    return f(x, y)
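
The decision logic introduced by `int_or_true_div` can be sketched without any Theano machinery. The following is a minimal standalone version, assuming a local `IntegerDivisionError` class as a stand-in for `theano.scalar.IntegerDivisionError` and a plain argument in place of the `config.int_division` flag:

```python
class IntegerDivisionError(Exception):
    """Stand-in for theano.scalar.IntegerDivisionError."""


def int_or_true_div(x_discrete, y_discrete, int_division='int'):
    """Return 'int' or 'true' for x / y under the given policy."""
    if x_discrete and y_discrete:
        if int_division == 'raise':
            # Forbid the ambiguous int / int spelling outright.
            raise IntegerDivisionError("use // or an explicit cast")
        elif int_division == 'int':
            # Legacy behaviour (the real code also emits a DeprecationWarning).
            return 'int'
        elif int_division == 'floatX':
            # int / int becomes a true (floating point) division.
            return 'true'
        raise NotImplementedError(int_division)
    # Any non-integer operand always forces a true division.
    return 'true'


print(int_or_true_div(True, True, 'int'))     # 'int'
print(int_or_true_div(True, True, 'floatX'))  # 'true'
print(int_or_true_div(True, False, 'int'))    # 'true'
```

The returned string is interpolated into `'%s_div'` by `div_proxy` to pick the `int_div` or `true_div` Op.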
 class TrueDiv(BinaryScalarOp):
     def output_types(self, types):
-        if all(t not in continuous_types for t in types):
-            return [float64]
+        if all(t in discrete_types for t in types):
+            return [Scalar(config.floatX)]
         else:
             return super(TrueDiv, self).output_types(types)
     def impl(self, x, y):
         x = numpy.asarray(x)
         y = numpy.asarray(y)
-        if str(x.dtype).startswith('int') and str(y.dtype).startswith('int'):
-            return float(x) / y
+        if all(a.dtype in discrete_types for a in (x, y)):
+            return numpy.array(float(x) / y, dtype=config.floatX)
         else:
             return x / y
     def c_code(self, node, name, (x, y), (z, ), sub):
         #we generate good c code only when both are complex!
         if sum([node.inputs[0].type in complex_types, node.inputs[1].type in complex_types]) == 1:
             raise NotImplementedError('type not supported', type)
-        if node.inputs[0].type in int_types and node.inputs[1].type in int_types:
+        if (node.inputs[0].type in discrete_types and
+            node.inputs[1].type in discrete_types):
             return "%(z)s = ((double)%(x)s) / %(y)s;" % locals()
         return "%(z)s = %(x)s / %(y)s;" % locals()
     def grad(self, (x, y), (gz, )):
...
@@ -1029,11 +1101,15 @@ class TrueDiv(BinaryScalarOp):
         if x.type in float_types:
             first_part = cast(gz / y, x.type.dtype)
         else:
+            assert x.type in discrete_types
             first_part = None

+        if y.type in complex_types:
+            raise NotImplementedError()
         if y.type in float_types:
             second_part = cast(-(gz * x) / (y * y), y.type.dtype)
         else:
+            assert y.type in discrete_types
             second_part = None
         return first_part, second_part

 true_div = TrueDiv(upcast_out, name = 'true_div')
...
@@ -1049,9 +1125,29 @@ int_div = IntDiv(upcast_out, name = 'int_div')
 floor_div = int_div

+def raise_complex_error():
+    raise ComplexError(
+            "Theano does not support the mod operator (%) on "
+            "complex numbers, since numpy deprecated it.")
+
+def mod_check(x, y):
+    if (as_scalar(x).type in complex_types or
+        as_scalar(y).type in complex_types):
+        # Currently forbidden.
+        raise_complex_error()
+    else:
+        return mod(x, y)
+
 class Mod(BinaryScalarOp):
     def impl(self, x, y):
+        if isinstance(x, numpy.complex) or isinstance(y, numpy.complex):
+            raise_complex_error()
         return x % y
     def c_code_cache_version(self):
         return (5,)
...
@@ -1061,20 +1157,34 @@ class Mod(BinaryScalarOp):
     def c_code(self, node, name, (x, y), (z, ), sub):
         """
         We want the result to have the same sign as python, not the other implementation of mod.
         """
         #raise NotImplementedError("Unlike Python, C's modulo returns negative modulo on negative dividend (to implement)")
         t = node.inputs[0].type.upcast(*[i.type for i in node.inputs[1:]])
-        if t in int_types or t in ['uint8', 'int8', 'uint16', 'int16', 'uint32', 'int32', 'uint64', 'int64']:
+        if (str(t) in imap(str, discrete_types) or
+            t in ['uint8', 'int8', 'uint16', 'int16', 'uint32', 'int32', 'uint64', 'int64'] or
+            t in discrete_types):
+            # The above or's should not be needed anymore. However, for now we
+            # keep them out of safety, and verify they are useless with an
+            # assert.
+            assert str(t) in imap(str, discrete_types)
             x_mod_y = "THEANO_MACRO_MOD(%(x)s, %(y)s)" % locals()
             x_mod_ymm = "THEANO_MACRO_MOD(-%(x)s, -%(y)s)" % locals()
             x_mod_ypm = "THEANO_MACRO_MOD(%(x)s, -%(y)s)" % locals()
             x_mod_ymp = "THEANO_MACRO_MOD(-%(x)s, %(y)s)" % locals()
-        elif t in float_types or t in ['float32', 'float64']:
+        elif (str(t) in imap(str, float_types) or
+              t in ['float32', 'float64'] or
+              t in float_types):
+            # The above or's should not be needed anymore. However, for now we
+            # keep them out of safety, and verify they are useless with an
+            # assert.
+            assert str(t) in imap(str, float_types)
             x_mod_y = "fmod(%(x)s, %(y)s)" % locals()
             x_mod_ymm = "fmod(-%(x)s, -%(y)s)" % locals()
             x_mod_ypm = "fmod(%(x)s, -%(y)s)" % locals()
             x_mod_ymp = "fmod(-%(x)s, %(y)s)" % locals()
+        elif str(t) in imap(str, complex_types):
+            raise_complex_error()
         else:
             raise NotImplementedError('type not supported', type)
...
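
The docstring above explains why the generated C cannot call `%` or `fmod` directly: Python's modulo takes the sign of the divisor, while C's takes the sign of the dividend. A short sketch of the mismatch, with `python_style_mod` as a hypothetical Python-level equivalent of what `THEANO_MACRO_MOD` has to compute:

```python
import math

# Python's % follows the divisor's sign; C's fmod follows the dividend's.
print(-7 % 3)            # 2
print(math.fmod(-7, 3))  # -1.0

def python_style_mod(x, y):
    """Adjust a C-style fmod result to match Python's % convention."""
    r = math.fmod(x, y)
    if r != 0 and (r < 0) != (y < 0):
        # Signs disagree: shift the remainder into the divisor's sign range.
        r += y
    return r

print(python_style_mod(-7, 3))  # 2.0
```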
theano/scalar/tests/test_basic.py
View file @ c326cc22
...
@@ -37,6 +37,7 @@ class test_ScalarOps(unittest.TestCase):
     #As we use theano.scalar normally, but we use theano.tensor.scalar
     #that is not important. Also this make the theano fct fail at call time
     #so this is not a silent bug.
+    # --> This is why it is purposedly named 'tes_mod' instead of 'test_mod'.
     def tes_mod(self):
         """
         We add this test as not all language and C implementation give the same
...
@@ -174,6 +175,19 @@ class test_logical(unittest.TestCase):
         self.assertTrue(fn(a, b) == ~a, (a,))

+class test_complex_mod(unittest.TestCase):
+    """Make sure % fails on complex numbers."""
+    def test_fail(self):
+        x = complex64()
+        y = int32()
+        try:
+            x % y
+            assert False
+        except ComplexError:
+            pass
+
 class test_div(unittest.TestCase):
     def test_0(self):
         a = int8()
...
@@ -182,9 +196,9 @@ class test_div(unittest.TestCase):
         d = float64()
         f = float32()
-        print (a / b).owner.op
-        assert isinstance((a / b).owner.op, IntDiv)
-        assert isinstance((b / a).owner.op, IntDiv)
+        print (a // b).owner.op
+        assert isinstance((a // b).owner.op, IntDiv)
+        assert isinstance((b // a).owner.op, IntDiv)
         assert isinstance((b / d).owner.op, TrueDiv)
         assert isinstance((b / f).owner.op, TrueDiv)
         assert isinstance((f / a).owner.op, TrueDiv)
...
theano/tensor/basic.py
View file @ c326cc22
Diff collapsed. Click to expand.
theano/tensor/elemwise.py
View file @ c326cc22
...
@@ -454,7 +454,73 @@ class Elemwise(Op):
         """
         inputs = map(as_tensor_variable, inputs)
-        shadow = self.scalar_op.make_node(
-                *[Scalar(dtype=t.type.dtype)() for t in inputs])
+        input_dtypes = [i.dtype for i in inputs]
+        scalar_inputs = []
+        array_inputs = []
+        for input_idx, input in enumerate(inputs):
+            if input.ndim == 0:
+                scalar_inputs.append((input_idx, input))
+            else:
+                array_inputs.append((input_idx, input))
+        shadow = self.scalar_op.make_node(
+                *[Scalar(dtype=dtype)() for dtype in input_dtypes])
+        out_dtypes = [o.type.dtype for o in shadow.outputs]
+        if (scalar_inputs and array_inputs and
+            theano.config.cast_policy in ('numpy', 'numpy+floatX')):
+            # We need to make sure that scalars do not upcast arrays unless
+            # they are fundamentally different. This is specified in
+            # http://docs.scipy.org/doc/numpy/reference/ufuncs.html
+            # in the 'casting rules' section.
+            # It seems difficult to find a generic mechanism that would work
+            # for any elemwise Op. In the following we use a heuristic that
+            # should work for simple Ops, but may break in the future for more
+            # complex Ops (in which case we may need to implement a way for
+            # these Ops to override this heuristic).
+            # The heuristic consists in detecting a situation where we suspect
+            # some scalar input upcasted an array, by comparing the highest
+            # type of the outputs with the highest type of the input arrays.
+            # If it happens that the former is of higher type than the latter,
+            # then we go through all scalar inputs and if they are of a higher
+            # type than the highest type of the input arrays, we pretend they
+            # actually are of the same type (the idea is that we suspect they
+            # are responsible for the upcasting, so by downcasting them we hope
+            # to get rid of this upcasting).
+            array_dtype = scalar.upcast(*[a[1].dtype for a in array_inputs])
+            out_dtype = scalar.upcast(*out_dtypes)
+            def is_higher(dtype_a, dtype_b):
+                return (dtype_a != dtype_b and
+                        scalar.upcast(dtype_a, dtype_b) == dtype_a)
+            if is_higher(out_dtype, array_dtype):
+                # We are in the situation described above.
+                modified_scalar_inputs = False
+                for input_idx, input in scalar_inputs:
+                    if scalar.upcast(input.dtype, array_dtype) == out_dtype:
+                        # This scalar may be responsible for the upcasting.
+                        input_dtypes[input_idx] = array_dtype
+                        modified_scalar_inputs = True
+                if modified_scalar_inputs:
+                    # Update 'shadow' and 'out_dtypes'.
+                    shadow = self.scalar_op.make_node(
+                            *[Scalar(dtype=dtype)() for dtype in input_dtypes])
+                    out_dtypes = [o.type.dtype for o in shadow.outputs]
+                    # The whole point of all this is to try to avoid upcasting
+                    # the dtype of the input arrays. The following assert makes
+                    # sure this goal was achieved. Note however that it might
+                    # fail for some Ops that purposedly upcast arrays, in which
+                    # case it would probably be better to use a different
+                    # mechanism for such Ops.
+                    out_dtype = scalar.upcast(*out_dtypes)
+                    assert not is_higher(out_dtype, array_dtype)
+                else:
+                    # Same as above: safety assert to make sure our heuristics
+                    # did its job. It may fail in the future for some Ops that
+                    # would require a different mechanism.
+                    import pdb; pdb.set_trace()
+                    raise AssertionError('Heuristic failure - see Elemwise.make_node')

         target_length = max([input.type.ndim for input in inputs])
...
@@ -487,7 +553,6 @@ class Elemwise(Op):
         for ob, ib in zip(out_broadcastables[overwriter], inputs[overwritten].type.broadcastable):
             if ib and not ob:
                 raise ValueError("Operation cannot be done inplace on an input with broadcasted dimensions.")
-        out_dtypes = [o.type.dtype for o in shadow.outputs]
         if any(inputs[i].type.dtype != out_dtypes[o] for o, i in inplace_pattern.items()):
             raise TypeError("Cannot do an inplace operation on incompatible data types.",
                     ([i.type.dtype for i in inputs], out_dtypes, inplace_pattern))
...
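
The heuristic added to `Elemwise.make_node` can be illustrated outside Theano. In this sketch `numpy.promote_types` stands in for `theano.scalar.upcast`, and the dtypes are hypothetical stand-ins for one array input and one 0-d scalar input:

```python
import numpy

def is_higher(dtype_a, dtype_b):
    """True if dtype_a strictly dominates dtype_b under type promotion."""
    return (dtype_a != dtype_b and
            numpy.promote_types(dtype_a, dtype_b) == numpy.dtype(dtype_a))

array_dtype = numpy.dtype('float32')   # highest dtype among the array inputs
scalar_dtype = numpy.dtype('float64')  # a 0-d scalar input
out_dtype = numpy.promote_types(array_dtype, scalar_dtype)  # float64

if is_higher(out_dtype, array_dtype):
    # The scalar is suspected of upcasting the arrays: pretend it has the
    # arrays' dtype and recompute the output dtype.
    scalar_dtype = array_dtype
    out_dtype = numpy.promote_types(array_dtype, scalar_dtype)

print(out_dtype)  # float32
```

This mirrors the "downcast the suspect scalars, then re-run `make_node` and assert the upcast is gone" flow in the diff.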
theano/tensor/nnet/Conv3D.py
View file @ c326cc22
...
@@ -135,9 +135,9 @@ class Conv3D(theano.Op):
         vidDur = V_shape[3]
         filterDur = W_shape[3]
-        output_height = T.floor((vidHeight - filterHeight) / dr) + 1
-        output_width = T.floor((vidWidth - filterWidth) / dc) + 1
-        output_dur = T.floor((vidDur - filterDur) / dt) + 1
+        output_height = T.floor((vidHeight - filterHeight) // dr) + 1
+        output_width = T.floor((vidWidth - filterWidth) // dc) + 1
+        output_dur = T.floor((vidDur - filterDur) // dt) + 1
         rval = (batch_size, output_height, output_width, output_dur, output_channels)
...
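
The changed lines compute the standard valid-mode convolution output length, `floor((input_len - filter_len) / stride) + 1`, along each of the three axes (here `dr`, `dc`, `dt` are the row, column, and time strides). Switching `/` to `//` keeps the result an integer division, consistent with the stricter `/` semantics introduced elsewhere in this commit. A sketch with a hypothetical helper name:

```python
def conv_output_len(input_len, filter_len, stride):
    """Valid-mode convolution output length along one axis."""
    return (input_len - filter_len) // stride + 1

print(conv_output_len(10, 3, 2))  # 4
print(conv_output_len(7, 3, 1))   # 5
```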
theano/tensor/nnet/conv.py
View file @ c326cc22
...
@@ -575,14 +575,15 @@ class ConvOp(Op):
             try:
                 fmshp = ConvOp.getOutputShape(imshp[1:], kshp, (self.dx, self.dy), self.out_mode)
             except TypeError:
-                raise NotImplementedError()
+                raise theano.tensor.ShapeError()
             outshp = (batch_size, fmo) + tuple(fmshp)
             return [outshp]
         else:
             # Haven't implemented this case. imshp and kshp may be symbollic
             # and ConvOp.getOutputShape doesn't handle this. In this case
             # we simply let the default function do its work.
-            raise NotImplementedError()
+            raise theano.tensor.ShapeError()

     def perform(self, node, inp, out):
         """
...
theano/tensor/nnet/tests/test_nnet.py
View file @ c326cc22
...
@@ -879,6 +879,7 @@ def test_argmax_pushdown():
             [x],
             [out])
+    config.warn.argmax_pushdown_bug = False
     theano.compile.mode.optdb.query(
             theano.compile.mode.OPT_FAST_RUN).optimize(env)
...
@@ -922,6 +923,7 @@ def test_argmax_pushdown_bias():
             [x, b],
             [out])
+    config.warn.argmax_pushdown_bug = False
     theano.compile.mode.optdb.query(
             theano.compile.mode.OPT_FAST_RUN).optimize(env)
...
theano/tensor/opt.py
View file @ c326cc22
...
@@ -27,11 +27,12 @@ from theano import compile #to register the optimizer built by this file
 from theano.gof.python25 import any, all
 from theano.gof.opt import Optimizer, pre_constant_merge, pre_greedy_local_optimizer
 from theano.gof import toolbox, DestroyHandler
-from basic import get_constant_value
+from basic import get_constant_value, ShapeError

 # Utilities

 def out2in(*local_opts):
     """WRITEME """
     return opt.TopoOptimizer(opt.LocalOptGroup(*local_opts),
...
@@ -528,7 +529,7 @@ class ShapeFeature(object):
     the cost of many Ops accurately, and generate c-code that is specific [e.g. unrolled] to
     particular sizes.

-    If you can determine the shape only in some case, return NotImplementedError when you can't
+    In cases where you cannot figure out the shape, raise a ShapeError.

     .. note::
...
@@ -714,13 +715,22 @@ class ShapeFeature(object):
         try:
             o_shapes = shape_infer(node, [self.shape_of[r] for r in node.inputs])
-        except NotImplementedError:
+        except ShapeError:
             o_shapes = self.default_infer_shape(node, [self.shape_of[r] for r in node.inputs])
+        except NotImplementedError, e:
+            raise NotImplementedError(
+                    'Code called by infer_shape failed raising a '
+                    'NotImplementedError. Raising NotImplementedError to '
+                    'indicate that a shape cannot be computed is no longer '
+                    'supported, and one should now use tensor.ShapeError '
+                    'instead. The original exception message is: %s' % e)
         except Exception, e:
             _logger.error('Failed to infer_shape from Op %s (i_shapes=%s): %s %s' % (node.op,
                 [self.shape_of[r] for r in node.inputs],
                 type(e), str(e)))
-            o_shapes = self.default_infer_shape(node, [self.shape_of[r] for r in node.inputs])
+            # We raise the exception to make sure the user knows something bad
+            # is going on.
+            raise

         # this is packed information
         # an element of o_shapes is either None or a tuple
...
@@ -3410,11 +3420,12 @@ def local_elemwise_fusion_op(OP, max_input_fct=lambda node: 1024):
     """
     def local_fuse(node):
         """
-        As part of specialisation, we fuse two consecutive elemwise op of the same shape.
-
-        For mixed dtype, we let the Compise op do the cast. It let the C compile do the cast.
-        The number of dimension is validated at call time by theano itself.
+        As part of specialization, we fuse two consecutive elemwise Ops of the
+        same shape.
+
+        For mixed dtype, we let the Composite op do the cast. It lets the C
+        compiler do the cast.
+        The number of dimensions is validated at call time by theano itself.

         """
         # META TODO: PUT THESE THINGS IN TRAC, NOT TODO NOTES!!
         # TODO: use broadcast flag?
...
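
The new exception dispatch in `ShapeFeature` has three tiers: a `ShapeError` means "shape unknown, fall back to defaults", a `NotImplementedError` is now a migration error for Ops still using the old convention, and anything else is re-raised. A minimal standalone sketch, with a local `ShapeError` class standing in for `theano.tensor.ShapeError`:

```python
class ShapeError(Exception):
    """Stand-in for theano.tensor.ShapeError."""


def get_shapes(shape_infer, default_infer_shape):
    """Mimic the dispatch added to ShapeFeature.on_import."""
    try:
        return shape_infer()
    except ShapeError:
        # The Op cannot determine its shape: use the default inference.
        return default_infer_shape()
    except NotImplementedError as e:
        # Old convention: turn it into a loud migration error.
        raise NotImplementedError(
            'Raising NotImplementedError to indicate that a shape cannot '
            'be computed is no longer supported; raise tensor.ShapeError '
            'instead. Original message: %s' % e)


def unknown_shape():
    raise ShapeError()


print(get_shapes(unknown_shape, lambda: [(None, None)]))  # [(None, None)]
```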
theano/tensor/tests/test_basic.py
View file @ c326cc22
Diff collapsed. Click to expand.
theano/tensor/tests/test_incsubtensor.py
View file @ c326cc22
...
@@ -30,9 +30,11 @@ class Test_incsubtensor(unittest.TestCase):
         for do_set in [False, True]:
             if do_set:
-                resut = T.setsubtensor(a, increment, [sl1, sl2])
+                resut = T.setsubtensor(a, increment, [sl1, sl2],
+                        show_warning=False)
             else:
-                resut = T.incsubtensor(a, increment, [sl1, sl2])
+                resut = T.incsubtensor(a, increment, [sl1, sl2],
+                        show_warning=False)
             f = theano.function([a, increment, sl2_end], resut)
...
@@ -59,7 +61,7 @@ class Test_incsubtensor(unittest.TestCase):
     def inc_slice(*s):
         def just_numeric_args(a, b):
-            return T.incsubtensor(a, b, s)
+            return T.incsubtensor(a, b, s, show_warning=False)
         return just_numeric_args

     # vector
...
theano/tensor/tests/test_opt.py
View file @ c326cc22
...
@@ -647,10 +647,14 @@ def test_local_merge_abs():
 def test_mixeddiv():
-    """Test that int division is preserved"""
+    """Test that int division raises an exception."""
     i = iscalar()
     d = dscalar()
-    assert 0 == function([i, d], d * (i / (i + 1)))(3, 1.0)
+    try:
+        0 == function([i, d], d * (i / (i + 1)))(3, 1.0)
+        assert False
+    except theano.scalar.IntegerDivisionError:
+        pass

 def test_const_type_in_mul_canonizer():
     input = dmatrix()
...
@@ -2487,6 +2491,7 @@ class T_local_sum(unittest.TestCase):
         assert numpy.allclose(f(input), input.sum())

+        config.warn.sum_sum_bug = False
         f = theano.function([a], a.sum(0).sum(0).sum(0), mode=self.mode)
         assert len(f.maker.env.nodes) == 1
         assert numpy.allclose(f(input), input.sum())
...
@@ -2496,6 +2501,7 @@ class T_local_sum(unittest.TestCase):
         input = numpy.arange(3*3*3, dtype=config.floatX).reshape(3, 3, 3)
         dims = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
+        config.warn.sum_sum_bug = False
         for d, dd in dims:
             f = theano.function([a], a.sum(d).sum(dd), mode=self.mode)
             assert numpy.allclose(f(input), input.sum(d).sum(dd))
...
@@ -2541,6 +2547,7 @@ class T_local_sum(unittest.TestCase):
         assert len(f.maker.env.nodes) == nb_nodes[2]
         assert f.maker.env.toposort()[-1].op == T.alloc
+        config.warn.sum_sum_bug = False
         for d, dd in [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]:
             f = theano.function([a], t_like(a).sum(d).sum(dd), mode=mode)
             print f.maker.env.toposort()
...
@@ -2600,6 +2607,8 @@ class T_local_sum_dimshuffle(unittest.TestCase):
         c_val = rng.randn(2, 2, 2).astype(config.floatX)
         d_val = numpy.asarray(rng.randn(), config.floatX)

+        config.warn.sum_sum_bug = False
+        config.warn.sum_div_dimshuffle_bug = False
         for i, s in enumerate(sums):
             print i
             f = theano.function([a, b, c, d], s, mode=self.mode)
...
theano/tests/test_tutorial.py
View file @ c326cc22
 """ test code snippet in the Theano tutorials.
 """
-import unittest
+import os, unittest
 import theano
 import theano.tensor as T
 from theano import function
...
@@ -722,6 +722,15 @@ class T_loading_and_saving(unittest.TestCase):
             mode_instance = theano.compile.mode.get_mode(None)
             if not isinstance(mode_instance, theano.compile.debugmode.DebugMode):
+                if os.path.exists('obj.save') or os.path.exists('objects.save'):
+                    # We do not want to delete these files silently, in case for
+                    # some reason they would be something else than test-generated
+                    # files.
+                    # Ideally we would save those files in a temporary directory...
+                    raise AssertionError('Please get rid of files obj.save and '
+                            'objects.save in directory %s' % os.getcwd())
+
                 f = file('obj.save', 'wb')
                 cPickle.dump(my_obj, f, protocol=cPickle.HIGHEST_PROTOCOL)
                 f.close()
...
@@ -746,6 +755,9 @@ class T_loading_and_saving(unittest.TestCase):
                     loaded_objects.append(cPickle.load(f))
                 f.close()

+                # Cleanup created files.
+                os.remove('obj.save')
+                os.remove('objects.save')

 class T_modes(unittest.TestCase):
     ## All tests here belog to
...
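
The comment in the diff notes that a temporary directory would be cleaner than guarding and removing `obj.save` in the current directory. A sketch of that approach, using only the standard library (the pickled dict is a hypothetical payload):

```python
import os
import pickle
import tempfile

# Pickling into a path under a fresh temporary directory cannot clobber a
# user's own 'obj.save', and cleanup is a simple remove.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'obj.save')

with open(path, 'wb') as f:
    pickle.dump({'W': [1.0, 2.0]}, f, protocol=pickle.HIGHEST_PROTOCOL)

with open(path, 'rb') as f:
    obj = pickle.load(f)

print(obj)  # {'W': [1.0, 2.0]}

# Cleanup created files and the directory itself.
os.remove(path)
os.rmdir(tmpdir)
```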