testgroup / pytensor · Commits

Commit 979a35e7
Authored Nov 16, 2010 by David Warde-Farley

Remove the updating notice as it gets annoying with multiple pulls.

Parents: c841dd22, c05fb70a

Showing 11 changed files with 277 additions and 41 deletions
doc/tutorial/aliasing.txt                    +28  −8
doc/tutorial/index.txt                        +1  −0
theano/compile/function_module.py            +14  −8
theano/compile/io.py                          +0  −0
theano/compile/tests/test_pfunc.py          +114  −5
theano/misc/hg_version_hook.sh                +0  −1
theano/sandbox/cuda/tests/test_basic_ops.py   +1  −1
theano/sandbox/cuda/var.py                    +4  −2
theano/sparse/basic.py                        +7  −1
theano/tensor/basic.py                        +4  −0
theano/tensor/tests/test_basic.py           +104 −15
doc/tutorial/aliasing.txt

 .. _basictutaliasing:

-===============
-Memory Aliasing
-===============
+========================================
+Understanding Memory Aliasing for Speed and Correctness
+========================================

 The aggressive reuse of memory is one of the ways Theano makes code fast, and
 it's important for the correctness and speed of your program that you understand
...
@@ -174,6 +174,26 @@ This pattern works regardless of the compute device, and when the compute device
 makes it possible to expose Theano's internal variables without a copy, then it
 goes as fast as an in-place update.
+Retrieving and assigning via the .value property
+------------------------------------------------
+
+Shared variables have a ``.value`` property that is connected to ``get_value``
+and ``set_value``. The borrowing behaviour of the property is controlled by a
+boolean configuration variable ``config.shared.value_borrows``, which currently
+defaults to ``True``. If that variable is ``True`` then an assignment like ``s.value=v``
+is equivalent to ``s.set_value(v, borrow=True)``, and a retrieval like ``print
+s.value`` is equivalent to ``print s.get_value(borrow=True)``. Likewise,
+if ``config.shared.value_borrows`` is ``False``, then the borrow parameter that the ``.value`` property
+passes to ``set_value`` and ``get_value`` is ``False``.
+
+The ``True`` default value of ``config.shared.value_borrows`` means that
+aliasing can sometimes happen and sometimes not, which can be confusing.
+Be aware that the default value may be changed to ``False`` sometime in the
+not-too-distant future. This change will create more copies, and potentially slow
+down code that accesses ``.value`` attributes inside tight loops. To avoid this
+potential impact on your code, use the ``get_value`` and ``set_value`` methods
+directly with appropriate flags.
 Borrowing when constructing Function objects
 ============================================
...
@@ -207,7 +227,11 @@ The default is of course to *not borrow* internal results.
 It is also possible to pass an ``return_internal_type=True`` flag to the ``Out``
 variable which has the same interpretation as the ``return_internal_type`` flag
-to the shared variable's ``get_value`` function.
+to the shared variable's ``get_value`` function. Unlike ``get_value()``, the
+combination of ``return_internal_type=True`` and ``borrow=True`` arguments to
+``Out()`` are not guaranteed to avoid copying an output value. They are just
+hints that give more flexibility to the compilation and optimization of the
+graph.

 *Take home message:*

 When an input ``x`` to a function is not needed after the function returns and you
...
@@ -218,7 +242,3 @@ When a return value ``y`` is large (in terms of memory footprint), and you only
 away when it's returned, then consider marking it with an ``Out(y,
 borrow=True)``.
-
-Shared variable .value attribute
-================================
-TODO: talk about sharedvar.value and the associated config variable.
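The borrow semantics documented in the new ``.value`` section can be sketched outside Theano with plain numpy. This is a minimal illustrative model, not Theano's actual implementation; the class and attribute names are invented for the example:

```python
import numpy as np

class SharedValue:
    """Toy model of a shared variable's borrow semantics (illustrative only)."""
    def __init__(self, value):
        self._container = np.array(value)  # internal storage

    def get_value(self, borrow=False):
        # borrow=True may expose the internal buffer; borrow=False copies.
        return self._container if borrow else self._container.copy()

    def set_value(self, value, borrow=False):
        # borrow=True may adopt the caller's buffer; borrow=False copies it.
        self._container = np.asarray(value) if borrow else np.array(value, copy=True)

s = SharedValue([1.0, 2.0, 3.0])
alias = s.get_value(borrow=True)    # aliases internal storage
copy = s.get_value(borrow=False)    # independent copy
assert np.may_share_memory(alias, s._container)
assert not np.may_share_memory(copy, s._container)
```

Mutating ``alias`` in this model changes the stored value, while mutating ``copy`` does not, which is exactly the trade-off the borrow flags control.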
doc/tutorial/index.txt

...
@@ -29,6 +29,7 @@ you out.
 loading_and_saving
 symbolic_graphs
 modes
+aliasing
 using_gpu
 remarks
 debug_faq
...
theano/compile/function_module.py

...
@@ -5,6 +5,7 @@ __docformat__ = "restructuredtext en"
 import copy_reg
 import cPickle
+import itertools
 import sys, time, copy
...
@@ -537,19 +538,24 @@ class Function(object):
         ## Collect aliased inputs among the storage space
         args_share_memory = []
         for i in xrange(len(self.input_storage)):
-            if isinstance(self.input_storage[i].storage[0], numpy.ndarray):
+            i_var = self.maker.inputs[i].variable
+            i_val = self.input_storage[i].storage[0]
+            if hasattr(i_var.type, 'may_share_memory'):
                 is_aliased = False
                 for j in xrange(len(args_share_memory)):
-                    for k in args_share_memory[j]:
-                        if numpy.may_share_memory(self.input_storage[i].storage[0],
-                                                  self.input_storage[k].storage[0]):
+                    group_j = itertools.izip(
+                        [self.maker.inputs[k].variable for k in args_share_memory[j]],
+                        [self.input_storage[k].storage[0] for k in args_share_memory[j]])
+                    if numpy.any([(var.type is i_var.type and
+                                   var.type.may_share_memory(val, i_val))
+                                  for (var, val) in group_j]):
                         is_aliased = True
                         args_share_memory[j].append(i)
                         break
+                    if is_aliased:
+                        break
                 if not is_aliased:
                     args_share_memory.append([i])
...
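The grouping that the hunk above performs can be shown in isolation: each input joins the first existing group containing any buffer it may share memory with, otherwise it starts a new group. This sketch uses plain numpy buffers and invented names rather than Theano's storage objects:

```python
import numpy as np

def group_aliased(buffers):
    """Partition buffer indices into groups of possibly-aliased buffers
    (illustrative sketch of the aliased-input collection above)."""
    groups = []  # each group is a list of indices into `buffers`
    for i, buf in enumerate(buffers):
        for group in groups:
            if any(np.may_share_memory(buf, buffers[k]) for k in group):
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

v = np.arange(5, dtype='float64')
# v[:2] and v[1:3] overlap on element 1; the zeros array is independent.
groups = group_aliased([v[:2], v[1:3], np.zeros(3)])
assert groups == [[0, 1], [2]]
```

Note that this, like the real code, is transitive only through group membership: an input is compared against every member of a group before joining it.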
theano/compile/io.py
theano/compile/tests/test_pfunc.py

...
@@ -517,7 +517,57 @@ class Test_aliasing_rules(unittest.TestCase):
         assert not numpy.may_share_memory(A.get_value(borrow=False), data_of(A))

-    def test_potential_input_aliasing_affecting_inplace_operations(self):
+    def test_sparse_input_aliasing_affecting_inplace_operations(self):
+        ##
+        ## Note this test will never fail because I am not aware of any
+        ## inplace op on sparse variables
+        try:
+            import scipy.sparse as sp
+        except ImportError:
+            pass
+        #the variable enable_sparse will be used to disable the test file.
+        from theano.sparse import enable_sparse
+        if enable_sparse == False:
+            raise SkipTest('Optional package sparse disabled')
+        from theano import sparse
+
+        ## Note: to trigger this bug with theano rev 4586:2bc6fc7f218b,
+        # you need to make in inputs mutable ( so that inplace
+        # operations are used) and to break the elemwise composition
+        # with some non-elemwise op ( here dot )
+        x = sparse.SparseType('csc', dtype='float64')()
+        y = sparse.SparseType('csc', dtype='float64')()
+        f = theano.function([theano.In(x, mutable=True),
+                             theano.In(y, mutable=True)],
+                            (x + y) + (x + y))
+        ## Test 1. If the same variable is given twice
+        # Compute bogus values
+        m = sp.csc_matrix(numpy.asarray([[1, 0, 0, 0, 0],
+                                         [0, 1, 0, 0, 0],
+                                         [0, 0, 1, 0, 0],
+                                         [0, 0, 0, 1, 0],
+                                         [0, 0, 0, 0, 1]], dtype='float64'))
+        bogus_vals = f(m, m)
+        # Since we used inplace operation v and m may be corrupted
+        # so we need to recreate them
+        m = sp.csc_matrix(numpy.asarray([[1, 0, 0, 0, 0],
+                                         [0, 1, 0, 0, 0],
+                                         [0, 0, 1, 0, 0],
+                                         [0, 0, 0, 1, 0],
+                                         [0, 0, 0, 0, 1]], dtype='float64'))
+        m_copy = m.copy()
+        vals = f(m, m_copy)
+        assert numpy.allclose(vals.todense(), bogus_vals.todense())
+
+    def test_input_aliasing_affecting_inplace_operations(self):
         ## Note: to trigger this bug with theano rev 4586:2bc6fc7f218b,
         # you need to make in inputs mutable ( so that inplace
...
@@ -532,20 +582,79 @@ class Test_aliasing_rules(unittest.TestCase):
                              theano.In(m1, mutable=True),
                              theano.In(m2, mutable=True)],
                            theano.dot(x*2, m1) + theano.dot(y*3, m2))
+        ## Test 1. If the same variable is given twice
         # Compute bogus values
-        v = numpy.asarray([1, 2], dtype='float64')
-        m = numpy.asarray([[1, 0], [0, 1]], dtype='float64')
+        v = numpy.asarray([1, 2, 3, 4, 5], dtype='float64')
+        m = numpy.asarray([[1, 0, 0, 0, 0],
+                           [0, 1, 0, 0, 0],
+                           [0, 0, 1, 0, 0],
+                           [0, 0, 0, 1, 0],
+                           [0, 0, 0, 0, 1]], dtype='float64')
         bogus_vals = f(v, v, m, m)
         # Since we used inplace operation v and m may be corrupted
         # so we need to recreate them
-        m = numpy.asarray([[1, 0], [0, 1]], dtype='float64')
-        v = numpy.asarray([1, 2], dtype='float64')
+        v = numpy.asarray([1, 2, 3, 4, 5], dtype='float64')
+        m = numpy.asarray([[1, 0, 0, 0, 0],
+                           [0, 1, 0, 0, 0],
+                           [0, 0, 1, 0, 0],
+                           [0, 0, 0, 1, 0],
+                           [0, 0, 0, 0, 1]], dtype='float64')
         m_copy = m.copy()
         v_copy = v.copy()
         vals = f(v, v_copy, m, m_copy)
         assert numpy.allclose(vals, bogus_vals)

+    def test_partial_input_aliasing_affecting_inplace_operations(self):
+        ## Note: to trigger this bug with theano rev 4586:2bc6fc7f218b,
+        # you need to make in inputs mutable ( so that inplace
+        # operations are used) and to break the elemwise composition
+        # with some non-elemwise op ( here dot )
+        x = theano.tensor.dvector()
+        y = theano.tensor.dvector()
+        z = theano.tensor.dvector()
+        m1 = theano.tensor.dmatrix()
+        m2 = theano.tensor.dmatrix()
+        m3 = theano.tensor.dmatrix()
+
+        ## Test 2. If variables only partial overlap
+        # more exactly we care about the case when we have a,b,c
+        # and a shares memory with b, b shares memory with c, but
+        # c does not share memory with a
+        f = theano.function([theano.In(x, mutable=True),
+                             theano.In(y, mutable=True),
+                             theano.In(z, mutable=True),
+                             theano.In(m1, mutable=True),
+                             theano.In(m2, mutable=True),
+                             theano.In(m3, mutable=True)],
+                            (theano.dot(x*2, m1)
+                             + theano.dot(y*3, m2)
+                             + theano.dot(z*4, m3)))
+
+        # Compute bogus values
+        v = numpy.asarray([1, 2, 3, 4, 5], dtype='float64')
+        m = numpy.asarray([[1, 0], [0, 1]], dtype='float64')
+        bogus_vals = f(v[:2], v[1:3], v[2:4], m, m, m)
+        # Since we used inplace operation v and m may be corrupted
+        # so we need to recreate them
+        v = numpy.asarray([1, 2, 3, 4, 5], dtype='float64')
+        m = numpy.asarray([[1, 0], [0, 1]], dtype='float64')
+        m_copy1 = m.copy()
+        v_copy1 = v.copy()
+        m_copy2 = m.copy()
+        v_copy2 = v.copy()
+        vals = f(v[:2], v_copy1[1:3], v_copy2[2:4], m, m_copy1, m_copy2)
+        assert numpy.allclose(vals, bogus_vals)
+
     def test_potential_output_aliasing_induced_by_updates(self):
         A = self.shared(numpy.zeros((2, 2)))
...
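The partial-overlap case that the new test targets can be demonstrated directly with numpy slices: ``a`` overlaps ``b``, ``b`` overlaps ``c``, but ``a`` and ``c`` are disjoint, so pairwise aliasing is not transitive:

```python
import numpy as np

v = np.arange(5, dtype='float64')
a, b, c = v[:2], v[1:3], v[2:4]

assert np.may_share_memory(a, b)      # elements 0-1 vs 1-2: overlap
assert np.may_share_memory(b, c)      # elements 1-2 vs 2-3: overlap
assert not np.may_share_memory(a, c)  # elements 0-1 vs 2-3: disjoint
```

This is why the grouping in `function_module.py` must compare a new input against every member of a group, not only the first one.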
theano/misc/hg_version_hook.sh

...
@@ -2,6 +2,5 @@
 # Script to update version.py in response to Mercurial hooks. This should
 # not appear in a release tarball.
-echo "Updating version.py..."
 sed -e "s/^hg_revision.*/hg_revision = '`expr substr $HG_NODE 1 12`'/" theano/version.py > theano/version.py.out && mv theano/version.py.out theano/version.py
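The `expr substr` backtick in the retained `sed` line truncates the Mercurial changeset id to its short 12-character form. A standalone sketch of just that truncation (the `HG_NODE` value here is a made-up example; `expr substr` is the GNU `expr` extension the hook relies on):

```shell
# Keep the first 12 characters of a (hypothetical) Mercurial changeset id,
# as the version hook does before writing it into theano/version.py.
HG_NODE="2bc6fc7f218b4f4ad2ffd062169a3b1a476bbd92"
short=`expr substr "$HG_NODE" 1 12`
echo "$short"   # 2bc6fc7f218b
```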
theano/sandbox/cuda/tests/test_basic_ops.py

...
@@ -803,7 +803,7 @@ def test_duplicate_arg_elemwise():
 import theano.tensor.tests.test_basic
-test_shared_options = theano.tensor.tests.test_basic.build_test_shared_options(tcn.shared_constructor, 'float32', False, False)
+test_shared_options = theano.tensor.tests.test_basic.makeSharedTester(tcn.shared_constructor, 'float32', False, False, False, cuda_ndarray.CudaNdarray, theano.tensor.exp, numpy.exp)

 if __name__ == '__main__':
     test_many_arg_elemwise()
...
theano/sandbox/cuda/var.py

+import copy
 import numpy

 import theano
-from theano import Op, Type, Apply, Variable, Constant
+from theano import Variable, Constant
 from theano import tensor
-from theano.compile import shared, SharedVariable
+from theano.compile import SharedVariable

 from theano.sandbox.cuda.type import CudaNdarrayType
 from theano.sandbox.cuda import filter as type_support_filter
...
theano/sparse/basic.py

...
@@ -99,6 +99,7 @@ def as_sparse_variable(x, name=None):
     except TypeError:
         raise TypeError("Cannot convert %s to SparseType" % x, type(x))
 as_sparse = as_sparse_variable
+
 def constant(x, name=None):
...
@@ -147,7 +148,6 @@ class SparseType(gof.Type):
         @param format: The sparse storage strategy.
         @return An empty SparseVariable instance.
         """
         dtype = str(dtype)
         if dtype in self.dtype_set:
             self.dtype = dtype
...
@@ -174,6 +174,12 @@ class SparseType(gof.Type):
             raise NotImplementedError()
         return sp

+    @staticmethod
+    def may_share_memory(a, b):
+        # This is Fred suggestion for a quick and dirty way of checking
+        # aliasing .. this can potentially be further refined (ticket #374)
+        return a is b
+
     def make_variable(self, name=None):
         return SparseVariable(self, name=name)
...
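The quick-and-dirty sparse check added above is a pure identity test: two sparse matrices count as aliased only when they are the very same Python object, so distinct objects wrapping the same data arrays are missed. A plain-Python sketch of that rule (the function name is illustrative):

```python
def sparse_may_share_memory(a, b):
    """Identity-based aliasing check, as in the SparseType hunk above.
    Conservative in the wrong direction: distinct wrappers around shared
    data arrays are reported as not aliased."""
    return a is b

m = object()
assert sparse_may_share_memory(m, m)           # same object: aliased
assert not sparse_may_share_memory(m, object())  # different objects: not
```

The in-code comment already flags this as a stopgap (ticket #374); the TensorType version in `theano/tensor/basic.py` below does a real buffer-bounds check instead.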
theano/tensor/basic.py

...
@@ -472,6 +472,10 @@ class TensorType(Type):
         return type(self) == type(other) and other.dtype == self.dtype \
             and other.broadcastable == self.broadcastable

+    @staticmethod
+    def may_share_memory(a, b):
+        return numpy.may_share_memory(a, b)
+
     @staticmethod
     def values_eq(a, b):
         #TODO: check to see if the dtype and shapes must match
...
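`TensorType.may_share_memory` simply delegates to `numpy.may_share_memory`, which is a fast, conservative bounds check: it may report `True` for buffers whose address ranges overlap without any element actually being shared, but it never reports `False` for buffers that do overlap. For example:

```python
import numpy as np

base = np.zeros((4, 4))

# Overlapping views of the same buffer: rows 0-1 vs rows 1-3.
assert np.may_share_memory(base[:2], base[1:])

# A copy lives in a fresh buffer and can never alias the original.
assert not np.may_share_memory(base, base.copy())
```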
theano/tensor/tests/test_basic.py

...
@@ -3378,10 +3378,14 @@ def test_dimshuffle_duplicate():
     assert success

-def build_test_shared_options(shared_constructor_,
+def makeSharedTester(shared_constructor_,
                      dtype_,
                      get_value_borrow_true_alias_,
-                     shared_borrow_true_alias_):
+                     shared_borrow_true_alias_,
+                     set_value_borrow_true_alias_,
+                     internal_type_,
+                     theano_fct_,
+                     ref_fct_):
    """
    This is a generic fct to allow reusing the same test function
    for many shared variable of many types.
...
@@ -3391,6 +3395,10 @@ def build_test_shared_options(shared_constructor_,
        dtype = dtype_
        get_value_borrow_true_alias = get_value_borrow_true_alias_
        shared_borrow_true_alias = shared_borrow_true_alias_
+       internal_type = internal_type_
+       theano_fct = staticmethod(theano_fct_)
+       ref_fct = staticmethod(ref_fct_)
+       set_value_borrow_true_alias = set_value_borrow_true_alias_

        def test_shared_dont_alias(self):
            dtype = self.dtype
...
@@ -3399,22 +3407,22 @@ def build_test_shared_options(shared_constructor_,
            rng = numpy.random.RandomState([3, 5, 17])
            x = numpy.asarray(rng.uniform(0, 1, [2, 4]), dtype=dtype)
-           x_sum = x.sum()
+           x_ref = self.ref_fct(x)
            x_shared = self.shared_constructor(x, borrow=False)
-           total = theano.tensor.sum(x_shared)
+           total = self.theano_fct(x_shared)
            total_func = theano.function([], total)
            total_val = total_func()
-           assert numpy.allclose(x.sum(), total_val)
+           assert numpy.allclose(self.ref_fct(x), total_val)
            x += 1
            total_val_2 = total_func()
            #value used to construct should not alias with internal
-           assert total_val == total_val_2
+           assert numpy.allclose(total_val, total_val_2)
            x = x_shared.get_value(borrow=False)
...
@@ -3423,7 +3431,7 @@ def build_test_shared_options(shared_constructor_,
            total_val_3 = total_func()
            #value returned by access should not alias with internal
-           assert total_val == total_val_3
+           assert numpy.allclose(total_val, total_val_3)
            #in this case we can alias
            x = x_shared.get_value(borrow=True)
...
@@ -3432,10 +3440,89 @@ def build_test_shared_options(shared_constructor_,
            #this is not required by the contract but it is a feature we've
            #implemented for some type of SharedVariable.
            if self.get_value_borrow_true_alias:
-               assert numpy.allclose(x.sum(), total_func())
+               assert numpy.allclose(self.ref_fct(x), total_func())
            else:
-               assert numpy.allclose(x_sum, total_func())
+               assert numpy.allclose(x_ref, total_func())
+
+       def test_return_internal_type(self):
+           dtype = self.dtype
+           if dtype is None:
+               dtype = theano.config.floatX
+
+           rng = numpy.random.RandomState([3, 5, 17])
+           x = numpy.asarray(rng.uniform(0, 1, [2, 4]), dtype=dtype)
+           x_ref = self.ref_fct(x)
+           x_shared = self.shared_constructor(x, borrow=False)
+           total = self.theano_fct(x_shared)
+           total_func = theano.function([], total)
+
+           #in this case we can alias with the internal value
+           x = x_shared.get_value(borrow=True, return_internal_type=True)
+           assert isinstance(x, self.internal_type)
+           values_to_add = numpy.ones(x.shape, dtype=dtype)
+           if not isinstance(values_to_add, self.internal_type):
+               #supported for cudandarray, but not ndarray.
+               values_to_add = self.internal_type(values_to_add)
+           x += values_to_add  #supported by ndarray and CudaNdarray
+           #this is not required by the contract but it is a feature we can
+           #implement for some type of SharedVariable.
+           assert numpy.allclose(self.ref_fct(x), total_func())
+
+           x = x_shared.get_value(borrow=False, return_internal_type=True)
+           assert isinstance(x, self.internal_type)
+           x += values_to_add  #supported by ndarray and CudaNdarray
+           #this is required by the contract
+           assert not numpy.allclose(self.ref_fct(x), total_func())
+
+       def test_set_value(self):
+           dtype = self.dtype
+           if dtype is None:
+               dtype = theano.config.floatX
+
+           rng = numpy.random.RandomState([3, 5, 17])
+           x = numpy.asarray(rng.uniform(0, 1, [2, 4]), dtype=dtype)
+           x_orig = x
+           x_orig_copy = x.copy()
+           x_ref = self.ref_fct(x)
+           x_shared = self.shared_constructor(x, borrow=False)
+           total = self.theano_fct(x_shared)
+           total_func = theano.function([], total)
+
+           #test if that theano shared variable optimize set_value(borrow=True)
+           get_x = x_shared.get_value(borrow=True)
+           assert get_x is not x_orig  #borrow=False to shared_constructor
+           get_x += 1
+           x_shared.set_value(get_x, borrow=True)
+           x = x_shared.get_value(borrow=True)
+           if self.set_value_borrow_true_alias:
+               assert x is get_x
+           else:
+               assert x is not get_x
+           assert numpy.allclose(self.ref_fct(x_orig+1), self.ref_fct(x))
+
+           #test optimized get set value on the gpu(don't pass data to the cpu)
+           get_x = x_shared.get_value(borrow=True, return_internal_type=True)
+           assert get_x is not x_orig  #borrow=False to shared_constructor
+           assert isinstance(get_x, self.internal_type)
+
+           values_to_add = numpy.ones(x.shape, dtype=dtype)
+           if not isinstance(values_to_add, self.internal_type):
+               #supported for cudandarray, but not ndarray.
+               values_to_add = self.internal_type(values_to_add)
+               assert isinstance(values_to_add, self.internal_type)
+
+           get_x += values_to_add  #supported by ndarray and CudaNdarray
+           assert isinstance(get_x, self.internal_type)
+           x_shared.set_value(get_x, borrow=True)
+           x = x_shared.get_value(borrow=True, return_internal_type=True)
+           assert isinstance(x, self.internal_type)
+           assert x is get_x
+
+           ################ TODO test Out.
+
        def test_shared_do_alias(self):
            dtype = self.dtype
            if dtype is None:
...
@@ -3443,29 +3530,31 @@ def build_test_shared_options(shared_constructor_,
            rng = numpy.random.RandomState([2, 4, 16])
            x = numpy.asarray(rng.uniform(1, 2, [4, 2]), dtype=dtype)
-           x_sum = x.sum()
+           x_ref = self.ref_fct(x)
            x_shared = self.shared_constructor(x, borrow=True)
-           total = theano.tensor.sum(x_shared)
+           total = self.theano_fct(x_shared)
            total_func = theano.function([], total)
            total_val = total_func()
-           assert numpy.allclose(x.sum(), total_val)
+           assert numpy.allclose(self.ref_fct(x), total_val)
            x += 1
            #not required by the contract but it is a feature we've implemented
            if self.shared_borrow_true_alias:
-               assert numpy.allclose(x.sum(), total_func())
+               assert numpy.allclose(self.ref_fct(x), total_func())
            else:
-               assert numpy.allclose(x_sum, total_func())
+               assert numpy.allclose(x_ref, total_func())
    return SharedTester

-test_shared_options = build_test_shared_options(tensor.shared, 'float64', True, True)
+test_shared_options = makeSharedTester(tensor.shared, 'float64', True, True, True,
+                                       numpy.ndarray, theano.tensor.sum, numpy.sum)

 if __name__ == '__main__':
...
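The `makeSharedTester` rename reflects what the function is: a factory that builds one test class per shared-variable backend, parameterized by constructor, internal type, and reference functions. A minimal, self-contained sketch of that pattern (all names here are invented, not Theano's):

```python
def make_tester(constructor, expect_alias):
    """Build a test class parameterized by a backend constructor and by
    whether that constructor is expected to alias its input buffer."""
    class Tester:
        def test_alias(self):
            buf = [0]
            view = constructor(buf)
            # The expectation travels with the class, as the alias flags
            # do in makeSharedTester.
            assert (view is buf) == expect_alias
    return Tester

# Instantiate one tester per "backend", as the commit does for the CPU
# (numpy.ndarray) and CUDA (CudaNdarray) shared constructors.
IdentityTester = make_tester(lambda b: b, True)
CopyTester = make_tester(lambda b: list(b), False)
IdentityTester().test_alias()
CopyTester().test_alias()
```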