testgroup / pytensor / Commits

Commit 65d74e55, authored Dec 01, 2010 by Olivier Delalleau
Parents: 59c3a464, 3c51f279

    Merged (hopefully properly this time)

Showing 4 changed files with 91 additions and 18 deletions (+91 -18)
doc/install.txt                        +3  -4
theano/tensor/basic.py                 +32 -8
theano/tensor/tests/test_basic.py      +52 -2
theano/tensor/tests/test_sharedvar.py  +4  -4
doc/install.txt

@@ -395,7 +395,7 @@ Windows V1 (Installing from Scratch)
   You can keep the default install options (except for the installation directory).
 - Install Mercurial. You can download it
-  `here <http://mercurial.selenic.com/downloads>`_. You may get either the command
+  `here <http://mercurial.selenic.com/downloads>`__. You may get either the command
   line Windows version or the TortoiseHG GUI version: it does not matter as
   far as installing Theano is concerned.

@@ -451,7 +451,7 @@ compile GotoBLAS2 (ATLAS may work too, but was not tested, and is
 usually reported to be slower and more difficult to compile -- especially
 on Windows).
 GotoBLAS2 can be downloaded
-`here <http://www.tacc.utexas.edu/tacc-projects/gotoblas2/downloads>`_
+`here <http://www.tacc.utexas.edu/tacc-projects/gotoblas2/downloads>`__
 after registering on the website (we tested v1.13).
 To compile it, you will also need to install MSYS and Perl,
 as described below.

@@ -539,8 +539,7 @@ Windows: Using the GPU
 Please note that these are tentative instructions (we have not yet been able to
 get the GPU to work under Windows with Theano).
-Please report your own successes / failures on the
-`theano-users <http://groups.google.com/group/theano-users>`_ mailing list.
+Please report your own successes / failures on the `theano-users`_ mailing list.
 Those are instructions for the 32-bit version of Python (the one that comes
 with Python(x,y) is 32-bit).
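The `_` to `__` changes above appear to switch these reST links from named to anonymous hyperlink references. Two named references sharing the same link text (both say "here") define duplicate targets and trigger a docutils warning, while anonymous references are one-shot and can safely reuse the text. A sketch of the distinction (URLs are placeholders, not from the source):

```rst
.. A named reference: the text "here" becomes a reusable target,
   so a second "here" with a different URL would clash.
`here <http://example.com/a>`_

.. An anonymous reference: the double underscore keeps the link
   one-shot, so the same text can appear again without conflict.
`here <http://example.com/b>`__
```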
theano/tensor/basic.py

@@ -33,6 +33,9 @@ def _info(*msg):
 def _warn(*msg):
     _logger.warn(' '.join(msg))
 
+#This is needed as we will hide it later
+python_complex = complex
+
 def check_equal_numpy(x, y):
     """
     Returns True iff x and y are equal (checks the dtype and

@@ -388,6 +391,20 @@ def get_constant_value(v):
             ret = get_constant_value(ret)
             #join can cast implicitly its input in some case.
             return theano._asarray(ret, dtype=v.type.dtype)
+        if (v.owner.inputs[0].owner and
+            isinstance(v.owner.inputs[0].owner.op,
+                       theano.tensor.opt.MakeVector) and
+            # MakeVector normally accept only scalar as input.
+            # We put this check in case there is change in the future
+            all(var.ndim == 0 for var in v.owner.inputs[0].owner.inputs)):
+            # The index list 'idx_list' should have length one
+            # since joining scalar variables results in a 1D vector.
+            assert len(v.owner.op.idx_list) == 1
+            ret = v.owner.inputs[0].owner.inputs[v.owner.op.idx_list[0]]
+            ret = get_constant_value(ret)
+            #MakeVector can cast implicitly its input in some case.
+            return theano._asarray(ret, dtype=v.type.dtype)
+
     raise TypeError(v)

@@ -1505,7 +1522,7 @@ class SpecifyShape(Op):
     L{Op} put into the graph the user provided shape
 
     In the case where this op stay in the final graph, we assert the shape.
     For this the output of this op must be used in the graph. This is not
     the case most of the time if we only take the shape of the output.
 
     Maybe there is other optimization that will mess with this.

@@ -1524,12 +1541,12 @@ class SpecifyShape(Op):
         x = as_tensor_variable(x)
         shape = as_tensor_variable(shape)
         return Apply(self, [x, shape], [x.type()])
 
     def perform(self, node, (x, shape), (out,)):
         assert numpy.all(x.shape == shape), ("got shape", x.shape,
                                              "expected", shape)
         out[0] = x
 
     def infer_shape(self, node, (xshape, sshape)):
         new_shape = []
         for dim in range(node.inputs[0].ndim):

@@ -2276,7 +2293,7 @@ def std(input, axis=None):
     :type axis: None or int or (list of int) (see `Sum`)
     """
     return sqrt(var(input=input, axis=axis))
 
 if 0:
     ## COMMENTED OUT FEB 17 2010
     ## TODO (DOCUMENT AND WRITE TESTS) OR DELETE

@@ -3269,11 +3286,18 @@ def stack(*tensors):
         raise Exception('theano.tensor.stack(*tensors) must have at least one parameter')
     # If all tensors are scalars of the same type, call make_vector.
     # It makes the graph simpler, by not adding DimShuffles and Rebroadcasts
-    if numpy.all([isinstance(t, Variable) and \
-                  isinstance(t.type, TensorType) and \
-                  t.ndim == 0 and t.type == tensors[0].type \
-                  for t in tensors]):
-        return theano.tensor.opt.MakeVector(scal.upcast(*[i.dtype for i in tensors]))(*tensors)
+    if isinstance(tensors[0], (numpy.number, float, int, python_complex)):
+        tensors = list(tensors)
+        tensors[0] = as_tensor_variable(tensors[0])
+    if numpy.all([isinstance(t, (numpy.number, float, int, python_complex))
+                  #in case their is direct int
+                  or (isinstance(t, Variable) and
+                      isinstance(t.type, TensorType) and
+                      t.ndim == 0 and
+                      t.type.__class__ == tensors[0].type.__class__)
+                  for t in tensors]):
+        tensors = map(as_tensor_variable, tensors)
+        #in case their is direct int
+        dtype = scal.upcast(*[i.dtype for i in tensors])
+        return theano.tensor.opt.MakeVector(dtype)(*tensors)
     return join(0, *[shape_padleft(t, 1) for t in tensors])
 
 @constructor
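The new stack() branch above accepts raw Python/NumPy scalars, upcasts them to a common dtype with scal.upcast, and builds the result with MakeVector. A minimal NumPy-only sketch of that dtype logic (not Theano code; the assumption is that np.result_type matches scal.upcast for these simple integer cases):

```python
import numpy as np

def stack_scalars(*scalars):
    """Hypothetical stand-in for the scalar path of theano.tensor.stack:
    upcast all inputs to a common dtype and return a 1-D vector,
    mirroring MakeVector(dtype)(*tensors)."""
    # np.result_type plays the role of scal.upcast here.
    dtype = np.result_type(*scalars)
    return np.array(scalars, dtype=dtype)

# Mixed plain ints and NumPy scalars, as in the new code path.
v = stack_scalars(10, np.int32(1), np.int64(2), np.int8(3))
```

The int8 and int32 inputs are promoted to int64, which is why the new tests below expect an 'int64' output dtype from stack(10, a, b, numpy.int8(3)).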
theano/tensor/tests/test_basic.py

@@ -1552,6 +1552,36 @@ class T_Join_and_Split(unittest.TestCase):
         assert len([n for n in e if isinstance(n, Join)]) == 0
         assert f.maker.env.outputs[0].dtype == config.floatX
 
+    def test_stack_scalar_make_vector_dtype(self):
+        '''Test that calling stack() on scalars instantiates MakeVector,
+        event when the scalar don't have the same dtype.'''
+        a = tensor.iscalar('a')
+        b = tensor.lscalar('b')
+        s = stack(a, b, a, b)
+        f = function([a, b], s)
+        val = f(1, 2)
+        self.failUnless(numpy.all(val == [1, 2, 1, 2]))
+        e = f.maker.env.toposort()
+        assert len([n for n in e if isinstance(n.op, opt.MakeVector)]) > 0
+        assert len([n for n in e if isinstance(n, Join)]) == 0
+        assert f.maker.env.outputs[0].dtype == 'int64'
+
+    def test_stack_scalar_make_vector_constant(self):
+        '''Test that calling stack() on scalars instantiates MakeVector,
+        event when the scalar are simple int type.'''
+        a = tensor.iscalar('a')
+        b = tensor.lscalar('b')
+        #test when the constant is the first element.
+        #The first element is used in a special way
+        s = stack(10, a, b, numpy.int8(3))
+        f = function([a, b], s)
+        val = f(1, 2)
+        self.failUnless(numpy.all(val == [10, 1, 2, 3]))
+        e = f.maker.env.toposort()
+        assert len([n for n in e if isinstance(n.op, opt.MakeVector)]) > 0
+        assert len([n for n in e if isinstance(n, Join)]) == 0
+        assert f.maker.env.outputs[0].dtype == 'int64'
+
     def test_join_vector(self):
         a = as_tensor_variable(numpy.array([1, 2, 3]))
         b = as_tensor_variable(numpy.array([7, 8, 9]))

@@ -3440,6 +3470,28 @@ def test_dimshuffle_duplicate():
     assert success
 
+class T_get_constant_value(unittest.TestCase):
+    def test_get_constant_value(self):
+        a = tensor.stack(1, 2, 3)
+        assert get_constant_value(a[0]) == 1
+        assert get_constant_value(a[1]) == 2
+        assert get_constant_value(a[2]) == 3
+
+        b = tensor.iscalar()
+        a = tensor.stack(b, 2, 3)
+        self.assertRaises(TypeError, get_constant_value, a[0])
+        assert get_constant_value(a[1]) == 2
+        assert get_constant_value(a[2]) == 3
+
+        #For now get_constant_value got throught only MakeVector and Join of scalar.
+        v = tensor.ivector()
+        a = tensor.stack(v, 2, 3)
+        self.assertRaises(TypeError, get_constant_value, a[0])
+        self.assertRaises(TypeError, get_constant_value, a[1])
+        self.assertRaises(TypeError, get_constant_value, a[2])
+
 if __name__ == '__main__':
     if 1:
         unittest.main()

@@ -3449,5 +3501,3 @@ if __name__ == '__main__':
         suite = unittest.TestLoader()
         suite = suite.loadTestsFromTestCase(testcase)
         unittest.TextTestRunner(verbosity=2).run(suite)
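The new T_get_constant_value test above pins down a contract: indexing a stack of constants folds to the constant value, while indexing an element that is symbolic (or a whole symbolic vector) raises TypeError. A hedged, Theano-free sketch of that contract (the Symbolic class and get_constant helper are hypothetical stand-ins, not Theano APIs):

```python
import numpy as np

class Symbolic(object):
    """Hypothetical placeholder for a non-constant graph variable."""

def get_constant(elements, idx):
    """Return elements[idx] if it is a numeric constant; otherwise
    raise TypeError, as get_constant_value does for symbolic inputs."""
    val = elements[idx]
    if isinstance(val, Symbolic) or not isinstance(val, (int, float, complex, np.number)):
        raise TypeError(val)
    return val

stacked = (1, 2, 3)         # analogue of tensor.stack(1, 2, 3)
mixed = (Symbolic(), 2, 3)  # analogue of tensor.stack(b, 2, 3)
```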
theano/tensor/tests/test_sharedvar.py

@@ -316,20 +316,20 @@ def makeSharedTester(shared_constructor_,
         #Test that we forward the input
         specify_shape_fct = theano.function([], x1_specify_shape)
-        theano.printing.debugprint(specify_shape_fct)
+        #theano.printing.debugprint(specify_shape_fct)
         assert numpy.all(self.ref_fct(specify_shape_fct())
                          == self.ref_fct(x1_2))
         topo_specify = specify_shape_fct.maker.env.toposort()
         if theano.config.mode != 'FAST_COMPILE':
-            assert len(topo_specify) == 6
+            assert len(topo_specify) == 4
 
         #Test that we put the shape info into the graph
         shape_constant_fct = theano.function([], x1_specify_shape.shape)
-        theano.printing.debugprint(shape_constant_fct)
+        #theano.printing.debugprint(shape_constant_fct)
         assert numpy.all(shape_constant_fct() == shape_op_fct())
         topo_cst = shape_constant_fct.maker.env.toposort()
         if theano.config.mode != 'FAST_COMPILE':
-            assert len(topo_cst) == 6
+            assert len(topo_cst) == 2
 
         #Test that we can replace with values of the different shape
         # but that will raise an error in some case, but not all
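The updated assertions above count nodes returned by maker.env.toposort(); the expectations dropping from 6 to 4 and from 6 to 2 reflect a smaller graph after optimization. A minimal stand-in topological sort over a plain dependency dict, just to illustrate what is being counted (all names here are illustrative, not Theano APIs):

```python
def toposort(deps):
    """Depth-first topological sort over {node: [dependencies]};
    dependencies appear before the nodes that use them."""
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for d in deps.get(n, []):
            visit(d)
        order.append(n)
    for n in deps:
        visit(n)
    return order

# Hypothetical 4-node graph for a SpecifyShape computation; an
# optimization that fuses or removes nodes shortens this list,
# which is what the tightened len() assertions check.
g = {"out": ["specify_shape"], "specify_shape": ["x", "shape"], "x": [], "shape": []}
```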