testgroup / pytensor · Commits
Commit fa896acb
Authored Nov 09, 2011 by nouiz

Merge pull request #200 from delallea/typos

Typos

Parents: 79f366f8 e800c392
Showing 22 changed files with 52 additions and 52 deletions (+52 -52)
doc/cifarSC2011/extending_theano.txt        +1 -1
doc/extending/op.txt                        +5 -5
doc/extending/type.txt                      +1 -1
doc/internal/dev_start_guide.txt            +9 -9
doc/proposals/pfunc.txt                     +2 -2
theano/compile/function.py                  +1 -1
theano/compile/io.py                        +3 -3
theano/compile/pfunc.py                     +3 -3
theano/compile/sharedvalue.py               +2 -2
theano/compile/tests/test_shared.py         +1 -1
theano/configparser.py                      +4 -4
theano/sandbox/cuda/basic_ops.py            +1 -1
theano/sandbox/cuda/opt.py                  +2 -2
theano/sandbox/cuda/tests/test_basic_ops.py +2 -2
theano/sandbox/cuda/type.py                 +1 -1
theano/sparse/tests/test_basic.py           +1 -1
theano/tensor/basic.py                      +1 -1
theano/tensor/blas.py                       +1 -1
theano/tensor/opt.py                        +1 -1
theano/tensor/tests/test_basic.py           +1 -1
theano/tensor/tests/test_blas.py            +1 -1
theano/tensor/tests/test_sharedvar.py       +8 -8
doc/cifarSC2011/extending_theano.txt
@@ -79,7 +79,7 @@ and related methods allow the op to generate c code that will be
 compiled and linked by Theano. On the other hand, the ``make_thunk``
 method will be called only once during compilation and should generate
 a ``thunk``: a standalone function that when called will do the wanted computations.
-This is usefull if you want to generate code and compile it yourself. For
+This is useful if you want to generate code and compile it yourself. For
 example, this allows you to use PyCUDA to compile gpu code.
 Also there are 2 methods that are highly recommended to be implemented. They are
doc/extending/op.txt
@@ -143,12 +143,12 @@ following methods:
 This function is needed for shape optimization. ``shapes`` is a
 list with one tuple for each input of the Apply node (which corresponds
-to the inputs of the op). Each tuple contains 1 element for
-each dimension of the corresponding input. The value is the
-shape (number of elements) along the corresponding dimension of that
+to the inputs of the op). Each tuple contains as many elements as the
+number of dimensions of the corresponding input. The value of each element
+is the shape (number of items) along the corresponding dimension of that
 specific input.
-While this might sound complicated, it is nothing more then the shape
+While this might sound complicated, it is nothing more than the shape
 of each input as symbolic variables (one per dimension).
 The function should return a list with one tuple for each output.
@@ -333,7 +333,7 @@ In the following code, we use our new Op:
 Note that there is an implicit call to
 ``double.filter()`` on each argument, so if we give integers as inputs
-they are magically casted to the right type. Now, what if we try this?
+they are magically cast to the right type. Now, what if we try this?
 >>> x = double('x')
 >>> z = mul(x, 2)
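The ``infer_shape`` contract this hunk documents — one tuple with one entry per dimension of each input, one tuple returned per output — can be sketched with plain Python tuples. This is an illustrative sketch for a hypothetical elementwise op, not Theano's actual interface:

```python
def infer_shape_elemwise(node_input_shapes):
    """Hypothetical infer_shape for an elementwise op: each input shape is
    a tuple with one entry per dimension of that input, and the single
    output has the same shape as the inputs."""
    first = node_input_shapes[0]
    # every input of an elementwise op must agree in shape
    assert all(s == first for s in node_input_shapes)
    # one tuple per output
    return [first]

# two 2-d inputs, each of shape (4, 3) -> one output of shape (4, 3)
assert infer_shape_elemwise([(4, 3), (4, 3)]) == [(4, 3)]
```

In real Theano code the tuple entries would be symbolic scalar variables rather than Python ints, but the bookkeeping is the same.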
doc/extending/type.txt
@@ -27,7 +27,7 @@ default values.
 .. method:: filter(value, strict=False, allow_downcast=None)
 This casts a value to match the Type and returns the
-casted value. If ``value`` is incompatible with the Type,
+cast value. If ``value`` is incompatible with the Type,
 the method must raise an exception. If ``strict`` is True, ``filter`` must return a
 reference to ``value`` (i.e. casting prohibited).
 If ``strict`` is False, then casting may happen, but downcasting should
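The ``filter`` contract spelled out here — strict mode forbids casting, and downcasting is gated separately — can be mimicked with NumPy. The function below is an illustrative sketch for a float32 type, not Theano's implementation:

```python
import numpy as np

def filter_float32(value, strict=False, allow_downcast=None):
    """Sketch of the Type.filter contract for a hypothetical float32 type."""
    value = np.asarray(value)
    if strict:
        # strict: the value must already match; casting is prohibited
        if value.dtype != np.float32:
            raise TypeError('strict filter: wrong dtype %s' % value.dtype)
        return value
    if value.dtype == np.float64 and not allow_downcast:
        # downcasting would lose precision, so refuse unless allowed
        raise TypeError('refusing to downcast float64 to float32')
    return value.astype(np.float32)

assert filter_float32(np.zeros(3, dtype='int16')).dtype == np.dtype('float32')
try:
    filter_float32(np.zeros(3), strict=True)  # float64 under strict mode
except TypeError:
    pass  # raised as required by the contract
```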
doc/internal/dev_start_guide.txt
@@ -84,16 +84,16 @@ then go to your fork's github page on the github website, select your feature
 branch and hit the "Pull Request" button in the top right corner.
 If you don't get any feedback, bug us on the theano-dev mailing list.
-When the your pull request have been merged, you can delete the branch
-from the github list of branch. That is usefull to don't have too many
-that stay there!
+When your pull request has been merged, you can delete the branch
+from the github list of branches. This is useful to avoid having too many
+branches staying there. Deleting this remote branch is achieved with:
 .. code-block:: bash
    git push origin :my_shiny_feature
-You can keep you local repo up to date with central/master with those
+You can keep your local repository up to date with central/master with those
 commands:
 .. code-block:: bash
@@ -101,14 +101,14 @@ You can keep you local repo up to date with central/master with those commands:
 git fetch central
 git merge central/master
-If you want to fix a commit done in a pull request(i.e. fix small
-typo) to keep the history clean, you can do it like this:
+If you want to fix a commit already submitted within a pull request (e.g. to
+fix a small typo), you can do it like this to keep history clean:
 .. code-block:: bash
-git checkout branch
+git checkout my_shiny_feature
 git commit --amend
 git push -u origin my_shiny_feature:my_shiny_feature
 Coding Style Auto Check
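The amend recipe above can be tried safely in a throwaway local repository (the file and message names below are placeholders). Note that after amending a commit that was already pushed, the subsequent push to that branch typically needs to be forced:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you
echo hello > notes.txt
git add notes.txt
git commit -qm "add ntoes"            # oops, typo in the commit message
git commit -q --amend -m "add notes"  # rewrite the last commit in place
git log --format=%s -n 1              # prints "add notes"
```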
doc/proposals/pfunc.txt
@@ -64,7 +64,7 @@ The proposal is for two new ways of creating a *shared* variable:
 :param value: A value to associate with this variable (a new container will be created).
-:param strict: True -> assignments to .value will not be casted or copied, so they must
+:param strict: True -> assignments to .value will not be cast or copied, so they must
 have the correct type.
 :param container: The container to use for this variable. Illegal to pass this as well
@@ -185,7 +185,7 @@ Corner cases and exotic examples can be found in the tests.
 :param mutable: True -> function is allowed to modify this argument.
-:param strict: False -> function arguments may be copied or casted to match the
+:param strict: False -> function arguments may be copied or cast to match the
 type required by the parameter `variable`. True -> function arguments must exactly match the type
 required by `variable`.
theano/compile/function.py
@@ -59,7 +59,7 @@ def function(inputs, outputs=None, mode=None, updates=[], givens=[],
 :param allow_input_downcast: True means that the values passed as
 inputs when calling the function can be silently downcasted to fit
 the dtype of the corresponding Variable, which may lose precision.
-False means that it will only be casted to a more general, or
+False means that it will only be cast to a more general, or
 precise, type. None (default) is almost like False, but allows
 downcasting of Python float scalars to floatX.
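The distinction these parameter docs draw — casting to a more general type is always safe, while downcasting may lose precision — can be seen with plain NumPy. This illustrates the casting rules themselves, not Theano's machinery:

```python
import numpy as np

# Upcast to a more general/precise type: always safe.
assert np.can_cast(np.float32, np.float64)
# Downcast: may silently lose precision, hence the allow_input_downcast guard.
assert not np.can_cast(np.float64, np.float32)

x = np.float64(1.0 + 1e-12)              # representable in float64...
assert np.float32(x) == np.float32(1.0)  # ...but the extra digits round away
```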
theano/compile/io.py
@@ -29,13 +29,13 @@ class SymbolicInput(object):
 strict: Bool (default: False)
 True: means that the value you pass for this input must have exactly the right type
-False: the value you pass for this input may be casted automatically to the proper type
+False: the value you pass for this input may be cast automatically to the proper type
 allow_downcast: Bool or None (default: None)
 Only applies when `strict` is False.
 True: the value you pass for this input can be silently
 downcasted to fit the right type, which may lose precision.
-False: the value will only be casted to a more general, or precise, type.
+False: the value will only be cast to a more general, or precise, type.
 None: Almost like False, but allows downcast of Python floats to floatX.
 autoname: Bool (default: True)
@@ -173,7 +173,7 @@ class In(SymbolicInput):
 Only applies when `strict` is False.
 True: the value you pass for this input can be silently
 downcasted to fit the right type, which may lose precision.
-False: the value will only be casted to a more general, or precise, type.
+False: the value will only be cast to a more general, or precise, type.
 None: Almost like False, but allows downcast of Python floats to floatX.
 autoname: Bool (default: True)
theano/compile/pfunc.py
@@ -274,12 +274,12 @@ class Param(object):
 False: do not permit any output to be aliased to the input
-:param strict: False -> function arguments may be copied or casted to match the
+:param strict: False -> function arguments may be copied or cast to match the
 type required by the parameter `variable`. True -> function arguments must exactly match the type
 required by `variable`.
 :param allow_downcast: Only applies if `strict` is False.
-True -> allow assigned value to lose precision when casted during assignment.
+True -> allow assigned value to lose precision when cast during assignment.
 False -> never allow precision loss.
 None -> only allow downcasting of a Python float to a scalar floatX.
@@ -346,7 +346,7 @@ def pfunc(params, outputs=None, mode=None, updates=[], givens=[],
 :param allow_input_downcast: True means that the values passed as
 inputs when calling the function can be silently downcasted to fit
 the dtype of the corresponding Variable, which may lose precision.
-False means that it will only be casted to a more general, or
+False means that it will only be cast to a more general, or
 precise, type. None (default) is almost like False, but allows
 downcasting of Python float scalars to floatX.
theano/compile/sharedvalue.py
@@ -53,11 +53,11 @@ class SharedVariable(Variable):
 :param value: A value to associate with this variable (a new container will be created).
-:param strict: True -> assignments to .value will not be casted or copied, so they must
+:param strict: True -> assignments to .value will not be cast or copied, so they must
 have the correct type.
 :param allow_downcast: Only applies if `strict` is False.
-True -> allow assigned value to lose precision when casted during assignment.
+True -> allow assigned value to lose precision when cast during assignment.
 False -> never allow precision loss.
 None -> only allow downcasting of a Python float to a scalar floatX.
theano/compile/tests/test_shared.py
@@ -95,7 +95,7 @@ class Test_SharedVariable(unittest.TestCase):
                    value=numpy.asarray([1., 2.]),
                    strict=False)
-        # check that assignments to value are casted properly
+        # check that assignments to value are cast properly
         u.set_value([3, 4])
         assert type(u.get_value()) is numpy.ndarray
         assert str(u.get_value(borrow=True).dtype) == 'float64'
theano/configparser.py
@@ -263,14 +263,14 @@ class TypedParam(ConfigParam):
     def __init__(self, default, mytype, is_valid=None, allow_override=True):
         self.mytype = mytype
         def filter(val):
-            casted_val = mytype(val)
+            cast_val = mytype(val)
             if callable(is_valid):
-                if is_valid(casted_val):
-                    return casted_val
+                if is_valid(cast_val):
+                    return cast_val
                 else:
                     raise ValueError('Invalid value (%s) for configuration variable "%s".'
                                      % (val, self.fullname), val)
-            return casted_val
+            return cast_val
         super(TypedParam, self).__init__(default, filter, allow_override=allow_override)
     def __str__(self):
         return '%s (%s) ' % (self.fullname, self.mytype)
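The renamed ``cast_val`` flow in ``TypedParam.filter`` — cast the raw value first, then run the optional validator on the cast result — can be sketched standalone. The names here are illustrative, not Theano's API:

```python
def make_filter(mytype, is_valid=None):
    """Sketch of TypedParam's inner filter: cast the raw value with
    mytype, then validate the cast result if a validator was given."""
    def filter(val):
        cast_val = mytype(val)
        if callable(is_valid) and not is_valid(cast_val):
            raise ValueError('Invalid value (%s)' % (val,))
        return cast_val
    return filter

# e.g. a config entry that must be a positive integer
positive_int = make_filter(int, is_valid=lambda v: v > 0)
assert positive_int('3') == 3
try:
    positive_int('-1')
except ValueError:
    pass  # rejected after casting, as in TypedParam
```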
theano/sandbox/cuda/basic_ops.py
@@ -2098,7 +2098,7 @@ def profile_printer(fct_name, compile_time, fct_call_time, fct_call,
     if any([x[1].op.__class__.__name__.lower().startswith("gpu") for x in apply_time.keys()]):
         local_time = sum(apply_time.values())
         print
-        print 'Some info usefull for gpu:'
+        print 'Some info useful for gpu:'
         cpu = 0
         gpu = 0
theano/sandbox/cuda/opt.py
@@ -163,8 +163,8 @@ def local_gpu_elemwise_0(node):
     elif numpy.all([i.type.dtype in upcastable for i in node.inputs]):
         # second - establish that a new node with upcasted inputs has the same outputs
         # types as the original node
-        casted = node.op.make_node(*[tensor.cast(i, 'float32') for i in node.inputs])
-        if [o.type for o in casted.outputs] == [o.type for o in node.outputs]:
+        upcasted = node.op.make_node(*[tensor.cast(i, 'float32') for i in node.inputs])
+        if [o.type for o in upcasted.outputs] == [o.type for o in node.outputs]:
             new_inputs = [gpu_from_host(tensor.cast(i, 'float32')) for i in node.inputs]
             gpu_elemwise = new_op(*new_inputs)
theano/sandbox/cuda/tests/test_basic_ops.py
@@ -901,7 +901,7 @@ test_shared_options = theano.tensor.tests.test_sharedvar.makeSharedTester(
     shared_borrow_true_alias_=True,  #True when the original value is already a CudaNdarray!
     set_value_borrow_true_alias_=True,
     set_value_inplace_=True,
-    set_casted_value_inplace_=False,
+    set_cast_value_inplace_=False,
     shared_constructor_accept_ndarray_=True,
     internal_type_=cuda_ndarray.CudaNdarray,
     test_internal_type_=lambda a: isinstance(a, cuda_ndarray.CudaNdarray),
@@ -919,7 +919,7 @@ test_shared_options2 = theano.tensor.tests.test_sharedvar.makeSharedTester(
     shared_borrow_true_alias_=False,
     set_value_borrow_true_alias_=False,
     set_value_inplace_=True,
-    set_casted_value_inplace_=True,
+    set_cast_value_inplace_=True,
     shared_constructor_accept_ndarray_=True,
     internal_type_=cuda_ndarray.CudaNdarray,
     test_internal_type_=lambda a: isinstance(a, cuda_ndarray.CudaNdarray),
theano/sandbox/cuda/type.py
@@ -65,7 +65,7 @@ class CudaNdarrayType(Type):
             return cuda.filter(data, self.broadcastable, strict, old_data)
         else:  # (not strict) and (not allow_downcast)
-            # Check if data.dtype can be accurately casted to self.dtype
+            # Check if data.dtype can be accurately cast to self.dtype
             if isinstance(data, numpy.ndarray):
                 up_dtype = scal.upcast(self.dtype, data.dtype)
                 if up_dtype == self.dtype:
theano/sparse/tests/test_basic.py
@@ -923,7 +923,7 @@ test_shared_options = theano.tensor.tests.test_sharedvar.makeSharedTester(
     shared_borrow_true_alias_=True,
     set_value_borrow_true_alias_=True,
     set_value_inplace_=False,
-    set_casted_value_inplace_=False,
+    set_cast_value_inplace_=False,
     shared_constructor_accept_ndarray_=False,
     internal_type_=scipy.sparse.csc_matrix,
     test_internal_type_=scipy.sparse.issparse,
theano/tensor/basic.py
@@ -244,7 +244,7 @@ class NumpyAutocaster(object):
             x_ = theano._asarray(x, dtype=dtype)
             if numpy.all(x == x_):
                 break
-        # returns either an exact x_==x, or the last casted x_
+        # returns either an exact x_==x, or the last cast x_
         return x_
 autocast_int = NumpyAutocaster(('int8', 'int16', 'int32', 'int64'))
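The ``NumpyAutocaster`` loop shown in this hunk picks the narrowest dtype that represents the value exactly, otherwise falling back to the widest candidate. A rough standalone sketch (a range check stands in for the cast-and-compare test so it also runs on NumPy versions that raise on out-of-range Python ints):

```python
import numpy as np

def autocast_int(x, dtypes=('int8', 'int16', 'int32', 'int64')):
    # Try each candidate dtype in order and stop at the first one whose
    # range covers x exactly; otherwise return the widest cast, mirroring
    # the "break or keep the last x_" policy of NumpyAutocaster.
    for dtype in dtypes:
        info = np.iinfo(dtype)
        if info.min <= x <= info.max:
            return np.asarray(x, dtype=dtype)
    return np.asarray(x, dtype=dtypes[-1])

assert autocast_int(100).dtype == np.dtype('int8')
assert autocast_int(100000).dtype == np.dtype('int32')
```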
theano/tensor/blas.py
@@ -1126,7 +1126,7 @@ def _gemm_from_factored_list(lst):
         return False
     lst2 = []
-    # Remove the tuple that can't be casted correctly.
+    # Remove the tuple that can't be cast correctly.
     # This can happen when we try to cast a complex to a real
     for sM in lst:
         if is_pair(sM):
theano/tensor/opt.py
@@ -92,7 +92,7 @@ def scalarconsts_rest(inputs):
 def broadcast_like(value, template, env, dtype=None):
     """Return a Variable with the same shape and dtype as the template,
-    filled by broadcasting value through it. `value` will be casted as
+    filled by broadcasting value through it. `value` will be cast as
     necessary.
     """
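What ``broadcast_like`` describes — fill a template's shape by broadcasting a value through it, casting as necessary — has a direct NumPy analogue. This is only an illustration of the semantics, not the symbolic implementation:

```python
import numpy as np

template = np.zeros((2, 3), dtype='float32')
value = 5  # a Python int; cast to the template's dtype before broadcasting
out = np.broadcast_to(np.asarray(value, dtype=template.dtype), template.shape)

assert out.shape == (2, 3)
assert out.dtype == np.dtype('float32')
```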
theano/tensor/tests/test_basic.py
@@ -561,7 +561,7 @@ _good_broadcast_div_mod_normal_float_no_complex = dict(
     dtype_mixup_1=(rand(2, 3), randint_nonzero(2, 3)),
     dtype_mixup_2=(randint_nonzero(2, 3), rand(2, 3)),
     # Fix problem with integers and uintegers and add them.
-    # Them remove their specific addition to CeilIntDivTester tests.
+    # Then remove their specific addition to CeilIntDivTester tests.
     # integer=(randint(2, 3), randint_nonzero(2, 3)),
     # uinteger=(randint(2, 3).astype("uint8"),
     #           randint_nonzero(2, 3).astype("uint8")),
theano/tensor/tests/test_blas.py
@@ -1040,7 +1040,7 @@ class BaseGemv(object):
         # The only op in the graph is a dot.
         # In the gemm case, we create a dot22 for that case
         # There is no dot21.
-        # Creating one is not usefull as this is not faster(in fact it would be slower!
+        # Creating one is not useful as this is not faster(in fact it would be slower!
         # as more code would be in python, numpy.dot will call gemv itself)
         # See ticket 594
         """
theano/tensor/tests/test_sharedvar.py
@@ -17,7 +17,7 @@ def makeSharedTester(shared_constructor_,
                      shared_borrow_true_alias_,
                      set_value_borrow_true_alias_,
                      set_value_inplace_,
-                     set_casted_value_inplace_,
+                     set_cast_value_inplace_,
                      shared_constructor_accept_ndarray_,
                      internal_type_,
                      test_internal_type_,
@@ -38,7 +38,7 @@ def makeSharedTester(shared_constructor_,
     :param set_value_borrow_true_alias_: Should set_value(val,borrow=True) reuse the val memory space
     :param set_value_inplace_: Should this shared variable overwrite the current
                                memory when the new value is an ndarray
-    :param set_casted_value_inplace_: Should this shared variable overwrite the
+    :param set_cast_value_inplace_: Should this shared variable overwrite the
                                current memory when the new value is of the same
                                type as the internal type.
     :param shared_constructor_accept_ndarray_: Do the shared_constructor accept an ndarray as input?
@@ -71,7 +71,7 @@ def makeSharedTester(shared_constructor_,
         ref_fct = staticmethod(ref_fct_)
         set_value_borrow_true_alias = set_value_borrow_true_alias_
         set_value_inplace = set_value_inplace_
-        set_casted_value_inplace = set_casted_value_inplace_
+        set_cast_value_inplace = set_cast_value_inplace_
         shared_constructor_accept_ndarray = shared_constructor_accept_ndarray_
         cast_value = staticmethod(cast_value_)
         op_by_matrix = op_by_matrix_
@@ -379,14 +379,14 @@ def makeSharedTester(shared_constructor_,
                                   self.ref_fct(self.cast_value(nd)))
             assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_value_inplace
-            # Test by set_value with borrow=False when new data casted.
+            # Test by set_value with borrow=False when new data cast.
             # specificaly useful for gpu data
             nd += 1
             old_data = x_shared.container.storage[0]
             x_shared.set_value(self.cast_value(nd), borrow=False)
             assert numpy.allclose(self.ref_fct(x_shared.get_value(borrow=True)),
                                   self.ref_fct(self.cast_value(nd)))
-            assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_casted_value_inplace
+            assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_cast_value_inplace
             # Test by set_value with borrow=True
             nd += 1
@@ -396,12 +396,12 @@ def makeSharedTester(shared_constructor_,
                                   self.ref_fct(self.cast_value(nd)))
             assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_value_inplace
-            # Test by set_value with borrow=True when new data casted.
+            # Test by set_value with borrow=True when new data cast.
             nd += 1
             old_data = x_shared.container.storage[0]
             x_shared.set_value(self.cast_value(nd.copy()), borrow=True)
             assert numpy.allclose(self.ref_fct(x_shared.get_value(borrow=True)),
                                   self.ref_fct(self.cast_value(nd)))
-            assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_casted_value_inplace
+            assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_cast_value_inplace
         def test_specify_shape(self):
             dtype = self.dtype
@@ -628,7 +628,7 @@ test_shared_options=makeSharedTester(
     shared_borrow_true_alias_=True,
     set_value_borrow_true_alias_=True,
     set_value_inplace_=False,
-    set_casted_value_inplace_=False,
+    set_cast_value_inplace_=False,
     shared_constructor_accept_ndarray_=True,
     internal_type_=numpy.ndarray,
     test_internal_type_=lambda a: isinstance(a, numpy.ndarray),
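The aliasing property these tests probe with ``may_share_memory`` — does ``set_value`` reuse the caller's buffer or copy it — can be demonstrated directly in NumPy, independent of Theano's shared variables:

```python
import numpy as np

value = np.arange(4.0)
copied = np.array(value)   # borrow=False-style behaviour: a fresh buffer
aliased = value            # borrow=True-style behaviour: the same buffer

assert not np.may_share_memory(value, copied)
assert np.may_share_memory(value, aliased)

aliased[0] = 42.0          # writes through to the original buffer
assert value[0] == 42.0
assert copied[0] == 0.0    # the copy is unaffected
```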