Commit e6d204ec (testgroup/pytensor)
Authored May 09, 2022 by Ricardo; committed by Brandon T. Willard on Jul 07, 2022
Replace use of Rebroadcast by SpecifyShape in convert_variable
Adds a condition in convert_variable_test which would fail before this change
Parent: be719a61

Showing 5 changed files with 22 additions and 39 deletions.
aesara/tensor/type.py (+2 -4)
doc/extending/type.rst (+5 -3)
doc/tutorial/shape_info.rst (+9 -7)
tests/tensor/test_basic_opt.py (+0 -23)
tests/tensor/test_type.py (+6 -2)
aesara/tensor/type.py

@@ -328,10 +328,8 @@ class TensorType(CType[np.ndarray], HasDataType, HasShape):
             # Note that, in this case, `var.type != self`, because that's
             # covered by the branch above.
-            # Use the more specific broadcast/shape information of the two
-            return aesara.tensor.basic.Rebroadcast(
-                *[(i, b) for i, b in enumerate(self.broadcastable)]
-            )(var)
+            # Use the more specific static shape information of the two
+            return aesara.tensor.specify_shape(var, self.shape)
 
     def value_zeros(self, shape):
         """Create an numpy ndarray full of 0 values.
...
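To illustrate the hunk above, here is a minimal sketch (assuming the aesara API at this commit; the names and shapes are illustrative, not taken from the diff) of what `convert_variable` now produces when the target type carries more static shape information than the variable's own type:

    from aesara.tensor.shape import SpecifyShape
    from aesara.tensor.type import TensorType

    # A target type with a fully static shape, and a variable whose type
    # only fixes the first dimension.
    target = TensorType("float64", shape=(2, 1))
    var = TensorType("float64", shape=(2, None))()

    # The conversion now wraps `var` in specify_shape(var, target.shape);
    # before this commit it built a Rebroadcast from `target.broadcastable`.
    res = target.convert_variable(var)
    assert isinstance(res.owner.op, SpecifyShape)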
doc/extending/type.rst

@@ -141,15 +141,17 @@ more specific/informative than ``v1``'s--and both are compatible.
 
     >>> v3 = v2.type.filter_variable(v1)
     >>> v3
-    Rebroadcast{(0, False),(1, True)}.0
+    SpecifyShape.0
 
     >>> import aesara
     >>> aesara.dprint(v3, print_type=True)
-    Rebroadcast{(0, False),(1, True)} [id A] <TensorType(float64, (None, 1))> ''
+    SpecifyShape [id A] <TensorType(float64, (2, 1))>
      |<TensorType(float64, (2, None))> [id B] <TensorType(float64, (2, None))>
+     |TensorConstant{2} [id C] <TensorType(int8, ())>
+     |TensorConstant{1} [id D] <TensorType(int8, ())>
 
 Performing this in the opposite direction returned the output of a
-:class:`Rebroadcast`\ :class:`Op`. This :class:`Rebroadcast` uses ``v1`` as an
+:class:`SpecifyShape`\ :class:`Op`. This :class:`SpecifyShape` uses ``v1`` static shape as an
 input and serves to produce a new :class:`Variable` that has a :class:`Type` compatible with
 both ``v1`` and ``v2``.
...
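A hedged reconstruction of the updated doctest, for readers without the surrounding documentation page; the definitions of `v1` and `v2` are assumptions inferred from the printed types:

    import aesara
    from aesara.tensor.type import TensorType

    v1 = TensorType("float64", shape=(2, None))()  # assumed setup
    v2 = TensorType("float64", shape=(None, 1))()  # assumed setup

    # Filtering v1 through v2's type combines the static shape information
    # of both types, yielding the SpecifyShape graph of type (2, 1) that
    # the new doctest prints.
    v3 = v2.type.filter_variable(v1)
    aesara.dprint(v3, print_type=True)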
doc/tutorial/shape_info.rst

@@ -37,17 +37,19 @@ Aesara propagates information about shapes within a graph using specialized
 
 Specifying Exact Shape
 ======================
 
-Currently, specifying a shape is not as easy and flexible as we wish and we plan some
-upgrade. Here is the current state of what can be done:
+You can create variables with static shape information as follows:
 
-- You can pass the shape info directly to the ``ConvOp`` created
-  when calling ``conv2d``. You simply set the parameters ``image_shape``
-  and ``filter_shape`` inside the call. They must be tuples of 4
-  elements. For example:
+.. code-block:: python
+
+    aesara.tensor.tensor("float64", shape=(4, 3, 2))
+
+You can also pass shape infomation directly to some :class:`Op`\s, like ``RandomVariables``
 
 .. code-block:: python
 
-    aesara.tensor.nnet.conv2d(..., image_shape=(7, 3, 5, 5), filter_shape=(2, 3, 4, 4))
+    aesara.tensor.random.normal(size=(7, 3, 5, 5))
 
 - You can use the :class:`SpecifyShape`\ :class:`Op` to add shape information anywhere in the
   graph. This allows to perform some optimizations. In the following example,
...
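Related to the tutorial text above, the `SpecifyShape` mechanism it goes on to describe can also be applied to an existing variable; a small sketch (assuming the public `specify_shape` helper shown in the type.py hunk above; names are illustrative):

    import aesara
    import aesara.tensor as at

    x = at.matrix("x")                # static shape (None, None)
    x2 = at.specify_shape(x, (2, 2))  # assert an exact shape mid-graph

    # Rewrites can now treat x2's shape as constants; at runtime, an input
    # of any other shape raises an error.
    f = aesara.function([x], x2.shape)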
tests/tensor/test_basic_opt.py

@@ -3214,9 +3214,6 @@ def test_local_Unique_scalar(return_index, return_counts, return_inverse):
     y_opt = y_opt_fg.outputs[0]
     y_opt_start = y_opt
 
-    if isinstance(y_opt.owner.op, Rebroadcast):
-        y_opt_start = y_opt.owner.inputs[0]
-
     assert isinstance(y_opt_start.owner.op, DimShuffle)
     assert y_opt_start.owner.inputs[0] == x
 
...
@@ -3266,11 +3263,6 @@ def test_local_Unique_Alloc_lift(
     y_opt = y_opt_fg.outputs[0]
     y_opt_start = y_opt
 
-    # Ignore any initial `Rebroadcast`s (they serve to
-    # make the replacement match the original type)
-    if isinstance(y_opt.owner.op, Rebroadcast):
-        y_opt_start = y_opt.owner.inputs[0]
-
     assert isinstance(y_opt_start.owner.op, Unique)
     assert y_opt_start.owner.inputs[0] == x
     assert not any(isinstance(node.op, Alloc) for node in y_opt_fg.apply_nodes)
...
@@ -3329,11 +3321,6 @@ def test_local_Unique_BroadcastTo(
     y_opt = y_opt_fg.outputs[0]
     y_opt_start = y_opt
 
-    # Ignore any initial `Rebroadcast`s (they serve to
-    # make the replacement match the original type)
-    if isinstance(y_opt.owner.op, Rebroadcast):
-        y_opt_start = y_opt.owner.inputs[0]
-
     assert isinstance(y_opt_start.owner.op, Unique)
     assert y_opt_start.owner.inputs[0] == x
     assert not any(isinstance(node.op, BroadcastTo) for node in y_opt_fg.apply_nodes)
...
@@ -3395,11 +3382,6 @@ def test_local_Unique_Repeat(
     y_opt = y_opt_fg.outputs[0]
     y_opt_start = y_opt
 
-    # Ignore any initial `Rebroadcast`s (they serve to
-    # make the replacement match the original type)
-    if isinstance(y_opt.owner.op, Rebroadcast):
-        y_opt_start = y_opt.owner.inputs[0]
-
     assert isinstance(y_opt_start.owner.op, Unique)
     assert y_opt_start.owner.inputs[0] == x
     assert not any(isinstance(node.op, Repeat) for node in y_opt_fg.apply_nodes)
...
@@ -3456,11 +3438,6 @@ def test_local_Unique_second(
     y_opt = y_opt_fg.outputs[0]
     y_opt_start = y_opt
 
-    # Ignore any initial `Rebroadcast`s (they serve to
-    # make the replacement match the original type)
-    if y_opt.owner and isinstance(y_opt.owner.op, Rebroadcast):
-        y_opt_start = y_opt.owner.inputs[0]
-
     assert isinstance(y_opt_start.owner.op, Unique)
 
     y_opt_start = y_opt_start.owner.inputs[0]
...
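All five hunks above delete the same guard. Its job was to peel off a leading `Rebroadcast` that rewrites inserted only to make the replacement variable's type match the original; a standalone sketch of that pattern (the helper name is ours, purely illustrative):

    def unwrap_type_wrapper(var, wrapper_ops):
        """Return `var`'s input when its owner is a type-restoring wrapper op."""
        if var.owner is not None and isinstance(var.owner.op, wrapper_ops):
            return var.owner.inputs[0]
        return var

After this commit the relevant rewrites no longer prepend a `Rebroadcast`, so the tests can assert on `y_opt` directly.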
tests/tensor/test_type.py

@@ -6,7 +6,7 @@ import pytest
 
 import aesara.tensor as at
 from aesara.configdefaults import config
-from aesara.tensor.basic import Rebroadcast
+from aesara.tensor.shape import SpecifyShape
 from aesara.tensor.type import TensorType
...
@@ -93,6 +93,10 @@ def test_filter_variable():
     res = test_type.filter_variable(test_var2, allow_convert=True)
     assert res.type == test_type
 
+    test_type3 = TensorType(config.floatX, shape=(1, 20))
+    res = test_type3.filter_variable(test_var, allow_convert=True)
+    assert res.type == test_type3
+
 
 def test_filter_strict():
     test_type = TensorType(config.floatX, [])
...
@@ -277,7 +281,7 @@ def test_fixed_shape_convert_variable():
     t3 = TensorType("float64", (False, True))
     t3_var = t3()
     res = t2.convert_variable(t3_var)
-    assert isinstance(res.owner.op, Rebroadcast)
+    assert isinstance(res.owner.op, SpecifyShape)
 
     t3 = TensorType("float64", (False, False))
     t4 = TensorType("float64", (3, 2))
...
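The new `test_filter_variable` assertion can be exercised standalone; in this sketch the definition of `test_var` is an assumption (in the real test it is created earlier in the file), chosen here as a compatible but less specific type:

    from aesara.configdefaults import config
    from aesara.tensor.type import TensorType

    test_var = TensorType(config.floatX, shape=(None, 20))()  # assumed setup

    # Converting through filter_variable attaches the static (1, 20) shape,
    # so the result's type equals test_type3 even though test_var's did not.
    test_type3 = TensorType(config.floatX, shape=(1, 20))
    res = test_type3.filter_variable(test_var, allow_convert=True)
    assert res.type == test_type3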