testgroup / pytensor · Commits

Commit ab13fe09 (unverified), authored Nov 24, 2022 by Thomas Wiecki; committed by GitHub on Nov 24, 2022.

    Add solve_discrete_lyapunov and solve_continuous_lyapunov (#33)

    Co-authored-by: jessegrabowski <jessegrabowski@gmail.com>

Parent: f203dd78

Showing 2 changed files with 232 additions and 8 deletions:

- pytensor/tensor/slinalg.py (+162, -2)
- tests/tensor/test_slinalg.py (+70, -6)
pytensor/tensor/slinalg.py
 import logging
 import warnings
-from typing import Union
+from typing import TYPE_CHECKING, Union

 import numpy as np
 import scipy.linalg
 from typing_extensions import Literal

-import pytensor.tensor
+import pytensor
+import pytensor.tensor as pt
 from pytensor.graph.basic import Apply
 from pytensor.graph.op import Op
 from pytensor.tensor import as_tensor_variable
 from pytensor.tensor import basic as at
 from pytensor.tensor import math as atm
 from pytensor.tensor.shape import reshape
 from pytensor.tensor.type import matrix, tensor, vector
 from pytensor.tensor.var import TensorVariable

+if TYPE_CHECKING:
+    from pytensor.tensor import TensorLike

 logger = logging.getLogger(__name__)
...
...
@@ -735,6 +742,159 @@ class ExpmGrad(Op):
expm = Expm()


class SolveContinuousLyapunov(Op):
    __props__ = ()

    def make_node(self, A, B):
        A = as_tensor_variable(A)
        B = as_tensor_variable(B)

        out_dtype = pytensor.scalar.upcast(A.dtype, B.dtype)
        X = pytensor.tensor.matrix(dtype=out_dtype)

        return pytensor.graph.basic.Apply(self, [A, B], [X])

    def perform(self, node, inputs, output_storage):
        (A, B) = inputs
        X = output_storage[0]

        X[0] = scipy.linalg.solve_continuous_lyapunov(A, B)

    def infer_shape(self, fgraph, node, shapes):
        return [shapes[0]]

    def grad(self, inputs, output_grads):
        # Gradient computations come from Kao and Hennequin (2020),
        # https://arxiv.org/pdf/2011.11430.pdf
        # Note that they write the equation as AX + XA.H + Q = 0, while scipy
        # uses AX + XA^H = Q, so minor adjustments need to be made.
        A, Q = inputs
        (dX,) = output_grads

        X = self(A, Q)
        S = self(A.conj().T, -dX)  # Eq 31, adjusted

        A_bar = S.dot(X.conj().T) + S.conj().T.dot(X)
        Q_bar = -S  # Eq 29, adjusted

        return [A_bar, Q_bar]
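The sign adjustments in `SolveContinuousLyapunov.grad` can be sanity-checked numerically outside of PyTensor. The sketch below is not part of the commit: it mirrors the same formulas for real inputs using SciPy directly, with an illustrative weight matrix `W` defining a scalar loss and a finite-difference comparison on one entry of each gradient.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(5)
N = 4
A = rng.normal(size=(N, N)) - N * np.eye(N)  # push eigenvalues left so a unique solution exists
Q = rng.normal(size=(N, N))
W = rng.normal(size=(N, N))                  # defines the scalar loss L(A, Q) = sum(W * X)

def loss(A, Q):
    # The Op delegates to scipy, which solves A X + X A^H = Q
    return np.sum(W * solve_continuous_lyapunov(A, Q))

# Analytic gradients mirroring SolveContinuousLyapunov.grad (real case, dX = W)
X = solve_continuous_lyapunov(A, Q)
S = solve_continuous_lyapunov(A.T, -W)  # Eq 31, adjusted: A^H S + S A = -W
A_bar = S @ X.T + S.T @ X
Q_bar = -S                              # Eq 29, adjusted

# Compare one entry of each gradient against a forward finite difference
eps = 1e-6
E = np.zeros((N, N))
E[1, 2] = 1.0
fd_A = (loss(A + eps * E, Q) - loss(A, Q)) / eps
fd_Q = (loss(A, Q + eps * E) - loss(A, Q)) / eps
assert np.isclose(A_bar[1, 2], fd_A, rtol=1e-3, atol=1e-4)
assert np.isclose(Q_bar[1, 2], fd_Q, rtol=1e-3, atol=1e-4)
```

This is the same check `utt.verify_grad` performs in the tests below, just unrolled by hand for a single direction.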
class BilinearSolveDiscreteLyapunov(Op):
    def make_node(self, A, B):
        A = as_tensor_variable(A)
        B = as_tensor_variable(B)

        out_dtype = pytensor.scalar.upcast(A.dtype, B.dtype)
        X = pytensor.tensor.matrix(dtype=out_dtype)

        return pytensor.graph.basic.Apply(self, [A, B], [X])

    def perform(self, node, inputs, output_storage):
        (A, B) = inputs
        X = output_storage[0]

        X[0] = scipy.linalg.solve_discrete_lyapunov(A, B, method="bilinear")

    def infer_shape(self, fgraph, node, shapes):
        return [shapes[0]]

    def grad(self, inputs, output_grads):
        # Gradient computations come from Kao and Hennequin (2020),
        # https://arxiv.org/pdf/2011.11430.pdf
        A, Q = inputs
        (dX,) = output_grads

        X = self(A, Q)

        # Eq 41, note that it is not written as a proper Lyapunov equation
        S = self(A.conj().T, dX)

        A_bar = pytensor.tensor.linalg.matrix_dot(
            S, A, X.conj().T
        ) + pytensor.tensor.linalg.matrix_dot(S.conj().T, A, X)
        Q_bar = S

        return [A_bar, Q_bar]
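The discrete-time gradient admits the same kind of standalone check. The sketch below (illustrative, not part of the commit) evaluates the Eq 41 adjoint solve and the `A_bar`/`Q_bar` formulas from `BilinearSolveDiscreteLyapunov.grad` for real inputs, and compares one entry of each against a finite difference of a weighted-sum loss.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(6)
N = 4
A = 0.2 * rng.normal(size=(N, N))  # spectral radius well below 1, so the equation is solvable
Q = rng.normal(size=(N, N))
W = rng.normal(size=(N, N))        # defines the scalar loss L(A, Q) = sum(W * X)

def loss(A, Q):
    # scipy solves A X A^H - X + Q = 0
    return np.sum(W * solve_discrete_lyapunov(A, Q))

# Analytic gradients mirroring BilinearSolveDiscreteLyapunov.grad (real case, dX = W)
X = solve_discrete_lyapunov(A, Q)
S = solve_discrete_lyapunov(A.T, W)  # Eq 41: S - A^H S A = W
A_bar = S @ A @ X.T + S.T @ A @ X
Q_bar = S

# Compare one entry of each gradient against a forward finite difference
eps = 1e-6
E = np.zeros((N, N))
E[0, 3] = 1.0
fd_A = (loss(A + eps * E, Q) - loss(A, Q)) / eps
fd_Q = (loss(A, Q + eps * E) - loss(A, Q)) / eps
assert np.isclose(A_bar[0, 3], fd_A, rtol=1e-3, atol=1e-4)
assert np.isclose(Q_bar[0, 3], fd_Q, rtol=1e-3, atol=1e-4)
```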
_solve_continuous_lyapunov = SolveContinuousLyapunov()
_solve_bilinear_direct_lyapunov = BilinearSolveDiscreteLyapunov()
def iscomplexobj(x):
    type_ = x.type
    dtype = type_.dtype
    return "complex" in dtype


def _direct_solve_discrete_lyapunov(A: "TensorLike", Q: "TensorLike") -> TensorVariable:
    A_ = as_tensor_variable(A)
    Q_ = as_tensor_variable(Q)

    if "complex" in A_.type.dtype:
        AA = kron(A_, A_.conj())
    else:
        AA = kron(A_, A_)

    X = solve(pt.eye(AA.shape[0]) - AA, Q_.ravel())
    return reshape(X, Q_.shape)
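The `kron(A_, A_.conj())` term in `_direct_solve_discrete_lyapunov` comes from a vectorization identity: for NumPy's row-major `ravel`, `vec(A X A^H) = (A ⊗ conj(A)) vec(X)`, so the discrete equation stacks into one linear system. A quick NumPy-only check of both the identity and the resulting solve (illustrative, not part of the commit):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
A = 0.5 * (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
X = rng.normal(size=(N, N))

# Row-major vectorization identity: vec(A X A^H) = (A ⊗ conj(A)) vec(X)
lhs = (A @ X @ A.conj().T).ravel()
rhs = np.kron(A, A.conj()) @ X.ravel()
assert np.allclose(lhs, rhs)

# Hence X - A X A^H = Q becomes (I - A ⊗ conj(A)) vec(X) = vec(Q),
# which is exactly the linear system the direct solver builds
AA = np.kron(A, A.conj())
Q = rng.normal(size=(N, N))
X_sol = np.linalg.solve(np.eye(N * N) - AA, Q.ravel()).reshape(N, N)
assert np.allclose(X_sol - A @ X_sol @ A.conj().T, Q)
```

Since this path uses only `kron`, `solve`, `eye`, and `reshape`, it stays inside the PyTensor graph, which is what makes the `"direct"` method differentiable and cross-compilable.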
def solve_discrete_lyapunov(
    A: "TensorLike", Q: "TensorLike", method: Literal["direct", "bilinear"] = "direct"
) -> TensorVariable:
    """Solve the discrete Lyapunov equation :math:`A X A^H - X + Q = 0`.

    Parameters
    ----------
    A
        Square matrix of shape ``N x N``; must have the same shape as `Q`.
    Q
        Square matrix of shape ``N x N``; must have the same shape as `A`.
    method
        Solver method used, one of ``"direct"`` or ``"bilinear"``. ``"direct"``
        solves the problem directly via matrix inversion. This has a pure
        PyTensor implementation and can thus be cross-compiled to supported
        backends; it should be preferred when ``N`` is not large. Because the
        direct method scales poorly with ``N``, the ``"bilinear"`` method can
        be used in those cases.

    Returns
    -------
    Square matrix of shape ``N x N``, representing the solution to the
    Lyapunov equation.
    """
    if method not in ["direct", "bilinear"]:
        raise ValueError(
            f'Parameter "method" must be one of "direct" or "bilinear", found {method}'
        )

    if method == "direct":
        return _direct_solve_discrete_lyapunov(A, Q)
    if method == "bilinear":
        return _solve_bilinear_direct_lyapunov(A, Q)
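Both methods target the same equation, so they should agree to numerical precision; only their cost and compilation properties differ. A quick SciPy-level check of that claim (illustrative, not part of the commit):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(4)
N = 5
A = 0.2 * rng.normal(size=(N, N))  # keep the spectral radius well below 1
Q = rng.normal(size=(N, N))

# "direct" stacks an N^2 x N^2 system; "bilinear" maps to a continuous
# Lyapunov problem. Both solve A X A^H - X + Q = 0 and should agree.
X_direct = solve_discrete_lyapunov(A, Q, method="direct")
X_bilinear = solve_discrete_lyapunov(A, Q, method="bilinear")
assert np.allclose(X_direct, X_bilinear)
assert np.allclose(A @ X_direct @ A.T - X_direct + Q, 0.0)
```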
def solve_continuous_lyapunov(A: "TensorLike", Q: "TensorLike") -> TensorVariable:
    """Solve the continuous Lyapunov equation :math:`A X + X A^H = Q`.

    Parameters
    ----------
    A
        Square matrix of shape ``N x N``; must have the same shape as `Q`.
    Q
        Square matrix of shape ``N x N``; must have the same shape as `A`.

    Returns
    -------
    Square matrix of shape ``N x N``, representing the solution to the
    Lyapunov equation.
    """

    return _solve_continuous_lyapunov(A, Q)
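Since `SolveContinuousLyapunov.perform` delegates straight to SciPy, the equation actually solved follows SciPy's sign convention, `A X + X A^H = Q` (Kao and Hennequin write the same equation as `A X + X A^H + Q = 0`), which is what the tests below assert. A short standalone check (illustrative, not part of the commit):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(7)
N = 5
A = rng.normal(size=(N, N)) - N * np.eye(N)  # shift the spectrum left so the solution is unique
Q = rng.normal(size=(N, N))

X = solve_continuous_lyapunov(A, Q)
assert np.allclose(A @ X + X @ A.conj().T, Q)  # scipy's convention: = Q, not + Q = 0
```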
__all__ = [
    "cholesky",
    "solve",
...
...
tests/tensor/test_slinalg.py
...
...
@@ -2,13 +2,12 @@ import functools
 import itertools

 import numpy as np
 import numpy.linalg
 import pytest
 import scipy

 import pytensor
 from pytensor import function, grad
-from pytensor import tensor as at
+from pytensor import tensor as pt
 from pytensor.configdefaults import config
 from pytensor.tensor.slinalg import (
     Cholesky,
...
...
@@ -23,6 +22,8 @@ from pytensor.tensor.slinalg import (
     expm,
     kron,
     solve,
+    solve_continuous_lyapunov,
+    solve_discrete_lyapunov,
     solve_triangular,
 )
 from pytensor.tensor.type import dmatrix, matrix, tensor, vector
...
...
@@ -155,7 +156,7 @@ def test_eigvalsh():
     # We need to test None separately, as otherwise DebugMode will
     # complain, as this isn't a valid ndarray.
     b = None
-    B = at.NoneConst
+    B = pt.NoneConst
     f = function([A], eigvalsh(A, B))
     w = f(a)
     refw = scipy.linalg.eigvalsh(a, b)
...
...
@@ -215,7 +216,7 @@ class TestSolve(utt.InferShapeTester):
         rng = np.random.default_rng(utt.fetch_seed())
         A = matrix()
         b_val = np.asarray(rng.random(b_shape), dtype=config.floatX)
-        b = at.as_tensor_variable(b_val).type()
+        b = pt.as_tensor_variable(b_val).type()
         self._compile_and_check(
             [A, b],
             [solve(A, b)],
...
...
@@ -292,7 +293,7 @@ class TestSolveTriangular(utt.InferShapeTester):
         rng = np.random.default_rng(utt.fetch_seed())
         A = matrix()
         b_val = np.asarray(rng.random(b_shape), dtype=config.floatX)
-        b = at.as_tensor_variable(b_val).type()
+        b = pt.as_tensor_variable(b_val).type()
         self._compile_and_check(
             [A, b],
             [solve_triangular(A, b)],
...
...
@@ -514,7 +515,6 @@ def test_expm_grad_3():
class TestKron(utt.InferShapeTester):
    rng = np.random.default_rng(43)

    def setup_method(self):
...
@@ -552,3 +552,67 @@ class TestKron(utt.InferShapeTester):
        b = self.rng.random(shp1).astype(config.floatX)
        out = f(a, b)
        assert np.allclose(out, np.kron(a, b))
def test_solve_discrete_lyapunov_via_direct_real():
    N = 5
    rng = np.random.default_rng(utt.fetch_seed())
    a = pt.dmatrix()
    q = pt.dmatrix()
    f = function([a, q], [solve_discrete_lyapunov(a, q, method="direct")])

    A = rng.normal(size=(N, N))
    Q = rng.normal(size=(N, N))

    X = f(A, Q)
    assert np.allclose(A @ X @ A.T - X + Q, 0.0)

    utt.verify_grad(solve_discrete_lyapunov, pt=[A, Q], rng=rng)
def test_solve_discrete_lyapunov_via_direct_complex():
    N = 5
    rng = np.random.default_rng(utt.fetch_seed())
    a = pt.zmatrix()
    q = pt.zmatrix()
    f = function([a, q], [solve_discrete_lyapunov(a, q, method="direct")])

    A = rng.normal(size=(N, N)) + rng.normal(size=(N, N)) * 1j
    Q = rng.normal(size=(N, N))
    X = f(A, Q)
    assert np.allclose(A @ X @ A.conj().T - X + Q, 0.0)

    # TODO: the .conj() method currently does not have a gradient; add this
    # test when gradients are implemented.
    # utt.verify_grad(solve_discrete_lyapunov, pt=[A, Q], rng=rng)
def test_solve_discrete_lyapunov_via_bilinear():
    N = 5
    rng = np.random.default_rng(utt.fetch_seed())
    a = pt.dmatrix()
    q = pt.dmatrix()
    f = function([a, q], [solve_discrete_lyapunov(a, q, method="bilinear")])

    A = rng.normal(size=(N, N))
    Q = rng.normal(size=(N, N))

    X = f(A, Q)
    assert np.allclose(A @ X @ A.conj().T - X + Q, 0.0)

    utt.verify_grad(solve_discrete_lyapunov, pt=[A, Q], rng=rng)
def test_solve_continuous_lyapunov():
    N = 5
    rng = np.random.default_rng(utt.fetch_seed())
    a = pt.dmatrix()
    q = pt.dmatrix()
    f = function([a, q], [solve_continuous_lyapunov(a, q)])

    A = rng.normal(size=(N, N))
    Q = rng.normal(size=(N, N))
    X = f(A, Q)

    assert np.allclose(A @ X + X @ A.conj().T, Q)

    utt.verify_grad(solve_continuous_lyapunov, pt=[A, Q], rng=rng)