testgroup / pytensor / Commits / e4800634

Commit e4800634
Authored Jan 07, 2023 by Virgile Andreani
Committed by Michael Osthege on Feb 05, 2023

    Spell-check the repository

Parent: a6e7722f

Showing 24 changed files with 42 additions and 42 deletions (+42 −42)
doc/extending/creating_a_c_op.rst                 +1 −1
doc/extending/creating_an_op.rst                  +2 −2
doc/extending/scan.rst                            +1 −1
doc/library/tensor/random/utils.rst               +1 −1
doc/scripts/docgen.py                             +1 −1
doc/tutorial/shape_info.rst                       +1 −1
pytensor/compile/function/types.py                +1 −1
pytensor/compile/profiling.py                     +2 −2
pytensor/configdefaults.py                        +2 −2
pytensor/graph/replace.py                         +1 −1
pytensor/graph/rewriting/db.py                    +1 −1
pytensor/link/numba/dispatch/elemwise_codegen.py  +1 −1
pytensor/link/numba/dispatch/scalar.py            +1 −1
pytensor/link/numba/dispatch/scan.py              +1 −1
pytensor/misc/ordered_set.py                      +1 −1
pytensor/misc/pkl_utils.py                        +1 −1
pytensor/scan/op.py                               +2 −2
pytensor/tensor/basic.py                          +2 −2
pytensor/tensor/rewriting/basic.py                +1 −1
pytensor/tensor/subtensor.py                      +2 −2
tests/tensor/random/test_op.py                    +1 −1
tests/tensor/rewriting/test_math.py               +5 −5
tests/tensor/rewriting/test_subtensor.py          +3 −3
tests/tensor/test_subtensor.py                    +7 −7
doc/extending/creating_a_c_op.rst
@@ -930,7 +930,7 @@ discussed below.
 For every input which has a :attr:`dtype` attribute (this means
 Tensors), the following macros will be
 defined unless your `Op` class has an :attr:`Op.check_input` attribute
-defined to False. In these descrptions 'i' refers to the position
+defined to False. In these descriptions 'i' refers to the position
 (indexed from 0) in the input array.
 * ``DTYPE_INPUT_{i}`` : NumPy dtype of the data in the array.
doc/extending/creating_an_op.rst
@@ -20,7 +20,7 @@ As an illustration, this tutorial will demonstrate how a simple Python-based
 .. note::
-    This is an introductury tutorial and as such it does not cover how to make
+    This is an introductory tutorial and as such it does not cover how to make
     an :class:`Op` that returns a view or modifies the values in its inputs. Thus, all
     :class:`Op`\s created with the instructions described here MUST return newly
     allocated memory or reuse the memory provided in the parameter
@@ -203,7 +203,7 @@ or :meth:`Op.make_thunk`.
 There are other methods that can be optionally defined by the :class:`Op`:
-:meth:`Op.__eq__` and :meth:`Op.__hash__` define respectivelly equality
+:meth:`Op.__eq__` and :meth:`Op.__hash__` define respectively equality
 between two :class:`Op`\s and the hash of an :class:`Op` instance.
 They will be used during the rewriting phase to merge nodes that are doing
 equivalent computations (same inputs, same operation).
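The second hunk above documents node merging: when two `Op` instances compare equal and hash alike, the rewriter can collapse nodes doing equivalent computations. A minimal pure-Python sketch of that idea (the `AddConst` class is invented for illustration, not part of PyTensor):

```python
class AddConst:
    """Toy stand-in for an Op parametrized by a constant."""

    def __init__(self, c):
        self.c = c

    def __eq__(self, other):
        # Two instances are interchangeable when they apply the same constant.
        return type(self) is type(other) and self.c == other.c

    def __hash__(self):
        # Must be consistent with __eq__ so set/dict deduplication works.
        return hash((type(self), self.c))


# Equal instances collapse to one entry, which is how a rewriter can
# merge nodes that perform the same operation on the same inputs.
ops = {AddConst(2), AddConst(2), AddConst(3)}
assert len(ops) == 2
```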
doc/extending/scan.rst
@@ -92,7 +92,7 @@ designated **inner inputs** and **inner outputs**, respectively.
 ================
 The following are the different types of variables that `Scan` has the
-capacity to handle, along with their various caracteristics.
+capacity to handle, along with their various characteristics.
 **Sequence** : A sequence is an PyTensor variable which `Scan` will iterate
 over and give sub-elements to its inner function as input. A sequence
doc/library/tensor/random/utils.rst
@@ -12,7 +12,7 @@
 Guide
 =====
-PyTensor assignes NumPy RNG states (e.g. `Generator` or `RandomState` objects) to
+PyTensor assigns NumPy RNG states (e.g. `Generator` or `RandomState` objects) to
 each `RandomVariable`. The combination of an RNG state, a specific
 `RandomVariable` type (e.g. `NormalRV`), and a set of distribution parameters
 uniquely defines the `RandomVariable` instances in a graph.
doc/scripts/docgen.py
@@ -21,7 +21,7 @@ if __name__ == '__main__':
     print(' --cache: use the doctree cache')
     print(' --rst: only compile the doc (requires sphinx)')
     print(' --nopdf: do not produce a PDF file from the doc, only HTML')
-    print(' --test: run all the code samples in the documentaton')
+    print(' --test: run all the code samples in the documentation')
     print(' --check: treat warnings as errors')
     print(' --help: this help')
     print('If one or more files are specified after the options then only ')
doc/tutorial/shape_info.rst
@@ -44,7 +44,7 @@ You can create variables with static shape information as follows:
     pytensor.tensor.tensor("float64", shape=(4, 3, 2))
-You can also pass shape infomation directly to some :class:`Op`\s, like ``RandomVariables``
+You can also pass shape information directly to some :class:`Op`\s, like ``RandomVariables``
 .. code-block:: python
pytensor/compile/function/types.py
@@ -599,7 +599,7 @@ class Function:
         # helper function
         def checkSV(sv_ori, sv_rpl):
             """
-            Assert two SharedVariable follow some restirctions:
+            Assert two SharedVariable follow some restrictions:
                 1. same type
                 2. same shape or dim?
             """
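The `checkSV` helper above asserts that a replacement shared variable is compatible with the original. A rough sketch of that kind of compatibility check, using a hypothetical `Var` record rather than PyTensor's `SharedVariable`:

```python
from collections import namedtuple

# Hypothetical stand-in for a shared variable: just a dtype and a shape.
Var = namedtuple("Var", ["dtype", "shape"])


def check_sv(sv_ori, sv_rpl):
    """Assert the replacement matches the original's type and dimensionality."""
    assert sv_ori.dtype == sv_rpl.dtype, "replacement changes the dtype"
    assert len(sv_ori.shape) == len(sv_rpl.shape), "replacement changes the ndim"


# Same dtype and number of dimensions: passes without error.
check_sv(Var("float64", (4, 3)), Var("float64", (2, 2)))
```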
pytensor/compile/profiling.py
@@ -165,7 +165,7 @@ def print_global_stats():
     print(
         (
             "Global stats: ",
-            f"Time elasped since PyTensor import = {time.perf_counter() - pytensor_imported_time:6.3f}s, "
+            f"Time elapsed since PyTensor import = {time.perf_counter() - pytensor_imported_time:6.3f}s, "
             f"Time spent in PyTensor functions = {total_fct_exec_time:6.3f}s, "
             "Time spent compiling PyTensor functions: "
             f"rewriting = {total_graph_rewrite_time:6.3f}s, linking = {total_time_linker:6.3f}s ",
@@ -768,7 +768,7 @@ class ProfileStats:
                 f" output {int(idx)}: dtype={dtype}, shape={sh}, strides={st}{off}",
                 file=file,
             )
-            # Same as before, this I've sacrificied some information making
+            # Same as before, this I've sacrificed some information making
             # the output more readable
             print(
                 " ... (remaining %i Apply instances account for "
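The first hunk formats wall-clock timings measured relative to import time. A minimal sketch of that pattern with only the standard library (the names are illustrative, not PyTensor's):

```python
import time

# Recorded once at module load, mirroring pytensor_imported_time.
_import_time = time.perf_counter()


def global_stats(total_fct_exec_time: float = 0.0) -> str:
    # Wall-clock seconds since this module was loaded.
    elapsed = time.perf_counter() - _import_time
    return (
        f"Time elapsed since import = {elapsed:6.3f}s, "
        f"Time spent in functions = {total_fct_exec_time:6.3f}s"
    )


print(global_stats(1.5))
```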
pytensor/configdefaults.py
@@ -792,7 +792,7 @@ def add_testvalue_and_checking_configvars():
         "print_test_value",
         (
             "If 'True', the __eval__ of an PyTensor variable will return its test_value "
-            "when this is available. This has the practical conseguence that, e.g., "
+            "when this is available. This has the practical consequence that, e.g., "
             "in debugging `my_var` will print the same as `my_var.tag.test_value` "
             "when a test value is defined."
         ),
@@ -1099,7 +1099,7 @@ def add_optimizer_configvars():
     config.add(
         "optdb__position_cutoff",
-        "Where to stop eariler during optimization. It represent the"
+        "Where to stop earlier during optimization. It represent the"
        " position of the optimizer where to stop.",
        FloatParam(np.inf),
        in_c_key=False,
pytensor/graph/replace.py
@@ -103,7 +103,7 @@ def graph_replace(
     # inputs do not have owners
     # this is exactly the reason to clone conditions
     equiv = {c: c.clone(name=f"i-{i}") for i, c in enumerate(conditions)}
-    # some replace keys may dissapear
+    # some replace keys may disappear
     # the reason is they are outside the graph
     # clone the graph but preserve the equiv mapping
     fg = FunctionGraph(
pytensor/graph/rewriting/db.py
@@ -198,7 +198,7 @@ class RewriteDatabaseQuery:
     Parameters
     ==========
     include:
-        A set of tags such that every rewirte obtained through this
+        A set of tags such that every rewrite obtained through this
         `RewriteDatabaseQuery` must have **one** of the tags listed. This
         field is required and basically acts as a starting point for the
         search.
pytensor/link/numba/dispatch/elemwise_codegen.py
@@ -81,7 +81,7 @@ def make_outputs(
         dtype = numba.from_dtype(np.dtype(dtype))
         arrtype = types.Array(dtype, len(iter_shape), "C")
         ar_types.append(arrtype)
-        # This is actually an interal numba function, I guess we could
+        # This is actually an internal numba function, I guess we could
         # call `numba.nd.unsafe.ndarray` instead?
         shape = [
             length if not bc_dim else one
             for length, bc_dim in zip(iter_shape, bc)
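The list comprehension at the end of this hunk swaps broadcast dimensions for a constant one when building the output shape. The same idea in plain Python (a standalone sketch, not the Numba codegen itself):

```python
def output_shape(iter_shape, bc):
    # For each dimension, keep its length unless it is flagged as
    # broadcastable, in which case the output buffer uses length 1.
    return [length if not bc_dim else 1 for length, bc_dim in zip(iter_shape, bc)]


print(output_shape([4, 3, 2], [False, True, False]))  # -> [4, 1, 2]
```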
pytensor/link/numba/dispatch/scalar.py
@@ -60,7 +60,7 @@ def numba_funcify_ScalarOp(op, node, **kwargs):
     input_inner_dtypes = None
     output_inner_dtype = None
-    # Cython functions might have an additonal argument
+    # Cython functions might have an additional argument
     has_pyx_skip_dispatch = False
     if scalar_func_path.startswith("scipy.special"):
pytensor/link/numba/dispatch/scan.py
@@ -57,7 +57,7 @@ def numba_funcify_Scan(op, node, **kwargs):
     # Apply inner rewrites
     # TODO: Not sure this is the right place to do this, should we have a rewrite that
     # explicitly triggers the optimization of the inner graphs of Scan?
-    # The C-code deffers it to the make_thunk phase
+    # The C-code defers it to the make_thunk phase
     rewriter = op.mode_instance.optimizer
     rewriter(op.fgraph)
pytensor/misc/ordered_set.py
@@ -6,7 +6,7 @@ from collections.abc import MutableSet
 def check_deterministic(iterable):
     # Most places where OrderedSet is used, pytensor interprets any exception
     # whatsoever as a problem that an optimization introduced into the graph.
-    # If I raise a TypeError when the DestoryHandler tries to do something
+    # If I raise a TypeError when the DestroyHandler tries to do something
     # non-deterministic, it will just result in optimizations getting ignored.
     # So I must use an assert here. In the long term we should fix the rest of
     # pytensor to use exceptions correctly, so that this can be a TypeError.
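The comment above explains why the guard is an assert rather than a TypeError. One plausible shape for such a guard, rejecting iterables whose iteration order is not reproducible (a sketch under that assumption, not PyTensor's actual implementation):

```python
def check_deterministic(iterable):
    # Sets iterate in hash order, which can differ between interpreter
    # runs; lists, tuples, and dicts (Python >= 3.7) preserve insertion
    # order, so they are safe to iterate deterministically.
    if iterable is not None:
        assert not isinstance(iterable, (set, frozenset)), (
            "Iterating over a set is non-deterministic; pass a list instead."
        )


check_deterministic([3, 1, 2])  # fine: order is reproducible
check_deterministic(None)       # fine: nothing to check
```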
pytensor/misc/pkl_utils.py
@@ -268,7 +268,7 @@ def load(f, persistent_load=PersistentNdarrayLoad):
     :type f: file
     :param persistent_load: The persistent loading function to use for
-        unpickling. This must be compatible with the `persisten_id` function
+        unpickling. This must be compatible with the `persistent_id` function
         used when pickling.
     :type persistent_load: callable, optional
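`persistent_id` / `persistent_load` is the standard-library pickle hook this docstring refers to: objects tagged with an id at pickling time must be resolved by a matching loader at unpickling time. A self-contained example with the stdlib `pickle` module (the `STORE` registry is invented for illustration):

```python
import io
import pickle

# External storage for objects kept out of the pickle byte stream.
STORE = {"w0": [1.0, 2.0, 3.0]}


class Saver(pickle.Pickler):
    def persistent_id(self, obj):
        # Replace registered objects by their key; returning None
        # pickles everything else as usual.
        for key, val in STORE.items():
            if obj is val:
                return key
        return None


class Loader(pickle.Unpickler):
    def persistent_load(self, pid):
        # Must mirror the persistent_id scheme used when pickling.
        return STORE[pid]


buf = io.BytesIO()
Saver(buf).dump({"weights": STORE["w0"]})
buf.seek(0)
restored = Loader(buf).load()
assert restored["weights"] is STORE["w0"]
```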
pytensor/scan/op.py
@@ -110,7 +110,7 @@ err_msg1 = (
     "that scan uses in each of its iterations. "
     "In order to solve this issue if the two variable currently "
     "have the same dimensionality, you can increase the "
-    "dimensionality of the varialbe in the initial state of scan "
+    "dimensionality of the variable in the initial state of scan "
     "by using dimshuffle or shape_padleft. "
 )
 err_msg2 = (
@@ -138,7 +138,7 @@ err_msg3 = (
     "The first dimension of this "
     "matrix corresponds to the number of previous time-steps "
     "that scan uses in each of its iterations. "
-    "In order to solve this issue if the two varialbe currently "
+    "In order to solve this issue if the two variable currently "
     "have the same dimensionality, you can increase the "
     "dimensionality of the variable in the initial state of scan "
     "by using dimshuffle or shape_padleft. "
pytensor/tensor/basic.py
@@ -647,7 +647,7 @@ def _conversion(real_value: Op, name: str) -> Op:
     return real_value
-# These _conver_to_<type> functions have leading underscores to indicate that
+# These _convert_to_<type> functions have leading underscores to indicate that
 # they should not be called directly. They do not perform sanity checks about
 # what types you are casting to what. That logic is implemented by the
 # `cast()` function below.
@@ -3844,7 +3844,7 @@ class AllocEmpty(COp):
         # False and it is set to true only in DebugMode.
         # We can't set it in the type as other make_node can reuse the type.
         # We can't set it in the variable as it isn't copied when we copy
-        # the variale. So we set it in the tag.
+        # the variable. So we set it in the tag.
         output.tag.nan_guard_mode_check = False
         return Apply(self, _shape, [output])
pytensor/tensor/rewriting/basic.py
@@ -721,7 +721,7 @@ def local_alloc_unary(fgraph, node):
 def local_cast_cast(fgraph, node):
     """cast(cast(x, dtype1), dtype2)
-    when those contrain:
+    when those constrain:
     dtype1 == dtype2
     OR the base dtype is the same (int, uint, float, complex)
     and the first cast cause an upcast.
pytensor/tensor/subtensor.py
@@ -1738,7 +1738,7 @@ class IncSubtensor(COp):
         different types of arrays.
         """
-        # Parameters of PyArrary_FromAny are:
+        # Parameters of PyArray_FromAny are:
         # array
         # dtype: we pass NULL to say any dtype is acceptable, so the existing
         # dtype will be copied
@@ -2200,7 +2200,7 @@ class AdvancedIncSubtensor1(COp):
         different types of arrays.
         """
-        # Parameters of PyArrary_FromAny are:
+        # Parameters of PyArray_FromAny are:
         # array
         # dtype: we pass NULL to say any dtype is acceptable, so the existing
         # dtype will be copied
tests/tensor/random/test_op.py
@@ -110,7 +110,7 @@ def test_RandomVariable_basics():
     rv_shape = rv._infer_shape(at.constant([]), (), [])
     assert rv_shape.equals(at.constant([], dtype="int64"))
-    # Integer-specificed `dtype`
+    # Integer-specified `dtype`
     dtype_1 = all_dtypes[1]
     rv_node = rv.make_node(None, None, 1)
     rv_out = rv_node.outputs[1]
tests/tensor/rewriting/test_math.py
@@ -132,9 +132,9 @@ rewrite_mode = get_mode(rewrite_mode)
 dimshuffle_lift = out2in(local_dimshuffle_lift)
-_stablize_rewrites = RewriteDatabaseQuery(include=["fast_run"])
-_stablize_rewrites.position_cutoff = 1.51
-_stablize_rewrites = optdb.query(_stablize_rewrites)
+_stabilize_rewrites = RewriteDatabaseQuery(include=["fast_run"])
+_stabilize_rewrites.position_cutoff = 1.51
+_stabilize_rewrites = optdb.query(_stabilize_rewrites)
 _specialize_rewrites = RewriteDatabaseQuery(include=["fast_run"])
 _specialize_rewrites.position_cutoff = 2.01
@@ -154,7 +154,7 @@ def rewrite(g, level="fast_run"):
     elif level == "specialize":
         _specialize_rewrites.rewrite(g)
     elif level == "stabilize":
-        _stablize_rewrites.rewrite(g)
+        _stabilize_rewrites.rewrite(g)
     else:
         raise ValueError(level)
     return g
@@ -2989,7 +2989,7 @@ class TestLocalErfc:
         # TODO: fix this problem: The python code upcast somewhere internally
         # some value of float32 to python float for part of its computation.
-        # That makes the c and python code generate sligtly different values
+        # That makes the c and python code generate slightly different values
         if not (config.floatX == "float32" and config.mode in ["DebugMode", "DEBUG_MODE"]):
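The renamed queries above select rewrites by tag and then stop at a position cutoff. A toy model of that selection logic in plain Python (`RewriteDatabaseQuery`'s real behavior lives in pytensor/graph/rewriting/db.py; the `db` mapping here is invented for illustration):

```python
def query(rewrites, include, position_cutoff=float("inf")):
    # rewrites: mapping of name -> (tags, position). Keep entries that
    # carry at least one requested tag and sit below the cutoff.
    return sorted(
        name
        for name, (tags, pos) in rewrites.items()
        if tags & include and pos < position_cutoff
    )


db = {
    "canonicalize": ({"fast_run"}, 1.0),
    "stabilize": ({"fast_run"}, 1.5),
    "specialize": ({"fast_run"}, 2.0),
    "experimental": ({"unsafe"}, 0.5),
}
# A cutoff of 1.51 keeps the first two fast_run passes, as in the test setup.
print(query(db, {"fast_run"}, position_cutoff=1.51))  # -> ['canonicalize', 'stabilize']
```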
tests/tensor/rewriting/test_subtensor.py
@@ -424,7 +424,7 @@ def test_local_subtensor_remove_broadcastable_index():
     # testing local_subtensor_remove_broadcastable_index optimization
     #
     # tests removing broadcastable dimensions with index 0 or -1,
-    # otherwise the optimzation should not be applied
+    # otherwise the optimization should not be applied
     mode = get_default_mode()
     mode = mode.including("local_subtensor_remove_broadcastable_index")
@@ -433,7 +433,7 @@ def test_local_subtensor_remove_broadcastable_index():
     y2 = x.dimshuffle("x", 1, 0, "x")
     y3 = x.dimshuffle("x", 1, "x", 0, "x")
-    # testing for cases that the optimzation should be applied
+    # testing for cases that the optimization should be applied
     z1 = y1[:, 0, :]
     z2 = y1[:, -1, :]
     z3 = y2[0, :, :, -1]
@@ -459,7 +459,7 @@ def test_local_subtensor_remove_broadcastable_index():
     xn = rng.random((5, 5))
     f(xn)
-    # testing for cases that the optimzation should not be applied
+    # testing for cases that the optimization should not be applied
     # to verify that other subtensor usage are passed without errors
     w1 = y1[3, 0, :]
     w2 = y1[2:4, -1, :]
tests/tensor/test_subtensor.py
@@ -1445,11 +1445,11 @@ class TestIncSubtensor:
         for do_set in [False, True]:
             if do_set:
-                resut = set_subtensor(a[sl1, sl2], increment)
+                result = set_subtensor(a[sl1, sl2], increment)
             else:
-                resut = inc_subtensor(a[sl1, sl2], increment)
+                result = inc_subtensor(a[sl1, sl2], increment)
-            f = pytensor.function([a, increment, sl2_end], resut)
+            f = pytensor.function([a, increment, sl2_end], result)
             val_a = np.ones((5, 5))
             val_inc = 2.3
@@ -1517,8 +1517,8 @@ class TestIncSubtensor:
         for method in [set_subtensor, inc_subtensor]:
-            resut = method(a[sl1, sl3, sl2], increment)
-            f = pytensor.function([a, increment, sl2_end], resut)
+            result = method(a[sl1, sl3, sl2], increment)
+            f = pytensor.function([a, increment, sl2_end], result)
             expected_result = np.copy(val_a)
             result = f(val_a, val_inc, val_sl2_end)
@@ -1531,9 +1531,9 @@ class TestIncSubtensor:
             utt.assert_allclose(result, expected_result)
             # Test when we broadcast the result
-            resut = method(a[sl1, sl2], increment)
-            f = pytensor.function([a, increment, sl2_end], resut)
+            result = method(a[sl1, sl2], increment)
+            f = pytensor.function([a, increment, sl2_end], result)
             expected_result = np.copy(val_a)
             result = f(val_a, val_inc, val_sl2_end)