Commit e4800634 authored by Virgile Andreani, committed by Michael Osthege

Spell-check the repository

Parent a6e7722f
@@ -930,7 +930,7 @@ discussed below.
 For every input which has a :attr:`dtype` attribute (this means
 Tensors), the following macros will be
 defined unless your `Op` class has an :attr:`Op.check_input` attribute
-defined to False. In these descrptions 'i' refers to the position
+defined to False. In these descriptions 'i' refers to the position
 (indexed from 0) in the input array.
 * ``DTYPE_INPUT_{i}`` : NumPy dtype of the data in the array.
......
@@ -20,7 +20,7 @@ As an illustration, this tutorial will demonstrate how a simple Python-based
 .. note::
-    This is an introductury tutorial and as such it does not cover how to make
+    This is an introductory tutorial and as such it does not cover how to make
 an :class:`Op` that returns a view or modifies the values in its inputs. Thus, all
 :class:`Op`\s created with the instructions described here MUST return newly
 allocated memory or reuse the memory provided in the parameter
@@ -203,7 +203,7 @@ or :meth:`Op.make_thunk`.
 There are other methods that can be optionally defined by the :class:`Op`:
-:meth:`Op.__eq__` and :meth:`Op.__hash__` define respectivelly equality
+:meth:`Op.__eq__` and :meth:`Op.__hash__` define respectively equality
 between two :class:`Op`\s and the hash of an :class:`Op` instance.
 They will be used during the rewriting phase to merge nodes that are doing
 equivalent computations (same inputs, same operation).
......
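The hunk above notes that `Op.__eq__` and `Op.__hash__` let the rewriting phase merge nodes doing equivalent computations. A minimal standalone sketch of that idea (a toy class, not PyTensor's actual `Op` API): objects that compare and hash equal collapse to one entry in a set, which is exactly what lets a merge pass deduplicate nodes.

```python
class AddConstant:
    """A toy 'op' parameterized only by the constant it adds."""

    def __init__(self, c):
        self.c = c

    def __eq__(self, other):
        # Equal iff same class and same parameters.
        return type(self) is type(other) and self.c == other.c

    def __hash__(self):
        # Must be consistent with __eq__ so set/dict lookups work.
        return hash((type(self), self.c))


a, b = AddConstant(5), AddConstant(5)
assert a == b and hash(a) == hash(b)
# A merge pass can therefore collapse duplicate nodes via a set or dict:
unique_ops = {a, b}
assert len(unique_ops) == 1
```

If `__eq__` is overridden without a consistent `__hash__`, equal ops may land in different hash buckets and the merge silently fails, which is why the two methods are defined together.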
@@ -92,7 +92,7 @@ designated **inner inputs** and **inner outputs**, respectively.
 ================
 The following are the different types of variables that `Scan` has the
-capacity to handle, along with their various caracteristics.
+capacity to handle, along with their various characteristics.
 **Sequence** : A sequence is an PyTensor variable which `Scan` will iterate
 over and give sub-elements to its inner function as input. A sequence
......
@@ -12,7 +12,7 @@
 Guide
 =====
-PyTensor assignes NumPy RNG states (e.g. `Generator` or `RandomState` objects) to
+PyTensor assigns NumPy RNG states (e.g. `Generator` or `RandomState` objects) to
 each `RandomVariable`. The combination of an RNG state, a specific
 `RandomVariable` type (e.g. `NormalRV`), and a set of distribution parameters
 uniquely defines the `RandomVariable` instances in a graph.
......
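The RNG-state behavior described in this hunk can be illustrated with plain NumPy (a sketch of the underlying objects only; PyTensor wraps these states in graph variables rather than exposing them like this):

```python
import numpy as np

# Two Generator objects seeded identically hold identical RNG states,
# so they produce identical draws; a different seed gives an
# independent stream.
rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)
rng_c = np.random.default_rng(7)

draws_a = rng_a.normal(loc=0.0, scale=1.0, size=3)
draws_b = rng_b.normal(loc=0.0, scale=1.0, size=3)
draws_c = rng_c.normal(loc=0.0, scale=1.0, size=3)

assert np.array_equal(draws_a, draws_b)      # same state -> same samples
assert not np.array_equal(draws_a, draws_c)  # different state -> different samples
```

This is why the combination of an RNG state, a `RandomVariable` type, and its distribution parameters pins down the variable's behavior: fix all three and the draws are reproducible.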
@@ -21,7 +21,7 @@ if __name__ == '__main__':
 print(' --cache: use the doctree cache')
 print(' --rst: only compile the doc (requires sphinx)')
 print(' --nopdf: do not produce a PDF file from the doc, only HTML')
-print(' --test: run all the code samples in the documentaton')
+print(' --test: run all the code samples in the documentation')
 print(' --check: treat warnings as errors')
 print(' --help: this help')
 print('If one or more files are specified after the options then only '
......
@@ -44,7 +44,7 @@ You can create variables with static shape information as follows:
 pytensor.tensor.tensor("float64", shape=(4, 3, 2))
-You can also pass shape infomation directly to some :class:`Op`\s, like ``RandomVariables``
+You can also pass shape information directly to some :class:`Op`\s, like ``RandomVariables``
 .. code-block:: python
......
@@ -599,7 +599,7 @@ class Function:
 # helper function
 def checkSV(sv_ori, sv_rpl):
 """
-Assert two SharedVariable follow some restirctions:
+Assert two SharedVariable follow some restrictions:
 1. same type
 2. same shape or dim?
 """
......
@@ -165,7 +165,7 @@ def print_global_stats():
 print(
 (
 "Global stats: ",
-f"Time elasped since PyTensor import = {time.perf_counter() - pytensor_imported_time:6.3f}s, "
+f"Time elapsed since PyTensor import = {time.perf_counter() - pytensor_imported_time:6.3f}s, "
 f"Time spent in PyTensor functions = {total_fct_exec_time:6.3f}s, "
 "Time spent compiling PyTensor functions: "
 f"rewriting = {total_graph_rewrite_time:6.3f}s, linking = {total_time_linker:6.3f}s ",
@@ -768,7 +768,7 @@ class ProfileStats:
 f" output {int(idx)}: dtype={dtype}, shape={sh}, strides={st}{off}",
 file=file,
 )
-# Same as before, this I've sacrificied some information making
+# Same as before, this I've sacrificed some information making
 # the output more readable
 print(
 " ... (remaining %i Apply instances account for "
......
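The elapsed-time bookkeeping in the profiling hunks above follows a common pattern: capture `time.perf_counter()` once at import, then report the delta on demand. A minimal standalone sketch under illustrative names (these are not PyTensor's real profiling functions):

```python
import time

# Captured once, at module import, as the profiling baseline.
module_imported_time = time.perf_counter()


def format_global_stats(total_fct_exec_time: float) -> str:
    """Render elapsed-since-import and time-in-functions, as in the hunk."""
    elapsed = time.perf_counter() - module_imported_time
    return (
        f"Time elapsed since import = {elapsed:6.3f}s, "
        f"Time spent in functions = {total_fct_exec_time:6.3f}s"
    )


print(format_global_stats(0.25))
```

`perf_counter` is preferred over `time.time` here because it is monotonic, so the reported deltas cannot go negative if the wall clock is adjusted.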
@@ -792,7 +792,7 @@ def add_testvalue_and_checking_configvars():
 "print_test_value",
 (
 "If 'True', the __eval__ of an PyTensor variable will return its test_value "
-"when this is available. This has the practical conseguence that, e.g., "
+"when this is available. This has the practical consequence that, e.g., "
 "in debugging `my_var` will print the same as `my_var.tag.test_value` "
 "when a test value is defined."
 ),
@@ -1099,7 +1099,7 @@ def add_optimizer_configvars():
 config.add(
 "optdb__position_cutoff",
-"Where to stop eariler during optimization. It represent the"
+"Where to stop earlier during optimization. It represent the"
 " position of the optimizer where to stop.",
 FloatParam(np.inf),
 in_c_key=False,
......
@@ -103,7 +103,7 @@ def graph_replace(
 # inputs do not have owners
 # this is exactly the reason to clone conditions
 equiv = {c: c.clone(name=f"i-{i}") for i, c in enumerate(conditions)}
-# some replace keys may dissapear
+# some replace keys may disappear
 # the reason is they are outside the graph
 # clone the graph but preserve the equiv mapping
 fg = FunctionGraph(
......
@@ -198,7 +198,7 @@ class RewriteDatabaseQuery:
 Parameters
 ==========
 include:
-A set of tags such that every rewirte obtained through this
+A set of tags such that every rewrite obtained through this
 `RewriteDatabaseQuery` must have **one** of the tags listed. This
 field is required and basically acts as a starting point for the
 search.
......
@@ -81,7 +81,7 @@ def make_outputs(
 dtype = numba.from_dtype(np.dtype(dtype))
 arrtype = types.Array(dtype, len(iter_shape), "C")
 ar_types.append(arrtype)
-# This is actually an interal numba function, I guess we could
+# This is actually an internal numba function, I guess we could
 # call `numba.nd.unsafe.ndarray` instead?
 shape = [
 length if not bc_dim else one for length, bc_dim in zip(iter_shape, bc)
......
@@ -60,7 +60,7 @@ def numba_funcify_ScalarOp(op, node, **kwargs):
 input_inner_dtypes = None
 output_inner_dtype = None
-# Cython functions might have an additonal argument
+# Cython functions might have an additional argument
 has_pyx_skip_dispatch = False
 if scalar_func_path.startswith("scipy.special"):
......
@@ -57,7 +57,7 @@ def numba_funcify_Scan(op, node, **kwargs):
 # Apply inner rewrites
 # TODO: Not sure this is the right place to do this, should we have a rewrite that
 # explicitly triggers the optimization of the inner graphs of Scan?
-# The C-code deffers it to the make_thunk phase
+# The C-code defers it to the make_thunk phase
 rewriter = op.mode_instance.optimizer
 rewriter(op.fgraph)
......
@@ -6,7 +6,7 @@ from collections.abc import MutableSet
 def check_deterministic(iterable):
 # Most places where OrderedSet is used, pytensor interprets any exception
 # whatsoever as a problem that an optimization introduced into the graph.
-# If I raise a TypeError when the DestoryHandler tries to do something
+# If I raise a TypeError when the DestroyHandler tries to do something
 # non-deterministic, it will just result in optimizations getting ignored.
 # So I must use an assert here. In the long term we should fix the rest of
 # pytensor to use exceptions correctly, so that this can be a TypeError.
......
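The determinism concern behind `OrderedSet` in the hunk above can be illustrated with a plain dict used as an insertion-ordered set (a sketch of the general technique, not PyTensor's `OrderedSet` implementation): since Python 3.7, dicts preserve insertion order, so a dict with `None` values gives set semantics with a deterministic iteration order.

```python
# A dict keyed by the elements behaves as a deterministic "ordered set":
# membership tests and deduplication like a set, but iteration follows
# insertion order rather than hash order.
ordered = dict.fromkeys(["c", "a", "b", "a"])

assert list(ordered) == ["c", "a", "b"]  # duplicates dropped, order kept
assert "a" in ordered                    # O(1) membership, as with a set
```

A plain `set` would make iteration order depend on hashing, which is exactly the nondeterminism that graph optimizations must avoid.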
@@ -268,7 +268,7 @@ def load(f, persistent_load=PersistentNdarrayLoad):
 :type f: file
 :param persistent_load: The persistent loading function to use for
-unpickling. This must be compatible with the `persisten_id` function
+unpickling. This must be compatible with the `persistent_id` function
 used when pickling.
 :type persistent_load: callable, optional
......
@@ -110,7 +110,7 @@ err_msg1 = (
 "that scan uses in each of its iterations. "
 "In order to solve this issue if the two variable currently "
 "have the same dimensionality, you can increase the "
-"dimensionality of the varialbe in the initial state of scan "
+"dimensionality of the variable in the initial state of scan "
 "by using dimshuffle or shape_padleft. "
 )
 err_msg2 = (
@@ -138,7 +138,7 @@ err_msg3 = (
 "The first dimension of this "
 "matrix corresponds to the number of previous time-steps "
 "that scan uses in each of its iterations. "
-"In order to solve this issue if the two varialbe currently "
+"In order to solve this issue if the two variable currently "
 "have the same dimensionality, you can increase the "
 "dimensionality of the variable in the initial state of scan "
 "by using dimshuffle or shape_padleft. "
......
@@ -647,7 +647,7 @@ def _conversion(real_value: Op, name: str) -> Op:
 return real_value
-# These _conver_to_<type> functions have leading underscores to indicate that
+# These _convert_to_<type> functions have leading underscores to indicate that
 # they should not be called directly. They do not perform sanity checks about
 # what types you are casting to what. That logic is implemented by the
 # `cast()` function below.
@@ -3844,7 +3844,7 @@ class AllocEmpty(COp):
 # False and it is set to true only in DebugMode.
 # We can't set it in the type as other make_node can reuse the type.
 # We can't set it in the variable as it isn't copied when we copy
-# the variale. So we set it in the tag.
+# the variable. So we set it in the tag.
 output.tag.nan_guard_mode_check = False
 return Apply(self, _shape, [output])
......
@@ -721,7 +721,7 @@ def local_alloc_unary(fgraph, node):
 def local_cast_cast(fgraph, node):
 """cast(cast(x, dtype1), dtype2)
-when those contrain:
+when those constrain:
 dtype1 == dtype2
 OR the base dtype is the same (int, uint, float, complex)
 and the first cast cause an upcast.
......
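The condition in the `local_cast_cast` docstring above can be mimicked with NumPy dtype attributes. This is a sketch under one plausible reading of that docstring, with a hypothetical helper name — it is not the rewrite's actual code: the inner cast of `cast(cast(x, dtype1), dtype2)` is removable when the dtypes are identical, or when all three share a base kind and the first cast widened `x`, so the intermediate representation lost no information.

```python
import numpy as np


def inner_cast_is_removable(x_dtype, dtype1, dtype2):
    """Hypothetical check: can cast(cast(x, dtype1), dtype2) drop the inner cast?"""
    d0, d1, d2 = (np.dtype(d) for d in (x_dtype, dtype1, dtype2))
    if d1 == d2:
        # Casting twice to the same dtype: the inner cast is redundant.
        return True
    # Same base kind (int 'i', uint 'u', float 'f', complex 'c') throughout,
    # and the first cast is an upcast, so no value was truncated in between.
    return d0.kind == d1.kind == d2.kind and d1.itemsize >= d0.itemsize


# int32 -> int64 -> int16 collapses to int32 -> int16: the widening
# intermediate changed nothing.
assert inner_cast_is_removable("int32", "int64", "int16")
# float64 -> float32 -> float16 does not: the inner downcast already
# discarded precision.
assert not inner_cast_is_removable("float64", "float32", "float16")
```

The real rewrite operates on graph nodes rather than raw dtypes, but the dtype arithmetic it relies on is the part shown here.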
@@ -1738,7 +1738,7 @@ class IncSubtensor(COp):
 different types of arrays.
 """
-# Parameters of PyArrary_FromAny are:
+# Parameters of PyArray_FromAny are:
 # array
 # dtype: we pass NULL to say any dtype is acceptable, so the existing
 # dtype will be copied
@@ -2200,7 +2200,7 @@ class AdvancedIncSubtensor1(COp):
 different types of arrays.
 """
-# Parameters of PyArrary_FromAny are:
+# Parameters of PyArray_FromAny are:
 # array
 # dtype: we pass NULL to say any dtype is acceptable, so the existing
 # dtype will be copied
......
@@ -110,7 +110,7 @@ def test_RandomVariable_basics():
 rv_shape = rv._infer_shape(at.constant([]), (), [])
 assert rv_shape.equals(at.constant([], dtype="int64"))
-# Integer-specificed `dtype`
+# Integer-specified `dtype`
 dtype_1 = all_dtypes[1]
 rv_node = rv.make_node(None, None, 1)
 rv_out = rv_node.outputs[1]
......
@@ -132,9 +132,9 @@ rewrite_mode = get_mode(rewrite_mode)
 dimshuffle_lift = out2in(local_dimshuffle_lift)
-_stablize_rewrites = RewriteDatabaseQuery(include=["fast_run"])
-_stablize_rewrites.position_cutoff = 1.51
-_stablize_rewrites = optdb.query(_stablize_rewrites)
+_stabilize_rewrites = RewriteDatabaseQuery(include=["fast_run"])
+_stabilize_rewrites.position_cutoff = 1.51
+_stabilize_rewrites = optdb.query(_stabilize_rewrites)
 _specialize_rewrites = RewriteDatabaseQuery(include=["fast_run"])
 _specialize_rewrites.position_cutoff = 2.01
@@ -154,7 +154,7 @@ def rewrite(g, level="fast_run"):
 elif level == "specialize":
 _specialize_rewrites.rewrite(g)
 elif level == "stabilize":
-_stablize_rewrites.rewrite(g)
+_stabilize_rewrites.rewrite(g)
 else:
 raise ValueError(level)
 return g
@@ -2989,7 +2989,7 @@ class TestLocalErfc:
 # TODO: fix this problem: The python code upcast somewhere internally
 # some value of float32 to python float for part of its computation.
-# That makes the c and python code generate sligtly different values
+# That makes the c and python code generate slightly different values
 if not (
 config.floatX == "float32" and config.mode in ["DebugMode", "DEBUG_MODE"]
 ):
......
@@ -424,7 +424,7 @@ def test_local_subtensor_remove_broadcastable_index():
 # testing local_subtensor_remove_broadcastable_index optimization
 #
 # tests removing broadcastable dimensions with index 0 or -1,
-# otherwise the optimzation should not be applied
+# otherwise the optimization should not be applied
 mode = get_default_mode()
 mode = mode.including("local_subtensor_remove_broadcastable_index")
@@ -433,7 +433,7 @@ def test_local_subtensor_remove_broadcastable_index():
 y2 = x.dimshuffle("x", 1, 0, "x")
 y3 = x.dimshuffle("x", 1, "x", 0, "x")
-# testing for cases that the optimzation should be applied
+# testing for cases that the optimization should be applied
 z1 = y1[:, 0, :]
 z2 = y1[:, -1, :]
 z3 = y2[0, :, :, -1]
@@ -459,7 +459,7 @@ def test_local_subtensor_remove_broadcastable_index():
 xn = rng.random((5, 5))
 f(xn)
-# testing for cases that the optimzation should not be applied
+# testing for cases that the optimization should not be applied
 # to verify that other subtensor usage are passed without errors
 w1 = y1[3, 0, :]
 w2 = y1[2:4, -1, :]
......
@@ -1445,11 +1445,11 @@ class TestIncSubtensor:
 for do_set in [False, True]:
 if do_set:
-resut = set_subtensor(a[sl1, sl2], increment)
+result = set_subtensor(a[sl1, sl2], increment)
 else:
-resut = inc_subtensor(a[sl1, sl2], increment)
+result = inc_subtensor(a[sl1, sl2], increment)
-f = pytensor.function([a, increment, sl2_end], resut)
+f = pytensor.function([a, increment, sl2_end], result)
 val_a = np.ones((5, 5))
 val_inc = 2.3
@@ -1517,8 +1517,8 @@ class TestIncSubtensor:
 for method in [set_subtensor, inc_subtensor]:
-resut = method(a[sl1, sl3, sl2], increment)
-f = pytensor.function([a, increment, sl2_end], resut)
+result = method(a[sl1, sl3, sl2], increment)
+f = pytensor.function([a, increment, sl2_end], result)
 expected_result = np.copy(val_a)
 result = f(val_a, val_inc, val_sl2_end)
@@ -1531,9 +1531,9 @@ class TestIncSubtensor:
 utt.assert_allclose(result, expected_result)
 # Test when we broadcast the result
-resut = method(a[sl1, sl2], increment)
+result = method(a[sl1, sl2], increment)
-f = pytensor.function([a, increment, sl2_end], resut)
+f = pytensor.function([a, increment, sl2_end], result)
 expected_result = np.copy(val_a)
 result = f(val_a, val_inc, val_sl2_end)
......