Commit 42e31c46 authored by ricardoV94, committed by Ricardo Vieira

Fix broken references

Parent 4e55e0ef
......@@ -26,4 +26,4 @@ some of them might be outdated though:
* :ref:`unittest` -- Tutorial on how to use unittest in testing PyTensor.
-* :ref:`sparse` -- Description of the ``sparse`` type in PyTensor.
+* :ref:`libdoc_sparse` -- Description of the ``sparse`` type in PyTensor.
......@@ -923,7 +923,7 @@ pre-defined macros. These section tags have no macros: ``init_code``,
discussed below.
* ``APPLY_SPECIFIC(str)`` which will automatically append a name
-  unique to the :ref:`Apply` node that applies the `Op` at the end
+  unique to the :ref:`apply` node that applies the `Op` at the end
of the provided ``str``. The use of this macro is discussed
further below.
......@@ -994,7 +994,7 @@ Apply node in their own names to avoid conflicts between the different
versions of the apply-specific code. The code that wasn't
apply-specific was simply defined in the ``c_support_code`` method.
-To make identifiers that include the :ref:`Apply` node name use the
+To make identifiers that include the :ref:`apply` node name use the
``APPLY_SPECIFIC(str)`` macro. In the above example, this macro is
used when defining the functions ``vector_elemwise_mult`` and
``vector_times_vector`` as well as when calling function
......
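The name mangling that ``APPLY_SPECIFIC(str)`` performs can be sketched in plain Python (an illustration only: the real macro is expanded by the C preprocessor, and the node suffix shown here is hypothetical, standing in for the unique name PyTensor generates per Apply node):

```python
# Conceptual sketch of APPLY_SPECIFIC(identifier): append a suffix
# unique to the Apply node, so two applications of the same Op do not
# produce colliding identifiers in the generated C code.
def apply_specific(identifier: str, node_suffix: str) -> str:
    """Mimic the macro: identifier -> identifier + per-node suffix."""
    return identifier + node_suffix

# Hypothetical suffix; PyTensor generates the real unique one.
suffix = "_node_2f3a"
print(apply_specific("vector_times_vector", suffix))
# -> vector_times_vector_node_2f3a
```

This is why functions such as ``vector_times_vector`` in the example above can be defined once per Apply node without clashing.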
......@@ -7,7 +7,7 @@ Creating a new :class:`Op`: Python implementation
So suppose you have looked through the library documentation and you don't see
a function that does what you want.
-If you can implement something in terms of an existing :ref:`Op`, you should do that.
+If you can implement something in terms of an existing :ref:`op`, you should do that.
Odds are your function that uses existing PyTensor expressions is short,
has no bugs, and potentially profits from rewrites that have already been
implemented.
......
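The advice above (compose existing operations rather than writing a new Op) can be illustrated with a small sketch. NumPy stands in for PyTensor expressions here, and log-sum-exp is a hypothetical example of a "new" function built purely from existing building blocks; in PyTensor the same composition would use ``pt.max``, ``pt.exp``, ``pt.sum``, and ``pt.log``:

```python
import numpy as np

# A "new" operation built from existing ones: max, exp, sum, log.
# No new Op class is needed, and the composition benefits from any
# rewrites already implemented for the constituent operations.
def logsumexp(x):
    # Shifting by the max keeps exp() from overflowing for large inputs.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

x = np.array([1.0, 2.0, 3.0])
print(logsumexp(x))
```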
......@@ -200,7 +200,7 @@ input(s)'s memory). From there, go to the previous section.
certainly lead to erroneous computations.
You can often identify an incorrect `Op.view_map` or :attr:`Op.destroy_map`
-by using :ref:`DebugMode`.
+by using :ref:`DebugMode <debugmode>`.
.. note::
Consider using :class:`DebugMode` when developing
......
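The ``Op.view_map`` and ``Op.destroy_map`` attributes discussed above are plain dictionaries mapping each output index to the list of input indices it aliases or overwrites. A minimal standalone sketch (the classes are hypothetical stand-ins, not real PyTensor Ops, which would subclass ``pytensor.graph.op.Op``):

```python
# Hypothetical stand-ins showing the shape of these attributes.
class TransposeLikeOp:
    # Output 0 is a *view* of input 0: no copy is made, so mutating
    # the output would also mutate the input's memory.
    view_map = {0: [0]}

class InplaceAddLikeOp:
    # Output 0 *destroys* input 0: it reuses and overwrites the
    # input's memory buffer.
    destroy_map = {0: [0]}

print(TransposeLikeOp.view_map, InplaceAddLikeOp.destroy_map)
```

Declaring these maps incorrectly (or omitting them) is exactly the kind of aliasing bug that :class:`DebugMode` is designed to catch.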
......@@ -197,7 +197,7 @@ Want C speed without writing C code for your new Op? You can use Numba
to generate the C code for you! Here is an `example
Op <https://gist.github.com/nouiz/5492778#file-theano_op-py>`_ doing that.
-.. _alternate_PyTensor_types:
+.. _alternate_pytensor_types:
Alternate PyTensor Types
========================
......
......@@ -83,7 +83,7 @@ Low-level objects
.. automodule:: pytensor.tensor.random.op
:members: RandomVariable, default_rng
-..automodule:: pytensor.tensor.random.type
+.. automodule:: pytensor.tensor.random.type
:members: RandomType, RandomGeneratorType, random_generator_type
.. automodule:: pytensor.tensor.random.var
......
......@@ -347,15 +347,7 @@ afterwards compile this expression to get functions,
using pseudo-random numbers is not as straightforward as it is in
NumPy, though also not too complicated.
-The way to think about putting randomness into PyTensor's computations is
-to put random variables in your graph. PyTensor will allocate a NumPy
-`RandomStream` object (a random number generator) for each such
-variable, and draw from it as necessary. We will call this sort of
-sequence of random numbers a *random stream*. *Random streams* are at
-their core shared variables, so the observations on shared variables
-hold here as well. PyTensor's random objects are defined and implemented in
-:ref:`RandomStream<libdoc_tensor_random_utils>` and, at a lower level,
-in :ref:`RandomVariable<libdoc_tensor_random_basic>`.
+The general user-facing API is documented in :ref:`RandomStream<libdoc_tensor_random_basic>`.
For a more technical explanation of how PyTensor implements random variables see :ref:`prng`.
......
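The removed passage describes a random stream as a stateful generator object held in a shared variable. That idea can be illustrated with NumPy's own generators (an analogy only; PyTensor wraps such generators in shared variables and updates them as part of the graph):

```python
import numpy as np

# A random stream is a stateful generator: each draw advances its
# internal state, and seeding it makes the whole sequence reproducible.
stream_a = np.random.default_rng(seed=42)
stream_b = np.random.default_rng(seed=42)

draws_a = stream_a.normal(size=3)
draws_b = stream_b.normal(size=3)

# Same seed -> identical sequence of draws.
print(np.allclose(draws_a, draws_b))  # True
```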