Commit 746eeac4 authored by Oriol (ZBook), committed by Brandon T. Willard

use sphinx section names

parent 6828cd5d
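The rename matters because numpydoc/Sphinx only recognizes a fixed set of plural section names such as "Notes" and "Examples"; the singular headers this commit replaces are silently ignored. A small hypothetical checker (not part of this commit) illustrating the convention:

```python
# Hypothetical helper illustrating why the rename matters: numpydoc only
# parses a fixed set of section names, so "Note" / "Example" are ignored.
NUMPYDOC_SECTIONS = {
    "Parameters", "Returns", "Yields", "Raises", "Warns", "See Also",
    "Notes", "References", "Examples", "Attributes", "Methods",
}

def nonstandard_sections(doc):
    """Return dash-underlined headers that numpydoc does not recognize."""
    lines = [line.strip() for line in doc.splitlines()]
    bad = []
    for name, underline in zip(lines, lines[1:]):
        # A section header is a line followed by a run of dashes.
        if underline and set(underline) == {"-"} and name not in NUMPYDOC_SECTIONS:
            bad.append(name)
    return bad
```

For example, `nonstandard_sections("Note\n----\ntext")` flags `"Note"`, while the renamed `"Notes"` header passes.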
......@@ -189,8 +189,8 @@ class NanGuardMode(Mode):
big_is_error : bool
If True, raise an error when a value greater than 1e10 is encountered.
-Note
-----
+Notes
+-----
We ignore the linker parameter
"""
......
......@@ -680,8 +680,8 @@ class Rebroadcast(gof.Op):
-----
Works inplace and works for CudaNdarrayType.
-Example
--------
+Examples
+--------
`Rebroadcast((0, True), (1, False))(x)` would make `x` broadcastable in
axis 0 and not broadcastable in axis 1.
......
......@@ -65,8 +65,8 @@ def debug_counter(name, every=1):
This is a utility function one may use when debugging.
-Example
--------
+Examples
+--------
debug_counter('I want to know how often I run this line')
"""
......
......@@ -866,8 +866,8 @@ def clone(i, o, copy_inputs=True, copy_orphans=None):
object
The inputs and outputs of that copy.
-Note
-----
+Notes
+-----
A constant in the ``i`` list is not an orphan, so it will be
copied depending on the ``copy_inputs`` parameter. Otherwise it
......
......@@ -393,8 +393,8 @@ class Linker(object):
operate in the same storage the fgraph uses, else independent storage
will be allocated for the function.
-Example
--------
+Examples
+--------
e = x + y
fgraph = FunctionGraph([x, y], [e])
fn = MyLinker(fgraph).make_function(inplace)
......
......@@ -187,8 +187,8 @@ class CLinkerObject(object):
Optional: Return a list of compile args recommended to compile the
code returned by other methods in this class.
-Example
--------
+Examples
+--------
return ['-ffast-math']
Compiler arguments related to headers, libraries and search paths should
......
......@@ -1748,8 +1748,8 @@ class GpuDnnPoolDesc(Op):
pad : tuple
(padX, padY) or (padX, padY, padZ)
-Note
-----
+Notes
+-----
Not used anymore. Only needed to reload old pickled files.
"""
......
......@@ -1782,8 +1782,8 @@ def verify_grad(
no_debug_ref : bool
Don't use DebugMode for the numerical gradient function.
-Note
-----
+Notes
+-----
This function does not support multiple outputs. In
tests/test_scan.py there is an experimental verify_grad that
covers that case as well by using random projections.
......@@ -2380,8 +2380,8 @@ def grad_clip(x, lower_bound, upper_bound):
>>> print(f(2.0))
[array(1.0), array(4.0)]
-Note
-----
+Notes
+-----
We register an opt in tensor/opt.py that removes the GradClip,
so it has 0 cost in the forward pass and only does work in the grad.
......
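The doctest above (gradient 1.0 but value 4.0 at x = 2.0) reflects clipping only in the backward pass. A plain-numpy sketch of that behaviour, using a hypothetical helper rather than Theano's API:

```python
import numpy as np

def square_with_clipped_grad(x, lower=-1.0, upper=1.0):
    """Forward pass returns x**2 unchanged; only the gradient 2*x is clipped."""
    value = x ** 2
    grad = np.clip(2.0 * x, lower, upper)
    return grad, value
```

At x = 2.0 this gives gradient 1.0 and value 4.0, matching the doctest.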
......@@ -1393,8 +1393,8 @@ def forced_replace(out, x, y):
x := sigmoid(wu)
forced_replace(out, x, y) := y*(1-y)
-Note
-----
+Notes
+-----
When it finds a match, it doesn't continue on the corresponding inputs.
"""
if out is None:
......
......@@ -225,8 +225,8 @@ def constant(x, name=None, ndim=None, dtype=None):
ValueError
`x` could not be expanded to have ndim dimensions.
-Note
-----
+Notes
+-----
We create a small cache of frequently used constants.
This speeds up the Merge optimization for big graphs.
We want to cache all scalars so that constants don't have to be merged as frequently.
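A hypothetical sketch of the caching idea described above (the real cache stores Theano Constant nodes; here a plain tuple stands in):

```python
# Hypothetical sketch of a small constant cache: reusing one object per
# (dtype, value) key means the Merge optimizer has fewer nodes to unify.
_constant_cache = {}

def cached_constant(value, dtype="float64"):
    key = (dtype, value)
    if key not in _constant_cache:
        _constant_cache[key] = (dtype, value)  # stand-in for a Constant node
    return _constant_cache[key]
```

Repeated calls with the same value return the identical object, so graph merging sees one node instead of many.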
......@@ -4792,8 +4792,8 @@ def shape_padright(t, n_ones=1):
def shape_padaxis(t, axis):
"""Reshape `t` by inserting 1 at the dimension `axis`.
-Example
--------
+Examples
+--------
>>> tensor = theano.tensor.tensor3()
>>> theano.tensor.shape_padaxis(tensor, axis=0)
DimShuffle{x,0,1,2}.0
......
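The numpy analogue of the `shape_padaxis` doctest above is `np.expand_dims`, which inserts a length-1 dimension at the given axis:

```python
import numpy as np

t = np.zeros((3, 4, 5))             # stands in for a tensor3
padded = np.expand_dims(t, axis=0)  # insert a length-1 axis at position 0
```

The result has shape `(1, 3, 4, 5)`, the same effect as the `DimShuffle{x,0,1,2}` shown in the doctest.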
......@@ -80,8 +80,8 @@ class DimShuffle(COp):
inplace : bool, optional
If True (default), the output will be a view of the input.
-Note
-----
+Notes
+-----
If `j = new_order[i]` is an index, the output's ith dimension
will be the input's jth dimension.
If `new_order[i]` is `x`, the output's ith dimension will
......@@ -91,8 +91,6 @@ class DimShuffle(COp):
If `input.broadcastable[i] == False` then `i` must be found in new_order.
Broadcastable dimensions, on the other hand, can be discarded.
-Note
-----
.. code-block:: python
DimShuffle((False, False, False), ['x', 2, 'x', 0, 1])
......@@ -115,8 +113,8 @@ class DimShuffle(COp):
If the tensor has shape (1, 20), the resulting tensor will have shape
(20, ).
-Example
--------
+Examples
+--------
.. code-block:: python
DimShuffle((), ['x']) # make a 0d (scalar) into a 1d vector
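DimShuffle's semantics can be sketched in plain numpy (a hypothetical helper that ignores the broadcastable-pattern argument and the ability to drop broadcastable dimensions): integers in `new_order` permute the input's axes, and each `'x'` inserts a new length-1 axis.

```python
import numpy as np

def dimshuffle(a, new_order):
    # Hypothetical numpy sketch of DimShuffle: permute the kept axes,
    # then insert a length-1 axis wherever new_order says 'x'.
    kept = [j for j in new_order if j != "x"]
    out = np.transpose(a, kept)
    for i, j in enumerate(new_order):
        if j == "x":
            out = np.expand_dims(out, i)
    return out
```

For example, `dimshuffle(np.zeros((2, 3, 4)), ["x", 2, "x", 0, 1])` has shape `(1, 4, 1, 2, 3)`, and `dimshuffle(np.array(5.0), ["x"])` turns a 0d scalar into a 1d vector, matching the patterns in the docstring.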
......@@ -399,8 +397,8 @@ class Elemwise(OpenMPOp):
variable number of inputs), whereas the numpy function may
not have varargs.
-Note
-----
+Notes
+-----
| Elemwise(add) represents + on tensors (x + y)
| Elemwise(add, {0 : 0}) represents the += operation (x += y)
| Elemwise(add, {0 : 1}) represents += on the second argument (y += x)
......@@ -1330,8 +1328,8 @@ class CAReduce(Op):
- List of dimensions that we want to reduce
- If None, all dimensions are reduced
-Note
-----
+Notes
+-----
.. code-block:: python
CAReduce(add) # sum (ie, acts like the numpy sum operation)
......
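The `CAReduce(add)` example above acting like numpy's sum can be illustrated directly with numpy's own reduction machinery:

```python
import numpy as np
from functools import reduce

# CAReduce folds a commutative/associative binary op over the reduced
# axes; with `add` and no axis list it behaves like a full sum.
a = np.arange(6).reshape(2, 3)
full = reduce(np.add, a.ravel())     # fold over every element
per_axis = np.add.reduce(a, axis=0)  # reduce only axis 0
```

Here `full` is 15 (the sum of 0..5) and `per_axis` is `[3, 5, 7]`, i.e. the column sums.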
......@@ -303,8 +303,6 @@ solve = Solve()
Solves the equation ``a x = b`` for x, where ``a`` is a matrix and
``b`` can be either a vector or a matrix.
-Note
Parameters
----------
a : `(M, M) symbolic matrix`
......