testgroup / pytensor / Commits / ae36be01

Commit ae36be01
Authored by Frédéric Bastien on Dec 10, 2016
Committed by GitHub on Dec 10, 2016
Merge pull request #5331 from gvtulder/f-sphinx-latex_font_size
Sphinx doc/conf: latex_font_size is deprecated
Parents: 60c75959, 2a6b7d08

Showing 11 changed files with 32 additions and 20 deletions (+32 -20)
doc/conf.py                          +9 -7
doc/library/tensor/basic.txt         +1 -1
theano/gpuarray/dnn.py               +3 -0
theano/gradient.py                   +5 -5
theano/printing.py                   +1 -1
theano/sandbox/cuda/dnn.py           +1 -0
theano/tensor/extra_ops.py           +6 -1
theano/tensor/nnet/abstract_conv.py  +1 -1
theano/tensor/nnet/nnet.py           +1 -1
theano/tensor/slinalg.py             +3 -3
theano/tensor/var.py                 +1 -0
doc/conf.py
@@ -222,11 +222,16 @@ def linkcode_resolve(domain, info):
 # Options for LaTeX output
 # ------------------------
 
-# The paper size ('letter' or 'a4').
-#latex_paper_size = 'letter'
+latex_elements = {
+    # The paper size ('letter' or 'a4').
+    #latex_paper_size = 'letter',
 
-# The font size ('10pt', '11pt' or '12pt').
-latex_font_size = '11pt'
+    # The font size ('10pt', '11pt' or '12pt').
+    'pointsize': '11pt',
+
+    # Additional stuff for the LaTeX preamble.
+    #latex_preamble = '',
+}
 
 # Grouping the document tree into LaTeX files. List of tuples
 # (source start file, target name, title, author, document class
@@ -245,9 +250,6 @@ latex_logo = 'images/theano_logo_allblue_200x46.png'
 # not chapters.
 #latex_use_parts = False
 
-# Additional stuff for the LaTeX preamble.
-#latex_preamble = ''
-
 # Documents to append as an appendix to all manuals.
 #latex_appendices = []
 
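The doc/conf.py change above moves the deprecated standalone `latex_font_size` option into the `latex_elements` dict that current Sphinx expects. A minimal runnable sketch of the resulting configuration fragment (values taken from the diff; this is not the full `doc/conf.py`):

```python
# Sketch of the Sphinx LaTeX configuration after this change.
# The standalone option `latex_font_size = '11pt'` is deprecated;
# Sphinx reads the font size from the 'pointsize' key of
# `latex_elements` instead.
latex_elements = {
    # The font size ('10pt', '11pt' or '12pt').
    'pointsize': '11pt',
}
```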
doc/library/tensor/basic.txt
@@ -1582,7 +1582,7 @@ Linear Algebra
     :param Y: right term
     :type X: symbolic tensor
     :type Y: symbolic tensor
-    :rtype: symbolic matrix or vector
+    :rtype: `symbolic matrix or vector`
     :return: the inner product of `X` and `Y`.
 
 .. function:: outer(X, Y)
theano/gpuarray/dnn.py
@@ -945,6 +945,7 @@ def dnn_conv(img, kerns, border_mode='valid', subsample=(1, 1),
         and 'float64'. Default is the value of
         :attr:`config.dnn.conv.precision`.
 
+
     .. warning:: The cuDNN library only works with GPUs that have a compute
         capability of 3.0 or higer. This means that older GPUs will not
         work with this Op.
@@ -1064,6 +1065,7 @@ def dnn_conv3d(img, kerns, border_mode='valid', subsample=(1, 1, 1),
         and 'float64'. Default is the value of
         :attr:`config.dnn.conv.precision`.
 
+
     .. warning:: The cuDNN library only works with GPUs that have a compute
         capability of 3.0 or higer. This means that older GPUs will not
         work with this Op.
@@ -1497,6 +1499,7 @@ def dnn_pool(img, ws, stride=None, mode='max', pad=None):
         (padX, padY) or (padX, padY, padZ)
         default: (0, 0) or (0, 0, 0)
 
+
     .. warning:: The cuDNN library only works with GPU that have a compute
         capability of 3.0 or higer. This means that older GPU will not
         work with this Op.
theano/gradient.py
@@ -173,7 +173,7 @@ def Rop(f, wrt, eval_points):
         described by `f`
     :type eval_points: Variable or list of Variables
         evalutation points for each of the variables in `wrt`
-    :rtype: Variable or list/tuple of Variables depending on type of f
+    :rtype: :class:`~theano.gof.Variable` or list/tuple of Variables depending on type of f
     :return: symbolic expression such that
         R_op[i] = sum_j ( d f[i] / d wrt[j]) eval_point[j]
         where the indices in that expression are magic multidimensional
@@ -320,7 +320,7 @@ def Lop(f, wrt, eval_points, consider_constant=None,
     :type eval_points: Variable or list of Variables
         evalutation points for each of the variables in `f`
-    :rtype: Variable or list/tuple of Variables depending on type of f
+    :rtype: :class:`~theano.gof.Variable` or list/tuple of Variables depending on type of f
     :return: symbolic expression such that
         L_op[i] = sum_i ( d f[i] / d wrt[j]) eval_point[i]
         where the indices in that expression are magic multidimensional
@@ -372,10 +372,10 @@ def grad(cost, wrt, consider_constant=None,
     Parameters
     ----------
-    cost : scalar (0-dimensional) tensor variable or None
+    cost : :class:`~theano.gof.Variable` scalar (0-dimensional) tensor variable or None
         Value with respect to which we are differentiating. May be
         `None` if known_grads is provided.
-    wrt : variable or list of variables
+    wrt : :class:`~theano.gof.Variable` or list of Variables
         term[s] for which we want gradients
     consider_constant : list of variables
         expressions not to backpropagate through
@@ -646,7 +646,7 @@ def subgraph_grad(wrt, end, start=None, cost=None, details=False):
         to the variables in `end` (they are used as known_grad in
         theano.grad).
-    :type cost: scalar (0-dimensional) variable
+    :type cost: :class:`~theano.gof.Variable` scalar (0-dimensional) variable
     :param cost:
         Additional costs for which to compute the gradients. For
         example, these could be weight decay, an l1 constraint, MSE,
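The theano/gradient.py hunks above swap plain type names for Sphinx cross-reference roles. A small sketch of the convention (the function name here is hypothetical; only the role syntax comes from the diff):

```python
def rop_sketch(f, wrt, eval_points):
    """Hypothetical docstring illustrating the cross-reference style.

    :type f: :class:`~theano.gof.Variable` or list of Variables
    :param f: expression whose R-operator we want

    The ``~`` prefix makes Sphinx link the full dotted path
    ``theano.gof.Variable`` while rendering only the short name
    ``Variable`` in the built documentation.
    """
    raise NotImplementedError("illustration only")
```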
theano/printing.py
@@ -60,7 +60,7 @@ def debugprint(obj, depth=-1, print_type=False,
                used_ids=None):
     """Print a computation graph as text to stdout or a file.
 
-    :type obj: Variable, Apply, or Function instance
+    :type obj: :class:`~theano.gof.Variable`, Apply, or Function instance
     :param obj: symbolic thing to print
     :type depth: integer
     :param depth: print graph to this depth (-1 for unlimited)
theano/sandbox/cuda/dnn.py
@@ -2029,6 +2029,7 @@ def dnn_pool(img, ws, stride=None, mode='max', pad=None):
         pad_d is the number of zero-valued pixels added to each of the front
         and back borders (3D pooling only).
 
+
     .. warning:: The cuDNN library only works with GPU that have a compute
         capability of 3.0 or higer. This means that older GPU will not
         work with this Op.
theano/tensor/extra_ops.py
@@ -358,6 +358,7 @@ def cumsum(x, axis=None):
         The axis along which the cumulative sum is computed.
         The default (None) is to compute the cumsum over the flattened array.
 
+
     .. versionadded:: 0.7
 
     """
@@ -483,6 +484,7 @@ def cumprod(x, axis=None):
         The axis along which the cumulative product is computed.
         The default (None) is to compute the cumprod over the flattened array.
 
+
     .. versionadded:: 0.7
 
     """
@@ -554,6 +556,7 @@ def diff(x, n=1, axis=-1):
     axis
         The axis along which the difference is taken, default is the last axis.
 
+
     .. versionadded:: 0.6
 
     """
@@ -582,6 +585,7 @@ def bincount(x, weights=None, minlength=None, assert_nonneg=False):
         every input x is nonnegative.
         Optional.
 
+
     .. versionadded:: 0.6
 
     """
@@ -788,7 +792,8 @@ def repeat(x, repeats, axis=None):
     ----------
     x
         Input data, tensor variable.
-    repeats : int, scalar or tensor variable
+    repeats
+        int, scalar or tensor variable
     axis : int, optional
 
     See Also
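The extra_ops.py hunks each add a single line near a reST directive. A sketch of the layout rule they appear to enforce (the function is hypothetical; this is an assumed reading of the hunks, not the committed code):

```python
# Assumed rationale for the +1 hunks above: reST directives such as
# ``.. versionadded::`` should be separated from the preceding paragraph
# by a blank line, otherwise Sphinx can mis-attach them or emit warnings.
def cumsum_doc_sketch(x, axis=None):
    """Hypothetical docstring with the corrected layout.

    axis
        The axis along which the cumulative sum is computed.

    .. versionadded:: 0.7
    """
    return x
```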
theano/tensor/nnet/abstract_conv.py
@@ -864,7 +864,7 @@ def bilinear_upsampling(input,
         mini-batch of feature map stacks, of shape (batch size,
         input channels, input rows, input columns) that will be upsampled.
-    ratio: int or Constant or Scalar Tensor of int* dtype
+    ratio: `int or Constant or Scalar Tensor of int* dtype`
         the ratio by which the input is upsampled in the 2D space (row and
         col size).
theano/tensor/nnet/nnet.py
@@ -2181,7 +2181,7 @@ def relu(x, alpha=0):
     ----------
     x : symbolic tensor
         Tensor to compute the activation function for.
-    alpha : scalar or tensor, optional
+    alpha : `scalar or tensor, optional`
         Slope for negative input, usually between 0 and 1. The default value
         of 0 will lead to the standard rectifier, 1 will lead to
         a linear activation function, and any value in between will give a
theano/tensor/slinalg.py
@@ -278,15 +278,15 @@ Note
     Parameters
     ----------
-    a : (M, M) symbolix matrix
+    a : `(M, M) symbolix matrix`
         A square matrix
-    b : (M,) or (M, N) symbolic vector or matrix
+    b : `(M,) or (M, N) symbolic vector or matrix`
         Right hand side matrix in ``a x = b``
 
     Returns
     -------
-    x : (M, ) or (M, N) symbolic vector or matrix
+    x : `(M, ) or (M, N) symbolic vector or matrix`
         x will have the same shape as b
     """
     # lower and upper triangular solves
theano/tensor/var.py
@@ -314,6 +314,7 @@ class _tensor_py_operators(object):
             The length of the shape. Passing None here means for
             Theano to try and guess the length of `shape`.
 
+
         .. warning:: This has a different signature than numpy's
             ndarray.reshape!
             In numpy you do not need to wrap the shape arguments
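The warning in the var.py hunk refers to the signature difference between Theano's reshape (the shape must be wrapped in one sequence) and numpy's `ndarray.reshape` (which also accepts the dimensions as separate integers). A hypothetical stdlib-only helper sketches the numpy-style argument normalization that Theano's symbolic reshape does not perform:

```python
def normalize_shape(*args):
    """Hypothetical helper mimicking numpy-style reshape arguments.

    numpy's ndarray.reshape accepts either a.reshape((2, 3)) or
    a.reshape(2, 3); Theano's reshape (per the warning above) accepts
    only the single-sequence form.
    """
    if len(args) == 1 and isinstance(args[0], (tuple, list)):
        return tuple(args[0])  # sequence form: reshape((2, 3))
    return args                # separate-ints form: reshape(2, 3)
```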