Commit a116149c authored by Simon Lefrancois, committed by GitHub

Merge pull request #5031 from nouiz/doc

Doc
@@ -132,6 +132,7 @@ Mehdi Mirza <memirzamo@gmail.com> memimo <memirzamo@gmail.com>
Moslem Kazemi <moslemk@gmail.com> Moslem Kazemi <moslemk@users.noreply.github.com>
Moslem Kazemi <moslemk@gmail.com> Mo <moslemk@gmail.com>
Nicolas Ballas <ballas.n@gmail.com> Kcub <ballas@lrde.epita.fr>
Nicolas Ballas <ballas.n@gmail.com> ballasn <ballas.n@gmail.com>
Nicolas Boulanger-Lewandowski <nicolas_boulanger@hotmail.com> boulanni <nicolas_boulanger@hotmail.com>
Nicolas Pinto <pinto@alum.mit.edu> Nicolas Pinto <nicolas.pinto@gmail.com>
Olivier Breuleux <breuleux@gmail.com> Olivier Breuleux <breuleuo@iro.umontreal.ca>
@@ -171,6 +172,7 @@ Sebastian Berg <sebastian@sipsolutions.net> seberg <sebastian@sipsolutions.net>
Sebastien Jean <jeasebas@iro.umontreal.ca> sebastien <jeasebas@iro.umontreal.ca>
Sebastien Jean <jeasebas@iro.umontreal.ca> sebastien-j <jeasebas@iro.umontreal.ca>
Sebastien Jean <jeasebas@iro.umontreal.ca> sebastien-j <sebastien.jean@mail.mcgill.ca>
Simon Lefrancois <simon.lefrancois@umontreal.ca> slefrancois <simon.lefrancois@umontreal.ca>
Sina Honari <honaris@iro.umontreal.ca> SinaHonari <sina2222@gmail.com>
Sina Honari <honaris@iro.umontreal.ca> Sina Honari <honaris@eos21.iro.umontreal.ca>
Søren Kaae Sønderby <skaaesonderby@gmail.com> skaae <skaaesonderby@gmail.com>
Diff collapsed.
@@ -150,7 +150,7 @@ Functions
.. automodule:: theano.sandbox.cuda.dnn
:noindex:
:members: dnn_conv, dnn_pool
:members: dnn_conv, dnn_pool, dnn_conv3d, dnn_gradweight, dnn_gradinput, dnn_pool, dnn_batch_normalization_train, dnn_batch_normalization_test
Convolution Ops
===============
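Among the functions newly added to the docs above are the cuDNN batch-normalization ops. Their core math can be illustrated with a plain NumPy sketch (a simplified per-feature version for intuition only; the real `dnn_batch_normalization_train` / `dnn_batch_normalization_test` ops are symbolic GPU ops with their own exact signatures, which this does not reproduce):

```python
import numpy as np

def bn_train(x, gamma, beta, eps=1e-4):
    # Training mode: normalize with the current minibatch's
    # per-feature mean and variance, then scale and shift.
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta, mean, var

def bn_test(x, gamma, beta, mean, var, eps=1e-4):
    # Inference mode: normalize with precomputed (e.g. running)
    # statistics instead of batch statistics.
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

With `gamma = 1`, `beta = 0`, the training-mode output has (approximately) zero mean and unit variance per feature, which is the point of the transform.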
@@ -18,10 +18,12 @@
- Others
- :func:`softplus`
- :func:`softmax`
- :func:`softsign`
- :func:`relu() <theano.tensor.nnet.relu>`
- :func:`binary_crossentropy`
- :func:`.categorical_crossentropy`
- :func:`h_softmax() <theano.tensor.nnet.h_softmax>`
- :func:`confusion_matrix <theano.tensor.nnet.confusion_matrix>`
.. function:: sigmoid(x)
@@ -111,6 +113,12 @@
W = T.dmatrix('W')
y = T.nnet.softplus(T.dot(W,x) + b)
.. function:: softsign(x)
Return the elemwise softsign activation function
:math:`\\varphi(\\mathbf{x}) = \\frac{x}{1+|x|}`
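The softsign entry above is easy to sketch in NumPy using its standard definition x / (1 + |x|) (an illustrative sketch only, not Theano's symbolic elemwise op):

```python
import numpy as np

def softsign(x):
    # Elementwise softsign: x / (1 + |x|),
    # smoothly squashing values into the open interval (-1, 1).
    x = np.asarray(x, dtype=float)
    return x / (1.0 + np.abs(x))
```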
.. function:: softmax(x)
Returns the softmax function of x:
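The softmax function documented in this section can likewise be sketched in NumPy; subtracting the row maximum before exponentiating is the usual trick to avoid overflow (again a sketch for intuition, not Theano's graph op):

```python
import numpy as np

def softmax(x):
    # Row-wise softmax; subtracting the per-row max keeps exp() from
    # overflowing without changing the result (it cancels in the ratio).
    x = np.asarray(x, dtype=float)
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```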
@@ -1221,14 +1221,14 @@ def dnn_conv3d(img, kerns, border_mode='valid', subsample=(1, 1, 1),
:param algo: convolution implementation to use. Only 'none' is implemented
for the conv3d. Default is the value of
:attr:`config.dnn.conv.algo_fwd`.
:param precision : dtype in which the computation of the convolution
:param precision: dtype in which the computation of the convolution
should be done. Possible values are 'as_input_f32', 'as_input',
'float16', 'float32' and 'float64'. Default is the value of
:attr:`config.dnn.conv.precision`.
:warning: The cuDNN library only works with GPU that have a compute
capability of 3.0 or higer. This means that older GPU will not
work with this Op.
capability of 3.0 or higer. This means that older GPU will not
work with this Op.
:warning: dnn_conv3d only works with cuDNN library 3.0
"""
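To ground the `dnn_conv3d` docstring above, here is a minimal NumPy reference for a 'valid' 3D convolution with subsampling. Assumptions (mine, not stated in the diff): `img` has layout `(batch, channels, d, h, w)`, `kerns` has layout `(nfilt, channels, kd, kh, kw)`, and the operation is cross-correlation (no kernel flip). The real op runs on GPU through cuDNN; this loop version only documents the arithmetic:

```python
import numpy as np

def conv3d_valid(img, kerns, subsample=(1, 1, 1)):
    # 'valid' 3D cross-correlation over (batch, channels, d, h, w) input
    # with (nfilt, channels, kd, kh, kw) filters.
    b, c, D, H, W = img.shape
    n, c2, kd, kh, kw = kerns.shape
    assert c == c2, "input and filter channel counts must match"
    od, oh, ow = D - kd + 1, H - kh + 1, W - kw + 1
    out = np.empty((b, n, od, oh, ow), dtype=img.dtype)
    for z in range(od):
        for y in range(oh):
            for x in range(ow):
                patch = img[:, :, z:z + kd, y:y + kh, x:x + kw]
                # contract over channels and the three spatial kernel axes
                out[:, :, z, y, x] = np.tensordot(
                    patch, kerns, axes=([1, 2, 3, 4], [1, 2, 3, 4]))
    # subsampling with stride s is equivalent to slicing the full
    # 'valid' output with step s along each spatial axis
    sd, sh, sw = subsample
    return out[:, :, ::sd, ::sh, ::sw]
```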