Commit d63ad8d5 authored by Olivier Delalleau

Typo fixes

parent e2c09157
@@ -15,8 +15,8 @@
 :Parameters: *x* - symbolic Tensor (or compatible)
 :Return type: same as x
 :Returns: element-wise sigmoid: :math:`sigmoid(x) = \frac{1}{1 + \exp(-x)}`.
-:note: see :func:`ultra_fast_sigmoid` or :func:`hard_sigmoid` for faster version.
-    Speed comparison for 100M float64 element on a Core2 Duo @ 3.16 GHz.
+:note: see :func:`ultra_fast_sigmoid` or :func:`hard_sigmoid` for faster versions.
+    Speed comparison for 100M float64 elements on a Core2 Duo @ 3.16 GHz:
     - hard_sigmoid: 1.0s
     - ultra_fast_sigmoid: 1.3s
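As a plain reference for the formula documented in the hunk above, here is a minimal NumPy version of the element-wise logistic sigmoid (a sketch for illustration, not Theano's implementation):

```python
import numpy as np

def sigmoid(x):
    """Element-wise logistic sigmoid: 1 / (1 + exp(-x))."""
    x = np.asarray(x, dtype=np.float64)
    return 1.0 / (1.0 + np.exp(-x))

# The sigmoid is centered on 0.5 at x = 0.
print(sigmoid(0.0))  # 0.5
```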
@@ -44,15 +44,15 @@
 :Parameters: *x* - symbolic Tensor (or compatible)
 :Return type: same as x
 :Returns: approximated element-wise sigmoid: :math:`sigmoid(x) = \frac{1}{1 + \exp(-x)}`.
-:note: To automatically change all :func:`sigmoid` op to this version, use
+:note: To automatically change all :func:`sigmoid` ops to this version, use
     the Theano optimization ``local_ultra_fast_sigmoid``. This can be done
     with the Theano flag ``optimizer_including=local_ultra_fast_sigmoid``.
-    This optimization is done late, so it shouldn't affect
+    This optimization is done late, so it should not affect
     stabilization optimization.
 .. note:: The underlying code will return 0.00247262315663 as the
     minimum value and 0.997527376843 as the maximum value. So it
-    never return 0 or 1.
+    never returns 0 or 1.
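To the printed precision, the two bound constants quoted in the note are the exact sigmoid evaluated at -6 and +6, suggesting the fast approximation saturates outside [-6, 6]. This is an observation from the numbers, not a claim made in the source; a quick NumPy check:

```python
import numpy as np

def exact_sigmoid(x):
    """Exact logistic sigmoid, for comparison with the quoted bounds."""
    return 1.0 / (1.0 + np.exp(-x))

# The min/max values quoted for ultra_fast_sigmoid match sigmoid(+/-6).
print(exact_sigmoid(-6.0))  # ~0.00247262315663
print(exact_sigmoid(6.0))   # ~0.997527376843
```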
@@ -63,10 +63,10 @@
 :Parameters: *x* - symbolic Tensor (or compatible)
 :Return type: same as x
 :Returns: approximated element-wise sigmoid: :math:`sigmoid(x) = \frac{1}{1 + \exp(-x)}`.
-:note: To automatically change all :func:`sigmoid` op to this version, use
+:note: To automatically change all :func:`sigmoid` ops to this version, use
     the Theano optimization ``local_hard_sigmoid``. This can be done
     with the Theano flag ``optimizer_including=local_hard_sigmoid``.
-    This optimization is done late, so it shouldn't affect
+    This optimization is done late, so it should not affect
     stabilization optimization.
 .. note:: The underlying code will return an exact 0 or 1 if an
@@ -98,7 +98,7 @@ for i in xrange(750):
 // We block to keep the data in l1
 // normal l1 size = 32k: 32k/2(input + output)/8(nb bytes of double)=2k
 // We stay bellow the 2k limit to let space for
-// This is faster then the not blocking version
+// This is faster than the not blocking version
 for(int i=0;i<n;i+=2048){
     npy_intp nb = (n-i<2048)?n-i:2048;
     for(int j=0;j<nb;j++){
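The C hunk above walks the array in blocks of 2048 doubles so that one block of input plus one of output (16 KB + 16 KB) fits in a 32 KB L1 cache. The same traversal pattern, sketched in Python for illustration (`blocked_apply` and the block size are illustrative, not names from the source):

```python
import numpy as np

def blocked_apply(fn, x, block=2048):
    """Apply fn to a 1-D array in contiguous blocks, mirroring the
    blocked C loop: nb = min(block, n - i) per iteration."""
    out = np.empty_like(x)
    n = x.shape[0]
    for i in range(0, n, block):
        nb = min(block, n - i)
        out[i:i + nb] = fn(x[i:i + nb])
    return out
```

In pure NumPy the blocking buys nothing, since each call already streams through memory; the point of the C version is that a block's input and output stay resident in L1 between the read and the write.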
@@ -261,16 +261,16 @@ theano.compile.optdb['uncanonicalize'].register("local_ultra_fast_sigmoid",
 def hard_sigmoid(x):
     """An approximation of sigmoid.
-    More approximate and faster then ultra_fast_sigmoid.
+    More approximate and faster than ultra_fast_sigmoid.
     Approx in 3 parts: 0, scaled linear, 1
-    Removing the slop and shift don't make it faster.
+    Removing the slope and shift does not make it faster.
     """
-    slop = 0.2
+    slope = 0.2
     shift = 0.5
-    x = (x * 0.2) + shift
+    x = (x * slope) + shift
     x = tensor.clip(x, 0, 1)
     return x
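The corrected function above maps `x` to `0.2 * x + 0.5` and clips the result to [0, 1], so the three parts are: 0 for `x <= -2.5`, the scaled line in between, and 1 for `x >= 2.5`. A NumPy rendering of the same computation (a sketch following the diff, not the Theano op itself):

```python
import numpy as np

def hard_sigmoid(x):
    """Piecewise-linear sigmoid: 0 below x = -2.5, 1 above x = 2.5,
    and the line slope*x + shift in between."""
    slope = 0.2
    shift = 0.5
    return np.clip(np.asarray(x) * slope + shift, 0.0, 1.0)

print(hard_sigmoid(np.array([-10.0, 0.0, 10.0])))  # [0.  0.5 1. ]
```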