Commit aab3f35f authored by Frederic Bastien

Fix docstring

Parent 9b1d9969
@@ -2676,10 +2676,12 @@ def dnn_batch_normalization_train(inputs, gamma, beta, mode='per-activation',
     For 4d tensors, returned values are equivalent to:
-    >>> axes = 0 if mode == 'per-activation' else (0, 2, 3)
-    >>> mean = inputs.mean(axes, keepdims=True)
-    >>> stdinv = T.inv(T.sqrt(inputs.var(axes, keepdims=True) + epsilon))
-    >>> out = (inputs - mean) * gamma * stdinv + beta
+    .. code-block:: python
+
+        axes = 0 if mode == 'per-activation' else (0, 2, 3)
+        mean = inputs.mean(axes, keepdims=True)
+        stdinv = T.inv(T.sqrt(inputs.var(axes, keepdims=True) + epsilon))
+        out = (inputs - mean) * gamma * stdinv + beta
     """
     ndim = inputs.ndim
     if ndim > 4:
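The training-mode formula in this docstring can be sanity-checked outside Theano. The sketch below is a hypothetical NumPy rendering of the same computation, with `T.inv` and `T.sqrt` replaced by their NumPy equivalents; the input shape and `epsilon` value are illustrative assumptions, not taken from the commit.

```python
import numpy as np

# Hypothetical NumPy sketch of the per-activation training-mode
# normalization described in the docstring above.
rng = np.random.default_rng(0)
inputs = rng.normal(size=(8, 3, 4, 4))   # (batch, channels, h, w), assumed shape
gamma = np.ones((1, 3, 4, 4))            # scale, broadcast over the batch axis
beta = np.zeros((1, 3, 4, 4))            # shift, broadcast over the batch axis
epsilon = 1e-4                           # small stabilizer, assumed value

mode = 'per-activation'
axes = 0 if mode == 'per-activation' else (0, 2, 3)
mean = inputs.mean(axis=axes, keepdims=True)
# stdinv = 1 / sqrt(var + epsilon), the NumPy analogue of T.inv(T.sqrt(...))
stdinv = 1.0 / np.sqrt(inputs.var(axis=axes, keepdims=True) + epsilon)
out = (inputs - mean) * gamma * stdinv + beta

# With gamma=1 and beta=0, the batch mean of each activation is ~0
assert np.allclose(out.mean(axis=0), 0.0, atol=1e-6)
```

In per-activation mode the statistics are reduced over the batch axis only, so every spatial position keeps its own mean and variance; spatial mode additionally pools over the height and width axes.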
@@ -2742,10 +2744,12 @@ def dnn_batch_normalization_test(inputs, gamma, beta, mean, var,
     For 4d tensors, the returned value is equivalent to:
-    >>> axes = (0,) if mode == 'per-activation' else (0, 2, 3)
-    >>> gamma, beta, mean, var = (T.addbroadcast(t, *axes)
-    ...                           for t in (gamma, beta, mean, var))
-    >>> out = (inputs - mean) * gamma / T.sqrt(var + epsilon) + beta
+    .. code-block:: python
+
+        axes = (0,) if mode == 'per-activation' else (0, 2, 3)
+        gamma, beta, mean, var = (T.addbroadcast(t, *axes)
+                                  for t in (gamma, beta, mean, var))
+        out = (inputs - mean) * gamma / T.sqrt(var + epsilon) + beta
     """
     ndim = inputs.ndim
     if ndim > 4:
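The inference-mode formula can likewise be sketched in NumPy. `T.addbroadcast` has no NumPy counterpart because NumPy already broadcasts size-1 axes automatically; the shapes and the stand-in statistics below are illustrative assumptions (in practice `mean` and `var` would be running estimates collected during training).

```python
import numpy as np

# Hypothetical NumPy sketch of inference-mode (spatial) normalization.
rng = np.random.default_rng(1)
inputs = rng.normal(size=(8, 3, 4, 4))   # (batch, channels, h, w), assumed shape
mode = 'spatial'                          # one statistic per channel
shape = (1, 3, 1, 1) if mode == 'spatial' else (1, 3, 4, 4)
gamma = np.full(shape, 2.0)               # assumed scale
beta = np.full(shape, 0.5)                # assumed shift
epsilon = 1e-4                            # assumed stabilizer

# Stand-ins for the running statistics passed to the function.
mean = inputs.mean(axis=(0, 2, 3), keepdims=True)
var = inputs.var(axis=(0, 2, 3), keepdims=True)

out = (inputs - mean) * gamma / np.sqrt(var + epsilon) + beta
```

Because the size-1 axes of `gamma`, `beta`, `mean`, and `var` broadcast against `inputs`, the expression matches the docstring's formula term by term; the `addbroadcast` calls in the Theano version only mark those axes as broadcastable.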