Commit 813c0076 authored by Frederic

small doc/comment changes.

Parent 10a620ef
@@ -28,11 +28,11 @@
 changed. Here is the algo:
 - If we can use `cuDNN <https://developer.nvidia.com/cuDNN>`_, use it.
-- If not, use gemm version (slower then cuDNN, use more memory).
+- If not, use gemm version (slower then cuDNN, uses more memory).
-  If the user don't want the extra memory of the gemm version, they
-  can enable the legacy code that is even slower, but don't use
-  extra memory. For this, use the Theano flag
+  If the users don't want the extra memory usage of the gemm
+  version, they can enable the legacy code that is even slower, but
+  does not use extra memory. For this, use the Theano flag
   ``optimizer_excluding=conv_gemm``.
 There is no reason to use the legacy code or the gemm version if
@@ -41,11 +41,11 @@
 2 other options:
 - There is also the fft version that is the fastest in some cases,
-  but use even more memory. It don't support striding to remove
-  computation and have some shapes restriction.
+  but uses even more memory. It does not support striding to remove
+  computation and has some shapes restriction.
-- There is also the cuda_convnet convolution in Pylearn2. It use a
-  different memory layout, have shapes restriction, but don't use
+- There is also the cuda_convnet convolution in Pylearn2. It uses a
+  different memory layout, has shapes restrictions, but does not use
   extra memory and is faster then the legacy convolution.
......
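The selection order the docs above describe (cuDNN first, then gemm, legacy only when gemm is excluded) can be sketched in plain Python. This is an illustrative sketch, not Theano's actual selection code; the function and parameter names are hypothetical, only the ``conv_gemm`` exclusion name comes from the docs in this commit.

```python
def pick_conv_impl(cudnn_usable, excluded=frozenset()):
    """Illustrative sketch of the GPU convolution selection order
    described in the docs (names here are hypothetical)."""
    # Use cuDNN first, when it is available and the GPU supports it.
    if cudnn_usable:
        return "cudnn"
    # Otherwise the gemm version (slower than cuDNN, uses more memory),
    # unless the user excluded it with optimizer_excluding=conv_gemm.
    if "conv_gemm" not in excluded:
        return "gemm"
    # Legacy fallback: even slower, but does not use extra memory.
    return "legacy"
```

With this ordering, the legacy code is only ever reached by explicitly excluding the gemm version, matching the docs' point that there is otherwise no reason to use it.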
@@ -1109,14 +1109,11 @@ def local_gpu_softmax_with_bias(node):
     from theano.tensor.nnet import conv
-    # Need to be registered before local_gpu_conv_legacy. Otherwise, it
+    # Needs to be registered before local_gpu_conv_legacy. Otherwise, it
     # will have priority over this optimization. We want, if cudnn is
-    # available and the GPU support it, use it. Otherwise, the gemm
-    # version should be used. If the user want the legacy convolution,
-    # they should use the Theano flag:
-    # optimizer_excluding=local_conv_gemm.
-    # If cudnn is available, this flag should be added:
-    # optimizer_excluding=local_gpu_conv
+    # available and the GPU supports it, to use it. Otherwise, the gemm
+    # version should be used. If the users want the legacy convolution,
+    # they should use the Theano flag to disable the dnn and/or gemm version.
     @register_opt("dnn")
     @local_optimizer([gpu_from_host, conv.ConvOp])
     def local_gpu_conv(node):
......
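In practice, the exclusion described in the comment above is done through the ``THEANO_FLAGS`` environment variable. A hedged example, with ``my_script.py`` as a placeholder; the ``conv_gemm`` name is from the docs in this commit, and the exact name for excluding the cuDNN optimization may differ:

```
THEANO_FLAGS="optimizer_excluding=conv_gemm" python my_script.py
```

Excluding ``conv_gemm`` makes the optimizer fall back to the legacy convolution when cuDNN is not used.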
@@ -588,7 +588,7 @@ def test_dnn_valid():
 def test_default_conv():
     """Just test that we introduce the right GPU convolution
-    versoin.
+    version.
     """
     img = theano.tensor.ftensor4()