Commit 813c0076 authored by Frederic

small doc/comment changes.

Parent 10a620ef
@@ -28,11 +28,11 @@
 changed. Here is the algo:
 - If we can use `cuDNN <https://developer.nvidia.com/cuDNN>`_, use it.
-- If not, use gemm version (slower then cuDNN, use more memory).
-If the user don't want the extra memory of the gemm version, they
-can enable the legacy code that is even slower, but don't use
-extra memory. For this, use the Theano flag
+- If not, use gemm version (slower then cuDNN, uses more memory).
+If the users don't want the extra memory usage of the gemm
+version, they can enable the legacy code that is even slower, but
+does not use extra memory. For this, use the Theano flag
 ``optimizer_excluding=conv_gemm``.
 There is no reason to use the legacy code or the gemm version if
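The selection order described in the doc hunk above can be sketched as follows. This is an illustrative helper, not Theano code; the function `pick_conv_impl` and its arguments are hypothetical names chosen for the example:

```python
# Hypothetical sketch of the documented selection order: prefer cuDNN,
# fall back to the gemm version, and use the legacy code only when the
# user excluded conv_gemm (slower, but no extra memory).
# None of these names are actual Theano API.
def pick_conv_impl(cudnn_available, excluded_opts=()):
    if cudnn_available and "dnn" not in excluded_opts:
        return "cudnn"
    if "conv_gemm" not in excluded_opts:
        return "gemm"
    return "legacy"
```

For example, excluding ``conv_gemm`` on a machine without cuDNN falls through to the legacy code, mirroring the behavior of the ``optimizer_excluding=conv_gemm`` flag described above.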
@@ -41,11 +41,11 @@
 2 other options:
 - There is also the fft version that is the fastest in some cases,
-but use even more memory. It don't support striding to remove
-computation and have some shapes restriction.
-- There is also the cuda_convnet convolution in Pylearn2. It use a
-different memory layout, have shapes restriction, but don't use
+but uses even more memory. It does not support striding to remove
+computation and has some shapes restriction.
+- There is also the cuda_convnet convolution in Pylearn2. It uses a
+different memory layout, has shapes restrictions, but does not use
 extra memory and is faster then the legacy convolution.
@@ -1109,14 +1109,11 @@ def local_gpu_softmax_with_bias(node):
 from theano.tensor.nnet import conv
-# Need to be registered before local_gpu_conv_legacy. Otherwise, it
-# will have priority over this optimization. We want, if cudnn is
-# available and the GPU support it, use it. Otherwise, the gemm
-# version should be used. If the user want the legacy convolution,
-# they should use the Theano flag:
-# optimizer_excluding=local_conv_gemm.
-# If cudnn is available, this flag should be added:
-# optimizer_excluding=local_gpu_conv
+# Needs to be registered before local_gpu_conv_legacy. Otherwise, it
+# will have priority over this optimization. We want, if cudnn is
+# available and the GPU supports it, to use it. Otherwise, the gemm
+# version should be used. If the users want the legacy convolution,
+# they should use the Theano flag to disable the dnn and/or gemm version.
 @register_opt("dnn")
 @local_optimizer([gpu_from_host, conv.ConvOp])
 def local_gpu_conv(node):
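The comment in this hunk says registration order decides which optimization wins. A minimal, self-contained mimic of that priority rule (the registry, predicates, and node representation here are illustrative, not Theano's actual optimizer machinery):

```python
# Toy registry where earlier registrations win, illustrating why
# local_gpu_conv must be registered before local_gpu_conv_legacy.
# The dict-based "node" and the predicate functions are hypothetical.
_optimizers = []

def register(name, applies):
    _optimizers.append((name, applies))

def first_applicable(node):
    # The first registered optimizer whose predicate matches the node
    # is the one that gets to rewrite it.
    for name, applies in _optimizers:
        if applies(node):
            return name
    return None

register("local_gpu_conv", lambda node: node.get("cudnn", False))
register("local_gpu_conv_legacy", lambda node: True)
```

With this ordering, a node on a cuDNN-capable setup is claimed by ``local_gpu_conv``; registering the legacy optimizer first would let it claim every node instead.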
@@ -588,7 +588,7 @@ def test_dnn_valid():
 def test_default_conv():
     """Just test that we introduce the right GPU convolution
-    versoin.
+    version.
     """
     img = theano.tensor.ftensor4()