Commit 10a620ef authored by Frederic

Update doc

Parent cdea94b3
@@ -25,26 +25,30 @@
.. note::
    As of October 20, 2014, the default GPU image convolution
    changed. Here is the algorithm:

    - If we can use `cuDNN <https://developer.nvidia.com/cuDNN>`_, use it.
    - If not, use the gemm version (slower than cuDNN, uses more memory).
      If users do not want the extra memory usage of the gemm version,
      they can enable the legacy code, which is even slower but does not
      use extra memory. For this, use the Theano flag
      ``optimizer_excluding=conv_gemm``.

    There is no reason to use the legacy code or the gemm version if
    cuDNN is available.

    Two other options:

    - There is also the fft version, which is the fastest in some cases
      but uses even more memory. It does not support striding to reduce
      computation and has some shape restrictions.
    - There is also the cuda_convnet convolution in Pylearn2. It uses a
      different memory layout and has shape restrictions, but it does not
      use extra memory and is faster than the legacy convolution.

    TODO: Give examples on how to use these things! They are pretty complicated.
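As a partial illustration of the flag mentioned above, here is a minimal sketch of disabling the gemm convolution so that the legacy code is used instead. This assumes the usual Theano configuration mechanisms: flags can be set in the ``.theanorc`` file or passed through the ``THEANO_FLAGS`` environment variable.

.. code-block:: cfg

    # .theanorc -- exclude the gemm-based convolution optimization,
    # falling back to the legacy convolution (slower, but no extra memory)
    [global]
    optimizer_excluding = conv_gemm

Equivalently, on the command line: ``THEANO_FLAGS=optimizer_excluding=conv_gemm python your_script.py`` (``your_script.py`` is a placeholder for your own script).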
- Convolution operators implemented:
...
.. _tut_multi_cores:
=============================
Multi cores support in Theano
=============================
...