Commit 7510498c authored by Olivier Delalleau

Small doc typo fixes

Parent 10f96ca5
@@ -29,17 +29,17 @@ TODO: Give examples for how to use these things! They are pretty complicated.
 - :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`.
 - :func:`conv3D <theano.tensor.nnet.Conv3D.conv3D>`.
 - :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>`
-    Another conv3d implementation that use the conv2d with data reshaping.
-    It is faster in some case then conv3d, specificaly on the GPU.
+    Another conv3d implementation that uses the conv2d with data reshaping.
+    It is faster in some cases than conv3d, specifically on the GPU.
 - `Faster conv2d <http://deeplearning.net/software/pylearn2/library/alex.html>`_
-    This is in Pylearn2, not very documented and use a different
+    This is in Pylearn2, not very documented and uses a different
     memory layout for the input. It is important to have the input
     in the native memory layout, and not use dimshuffle on the
-    inputs, otherwise you loose much of the speed up. So this is not
+    inputs, otherwise you lose most of the speed up. So this is not
     a drop in replacement of conv2d.
-Normally those are called from the `linear transfrom
+Normally those are called from the `linear transform
 <http://deeplearning.net/software/pylearn2/library/linear.html>`_
 implementation.
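The conv3d2d trick the diff describes, building a 3D convolution out of 2D convolutions plus data reshaping, can be sketched in plain NumPy. This is only an illustration under assumptions: the function names (`conv2d_valid`, `conv3d_via_conv2d`) are made up for this sketch, it computes valid-mode cross-correlation (as neural-net "convolutions" typically do), and Theano's actual conv3d2d batches the 2D calls on the GPU rather than looping in Python, which is where its speed-up comes from.

```python
import numpy as np

def conv2d_valid(img, kern):
    """Plain 2D valid-mode cross-correlation (naive loop, for clarity)."""
    kh, kw = kern.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def conv3d_via_conv2d(vol, kern):
    """3D valid-mode correlation expressed as sums of 2D correlations,
    one per depth offset of the kernel."""
    kd = kern.shape[0]
    od = vol.shape[0] - kd + 1
    oh = vol.shape[1] - kern.shape[1] + 1
    ow = vol.shape[2] - kern.shape[2] + 1
    out = np.zeros((od, oh, ow))
    for z in range(od):          # each output depth slice...
        for dz in range(kd):     # ...is a sum of kd 2D correlations
            out[z] += conv2d_valid(vol[z + dz], kern[dz])
    return out

vol = np.ones((3, 3, 3))
kern = np.ones((2, 2, 2))
print(conv3d_via_conv2d(vol, kern))  # shape (2, 2, 2), every entry 8.0
```

Each output depth slice is the sum of `kd` independent 2D convolutions, so the whole 3D operation can be rearranged into one large batched 2D convolution, which is exactly the reshaping that lets conv3d2d reuse a fast conv2d kernel.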