testgroup / pytensor · Commits

Commit fe7d8bab
Authored Aug 21, 2014 by f0k
Reordered and partly rewrote nnet.conv documentation page to make it more accessible
Parent: a869aa21
Showing 1 changed file with 67 additions and 45 deletions

doc/library/tensor/nnet/conv.txt  (+67, −45)
@@ -22,23 +22,28 @@
 .. moduleauthor:: LISA

-TODO: Give examples for how to use these things! They are pretty complicated.
+TODO: Give examples on how to use these things! They are pretty complicated.

-- Conv implemented
+- Convolution operators implemented:

-    - :func:`signal.conv2d <theano.tensor.signal.conv.conv2d>`.
+    - :func:`signal.conv2d <theano.tensor.signal.conv.conv2d>`. See note above.

     - :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`.
+      This is the standard operator for convolutional neural networks working
+      with batches of multi-channel 2D images, available for CPU and GPU.
+      Most of the more efficient GPU implementations listed below can be used
+      as an automatic replacement for nnet.conv2d by enabling specific graph
+      optimizations.

     - :func:`conv2d_fft <theano.sandbox.cuda.fftconv.conv2d_fft>`
       This is a GPU-only version of nnet.conv2d that uses an FFT transform
-      to perform the work. conv2d_fft should not be used directly as it
-      does not implement a grad function. Instead, you should use
-      nnet.conv2d and enable the fft optimization by setting
-      'THEANO_FLAGS=optimizer_including=conv_fft_valid:conv_fft_full'
+      to perform the work. conv2d_fft should not be called directly as it
+      does not provide a gradient. Instead, use nnet.conv2d and allow
+      Theano's graph optimizer to replace it by the FFT version by setting
+      ``THEANO_FLAGS=optimizer_including=conv_fft_valid:conv_fft_full``
       in your environement. This is not enabled by default because it
-      has some restrictions on input and uses more memory. Also note
+      has some restrictions on input and uses a lot more memory. Also note
       that it requires CUDA >= 5.0, scikits.cuda >= 0.5.0 and PyCUDA to run.
-      To desactivate the fft optimization on a specific nnet.conv2d
-      while the optimization flags are active, you can set its parameters
-      version to 'no_fft'. To enable for just one Theano function:
+      To deactivate the FFT optimization on a specific nnet.conv2d
+      while the optimization flags are active, you can set its ``version``
+      parameter to ``'no_fft'``. To enable it for just one Theano function:

       .. code-block:: python
@@ -47,17 +52,57 @@ TODO: Give examples for how to use these things! They are pretty complicated.
           f = theano.function(..., mode=mode)

+    - `cuda-convnet wrapper for 2d correlation <http://deeplearning.net/software/pylearn2/library/alex.html>`_
+      Wrapper for an open-source GPU-only implementation of conv2d by Alex
+      Krizhevsky, very fast, but with several restrictions on input and kernel
+      shapes, and with a different memory layout for the input.
+      This is in Pylearn2, where it is normally called from the `linear transform
+      <http://deeplearning.net/software/pylearn2/library/linear.html>`_
+      implementation, but it can also be used `directly from within Theano
+      <http://benanne.github.io/2014/04/03/faster-convolutions-in-theano.html>`_
+      as a manual replacement for nnet.conv2d.
+
+    - :func:`GpuCorrMM <theano.sandbox.cuda.blas.GpuCorrMM>`
+      This is a GPU-only 2d correlation implementation taken from
+      `caffe <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cu>`_
+      and also used by Torch.
+      For each element in a batch, it first creates a
+      `Toeplitz <http://en.wikipedia.org/wiki/Toeplitz_matrix>`_ matrix in a CUDA kernel.
+      Then, it performs a ``gemm`` call to multiply this Toeplitz matrix and the filters
+      (hence the name: MM is for matrix multiplication).
+      It needs extra memory for the Toeplitz matrix, which is a 2D matrix of shape
+      ``(no of channels * filter width * filter height, output width * output height)``.
+      As it provides a gradient, you can use it as a replacement for nnet.conv2d.
+      Alternatively, you can use nnet.conv2d and allow Theano's graph optimizer
+      to replace it by the GEMM version by setting
+      ``THEANO_FLAGS=optimizer_including=conv_gemm`` in your environment.
+      This is not enabled by default because it uses some extra memory, but the
+      overhead is small compared to conv2d_fft, there are no restrictions on
+      input or kernel shapes and it is sometimes still faster than cuda-convnet.
+      To enable it for just one Theano function:
+
+      .. code-block:: python
+
+          mode = theano.compile.get_default_mode()
+          mode = mode.including('conv_gemm')
+          f = theano.function(..., mode=mode)
+
     - :func:`conv3D <theano.tensor.nnet.Conv3D.conv3D>`
-      3D Convolution. Doesn't work on the GPU.
+      3D Convolution applying multi-channel 3D filters to batches of
+      multi-channel 3D images.

     - :func:`conv3d_fft <theano.sandbox.cuda.fftconv.conv3d_fft>`
       GPU-only version of conv3D using FFT transform. conv3d_fft should
-      not be call directly as it does not implement a grad function.
-      You can enable it by setting THEANO_FLAGS to
-      'optimizer_including=conv3d_fft:convgrad3d_fft:convtransp3d_fft'
-      It does not support strides.
-      This is not enabled by default because it uses more memory.
-      Also note that it requires CUDA >= 5.0,
-      scikits.cuda >= 0.5.0 and PyCUDA to run.
+      not be called directly as it does not provide a gradient.
+      Instead, use conv3D and allow Theano's graph optimizer to replace it by
+      the FFT version by setting
+      ``THEANO_FLAGS=optimizer_including=conv3d_fft:convgrad3d_fft:convtransp3d_fft``
+      in your environment. This is not enabled by default because it does not
+      support strides and uses more memory. Also note that it requires
+      CUDA >= 5.0, scikits.cuda >= 0.5.0 and PyCUDA to run.
       To enable for just one Theano function:

       .. code-block:: python
@@ -70,33 +115,10 @@ TODO: Give examples for how to use these things! They are pretty complicated.
     - :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>`
       Another conv3d implementation that uses the conv2d with data reshaping.
       It is faster in some cases than conv3d, specifically on the GPU.

-    - `Faster conv2d <http://deeplearning.net/software/pylearn2/library/alex.html>`_
-      This is in Pylearn2, not very documented and uses a different
-      memory layout for the input. It is important to have the input
-      in the native memory layout, and not use dimshuffle on the
-      inputs, otherwise you lose most of the speed up. So this is not
-      a drop in replacement of conv2d.
-      Normally those are called from the `linear transform
-      <http://deeplearning.net/software/pylearn2/library/linear.html>`_
-      implementation.
-      Also, there is restrictions on which shape are supported.
-
-    - :func:`GpuCorrMM <theano.sandbox.cuda.blas.GpuCorrMM>`
-      This is a GPU-only version of a correlation that computes correlations
-      as `caffe <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cu>`_.
-      For each element in a batch, it first creates a
-      `Toeplitz <http://en.wikipedia.org/wiki/Toeplitz_matrix>`_ matrix in a cuda kernel.
-      Then, it performs a ``gemm`` call to multiply this Toeplitz matrix and the kernel.
-      It need extra memory equal to the size of the Toeplitz matrix. Precisely,
-      the dimensions of this 2D Toeplitz matrix is equal to
-      ``(no of channels * filter width * filter height, output width * output height)``.
-      You can enable it for call to conv2d 2d by setting ``THEANO_FLAGS=optimizer_including=conv_gemm``
-      in your environment. This is not enabled by default because it
-      uses some extra memory. MM mean matrix multiply.

 .. autofunction:: theano.tensor.nnet.conv.conv2d
+.. autofunction:: theano.sandbox.cuda.fftconv.conv2d_fft
+.. autofunction:: theano.sandbox.cuda.blas.GpuCorrMM
 .. autofunction:: theano.tensor.nnet.Conv3D.conv3D
+.. autofunction:: theano.sandbox.cuda.fftconv.conv3d_fft
 .. autofunction:: theano.tensor.nnet.conv3d2d.conv3d
-.. autofunction:: theano.sandbox.cuda.fftconv.conv2d_fft
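The bodies of the ``.. code-block:: python`` examples are truncated in this diff view. For reference, the per-function opt-in that the rewritten conv2d_fft paragraph describes follows the same ``mode.including`` pattern as the conv_gemm snippet shown above. A minimal sketch, assuming a toy nnet.conv2d graph (the variable names and graph are illustrative only, not part of the commit; the FFT path additionally needs a GPU with CUDA >= 5.0, scikits.cuda >= 0.5.0 and PyCUDA):

.. code-block:: python

    import theano
    import theano.tensor as T
    from theano.tensor.nnet import conv

    # An ordinary nnet.conv2d graph; the graph optimizer may swap the op
    # for the FFT-based implementation when the flags below are active.
    images = T.tensor4('images')
    filters = T.tensor4('filters')
    out = conv.conv2d(images, filters)

    # Enable the FFT replacement for this one compiled function only.
    mode = theano.compile.get_default_mode()
    mode = mode.including('conv_fft_valid', 'conv_fft_full')
    f = theano.function([images, filters], out, mode=mode)

    # Per the updated text, a single conv2d can opt out again while the
    # optimization is active by passing version='no_fft'.
    out_no_fft = conv.conv2d(images, filters, version='no_fft')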
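The extra memory that GpuCorrMM needs can be estimated directly from the Toeplitz-matrix shape quoted in the new text. A back-of-the-envelope sketch with made-up shapes (the concrete numbers are not taken from the commit):

.. code-block:: python

    # Toeplitz ("im2col") buffer per batch element, shaped
    # (channels * filter height * filter width, output height * output width).
    channels, filter_h, filter_w = 3, 5, 5
    output_h, output_w = 28, 28

    rows = channels * filter_h * filter_w      # 75
    cols = output_h * output_w                 # 784
    extra_bytes = rows * cols * 4              # 235,200 bytes for float32
    print(rows, cols, extra_bytes / 1e6)       # about 0.24 MB per batch element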
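For the 3D FFT version, the text offers the same two routes: the THEANO_FLAGS setting spelled out above, or a per-function mode (presumably ``mode.including`` with the same three optimization names). A sketch of the environment-variable route; THEANO_FLAGS must be set before Theano is imported, since the configuration is read at import time:

.. code-block:: python

    import os

    # Set the flags first, then import Theano.
    os.environ['THEANO_FLAGS'] = (
        'optimizer_including=conv3d_fft:convgrad3d_fft:convtransp3d_fft'
    )

    import theano  # imported after setting the flags on purpose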