testgroup / pytensor · Commits

Commit 77729ffa, authored Jan 11, 2016 by Nicolas Ballas
Parent: fe58ada7

    update convolution documentation

Showing 4 changed files with 83 additions and 110 deletions:

- doc/library/tensor/nnet/conv.txt (+33, -16)
- theano/tensor/nnet/__init__.py (+23, -14)
- theano/tensor/nnet/abstract_conv.py (+26, -80)
- theano/tensor/nnet/conv.py (+1, -0)

doc/library/tensor/nnet/conv.txt
@@ -9,7 +9,7 @@
 Two similar implementation exists for conv2d:
 :func:`signal.conv2d <theano.tensor.signal.conv.conv2d>` and
-:func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`.
+:func:`nnet.conv2d <theano.tensor.nnet.conv2d>`.
 The former implements a traditional
 2D convolution, while the latter implements the convolutional layers

@@ -24,8 +24,14 @@
 .. note::
-    As of October 21st, 2014, the default GPU image convolution
-    changed: By default, if :ref:`cuDNN <libdoc_cuda_dnn>`
+    As of December 2015, a new conv2d interface has been introduced.
+    :func:`nnet.conv2d <theano.tensor.nnet.conv2d>` defines an
+    abstract theano graph convolution operation
+    (:func:`nnet.abstract_conv.AbstractConv2d <theano.tensor.nnet.abstract_conv.AbstractConv2d>`)
+    that will be replaced by an actual convolution implementation during
+    the optimization phase.
+    By default, if :ref:`cuDNN <libdoc_cuda_dnn>`
     is available, we will use it, otherwise we will fall back to using the
     gemm version (slower then cuDNN in most cases and uses more memory).
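The note above describes ``nnet.conv2d`` as an abstract graph operation that the optimizer later swaps for a concrete implementation (cuDNN if available, otherwise gemm). As a rough illustration of that pattern, here is a minimal pure-Python sketch; every name in it (`AbstractConv`, `optimize`, the candidate functions) is hypothetical and is not Theano's API:

```python
# Hypothetical sketch of "abstract op replaced during optimization".
# None of these names come from Theano; they only illustrate the pattern.

class AbstractConv:
    """Placeholder node: records what to compute, not how."""
    def __init__(self, image, kernel):
        self.image, self.kernel = image, kernel

def conv_gemm(image, kernel):
    # Fallback implementation (stand-in for the gemm version).
    return ("gemm", image, kernel)

def conv_cudnn(image, kernel):
    # Preferred implementation (stand-in for cuDNN).
    return ("cudnn", image, kernel)

def optimize(node, cudnn_available):
    """Replace the abstract node by a concrete implementation."""
    impl = conv_cudnn if cudnn_available else conv_gemm
    return impl(node.image, node.kernel)

node = AbstractConv("img", "krn")
print(optimize(node, cudnn_available=True))   # ('cudnn', 'img', 'krn')
print(optimize(node, cudnn_available=False))  # ('gemm', 'img', 'krn')
```

The point of the indirection is that user code only ever builds the abstract node; which kernel actually runs is decided once, centrally, at optimization time.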
@@ -45,28 +51,34 @@
 into Theano, and an implementation by Alex Krizhevsky available via
 Pylearn2. See the documentation below on how to use them.

-As of November 24th, 2014, you can also use a meta-optimizer to
-automatically choose the fastest implementation for each specific
-convolution in your graph. For each instance, it will compile and benchmark
-each applicable implementation of the ones listed below and choose the
-fastest one. As performance is dependent on input and filter shapes, this
-only works for operations introduced via nnet.conv2d with fully specified
-shape information.
-Enable it via the Theano flag ``optimizer_including=conv_meta``, and
-optionally set it to verbose mode via the flag `metaopt.verbose=1`.
+Old conv2d interface is still accessible through :func:`nnet.conv.conv2d <theano.tensor.nnet.conv.conv2d>`.

 TODO: Give examples on how to use these things! They are pretty complicated.

 - Implemented operators for neural network 2D / image convolution:

-  - :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`.
+  - :func:`nnet.conv.conv2d <theano.tensor.nnet.conv.conv2d>`.
+    CPU convolution implementation, previously used as the convolution interface.
     This is the standard operator for convolutional neural networks working
-    with batches of multi-channel 2D images, available
-    for CPU and GPU. It
+    with batches of multi-channel 2D images, available. It
     computes a convolution, i.e., it flips the kernel.
     Most of the more efficient GPU implementations listed below can be
-    inserted automatically as a replacement for nnet.conv2d via graph
+    inserted automatically as a replacement for nnet.conv.conv2d via graph
     optimizations. Some of these graph optimizations are enabled by default,
     others can be enabled via Theano flags.
+    Since November 24th, 2014, you can also use a meta-optimizer to
+    automatically choose the fastest implementation for each specific
+    convolution in your graph using the old interface. For each instance,
+    it will compile and benchmark each applicable implementation of the ones
+    listed below and choose the fastest one.
+    As performance is dependent on input and filter shapes, this
+    only works for operations introduced via nnet.conv.conv2d with fully specified
+    shape information.
+    Enable it via the Theano flag ``optimizer_including=conv_meta``, and
+    optionally set it to verbose mode via the flag `metaopt.verbose=1`.

 - :func:`conv2d_fft <theano.sandbox.cuda.fftconv.conv2d_fft>` This
   is a GPU-only version of nnet.conv2d that uses an FFT transform
   to perform the work. It flips the kernel just like ``conv2d``.
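The meta-optimizer described above compiles and benchmarks each applicable implementation for a given convolution and keeps the fastest one, which is why it needs fully specified shapes. The selection loop itself can be sketched in plain Python; the candidate functions here are toy stand-ins, not real Theano ops:

```python
import time

def pick_fastest(candidates, *args, repeats=3):
    """Benchmark each candidate on the given inputs; return the fastest.

    Mirrors the conv_meta idea: the measured time depends on the concrete
    inputs (shapes), so those must be fully known up front.
    """
    best, best_t = None, float("inf")
    for impl in candidates:
        t0 = time.perf_counter()
        for _ in range(repeats):
            impl(*args)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best, best_t = impl, elapsed
    return best

# Stand-in "implementations" with very different costs.
fast = lambda n: sum(range(n))
slow = lambda n: sum(i * 1.0 for i in range(n * 50))

chosen = pick_fastest([slow, fast], 10000)
```

Because the choice is made per call site, two convolutions with different shapes in the same graph can end up using different implementations.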
@@ -83,6 +95,7 @@ TODO: Give examples on how to use these things! They are pretty complicated.
     its ``version`` parameter to ``'no_fft'``. To enable it for just
     one Theano function:
+
     .. code-block:: python

       mode = theano.compile.get_default_mode()
@@ -178,8 +191,12 @@ TODO: Give examples on how to use these things! They are pretty complicated.
   It is faster in some cases than conv3d, and work on the GPU.
   It flip the kernel.

-.. autofunction:: theano.tensor.nnet.conv.conv2d
+.. autofunction:: theano.tensor.nnet.conv2d
 .. autofunction:: theano.sandbox.cuda.fftconv.conv2d_fft
 .. autofunction:: theano.tensor.nnet.Conv3D.conv3D
 .. autofunction:: theano.sandbox.cuda.fftconv.conv3d_fft
 .. autofunction:: theano.tensor.nnet.conv3d2d.conv3d
+.. autofunction:: theano.tensor.nnet.conv.conv2d
+.. automodule:: theano.tensor.nnet.abstract_conv
+    :members:
theano/tensor/nnet/__init__.py

@@ -67,18 +67,19 @@ def conv2d(input, filters, input_shape=None, filter_shape=None,
     :type border_mode: str, int or tuple of two int
     :param border_mode: Either of the following:
-      * ``'valid'``: apply filter wherever it completely overlaps with the
-        input. Generates output of shape: input shape - filter shape + 1
-      * ``'full'``: apply filter wherever it partly overlaps with the input.
-        Generates output of shape: input shape + filter shape - 1
-      * ``'half'``: pad input with a symmetric border of ``filter rows // 2``
-        rows and ``filter columns // 2`` columns, then perform a valid
-        convolution. For filters with an odd number of rows and columns, this
-        leads to the output shape being equal to the input shape.
-      * ``int``: pad input with a symmetric border of zeros of the given
-        width, then perform a valid convolution.
-      * ``(int1, int2)``: pad input with a symmetric border of ``int1`` rows
-        and ``int2`` columns, then perform a valid convolution.
+
+        ``'valid'``: apply filter wherever it completely overlaps with the
+            input. Generates output of shape: input shape - filter shape + 1
+        ``'full'``: apply filter wherever it partly overlaps with the input.
+            Generates output of shape: input shape + filter shape - 1
+        ``'half'``: pad input with a symmetric border of ``filter rows // 2``
+            rows and ``filter columns // 2`` columns, then perform a valid
+            convolution. For filters with an odd number of rows and columns, this
+            leads to the output shape being equal to the input shape.
+        ``int``: pad input with a symmetric border of zeros of the given
+            width, then perform a valid convolution.
+        ``(int1, int2)``: pad input with a symmetric border of ``int1`` rows
+            and ``int2`` columns, then perform a valid convolution.

     :type subsample: tuple of len 2
     :param subsample: factor by which to subsample the output.
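The ``border_mode`` rules in the docstring above fully determine the output size along each spatial dimension. As a worked check of those rules, here is a small pure-Python helper written for this note (it is not part of Theano):

```python
def conv_output_length(input_len, filter_len, border_mode):
    """Output length of a convolution along one spatial dimension.

    Implements the rules from the ``border_mode`` docstring:
    'valid' -> i - k + 1, 'full' -> i + k - 1,
    'half'  -> pad by k // 2 on each side, then 'valid',
    int p   -> pad by p on each side, then 'valid'.
    """
    if border_mode == 'valid':
        pad = 0
    elif border_mode == 'full':
        pad = filter_len - 1
    elif border_mode == 'half':
        pad = filter_len // 2
    elif isinstance(border_mode, int):
        pad = border_mode
    else:
        raise ValueError("unknown border_mode: %r" % (border_mode,))
    return input_len + 2 * pad - filter_len + 1

# For odd filter sizes, 'half' keeps the input size: 28 + 2*2 - 5 + 1 == 28
print(conv_output_length(28, 5, 'half'))   # 28
print(conv_output_length(28, 5, 'valid'))  # 24
print(conv_output_length(28, 5, 'full'))   # 32
```

A ``(int1, int2)`` border mode is just this rule applied per dimension, with ``int1`` for rows and ``int2`` for columns.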
@@ -91,14 +92,22 @@ def conv2d(input, filters, input_shape=None, filter_shape=None,
     are not flipped and the operation is referred to as a cross-correlation.

     :type image_shape: None, tuple/list of len 4 of int or Constant variable
-    :param image_shape
-        Deprecated alias for `input_shape`
+    :param image_shape: Deprecated alias for input_shape.

-    :param **kwargs
-        Any other keyword arguments are accepted for backwards
+    :param kwargs: Any other keyword arguments are accepted for backwards
         compatibility, but will be ignored.

     :rtype: symbolic 4D tensor
     :return: set of feature maps generated by convolutional layer. Tensor is
         of shape (batch size, output channels, output rows, output columns)
+
+    :note: If CuDNN is available, it will be used on the
+        GPU. Otherwise, it is the *CorrMM* convolution that will be used
+        "caffe style convolution".
+
+    :note: This is only supported in Theano 0.8 or the development
+        version until it is released.
     """
     if 'imshp_logical' in kwargs or 'kshp_logical' in kwargs:
theano/tensor/nnet/abstract_conv.py

 """
-Define abstract conv2d interface
+Abstract conv interface
 """
 import logging
@@ -10,7 +10,7 @@ from theano.gof import Apply, Op

 __docformat__ = "restructuredtext en"

-_logger = logging.getLogger("theano.tensor.nnet.conv2d")
+_logger = logging.getLogger("theano.tensor.nnet.abstract_conv")


 def get_conv_output_shape(image_shape, kernel_shape,
@@ -103,70 +103,11 @@ def conv2d(input,
            border_mode='valid',
            subsample=(1, 1),
            filter_flip=True):
-    """This function will build the symbolic graph for convolving a
-    mini-batch of a stack of 2D inputs with a set of 2D filters. The
-    implementation is modelled after Convolutional Neural Networks
-    (CNN).
-
-    :type input: symbolic 4D tensor
-    :param input: mini-batch of feature map stacks, of shape
-        (batch size, input channels, input rows, input columns).
-        See the optional parameter ``input_shape``.
-
-    :type filters: symbolic 4D tensor
-    :param filters: set of filters used in CNN layer of shape
-        (output channels, input channels, filter rows, filter columns).
-        See the optional parameter ``filter_shape``.
-
-    :type input_shape: None, tuple/list of len 4 of int or Constant variable
-    :param input_shape: The shape of the input parameter.
-        Optional, possibly used to choose an optimal implementation.
-        You can give ``None`` for any element of the list to specify that this
-        element is not known at compile time.
-
-    :type filter_shape: None, tuple/list of len 4 of int or Constant variable
-    :param filter_shape: The shape of the filters parameter.
-        Optional, possibly used to choose an optimal implementation.
-        You can give ``None`` for any element of the list to specify that this
-        element is not known at compile time.
-
-    :type border_mode: str, int or tuple of two int
-    :param border_mode: Either of the following:
-      * ``'valid'``: apply filter wherever it completely overlaps with the
-        input. Generates output of shape: input shape - filter shape + 1
-      * ``'full'``: apply filter wherever it partly overlaps with the input.
-        Generates output of shape: input shape + filter shape - 1
-      * ``'half'``: pad input with a symmetric border of ``filter rows // 2``
-        rows and ``filter columns // 2`` columns, then perform a valid
-        convolution. For filters with an odd number of rows and columns, this
-        leads to the output shape being equal to the input shape.
-      * ``int``: pad input with a symmetric border of zeros of the given
-        width, then perform a valid convolution.
-      * ``(int1, int2)``: pad input with a symmetric border of ``int1`` rows
-        and ``int2`` columns, then perform a valid convolution.
-
-    :type subsample: tuple of len 2
-    :param subsample: factor by which to subsample the output.
-        Also called strides elsewhere.
-
-    :type filter_flip: bool
-    :param filter_flip: If ``True``, will flip the filter rows and columns
-        before sliding them over the input. This operation is normally referred
-        to as a convolution, and this is the default. If ``False``, the filters
-        are not flipped and the operation is referred to as a
-        cross-correlation.
-
-    :rtype: symbolic 4D tensor
-    :return: set of feature maps generated by convolutional layer. Tensor is
-        of shape (batch size, output channels, output rows, output columns)
-
-    :note: If CuDNN is available, it will be used on the
-        GPU. Otherwise, it is the *CorrMM* convolution that will be used
-        "caffe style convolution".
-
-    :note: This is only supported in Theano 0.8 or the development
-        version until it is released.
+    """This function will build the symbolic graph for convolving a mini-batch of a
+    stack of 2D inputs with a set of 2D filters. The implementation is modelled
+    after Convolutional Neural Networks (CNN).
+
+    Refer to :func:`nnet.conv2d <theano.tensor.nnet.conv2d>` for a more detailed documentation.
     """
     conv_op = AbstractConv2d(imshp=input_shape,
@@ -197,21 +138,21 @@ class BaseAbstractConv2d(Op):
        element is not known at compile time.
        kshp is defined w.r.t the forward conv.

     :type border_mode: str, int or tuple of two int
     :param border_mode: Either of the following:
-      * ``'valid'``: apply filter wherever it completely overlaps with the
-        input. Generates output of shape: input shape - filter shape + 1
-      * ``'full'``: apply filter wherever it partly overlaps with the input.
-        Generates output of shape: input shape + filter shape - 1
-      * ``'half'``: pad input with a symmetric border of ``filter rows // 2``
-        rows and ``filter columns // 2`` columns, then perform a valid
-        convolution. For filters with an odd number of rows and columns, this
-        leads to the output shape being equal to the input shape.
-      * ``int``: pad input with a symmetric border of zeros of the given
-        width, then perform a valid convolution.
-      * ``(int1, int2)``: pad input with a symmetric border of ``int1`` rows
-        and ``int2`` columns, then perform a valid convolution.
+        ``'valid'``: apply filter wherever it completely overlaps with the
+            input. Generates output of shape: input shape - filter shape + 1
+        ``'full'``: apply filter wherever it partly overlaps with the input.
+            Generates output of shape: input shape + filter shape - 1
+        ``'half'``: pad input with a symmetric border of ``filter rows // 2``
+            rows and ``filter columns // 2`` columns, then perform a valid
+            convolution. For filters with an odd number of rows and columns, this
+            leads to the output shape being equal to the input shape.
+        ``int``: pad input with a symmetric border of zeros of the given
+            width, then perform a valid convolution.
+        ``(int1, int2)``: pad input with a symmetric border of ``int1`` rows
+            and ``int2`` columns, then perform a valid convolution.

     :type subsample: tuple of len 2
     :param subsample: factor by which to subsample the output.
@@ -276,8 +217,9 @@ class BaseAbstractConv2d(Op):

 class AbstractConv2d(BaseAbstractConv2d):
-    """
-    Abstract Op for the forward convolution.
+    """ Abstract Op for the forward convolution.
+
+    Refer to :func:`BaseAbstractConv2d <theano.tensor.nnet.abstract_conv.BaseAbstractConv2d>`
+    for a more detailed documentation.
     """

     def __init__(self,
@@ -348,6 +290,8 @@ class AbstractConv2d(BaseAbstractConv2d):

 class AbstractConv2d_gradWeights(BaseAbstractConv2d):
     """Gradient wrt. filters for `AbstractConv2d`.
+    Refer to :func:`BaseAbstractConv2d <theano.tensor.nnet.abstract_conv.BaseAbstractConv2d>`
+    for a more detailed documentation.

     :note: You will not want to use this directly, but rely on
            Theano's automatic differentiation or graph optimization to
@@ -428,6 +372,8 @@ class AbstractConv2d_gradWeights(BaseAbstractConv2d):

 class AbstractConv2d_gradInputs(BaseAbstractConv2d):
     """Gradient wrt. inputs for `AbstractConv2d`.
+    Refer to :func:`BaseAbstractConv2d <theano.tensor.nnet.abstract_conv.BaseAbstractConv2d>`
+    for a more detailed documentation.

     :note: You will not want to use this directly, but rely on
            Theano's automatic differentiation or graph optimization to
theano/tensor/nnet/conv.py

@@ -40,6 +40,7 @@ _logger = logging.getLogger("theano.tensor.nnet.conv")

 def conv2d(input, filters, image_shape=None, filter_shape=None,
            border_mode='valid', subsample=(1, 1), **kargs):
     """
+    Deprecated, old conv2d interface.
     This function will build the symbolic graph for convolving a stack of
     input images with a set of filters. The implementation is modelled after
     Convolutional Neural Networks (CNN). It is simply a wrapper to the ConvOp
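The one-line change above only flags the old entry point as deprecated in its docstring. A common way to make such a deprecation visible at runtime, sketched here in plain Python (this is an illustration, not the code Theano uses), is a wrapper that emits a ``DeprecationWarning`` and delegates to the replacement, ignoring legacy keyword arguments:

```python
import warnings

def new_conv2d(input, filters):
    # Stand-in for the new interface (theano.tensor.nnet.conv2d).
    return ("conv", input, filters)

def old_conv2d(input, filters, **kwargs):
    """Deprecated, old conv2d interface: warn, ignore extras, delegate."""
    warnings.warn("old_conv2d is deprecated; use new_conv2d instead",
                  DeprecationWarning, stacklevel=2)
    return new_conv2d(input, filters)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_conv2d("img", "krn", image_shape=None)

print(result)                                    # ('conv', 'img', 'krn')
print(caught[0].category is DeprecationWarning)  # True
```

Keeping the old name as a thin delegating wrapper lets existing callers keep working while steering them toward the new interface.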