Commit 4f84063c authored by Vikram

Better documentation. Check for length of filter in conv2d_grad_wrt_inputs

Parent 4a555f19
@@ -111,7 +111,7 @@ def conv2d(input, filters, input_shape=None, filter_shape=None,
     unshared: bool
         If true, then unshared or 'locally connected' convolution will be
-        performed. A different kernel will be used for each region of the
+        performed. A different filter will be used for each region of the
         input.
     kwargs: Any other keyword arguments are accepted for backwards
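For readers unfamiliar with the mode this docstring describes, here is a minimal NumPy sketch of a single-channel, valid-mode 'unshared' (locally connected) convolution. Theano's real unshared filters carry additional output/input channel dimensions, and the function name here is invented for illustration:

```python
import numpy as np

def locally_connected_2d(img, filters):
    """Valid-mode 'unshared' convolution sketch: `filters` holds one
    kernel per output position, shaped (out_rows, out_cols, kh, kw)."""
    out_rows, out_cols, kh, kw = filters.shape
    out = np.empty((out_rows, out_cols))
    for i in range(out_rows):
        for j in range(out_cols):
            patch = img[i:i + kh, j:j + kw]
            # unlike an ordinary convolution, each output position
            # uses its own filter
            out[i, j] = np.sum(patch * filters[i, j])
    return out

img = np.arange(16.0).reshape(4, 4)
# 3x3 output positions, each with its own 2x2 filter
filters = np.ones((3, 3, 2, 2))
out = locally_connected_2d(img, filters)
```

With all-ones filters this reduces to an ordinary valid convolution with a ones kernel, which makes the per-region behaviour easy to verify by hand.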
@@ -226,8 +226,9 @@ def conv2d_transpose(input, filters, output_shape, filter_shape=None,
     unshared: bool
         If true, then unshared or 'locally connected' convolution will be
-        performed. A different kernel will be used for each region of the
+        performed. A different filter will be used for each region of the
         input.
+        Grouped unshared convolution is supported.
     Returns
     -------
...
@@ -44,12 +44,13 @@ def get_conv_output_shape(image_shape, kernel_shape,
         to: batch size, number of input channels, height and width (and
         possibly depth) of the image. None where undefined.
     kernel_shape: tuple of int (symbolic or numeric) corresponding to the
-        kernel shape. For a normal convolution, its four (or five) elements
-        must correspond respectively to : number of output channels, number of
-        input channels, height and width (and possibly depth) of the kernel.
-        For an unshared convolution, its six channels must correspond to :
-        number of output channels, height and width
-        of the output, number of input channels, height and width of the kernel.
+        kernel shape. For a normal convolution, its four (for 2D convolution)
+        or five (for 3D convolution) elements must correspond respectively to:
+        number of output channels, number of input channels, height and width
+        (and possibly depth) of the kernel.
+        For an unshared 2D convolution, its six elements must correspond to:
+        number of output channels, height and width of the output, number of
+        input channels, height and width of the kernel.
         None where undefined.
     border_mode: string, int (symbolic or numeric) or tuple of int (symbolic
         or numeric). If it is a string, it must be 'valid', 'half' or 'full'.
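The output-shape bookkeeping this docstring describes can be sketched for one spatial dimension with the usual convolution formula. This is an illustrative stand-in, not Theano's actual `get_conv_output_shape` helper:

```python
def conv_out_dim(image_dim, kernel_dim, border_mode, subsample=1, dilation=1):
    """One spatial dimension of a convolution's output shape
    (a sketch of the standard formula)."""
    # effective kernel extent after dilation
    dil_kernel = (kernel_dim - 1) * dilation + 1
    if border_mode == 'valid':
        pad = 0
    elif border_mode == 'full':
        pad = dil_kernel - 1
    elif border_mode == 'half':
        pad = dil_kernel // 2
    else:
        # explicit integer padding
        pad = border_mode
    return (image_dim + 2 * pad - dil_kernel) // subsample + 1

# e.g. a 7-pixel image dimension with a 3-pixel kernel:
print(conv_out_dim(7, 3, 'valid'))  # 5
print(conv_out_dim(7, 3, 'full'))   # 9
print(conv_out_dim(7, 3, 'half'))   # 7
```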
@@ -996,7 +997,7 @@ def conv2d_grad_wrt_inputs(output_grad,
         separate groups. Each which carry out convolutions separately
     unshared: bool
         If true, then unshared or 'locally connected' convolution will be
-        performed. A different kernel will be used for each region of the
+        performed. A different filter will be used for each region of the
         input.
     Returns
@@ -1032,13 +1033,16 @@ def conv2d_grad_wrt_inputs(output_grad,
     # checking the type of filter_shape
     if filter_shape is not None:
-        for dim in [0, 1, 2, 3]:
+        if unshared:
+            expected_dim = 6
+        else:
+            expected_dim = 4
+        assert len(filter_shape) == expected_dim
+        for dim in range(expected_dim):
             assert isinstance(filter_shape[dim], (theano.tensor.TensorConstant,
                                                   integer_types, type(None)))
-        if unshared:
-            for dim in [4, 5]:
-                assert isinstance(filter_shape[dim], (theano.tensor.TensorConstant,
-                                                      integer_types, type(None)))

     # setting the last two dimensions of input_shape to None, if
     # the type of these dimensions is TensorVariable.
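The new length check can be exercised outside Theano with a plain-Python stand-in. Here `int`/`None` replace Theano's `TensorConstant` and `integer_types`, and the function name is invented for illustration:

```python
def check_filter_shape(filter_shape, unshared):
    """Stand-in for the commit's check: unshared filters are 6D
    (out channels, out rows, out cols, in channels, kh, kw),
    shared filters are 4D."""
    if filter_shape is None:
        return
    expected_dim = 6 if unshared else 4
    if len(filter_shape) != expected_dim:
        raise AssertionError(
            "filter_shape must have %d elements, got %d"
            % (expected_dim, len(filter_shape)))
    for dim in range(expected_dim):
        # Theano also accepts TensorConstant here; ints and None
        # suffice for this sketch
        assert isinstance(filter_shape[dim], (int, type(None)))

check_filter_shape((32, 3, 5, 5), unshared=False)         # 4D: ok
check_filter_shape((32, 24, 24, 3, 5, 5), unshared=True)  # 6D: ok
```

Passing a 4-element shape with `unshared=True` now fails fast with a clear message, which is the point of the added length assertion.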
@@ -1278,7 +1282,7 @@ def conv2d_grad_wrt_weights(input,
         separate groups. Each which carry out convolutions separately
     unshared: bool
         If true, then unshared or 'locally connected' convolution will be
-        performed. A different kernel will be used for each region of the
+        performed. A different filter will be used for each region of the
         input.
     Returns
@@ -1712,9 +1716,13 @@ class BaseAbstractConv(Op):
         Factor by which to subsample (stride) the input.
         Also called dilation factor.
+    num_groups : int
+        Divides the image, kernel and output tensors into num_groups
+        separate groups. Each which carry out convolutions separately
     unshared: bool
         If true, then unshared or 'locally connected' convolution will be
-        performed. A different kernel will be used for each region of the
+        performed. A different filter will be used for each region of the
         input.
     """
     check_broadcast = False
@@ -1843,7 +1851,9 @@ class BaseAbstractConv(Op):
         if unshared and direction == "backprop weights":
             if mode != "valid":
                 raise ValueError('conv mode for unshared backprop wrt weights must be "valid"')
-            # Do a transpose later to bring it to required shape
+            # To allow the same format for the call to 'unshared2d' for all three directions,
+            # the out_shape is shuffled here.
+            # We do a transpose in the 'perform' function to bring it to the required shape
             out_shape = (img.shape[0], kern.shape[0],
                          kern.shape[2], kern.shape[3],
                          img.shape[2] - kern.shape[2] + 1,
...
@@ -600,8 +600,12 @@ class CorrMM(BaseCorrMM):
         The filter dilation operation applied to each input image.
         Should be a tuple with 2 elements.
         Set to `(1, 1)` to disable filter dilation.
+    num_groups
+        Divides the image, kernel and output tensors into num_groups
+        separate groups. Each which carry out convolutions separately.
+        Should be an integer.
     unshared
-        Boolean value. If true, then a different kernel will be applied to
+        Boolean value. If true, then a different filter will be applied to
         each region of the input image.
     """
...
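The `num_groups` semantics documented above can be sketched in NumPy: the input channels are split into groups, and each group is convolved independently with its own slice of the filters. This is illustrative only, not CorrMM's GEMM-based implementation, and the function names are invented:

```python
import numpy as np

def conv2d_valid(img, kern):
    """Plain valid-mode 2D correlation for one input/output channel pair
    (cross-correlation, as most frameworks implement 'convolution')."""
    kh, kw = kern.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def grouped_conv2d(img, kerns, num_groups):
    """img: (in_channels, H, W); kerns: (out_channels,
    in_channels // num_groups, kh, kw). Each channel group is
    convolved separately, as the num_groups docstring describes."""
    out_channels, in_per_group = kerns.shape[0], kerns.shape[1]
    out_per_group = out_channels // num_groups
    outs = []
    for g in range(num_groups):
        # slice out this group's input channels
        img_g = img[g * in_per_group:(g + 1) * in_per_group]
        for o in range(out_per_group):
            kern = kerns[g * out_per_group + o]
            # sum the per-channel correlations within the group
            outs.append(sum(conv2d_valid(img_g[c], kern[c])
                            for c in range(in_per_group)))
    return np.stack(outs)

img = np.random.rand(4, 6, 6)        # 4 input channels
kerns = np.random.rand(2, 2, 3, 3)   # 2 groups: each maps 2 in -> 1 out
out = grouped_conv2d(img, kerns, num_groups=2)
```

Each output channel only ever sees the input channels of its own group, which is what distinguishes a grouped convolution from a full one.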