Commit adef67e2 authored by Frederic Bastien

Update doc and remove usage of outdim parameter as it was renamed. Comments from gh-5873

Parent d18ce33b
@@ -629,23 +629,23 @@ dimensions, see :meth:`_tensor_py_operators.dimshuffle`.
 .. autofunction:: patternbroadcast(x, broadcastable)
 
-.. function:: flatten(x, outdim=1)
+.. function:: flatten(x, ndim=1)
 
     Similar to :func:`reshape`, but the shape is inferred from the shape of `x`.
 
     :param x: variable to be flattened
     :type x: any TensorVariable (or compatible)
-    :type outdim: int
-    :param outdim: the number of dimensions in the returned variable
-    :rtype: variable with same dtype as `x` and `outdim` dimensions
-    :returns: variable with the same shape as `x` in the leading `outdim-1`
+    :type ndim: int
+    :param ndim: the number of dimensions in the returned variable
+    :rtype: variable with same dtype as `x` and `ndim` dimensions
+    :returns: variable with the same shape as `x` in the leading `ndim-1`
         dimensions, but with all remaining dimensions of `x` collapsed into
         the last dimension.
 
-    For example, if we flatten a tensor of shape (2, 3, 4, 5) with flatten(x,
-    outdim=2), then we'll have the same (2-1=1) leading dimensions (2,), and the
+    For example, if we flatten a tensor of shape (2, 3, 4, 5) with flatten(x,
+    ndim=2), then we'll have the same (2-1=1) leading dimensions (2,), and the
     remaining dimensions are collapsed. So the output in this example would
     have shape (2, 60).
...
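The collapsing behavior the docstring describes can be sketched with plain NumPy (a hypothetical analogue for illustration; `flatten_to_ndim` is not a Theano or NumPy function):

```python
import numpy as np

# Hypothetical NumPy analogue of Theano's flatten(x, ndim=2):
# keep the leading ndim-1 axes and collapse the rest into one.
def flatten_to_ndim(x, ndim=1):
    lead = x.shape[:ndim - 1]       # first ndim-1 dimensions kept as-is
    return x.reshape(lead + (-1,))  # remaining dimensions collapsed

x = np.zeros((2, 3, 4, 5))
print(flatten_to_ndim(x, ndim=2).shape)  # -> (2, 60)
print(flatten_to_ndim(x, ndim=1).shape)  # -> (120,)
```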
@@ -749,7 +749,7 @@ class PushOutScanOutput(gof.Optimizer):
                 # dot is usually faster on two large matrices than
                 # a bunch of small ones
                 outer_dot_inputs[0] = theano.tensor.flatten(
-                    outer_dot_inputs[0].dimshuffle(1, 0, 2), outdim=2)
+                    outer_dot_inputs[0].dimshuffle(1, 0, 2), ndim=2)
                 shape_input1 = theano.tensor.shape(outer_dot_inputs[1])
                 outer_dot_inputs[1] =\
...
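The comment in this hunk gives the rationale: one large dot is usually faster than many small ones. A minimal NumPy sketch of that rewrite (illustrative shapes, not the optimizer's actual code):

```python
import numpy as np

# Per-step inputs stacked along axis 0, as a scan would see them.
a = np.random.rand(5, 10, 3)
b = np.random.rand(3, 7)

# Many small dots: what the unoptimized scan loop computes.
small = np.stack([a[i] @ b for i in range(a.shape[0])])

# One large dot on the flattened input, like flatten(..., ndim=2),
# then reshape back to the per-step layout.
big = (a.reshape(-1, 3) @ b).reshape(5, 10, 7)

assert np.allclose(small, big)
```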
@@ -105,8 +105,8 @@ def conv2d(input, filters, image_shape=None, filter_shape=None,
                       " warn.signal_conv2d_interface to False",
                       stacklevel=3)
-        output = tensor.flatten(output.T, outdim=2).T
+        output = tensor.flatten(output.T, ndim=2).T
     elif input.ndim == 2 or filters.ndim == 2:
-        output = tensor.flatten(output.T, outdim=3).T
+        output = tensor.flatten(output.T, ndim=3).T
     return output
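The transpose-flatten-transpose idiom in this hunk merges the *leading* axes rather than the trailing ones. A hedged NumPy sketch of the shape effect (the helper name is made up for illustration):

```python
import numpy as np

# Hypothetical analogue of tensor.flatten(output.T, ndim=2).T:
# transposing reverses the axis order, so flattening to 2 dimensions
# merges what were originally the leading axes, keeping the last axis.
def collapse_leading_axes(x):
    xt = x.T                            # reverse axis order
    flat = xt.reshape(xt.shape[0], -1)  # like flatten(xt, ndim=2)
    return flat.T

x = np.zeros((2, 3, 4))
print(collapse_leading_axes(x).shape)  # -> (6, 4): axes (2, 3) merged
```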