Commit d4ac3d05 authored by nouiz

Merge pull request #264 from delallea/minor

Minor stuff
@@ -67,13 +67,15 @@ There are less methods to define for an Op than for a Type:
 .. method:: infer_shape(node, (i0_shapes,i1_shapes,...))
-    Allow optimization to lift the Shape op over this op.
-    Example of why this is good is that we compute an op only to take its shape,
-    we will be able to have the shape without its computation.
-    must return a tuple with one tuple with the shape of each output.
-    Example of matrix-matrix product input_shapes will have as input
-    (node, ((x0,x1), (y0,y1))) and should return [(x0, y1)]. Both the
-    inputs and the return value may be theano variables.
+    Allow optimizations to lift the Shape op over this op.
+    An example of why this is good is when we only need the shape of a
+    variable: we will be able to obtain it without computing the variable
+    itself.
+    Must return a list where each element is a tuple representing the shape
+    of one output.
+    For example, for the matrix-matrix product ``infer_shape`` will have as
+    inputs (node, ((x0,x1), (y0,y1))) and should return [(x0, y1)]. Both the
+    inputs and the return value may be Theano variables.
 .. method:: c_code_cache_version()
......
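The ``infer_shape`` contract described in this hunk can be sketched in plain Python. This is a minimal illustration, not Theano's actual implementation; the function name is hypothetical:

```python
def matmul_infer_shape(node, input_shapes):
    # For a matrix-matrix product, input_shapes arrives as
    # ((x0, x1), (y0, y1)); the result shape is (x0, y1), returned as a
    # list with one tuple per output (here, a single output).
    (x0, x1), (y0, y1) = input_shapes
    return [(x0, y1)]

print(matmul_infer_shape(None, ((3, 4), (4, 5))))  # [(3, 5)]
```

This is what lets the optimizer answer "what is the shape of x.dot(y)?" without ever running the product.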
@@ -454,14 +454,14 @@ TensorVariable
 A few examples of patterns and their effect:
-    ('x') -> make a 0d (scalar) into a 1d vector
-    (0, 1) -> identity for 2d vectors
-    (1, 0) -> inverts the first and second dimensions
-    ('x', 0) -> make a row out of a 1d vector (N to 1xN)
-    (0, 'x') -> make a column out of a 1d vector (N to Nx1)
-    (2, 0, 1) -> AxBxC to CxAxB
-    (0, 'x', 1) -> AxB to Ax1xB
-    (1, 'x', 0) -> AxB to Bx1xA
+    * ('x') -> make a 0d (scalar) into a 1d vector
+    * (0, 1) -> identity for 2d vectors
+    * (1, 0) -> inverts the first and second dimensions
+    * ('x', 0) -> make a row out of a 1d vector (N to 1xN)
+    * (0, 'x') -> make a column out of a 1d vector (N to Nx1)
+    * (2, 0, 1) -> AxBxC to CxAxB
+    * (0, 'x', 1) -> AxB to Ax1xB
+    * (1, 'x', 0) -> AxB to Bx1xA
.. method:: flatten(ndim=1)
......
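A few of the dimshuffle patterns listed above have direct NumPy equivalents, which may help make the notation concrete (a sketch in NumPy, not Theano's dimshuffle itself):

```python
import numpy as np

v = np.arange(3)
# ('x', 0): make a row out of a 1d vector (N -> 1xN)
row = v[np.newaxis, :]          # shape (1, 3)
# (0, 'x'): make a column out of a 1d vector (N -> Nx1)
col = v[:, np.newaxis]          # shape (3, 1)
# (2, 0, 1): AxBxC -> CxAxB is a pure transpose
a = np.zeros((2, 3, 4))
t = a.transpose(2, 0, 1)        # shape (4, 2, 3)
print(row.shape, col.shape, t.shape)
```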
@@ -1798,14 +1798,14 @@ pprint.assign(_shape, printing.MemberPrinter('shape'))
 class SpecifyShape(Op):
     """
-    L{Op} put into the graph the user provided shape
+    L{Op} that puts into the graph the user-provided shape.
-    In the case where this op stay in the final graph, we assert the shape.
+    In the case where this op stays in the final graph, we assert the shape.
     For this the output of this op must be used in the graph. This is not
     the case most of the time if we only take the shape of the output.
-    Maybe there is other optimization that will mess with this.
+    Maybe there are other optimizations that will mess with this.
-    @note: Maybe in the futur we will never do the assert!
+    @note: Maybe in the future we will never do the assert!
     @note: We currently don't support specifying partial shape information.
     """
     view_map = {0: [0]}
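The behavior SpecifyShape documents can be sketched in a few lines of plain Python (a hypothetical helper for illustration, not Theano's implementation):

```python
import numpy as np

def specify_shape(x, shape):
    # Assert the user-provided shape against the actual value at run time.
    assert x.shape == tuple(shape), (x.shape, shape)
    # The output is the input itself, mirroring view_map = {0: [0]}.
    return x

y = specify_shape(np.ones((2, 3)), (2, 3))
print(y.shape)  # (2, 3)
```

As the docstring notes, the assert only fires if this output is actually used in the final graph; taking only its shape bypasses it.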
@@ -1913,7 +1913,7 @@ class MaxAndArgmax(Op):
     def perform(self, node, inp, outs):
         x, axis = inp
         max, max_idx = outs
-        if len(axis) == 0 or python_all(axis == range(x.ndim)):
+        if python_all(axis == range(x.ndim)):
             axis = None
         max[0] = numpy.asarray(numpy.max(x, axis))
         max_idx[0] = theano._asarray(numpy.argmax(x, axis), dtype='int32')
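The logic of this hunk can be sketched with NumPy alone: when the requested axes cover every dimension, axis is collapsed to None so the reduction runs over the whole array (illustrative sketch, not the Op's code):

```python
import numpy as np

x = np.array([[1, 9], [4, 2]])
axis = (0, 1)
# If axis names every dimension in order, reduce over the full array.
if len(axis) == x.ndim and all(a == i for i, a in enumerate(axis)):
    axis = None
max_val = np.max(x, axis)
max_idx = np.argmax(x, axis)   # index into the flattened array
print(max_val, max_idx)        # 9 1
```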
@@ -2945,18 +2945,19 @@ class Subtensor(Op):
     This class uses a relatively complex internal representation of the inputs
     to remember how the input tensor x should be sliced. The instance variable
-    idxlist is a list whose elements are either integers, or slices. The
+    idx_list is a list whose elements are either integers, or slices. The
     integers are indexes into the inputs array, and the start/stop/step members
     of each slice are also integer indexes into the inputs array (or None). The
     inputs array is the tensor x, followed by scalar integer variables.
     @todo: add support for advanced tensor indexing (in Subtensor_dx too).
-    The idx_list is a tuple similar in structure to the sort of key you might expect in numpy's
-    basic indexing mode. It has one element for each explicitly named dimension. In numpy, the elements
-    can be either integers or slices containing integers and None. In Subtensor, each element
-    can additionally be a Scalar instance, and slice components can also be Scalar instances
-    too.
+    The idx_list is a tuple similar in structure to the sort of key you might
+    expect in numpy's basic indexing mode. It has one element for each
+    explicitly named dimension. In numpy, the elements can be either integers
+    or slices containing integers and None. In Subtensor, each element can
+    additionally be a Scalar instance, and slice components can also be Scalar
+    instances too.
     """
     e_invalid = ( 'The index list is longer (size %d) than the number of '
                   'dimensions of the tensor(namely %d). You are asking for '
......
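The numpy basic-indexing key that idx_list mirrors can be shown directly (a NumPy sketch of the structure, not Subtensor's own representation):

```python
import numpy as np

# One element per explicitly named dimension: each is an integer or a
# slice built from integers and None.
x = np.arange(12).reshape(3, 4)
key = (slice(0, 2, None), 3)   # equivalent to x[0:2, 3]
print(x[key].tolist())         # [3, 7]
```

In Subtensor, any of these integers (including the start/stop/step of a slice) may additionally be a Scalar graph variable rather than a Python int.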
@@ -11,6 +11,8 @@ import operator
 import itertools
 import sys
 import traceback
+from itertools import izip
 import numpy
 import numpy as N # guys... please don't do this in the library :(
@@ -676,7 +678,7 @@ class ShapeFeature(object):
         add an optional Param() argument to promise that inputs will
         have a certain shape (or even to have certain shapes in
         certain dimensions). We can't automatically infer the shape of
-        shared variable as they can change of shape during the
+        shared variables as they can change of shape during the
         execution by default. (NOT IMPLEMENTED YET, BUT IS IN TRAC)
@@ -918,7 +920,7 @@ class ShapeFeature(object):
                 + ' != len(node.outputs) = '
                 + str(len(node.outputs)))
-        for r, s in zip(node.outputs, o_shapes):
+        for r, s in izip(node.outputs, o_shapes):
             self.set_shape(r, s)
     def on_change_input(self, env, node, i, r, new_r):
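The zip-to-izip swap is a small memory optimization: under Python 2, zip builds the full list of pairs up front, while itertools.izip yields them lazily. A sketch of the pattern (using Python 3's zip, which is already lazy and equivalent to izip):

```python
outputs = ['r0', 'r1', 'r2']
o_shapes = [(2, 3), (4,), (5, 6)]
shapes = {}
for r, s in zip(outputs, o_shapes):  # pairs produced one at a time
    shapes[r] = s
print(shapes['r1'])  # (4,)
```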
@@ -1431,7 +1433,7 @@ def local_upcast_elemwise_constant_inputs(node):
 @gof.local_optimizer([T.Subtensor])
 def local_useless_subtensor(node):
     """
-    Remove Subtensor if it take the full input
+    Remove Subtensor if it takes the full input
     """
     if isinstance(node.op, T.Subtensor):
         # This optimization needs ShapeOpt and env.shape_feature
......
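What this optimization eliminates can be shown with NumPy (a sketch of the identity being exploited, not the optimizer itself):

```python
import numpy as np

# A Subtensor whose slice spans the entire input is the identity,
# so a graph node computing x[0:n] can be replaced by x directly.
x = np.arange(6).reshape(2, 3)
full = x[0:2]                    # slice covers the whole first dimension
print(np.array_equal(full, x))   # True
```

Proving the slice is "full" at compile time is exactly why the optimization needs ShapeOpt and env.shape_feature.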