Commit d4ac3d05 authored by nouiz

Merge pull request #264 from delallea/minor

Minor stuff
@@ -67,13 +67,15 @@ There are less methods to define for an Op than for a Type:

 .. method:: infer_shape(node, (i0_shapes,i1_shapes,...))

-    Allow optimization to lift the Shape op over this op.
-    Example of why this is good is that we compute an op only to take its shape,
-    we will be able to have the shape without its computation.
-    must return a tuple with one tuple with the shape of each output.
-    Example of matrix-matrix product input_shapes will have as input
-    (node, ((x0,x1), (y0,y1))) and should return [(x0, y1)]. Both the
-    inputs and the return value may be theano variables.
+    Allow optimizations to lift the Shape op over this op.
+    An example of why this is good is when we only need the shape of a
+    variable: we will be able to obtain it without computing the variable
+    itself.
+    Must return a list where each element is a tuple representing the shape
+    of one output.
+    For example, for the matrix-matrix product ``infer_shape`` will have as
+    inputs (node, ((x0,x1), (y0,y1))) and should return [(x0, y1)]. Both the
+    inputs and the return value may be Theano variables.

 .. method:: c_code_cache_version()

...
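To make the contract described in this hunk concrete, here is a minimal, hypothetical sketch of an `infer_shape` for a matrix-matrix product, checked against numpy. The function name and the dropped `node` argument are simplifications for illustration, not Theano's actual Op machinery:

```python
import numpy

def dot_infer_shape(node, input_shapes):
    # Hypothetical sketch: given the symbolic shapes of the two inputs,
    # return a list with one shape tuple per output, without ever
    # computing the product itself.
    (x0, x1), (y0, y1) = input_shapes
    return [(x0, y1)]

# Check the shape rule against a concrete numpy computation.
x = numpy.ones((3, 4))
y = numpy.ones((4, 5))
(out_shape,) = dot_infer_shape(None, (x.shape, y.shape))
assert out_shape == (3, 5)
assert numpy.dot(x, y).shape == out_shape
```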
@@ -454,14 +454,14 @@ TensorVariable

 A few examples of patterns and their effect:

-    ('x') -> make a 0d (scalar) into a 1d vector
-    (0, 1) -> identity for 2d vectors
-    (1, 0) -> inverts the first and second dimensions
-    ('x', 0) -> make a row out of a 1d vector (N to 1xN)
-    (0, 'x') -> make a column out of a 1d vector (N to Nx1)
-    (2, 0, 1) -> AxBxC to CxAxB
-    (0, 'x', 1) -> AxB to Ax1xB
-    (1, 'x', 0) -> AxB to Bx1xA
+    * ('x') -> make a 0d (scalar) into a 1d vector
+    * (0, 1) -> identity for 2d vectors
+    * (1, 0) -> inverts the first and second dimensions
+    * ('x', 0) -> make a row out of a 1d vector (N to 1xN)
+    * (0, 'x') -> make a column out of a 1d vector (N to Nx1)
+    * (2, 0, 1) -> AxBxC to CxAxB
+    * (0, 'x', 1) -> AxB to Ax1xB
+    * (1, 'x', 0) -> AxB to Bx1xA

 .. method:: flatten(ndim=1)

...
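Each of the patterns in this list has a plain numpy counterpart built from `transpose` and `newaxis`; a quick sketch verifying a few of them:

```python
import numpy

a = numpy.arange(6).reshape(2, 3)   # shape (2, 3), i.e. AxB
v = numpy.arange(4)                 # 1d vector of length N=4

# (1, 0) -> inverts the first and second dimensions
assert numpy.transpose(a, (1, 0)).shape == (3, 2)

# ('x', 0) -> make a row out of a 1d vector (N to 1xN)
assert v[numpy.newaxis, :].shape == (1, 4)

# (0, 'x') -> make a column out of a 1d vector (N to Nx1)
assert v[:, numpy.newaxis].shape == (4, 1)

# (0, 'x', 1) -> AxB to Ax1xB
assert a[:, numpy.newaxis, :].shape == (2, 1, 3)

# (1, 'x', 0) -> AxB to Bx1xA (transpose, then insert a broadcast axis)
assert numpy.transpose(a, (1, 0))[:, numpy.newaxis, :].shape == (3, 1, 2)
```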
@@ -1798,14 +1798,14 @@ pprint.assign(_shape, printing.MemberPrinter('shape'))

 class SpecifyShape(Op):
     """
-    L{Op} put into the graph the user provided shape
+    L{Op} that puts into the graph the user-provided shape.

-    In the case where this op stay in the final graph, we assert the shape.
+    In the case where this op stays in the final graph, we assert the shape.
     For this the output of this op must be used in the graph. This is not
     the case most of the time if we only take the shape of the output.
-    Maybe there is other optimization that will mess with this.
+    Maybe there are other optimizations that will mess with this.

-    @note: Maybe in the futur we will never do the assert!
+    @note: Maybe in the future we will never do the assert!
     @note: We currently don't support specifying partial shape information.
     """
     view_map = {0: [0]}
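The runtime behaviour that this docstring describes can be sketched at the numpy level. The function below is a hypothetical stand-in, not Theano's implementation: it asserts the declared shape and returns the input unchanged, mirroring the `view_map = {0: [0]}` declaration (output 0 is a view of input 0):

```python
import numpy

def specify_shape(x, shape):
    # Hypothetical numpy-level sketch of what SpecifyShape asserts at
    # runtime: the value must have exactly the declared (full) shape.
    assert x.shape == tuple(shape), (x.shape, shape)
    # Return the input itself, mirroring view_map = {0: [0]}.
    return x

x = numpy.zeros((2, 3))
y = specify_shape(x, (2, 3))   # passes: the shape matches
assert y is x
```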
@@ -1913,7 +1913,7 @@ class MaxAndArgmax(Op):
     def perform(self, node, inp, outs):
         x, axis = inp
         max, max_idx = outs
-        if len(axis) == 0 or python_all(axis == range(x.ndim)):
+        if python_all(axis == range(x.ndim)):
             axis = None
         max[0] = numpy.asarray(numpy.max(x, axis))
         max_idx[0] = theano._asarray(numpy.argmax(x, axis), dtype='int32')
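The point of the changed condition is that when `axis` lists every dimension of `x`, the reduction collapses to `axis=None`, making `argmax` return an index into the flattened array. A standalone sketch of that logic (the function name and the single-axis fallback are simplifications for illustration):

```python
import numpy

def max_and_argmax(x, axis):
    # Sketch of the perform() logic above: when `axis` names every
    # dimension of x in order, reduce over all of them via axis=None,
    # so argmax returns an index into the flattened array.
    if list(axis) == list(range(x.ndim)):
        axis = None
    else:
        axis = axis[0]  # single-axis case, kept simple for illustration
    return numpy.max(x, axis), numpy.argmax(x, axis)

x = numpy.array([[1, 5], [3, 2]])
m, idx = max_and_argmax(x, (0, 1))   # full reduction over both axes
assert m == 5 and idx == 1           # 5 sits at flat position 1
```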
@@ -2945,18 +2945,19 @@ class Subtensor(Op):
     This class uses a relatively complex internal representation of the inputs
     to remember how the input tensor x should be sliced. The instance variable
-    idxlist is a list whose elements are either integers, or slices. The
+    idx_list is a list whose elements are either integers, or slices. The
     integers are indexes into the inputs array, and the start/stop/step members
     of each slice are also integer indexes into the inputs array (or None). The
     inputs array is the tensor x, followed by scalar integer variables.

     @todo: add support for advanced tensor indexing (in Subtensor_dx too).

-    The idx_list is a tuple similar in structure to the sort of key you might expect in numpy's
-    basic indexing mode. It has one element for each explicitly named dimension. In numpy, the elements
-    can be either integers or slices containing integers and None. In Subtensor, each element
-    can additionally be a Scalar instance, and slice components can also be Scalar instances
-    too.
+    The idx_list is a tuple similar in structure to the sort of key you might
+    expect in numpy's basic indexing mode. It has one element for each
+    explicitly named dimension. In numpy, the elements can be either integers
+    or slices containing integers and None. In Subtensor, each element can
+    additionally be a Scalar instance, and slice components can also be Scalar
+    instances too.
     """
     e_invalid = ('The index list is longer (size %d) than the number of '
                  'dimensions of the tensor (namely %d). You are asking for '

...
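The "sort of key you might expect in numpy's basic indexing mode" that the docstring refers to looks like this in plain numpy (Subtensor's `idx_list` mirrors this structure, except that integers and slice bounds may instead point at scalar Theano variables in the Op's inputs array):

```python
import numpy

x = numpy.arange(24).reshape(2, 3, 4)

# A basic-indexing key with one element per explicitly named dimension:
# an integer, a full slice, and a slice with integer bounds.
key = (1, slice(None), slice(0, 2))

# The integer drops the first dimension; the slices keep the other two.
assert x[key].shape == (3, 2)
assert numpy.array_equal(x[key], x[1, :, 0:2])
```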
@@ -11,6 +11,8 @@ import operator
 import itertools
 import sys
 import traceback
+from itertools import izip
+
 import numpy
 import numpy as N  # guys... please don't do this in the library :(
@@ -676,7 +678,7 @@ class ShapeFeature(object):
     add an optional Param() argument to promise that inputs will
     have a certain shape (or even to have certain shapes in
     certain dimensions). We can't automatically infer the shape of
-    shared variable as they can change of shape during the
+    shared variables as they can change shape during the
     execution by default. (NOT IMPLEMENTED YET, BUT IS IN TRAC)
@@ -918,7 +920,7 @@ class ShapeFeature(object):
                         + ' != len(node.outputs) = '
                         + str(len(node.outputs)))
-        for r, s in zip(node.outputs, o_shapes):
+        for r, s in izip(node.outputs, o_shapes):
             self.set_shape(r, s)

     def on_change_input(self, env, node, i, r, new_r):
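The motivation for the `zip` → `izip` change is laziness: in Python 2, `zip()` builds a full list of pairs up front, while `itertools.izip` yields them one at a time. (Python 3's built-in `zip` is already lazy, which is why `izip` was dropped from `itertools` there.) A small version-portable demonstration:

```python
import itertools
import sys

# Pick the lazy pairing function for the running interpreter.
if sys.version_info[0] >= 3:
    izip = zip                 # lazy by default in Python 3
else:
    izip = itertools.izip      # the lazy variant in Python 2

pairs = izip(range(3), 'abc')
assert not isinstance(pairs, list)   # an iterator, not a materialized list
assert list(pairs) == [(0, 'a'), (1, 'b'), (2, 'c')]
```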
@@ -1431,7 +1433,7 @@ def local_upcast_elemwise_constant_inputs(node):
 @gof.local_optimizer([T.Subtensor])
 def local_useless_subtensor(node):
     """
-    Remove Subtensor if it take the full input
+    Remove Subtensor if it takes the full input
     """
     if isinstance(node.op, T.Subtensor):
         # This optimization needs ShapeOpt and env.shape_feature

...
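"Takes the full input" means every index in the Subtensor is a slice that covers its whole dimension, so the op is an identity and can be removed. A hypothetical helper (not the optimization's actual code) showing the check at the numpy level:

```python
import numpy

def is_full_slice(idx, length):
    # Hypothetical helper mirroring the check this optimization needs:
    # a slice covers the whole dimension when its start is None/0, its
    # stop is None/length, and its step is None/1.
    return (idx.start in (None, 0)
            and idx.stop in (None, length)
            and idx.step in (None, 1))

x = numpy.arange(12).reshape(3, 4)

full = (slice(None), slice(0, 4))
assert all(is_full_slice(i, n) for i, n in zip(full, x.shape))
assert numpy.array_equal(x[full], x)   # identity: the op can be removed

partial = (slice(None), slice(0, 2))   # second dim is cut: keep the op
assert not all(is_full_slice(i, n) for i, n in zip(partial, x.shape))
```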