Commit 90023285 authored by Frederic

Change debugprint id

parent 138944d2
@@ -400,10 +400,10 @@ such that: ``var.clients[*][0].inputs[index]`` or
>>> v = theano.tensor.vector()
>>> f = theano.function([v], (v+1).sum())
>>> theano.printing.debugprint(f)
Sum{acc_dtype=float64} [id A] '' 1
|Elemwise{add,no_inplace} [id B] '' 0
|TensorConstant{(1,) of 1.0} [id C]
|<TensorType(float64, vector)> [id D]
>>> # Sorted list of all nodes in the compiled graph.
>>> topo = f.maker.fgraph.toposort()
>>> topo[0].outputs[0].clients
...
@@ -81,19 +81,19 @@ iteration number or other kinds of information in the name.
2) The second function to print a graph is :func:`theano.printing.debugprint`
>>> theano.printing.debugprint(f.maker.fgraph.outputs[0]) # doctest: +NORMALIZE_WHITESPACE
Elemwise{mul,no_inplace} [id A] ''
|TensorConstant{2.0} [id B]
|x [id C]
Each line printed represents a Variable in the graph.
The line ``|x [id C]`` means the variable named ``x`` with debugprint identifier
``[id C]`` is an input of the Elemwise. If you accidentally have two variables called ``x`` in
your graph, their different debugprint identifiers will be your clue.
The line ``|TensorConstant{2.0} [id B]`` means that there is a constant 2.0
with this debugprint identifier.
The line ``Elemwise{mul,no_inplace} [id A] ''`` is indented less than
the other ones, because it means there is a variable computed by multiplying
the other (more indented) ones together.
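The stability of these identifiers — the same variable always prints with the same id, so accidental duplicates stand out — can be sketched with a small memoized counter. This is an illustrative toy, not Theano's actual implementation:

```python
def make_id_assigner():
    """Assign a stable identifier to each object, reusing it on repeat visits."""
    done = {}  # maps object -> its identifier string

    def get_id(obj):
        if obj not in done:
            done[obj] = "[id %d]" % len(done)  # first visit: next free number
        return done[obj]

    return get_id

get_id = make_id_assigner()
x, y = object(), object()
print(get_id(x), get_id(y), get_id(x))  # x keeps its id on the second visit
```

Because the mapping is keyed on the object itself rather than its name, two distinct variables both called ``x`` would still receive different identifiers.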
@@ -106,25 +106,25 @@ printed? Look for debugprint identifier using the Find feature of your text
editor.
>>> theano.printing.debugprint(gy) # doctest: +NORMALIZE_WHITESPACE
Elemwise{mul} [id A] ''
|Elemwise{mul} [id B] ''
| |Elemwise{second,no_inplace} [id C] ''
| | |Elemwise{pow,no_inplace} [id D] ''
| | | |x [id E]
| | | |TensorConstant{2} [id F]
| | |TensorConstant{1.0} [id G]
| |TensorConstant{2} [id F]
|Elemwise{pow} [id H] ''
|x [id E]
|Elemwise{sub} [id I] ''
|TensorConstant{2} [id F]
|DimShuffle{} [id J] ''
|TensorConstant{1} [id K]
>>> theano.printing.debugprint(gy, depth=2) # doctest: +NORMALIZE_WHITESPACE
Elemwise{mul} [id A] ''
|Elemwise{mul} [id B] ''
|Elemwise{pow} [id C] ''
If the depth parameter is provided, it limits the number of levels that are
...
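The depth behaviour can be mimicked with a tiny recursive printer over nested tuples — a hypothetical sketch, not Theano's implementation: each level indents its children with ``" |"``, and recursion stops once the remaining depth reaches zero.

```python
def tree_print(node, depth=-1, prefix=""):
    """Render a (name, children...) tuple tree, debugprint-style.

    depth=-1 means unlimited; otherwise only `depth` levels are emitted.
    """
    lines = []
    if depth == 0:
        return lines  # depth budget exhausted: prune this subtree
    name, *children = node if isinstance(node, tuple) else (node,)
    lines.append(prefix + str(name))
    for child in children:
        lines.extend(tree_print(child, depth - 1, prefix + " |"))
    return lines

tree = ("mul", ("mul", "second", "2"), ("pow", "x", "sub"))
print("\n".join(tree_print(tree)))           # full tree
print("\n".join(tree_print(tree, depth=2)))  # only two levels survive
```

With ``depth=2`` only the root and its direct inputs are printed, which matches the truncated doctest output above: the grandchildren are pruned, not summarized.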
@@ -74,11 +74,11 @@ message becomes :
z = z + y
Debugprint of the apply node:
Elemwise{add,no_inplace} [id A] <TensorType(float64, vector)> ''
|Elemwise{add,no_inplace} [id B] <TensorType(float64, vector)> ''
| |<TensorType(float64, vector)> [id C] <TensorType(float64, vector)>
| |<TensorType(float64, vector)> [id C] <TensorType(float64, vector)>
|<TensorType(float64, vector)> [id D] <TensorType(float64, vector)>
Here we can see that the error can be traced back to the line ``z = z + y``.
For this example, using ``optimizer=fast_compile`` worked. If it did not,
@@ -150,12 +150,12 @@ Running the above code generates the following error message:
Inputs scalar values: ['not scalar', 'not scalar']
Debugprint of the apply node:
Dot22 [id A] <TensorType(float64, matrix)> ''
|x [id B] <TensorType(float64, matrix)>
|DimShuffle{1,0} [id C] <TensorType(float64, matrix)> ''
|Flatten{2} [id D] <TensorType(float64, matrix)> ''
|DimShuffle{2,0,1} [id E] <TensorType(float64, 3D)> ''
|W1 [id F] <TensorType(float64, 3D)>
HINT: Re-running with most Theano optimization disabled could give you a back-traces when this node was created. This can be done with by setting the Theano flags 'optimizer=fast_compile'. If that does not work, Theano optimization can be disabled with 'optimizer=None'.
@@ -392,8 +392,8 @@ can be achieved as follows:
:options: +NORMALIZE_WHITESPACE
*** NaN detected ***
Elemwise{Composite{(log(i0) * i0)}} [id A] ''
|x [id B]
Inputs : [array(0.0)]
Outputs: [array(nan)]
...
@@ -67,39 +67,39 @@ Debug Print
The pre-compilation graph:
>>> theano.printing.debugprint(prediction) # doctest: +NORMALIZE_WHITESPACE
Elemwise{gt,no_inplace} [id A] ''
|Elemwise{true_div,no_inplace} [id B] ''
| |DimShuffle{x} [id C] ''
| | |TensorConstant{1} [id D]
| |Elemwise{add,no_inplace} [id E] ''
| |DimShuffle{x} [id F] ''
| | |TensorConstant{1} [id D]
| |Elemwise{exp,no_inplace} [id G] ''
| |Elemwise{sub,no_inplace} [id H] ''
| |Elemwise{neg,no_inplace} [id I] ''
| | |dot [id J] ''
| | |x [id K]
| | |w [id L]
| |DimShuffle{x} [id M] ''
| |b [id N]
|DimShuffle{x} [id O] ''
|TensorConstant{0.5} [id P]
The post-compilation graph:
>>> theano.printing.debugprint(predict) # doctest: +NORMALIZE_WHITESPACE
Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [id A] '' 4
|CGemv{inplace} [id B] '' 3
| |AllocEmpty{dtype='float64'} [id C] '' 2
| | |Shape_i{0} [id D] '' 1
| | |x [id E]
| |TensorConstant{1.0} [id F]
| |x [id E]
| |w [id G]
| |TensorConstant{0.0} [id H]
|InplaceDimShuffle{x} [id I] '' 0
| |b [id J]
|TensorConstant{(1,) of 0.5} [id K]
Picture Printing of Graphs
...
@@ -24,11 +24,11 @@ Currently, information regarding shape is used in two ways in Theano:
>>> x = theano.tensor.matrix('x')
>>> f = theano.function([x], (x ** 2).shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
MakeVector{dtype='int64'} [id A] '' 2
|Shape_i{0} [id B] '' 1
| |x [id C]
|Shape_i{1} [id D] '' 0
|x [id C]
The output of this compiled function does not contain any multiplication
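The idea that the shape can be known without ever evaluating the elementwise operation can be sketched as follows — an illustrative stand-in for Theano's shape-inference machinery, not its real code. The shape of ``x ** 2`` is just the broadcast of its input shapes, so only ``Shape_i``-like lookups need to remain in the compiled graph:

```python
def elemwise_shape(*input_shapes):
    """Broadcast input shapes the NumPy way, without touching any data."""
    ndim = max(len(s) for s in input_shapes)
    # left-pad with 1s so all shapes have the same rank
    padded = [(1,) * (ndim - len(s)) + tuple(s) for s in input_shapes]
    out = []
    for dims in zip(*padded):
        non_one = {d for d in dims if d != 1}
        if len(non_one) > 1:
            raise ValueError("incompatible shapes: %r" % (input_shapes,))
        out.append(non_one.pop() if non_one else 1)
    return tuple(out)

# shape of x ** 2 for a (3, 4) matrix: no squaring ever happens
print(elemwise_shape((3, 4)))          # -> (3, 4)
print(elemwise_shape((3, 4), (1, 4)))  # -> (3, 4), broadcasting against a row
```

This is also why the lossy shape-only path below can miss runtime errors: once only shapes flow through the graph, a mismatch in the actual values is never checked.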
@@ -51,24 +51,24 @@ can lead to errors. Consider this example:
>>> f = theano.function([x, y], z.shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
MakeVector{dtype='int64'} [id A] '' 4
|Elemwise{Add}[(0, 0)] [id B] '' 3
| |Shape_i{0} [id C] '' 1
| | |x [id D]
| |Shape_i{0} [id E] '' 2
| |y [id F]
|Shape_i{1} [id G] '' 0
|x [id D]
>>> f(xv, yv) # DOES NOT RAISE AN ERROR, THOUGH IT SHOULD.
array([8, 4])
>>> f = theano.function([x, y], z)  # Do not take the shape.
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
Join [id A] '' 0
|TensorConstant{0} [id B]
|x [id C]
|y [id D]
>>> f(xv, yv) # doctest: +ELLIPSIS
Traceback (most recent call last):
@@ -121,8 +121,8 @@ upgrade. Here is the current state of what can be done:
>>> x_specify_shape = theano.tensor.specify_shape(x, (2, 2))
>>> f = theano.function([x], (x_specify_shape ** 2).shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
DeepCopyOp [id A] '' 0
|TensorConstant{(2,) of 2} [id B]
Future Plans
============
...
@@ -647,11 +647,11 @@ def debugprint(r, prefix='', depth=-1, done=None, print_type=False,
if obj in done:
    id_str = done[obj]
elif ids == "id":
    id_str = "[id %s]" % str(id(r))
elif ids == "int":
    id_str = "[id %s]" % str(len(done))
elif ids == "CHAR":
    id_str = "[id %s]" % char_from_number(len(done))
elif ids == "":
    id_str = ""
done[obj] = id_str
...
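The ``CHAR`` branch above relies on ``char_from_number``, which renders an index in base 26 using the letters A–Z; that is why the identifiers in the tests below run A, B, …, Z and then BA, BB, and so on. A rough equivalent, assuming Theano's actual helper behaves this way:

```python
def char_from_number(number):
    """Render a non-negative index in base 26 using the letters A-Z."""
    base = 26
    rval = ""
    if number == 0:
        rval = "A"
    while number != 0:
        number, remainder = divmod(number, base)
        rval = chr(ord("A") + remainder) + rval
    return rval

print([char_from_number(n) for n in (0, 1, 25, 26, 27)])
# -> ['A', 'B', 'Z', 'BA', 'BB']
```

Note that because A plays the role of the digit 0, index 26 prints as ``BA`` rather than ``AA`` — exactly the sequence seen in the scan-debugprint expected outputs.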
@@ -383,16 +383,16 @@ class TestMergeOptimizer:
        g = FunctionGraph([x1, x2], [e])
        MergeOptimizer().optimize(g)
        strg = theano.printing.debugprint(g, file='str')
        strref = '''Elemwise{add,no_inplace} [id A] '' 4
|dot [id B] '' 3
| |Assert{msg='Theano Assert failed!'} [id C] '' 2
| | |x1 [id D]
| | |All [id E] '' 1
| | |Elemwise{gt,no_inplace} [id F] '' 0
| | |x1 [id D]
| | |x2 [id G]
| |x2 [id G]
|dot [id B] '' 3
'''
        assert strg == strref, (strg, strref)
@@ -407,35 +407,35 @@ class TestMergeOptimizer:
        g = FunctionGraph([x1, x2, x3], [e])
        MergeOptimizer().optimize(g)
        strg = theano.printing.debugprint(g, file='str')
        strref1 = '''Elemwise{add,no_inplace} [id A] '' 6
|dot [id B] '' 5
| |Assert{msg='Theano Assert failed!'} [id C] '' 4
| | |x1 [id D]
| | |All [id E] '' 3
| | | |Elemwise{gt,no_inplace} [id F] '' 1
| | | |x1 [id D]
| | | |x3 [id G]
| | |All [id H] '' 2
| | |Elemwise{gt,no_inplace} [id I] '' 0
| | |x1 [id D]
| | |x2 [id J]
| |x2 [id J]
|dot [id B] '' 5
'''
        strref2 = '''Elemwise{add,no_inplace} [id A] '' 6
|dot [id B] '' 5
| |Assert{msg='Theano Assert failed!'} [id C] '' 4
| | |x1 [id D]
| | |All [id E] '' 3
| | | |Elemwise{gt,no_inplace} [id F] '' 1
| | | |x1 [id D]
| | | |x2 [id G]
| | |All [id H] '' 2
| | |Elemwise{gt,no_inplace} [id I] '' 0
| | |x1 [id D]
| | |x3 [id J]
| |x2 [id G]
|dot [id B] '' 5
'''
        # print(strg)
        assert strg == strref1 or strg == strref2, (strg, strref1, strref2)
@@ -450,21 +450,21 @@ class TestMergeOptimizer:
        g = FunctionGraph([x1, x2, x3], [e])
        MergeOptimizer().optimize(g)
        strg = theano.printing.debugprint(g, file='str')
        strref = '''Elemwise{add,no_inplace} [id A] '' 7
|dot [id B] '' 6
| |Assert{msg='Theano Assert failed!'} [id C] '' 5
| | |x1 [id D]
| | |All [id E] '' 3
| | |Elemwise{gt,no_inplace} [id F] '' 1
| | |x1 [id D]
| | |x3 [id G]
| |Assert{msg='Theano Assert failed!'} [id H] '' 4
| |x2 [id I]
| |All [id J] '' 2
| |Elemwise{gt,no_inplace} [id K] '' 0
| |x2 [id I]
| |x3 [id G]
|dot [id B] '' 6
'''
        # print(strg)
        assert strg == strref, (strg, strref)
@@ -479,21 +479,21 @@ class TestMergeOptimizer:
        g = FunctionGraph([x1, x2, x3], [e])
        MergeOptimizer().optimize(g)
        strg = theano.printing.debugprint(g, file='str')
        strref = '''Elemwise{add,no_inplace} [id A] '' 7
|dot [id B] '' 6
| |Assert{msg='Theano Assert failed!'} [id C] '' 5
| | |x1 [id D]
| | |All [id E] '' 3
| | |Elemwise{gt,no_inplace} [id F] '' 1
| | |x1 [id D]
| | |x3 [id G]
| |Assert{msg='Theano Assert failed!'} [id H] '' 4
| |x2 [id I]
| |All [id J] '' 2
| |Elemwise{gt,no_inplace} [id K] '' 0
| |x2 [id I]
| |x3 [id G]
|dot [id B] '' 6
'''
        print(strg)
        assert strg == strref, (strg, strref)
...
@@ -3150,26 +3150,26 @@ class Test_local_useless_elemwise_comparison(unittest.TestCase):
        theano.printing.debugprint(Z)
        # here is the output for the debug print:
        """
Elemwise{add,no_inplace} [id A] ''
|for{cpu,scan_fn} [id B] ''
| |Subtensor{int64} [id C] ''
| | |Shape [id D] ''
| | | |Subtensor{int64::} [id E] 'X[0:]'
| | | |X [id F]
| | | |Constant{0} [id G]
| | |Constant{0} [id H]
| |Subtensor{:int64:} [id I] ''
| | |Subtensor{int64::} [id E] 'X[0:]'
| | |ScalarFromTensor [id J] ''
| | |Subtensor{int64} [id C] ''
| |Subtensor{int64} [id C] ''
|Y [id K]
Inner graphs of the scan ops:
for{cpu,scan_fn} [id B] ''
>Sum{acc_dtype=float64} [id L] ''
> |X[t] [id M] -> [id I]
        """
        mode = theano.compile.get_default_mode().excluding('fusion')
@@ -3177,30 +3177,30 @@ class Test_local_useless_elemwise_comparison(unittest.TestCase):
        theano.printing.debugprint(f, print_type=True)
        # here is the output for the debug print:
        """
Elemwise{Add}[(0, 0)] [id A] <TensorType(float64, vector)> '' 7
|for{cpu,scan_fn} [id B] <TensorType(float64, vector)> '' 6
| |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
| | |X [id D] <TensorType(float64, matrix)>
| |Subtensor{int64:int64:int8} [id E] <TensorType(float64, matrix)> '' 5
| | |X [id D] <TensorType(float64, matrix)>
| | |ScalarFromTensor [id F] <int64> '' 4
| | | |Elemwise{switch,no_inplace} [id G] <TensorType(int64, scalar)> '' 3
| | | |Elemwise{le,no_inplace} [id H] <TensorType(int8, scalar)> '' 2
| | | | |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
| | | | |TensorConstant{0} [id I] <TensorType(int8, scalar)>
| | | |TensorConstant{0} [id I] <TensorType(int8, scalar)>
| | | |TensorConstant{0} [id J] <TensorType(int64, scalar)>
| | |ScalarFromTensor [id K] <int64> '' 1
| | | |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
| | |Constant{1} [id L] <int8>
| |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
|Y [id M] <TensorType(float64, vector)>
Inner graphs of the scan ops:
for{cpu,scan_fn} [id B] <TensorType(float64, vector)> ''
>Sum{acc_dtype=float64} [id N] <TensorType(float64, scalar)> ''
> |X[t] [id O] <TensorType(float64, vector)> -> [id E]
        """
    def assert_eqs_const(self, f, val):
...
@@ -176,13 +176,13 @@ def test_debugprint():
    s = s.getvalue()
    # The additional white space is needed!
    reference = '\n'.join([
        "Elemwise{add,no_inplace} [id 0] '' ",
        " |Elemwise{add,no_inplace} [id 1] 'C' ",
        " | |A [id 2]",
        " | |B [id 3]",
        " |Elemwise{add,no_inplace} [id 4] '' ",
        " |D [id 5]",
        " |E [id 6]",
    ]) + '\n'
    if s != reference:
@@ -197,13 +197,13 @@ def test_debugprint():
    s = s.getvalue()
    # The additional white space is needed!
    reference = "\n".join([
        "Elemwise{add,no_inplace} [id A] '' ",
        " |Elemwise{add,no_inplace} [id B] 'C' ",
        " | |A [id C]",
        " | |B [id D]",
        " |Elemwise{add,no_inplace} [id E] '' ",
        " |D [id F]",
        " |E [id G]",
    ]) + '\n'
    if s != reference:
@@ -218,11 +218,11 @@ def test_debugprint():
    s = s.getvalue()
    # The additional white space is needed!
    reference = '\n'.join([
        "Elemwise{add,no_inplace} [id A] '' ",
        " |Elemwise{add,no_inplace} [id B] 'C' ",
        " |Elemwise{add,no_inplace} [id C] '' ",
        " |D [id D]",
        " |E [id E]",
    ]) + '\n'
    if s != reference:
@@ -286,40 +286,40 @@ def test_scan_debugprint1():
    for line in output_str.split('\n'):
        lines += [line]
    expected_output = """Subtensor{int64} [id A] ''
|Subtensor{int64::} [id B] ''
| |for{cpu,scan_fn} [id C] ''
| | |k [id D]
| | |IncSubtensor{Set;:int64:} [id E] ''
| | | |AllocEmpty{dtype='float64'} [id F] ''
| | | | |Elemwise{add,no_inplace} [id G] ''
| | | | | |k [id D]
| | | | | |Subtensor{int64} [id H] ''
| | | | | |Shape [id I] ''
| | | | | | |Rebroadcast{0} [id J] ''
| | | | | | |DimShuffle{x,0} [id K] ''
| | | | | | |Elemwise{second,no_inplace} [id L] ''
| | | | | | |A [id M]
| | | | | | |DimShuffle{x} [id N] ''
| | | | | | |TensorConstant{1.0} [id O]
| | | | | |Constant{0} [id P]
| | | | |Subtensor{int64} [id Q] ''
| | | | |Shape [id R] ''
| | | | | |Rebroadcast{0} [id J] ''
| | | | |Constant{1} [id S]
| | | |Rebroadcast{0} [id J] ''
| | | |ScalarFromTensor [id T] ''
| | | |Subtensor{int64} [id H] ''
| | |A [id M]
| |Constant{1} [id U]
|Constant{-1} [id V]
Inner graphs of the scan ops:
for{cpu,scan_fn} [id C] ''
>Elemwise{mul,no_inplace} [id W] ''
> |<TensorType(float64, vector)> [id X] -> [id E]
> |A_copy [id Y] -> [id M]"""
    for truth, out in zip(expected_output.split("\n"), lines):
        assert truth.strip() == out.strip()
@@ -349,43 +349,43 @@ def test_scan_debugprint2():
    for line in output_str.split('\n'):
        lines += [line]
    expected_output = """Sum{acc_dtype=float64} [id A] ''
|for{cpu,scan_fn} [id B] ''
|Elemwise{minimum,no_inplace} [id C] ''
| |Subtensor{int64} [id D] ''
| | |Shape [id E] ''
| | | |Subtensor{int64::} [id F] 'coefficients[0:]'
| | | |coefficients [id G]
| | | |Constant{0} [id H]
| | |Constant{0} [id I]
| |Subtensor{int64} [id J] ''
| |Shape [id K] ''
| | |Subtensor{int64::} [id L] ''
| | |ARange{dtype='int64'} [id M] ''
| | | |TensorConstant{0} [id N]
| | | |TensorConstant{10000} [id O]
| | | |TensorConstant{1} [id P]
| | |Constant{0} [id Q]
| |Constant{0} [id R]
|Subtensor{:int64:} [id S] ''
| |Subtensor{int64::} [id F] 'coefficients[0:]'
| |ScalarFromTensor [id T] ''
| |Elemwise{minimum,no_inplace} [id C] ''
|Subtensor{:int64:} [id U] ''
| |Subtensor{int64::} [id L] ''
| |ScalarFromTensor [id V] ''
| |Elemwise{minimum,no_inplace} [id C] ''
|Elemwise{minimum,no_inplace} [id C] ''
|x [id W]
Inner graphs of the scan ops:
for{cpu,scan_fn} [id B] ''
>Elemwise{mul,no_inplace} [id X] ''
> |coefficients[t] [id Y] -> [id S]
> |Elemwise{pow,no_inplace} [id Z] ''
> |x_copy [id BA] -> [id W]
> |<TensorType(int64, scalar)> [id BB] -> [id U]"""
    for truth, out in zip(expected_output.split("\n"), lines):
        assert truth.strip() == out.strip()
@@ -432,77 +432,77 @@ def test_scan_debugprint3():
    for line in output_str.split('\n'):
        lines += [line]
    expected_output = """Sum{acc_dtype=float64} [id A] ''
|for{cpu,scan_fn} [id B] ''
|Elemwise{minimum,no_inplace} [id C] ''
| |Subtensor{int64} [id D] ''
| | |Shape [id E] ''
| | | |Subtensor{int64::} [id F] 'coefficients[0:]'
| | | |coefficients [id G]
| | | |Constant{0} [id H]
| | |Constant{0} [id I]
| |Subtensor{int64} [id J] ''
| |Shape [id K] ''
| | |Subtensor{int64::} [id L] ''
| | |ARange{dtype='int64'} [id M] ''
| | | |TensorConstant{0} [id N]
| | | |TensorConstant{10} [id O]
| | | |TensorConstant{1} [id P]
| | |Constant{0} [id Q]
| |Constant{0} [id R]
|Subtensor{:int64:} [id S] ''
| |Subtensor{int64::} [id F] 'coefficients[0:]'
| |ScalarFromTensor [id T] ''
| |Elemwise{minimum,no_inplace} [id C] ''
|Subtensor{:int64:} [id U] ''
| |Subtensor{int64::} [id L] ''
| |ScalarFromTensor [id V] ''
| |Elemwise{minimum,no_inplace} [id C] ''
|Elemwise{minimum,no_inplace} [id C] ''
|A [id W]
|k [id X]
Inner graphs of the scan ops:
for{cpu,scan_fn} [id B] ''
>Elemwise{mul,no_inplace} [id Y] ''
> |DimShuffle{x} [id Z] ''
> | |coefficients[t] [id BA] -> [id S]
> |Elemwise{pow,no_inplace} [id BB] ''
> |Subtensor{int64} [id BC] ''
> | |Subtensor{int64::} [id BD] ''
> | | |for{cpu,scan_fn} [id BE] ''
> | | | |k_copy [id BF] -> [id X]
> | | | |IncSubtensor{Set;:int64:} [id BG] ''
> | | | | |AllocEmpty{dtype='float64'} [id BH] ''
> | | | | | |Elemwise{add,no_inplace} [id BI] ''
> | | | | | | |k_copy [id BF] -> [id X]
> | | | | | | |Subtensor{int64} [id BJ] ''
> | | | | | | |Shape [id BK] ''
> | | | | | | | |Rebroadcast{0} [id BL] ''
> | | | | | | | |DimShuffle{x,0} [id BM] ''
> | | | | | | | |Elemwise{second,no_inplace} [id BN] ''
> | | | | | | | |A_copy [id BO] -> [id W]
> | | | | | | | |DimShuffle{x} [id BP] ''
> | | | | | | | |TensorConstant{1.0} [id BQ]
> | | | | | | |Constant{0} [id BR]
> | | | | | |Subtensor{int64} [id BS] ''
> | | | | | |Shape [id BT] ''
> | | | | | | |Rebroadcast{0} [id BL] ''
> | | | | | |Constant{1} [id BU]
> | | | | |Rebroadcast{0} [id BL] ''
> | | | | |ScalarFromTensor [id BV] ''
> | | | | |Subtensor{int64} [id BJ] ''
> | | | |A_copy [id BO] -> [id W]
> | | |Constant{1} [id BW]
> | |Constant{-1} [id BX]
> |DimShuffle{x} [id BY] ''
> |<TensorType(int64, scalar)> [id BZ] -> [id U]
for{cpu,scan_fn} [id BE] ''
>Elemwise{mul,no_inplace} [#CA] '' >Elemwise{mul,no_inplace} [id CA] ''
> |<TensorType(float64, vector)> [#CB] -> [#BG] > |<TensorType(float64, vector)> [id CB] -> [id BG]
> |A_copy [#CC] -> [#BO]""" > |A_copy [id CC] -> [id BO]"""
for truth, out in zip(expected_output.split("\n"), lines): for truth, out in zip(expected_output.split("\n"), lines):
assert truth.strip() == out.strip() assert truth.strip() == out.strip()
@@ -527,54 +527,54 @@ def test_scan_debugprint4():
    for line in output_str.split('\n'):
        lines += [line]
    expected_output = """Elemwise{add,no_inplace} [id A] ''
 |Subtensor{int64::} [id B] ''
 | |for{cpu,scan_fn}.0 [id C] ''
 | | |TensorConstant{5} [id D]
 | | |IncSubtensor{Set;:int64:} [id E] ''
 | | | |AllocEmpty{dtype='int64'} [id F] ''
 | | | | |Elemwise{add,no_inplace} [id G] ''
 | | | | |TensorConstant{5} [id D]
 | | | | |Subtensor{int64} [id H] ''
 | | | | |Shape [id I] ''
 | | | | | |Subtensor{:int64:} [id J] ''
 | | | | | |<TensorType(int64, vector)> [id K]
 | | | | | |Constant{2} [id L]
 | | | | |Constant{0} [id M]
 | | | |Subtensor{:int64:} [id J] ''
 | | | |ScalarFromTensor [id N] ''
 | | | |Subtensor{int64} [id H] ''
 | | |IncSubtensor{Set;:int64:} [id O] ''
 | | |AllocEmpty{dtype='int64'} [id P] ''
 | | | |Elemwise{add,no_inplace} [id Q] ''
 | | | |TensorConstant{5} [id D]
 | | | |Subtensor{int64} [id R] ''
 | | | |Shape [id S] ''
 | | | | |Subtensor{:int64:} [id T] ''
 | | | | |<TensorType(int64, vector)> [id U]
 | | | | |Constant{2} [id V]
 | | | |Constant{0} [id W]
 | | |Subtensor{:int64:} [id T] ''
 | | |ScalarFromTensor [id X] ''
 | | |Subtensor{int64} [id R] ''
 | |Constant{2} [id Y]
 |Subtensor{int64::} [id Z] ''
 |for{cpu,scan_fn}.1 [id C] ''
 |Constant{2} [id BA]
Inner graphs of the scan ops:
for{cpu,scan_fn}.0 [id C] ''
 >Elemwise{add,no_inplace} [id BB] ''
 > |<TensorType(int64, scalar)> [id BC] -> [id E]
 > |<TensorType(int64, scalar)> [id BD] -> [id E]
 >Elemwise{add,no_inplace} [id BE] ''
 > |<TensorType(int64, scalar)> [id BF] -> [id O]
 > |<TensorType(int64, scalar)> [id BG] -> [id O]
for{cpu,scan_fn}.1 [id C] ''
 >Elemwise{add,no_inplace} [id BB] ''
 >Elemwise{add,no_inplace} [id BE] ''"""
    for truth, out in zip(expected_output.split("\n"), lines):
        assert truth.strip() == out.strip()
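The comparison loop used by these tests tolerates indentation differences by stripping each line before comparing. As a standalone sketch (the function name is ours, not a Theano API), the whitespace-insensitive check boils down to:

```python
def assert_same_debugprint(expected, actual):
    """Compare two debugprint dumps line by line, ignoring leading and
    trailing whitespace on each line (mirroring the str.strip() check
    used in the tests above)."""
    for truth, out in zip(expected.split("\n"), actual.split("\n")):
        assert truth.strip() == out.strip()
```

Note that ``zip`` stops at the shorter dump, so a missing trailing line in ``actual`` would go undetected; the tests above accept that trade-off for simplicity.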
@@ -598,122 +598,122 @@ def test_scan_debugprint5():
    for line in output_str.split('\n'):
        lines += [line]
    expected_output = """Subtensor{int64} [id A] ''
 |for{cpu,grad_of_scan_fn}.1 [id B] ''
 | |Elemwise{sub,no_inplace} [id C] ''
 | | |Subtensor{int64} [id D] ''
 | | | |Shape [id E] ''
 | | | | |for{cpu,scan_fn} [id F] ''
 | | | | |k [id G]
 | | | | |IncSubtensor{Set;:int64:} [id H] ''
 | | | | | |AllocEmpty{dtype='float64'} [id I] ''
 | | | | | | |Elemwise{add,no_inplace} [id J] ''
 | | | | | | | |k [id G]
 | | | | | | | |Subtensor{int64} [id K] ''
 | | | | | | | |Shape [id L] ''
 | | | | | | | | |Rebroadcast{0} [id M] ''
 | | | | | | | | |DimShuffle{x,0} [id N] ''
 | | | | | | | | |Elemwise{second,no_inplace} [id O] ''
 | | | | | | | | |A [id P]
 | | | | | | | | |DimShuffle{x} [id Q] ''
 | | | | | | | | |TensorConstant{1.0} [id R]
 | | | | | | | |Constant{0} [id S]
 | | | | | | |Subtensor{int64} [id T] ''
 | | | | | | |Shape [id U] ''
 | | | | | | | |Rebroadcast{0} [id M] ''
 | | | | | | |Constant{1} [id V]
 | | | | | |Rebroadcast{0} [id M] ''
 | | | | | |ScalarFromTensor [id W] ''
 | | | | | |Subtensor{int64} [id K] ''
 | | | | |A [id P]
 | | | |Constant{0} [id X]
 | | |TensorConstant{1} [id Y]
 | |Subtensor{:int64:} [id Z] ''
 | | |Subtensor{::int64} [id BA] ''
 | | | |Subtensor{:int64:} [id BB] ''
 | | | | |for{cpu,scan_fn} [id F] ''
 | | | | |Constant{-1} [id BC]
 | | | |Constant{-1} [id BD]
 | | |ScalarFromTensor [id BE] ''
 | | |Elemwise{sub,no_inplace} [id C] ''
 | |Subtensor{:int64:} [id BF] ''
 | | |Subtensor{:int64:} [id BG] ''
 | | | |Subtensor{::int64} [id BH] ''
 | | | | |for{cpu,scan_fn} [id F] ''
 | | | | |Constant{-1} [id BI]
 | | | |Constant{-1} [id BJ]
 | | |ScalarFromTensor [id BK] ''
 | | |Elemwise{sub,no_inplace} [id C] ''
 | |Subtensor{::int64} [id BL] ''
 | | |IncSubtensor{Inc;int64::} [id BM] ''
 | | | |Elemwise{second,no_inplace} [id BN] ''
 | | | | |for{cpu,scan_fn} [id BO] ''
 | | | | | |k [id G]
 | | | | | |IncSubtensor{Set;:int64:} [id H] ''
 | | | | | |A [id P]
 | | | | |DimShuffle{x,x} [id BP] ''
 | | | | |TensorConstant{0.0} [id BQ]
 | | | |IncSubtensor{Inc;int64} [id BR] ''
 | | | | |Elemwise{second,no_inplace} [id BS] ''
 | | | | | |Subtensor{int64::} [id BT] ''
 | | | | | | |for{cpu,scan_fn} [id BO] ''
 | | | | | | |Constant{1} [id BU]
 | | | | | |DimShuffle{x,x} [id BV] ''
 | | | | | |TensorConstant{0.0} [id BQ]
 | | | | |Elemwise{second} [id BW] ''
 | | | | | |Subtensor{int64} [id BX] ''
 | | | | | | |Subtensor{int64::} [id BT] ''
 | | | | | | |Constant{-1} [id BY]
 | | | | | |DimShuffle{x} [id BZ] ''
 | | | | | |Elemwise{second,no_inplace} [id CA] ''
 | | | | | |Sum{acc_dtype=float64} [id CB] ''
 | | | | | | |Subtensor{int64} [id BX] ''
 | | | | | |TensorConstant{1.0} [id R]
 | | | | |Constant{-1} [id BY]
 | | | |Constant{1} [id BU]
 | | |Constant{-1} [id CC]
 | |Alloc [id CD] ''
 | | |TensorConstant{0.0} [id BQ]
 | | |Elemwise{add,no_inplace} [id CE] ''
 | | | |Elemwise{sub,no_inplace} [id C] ''
 | | | |TensorConstant{1} [id Y]
 | | |Subtensor{int64} [id CF] ''
 | | |Shape [id CG] ''
 | | | |A [id P]
 | | |Constant{0} [id CH]
 | |A [id P]
 |Constant{-1} [id CI]
Inner graphs of the scan ops:
for{cpu,grad_of_scan_fn}.1 [id B] ''
 >Elemwise{add,no_inplace} [id CJ] ''
 > |Elemwise{mul} [id CK] ''
 > | |<TensorType(float64, vector)> [id CL] -> [id BL]
 > | |A_copy [id CM] -> [id P]
 > |<TensorType(float64, vector)> [id CN] -> [id BL]
 >Elemwise{add,no_inplace} [id CO] ''
 > |Elemwise{mul} [id CP] ''
 > | |<TensorType(float64, vector)> [id CL] -> [id BL]
 > | |<TensorType(float64, vector)> [id CQ] -> [id Z]
 > |<TensorType(float64, vector)> [id CR] -> [id CD]
for{cpu,scan_fn} [id F] ''
 >Elemwise{mul,no_inplace} [id CS] ''
 > |<TensorType(float64, vector)> [id CT] -> [id H]
 > |A_copy [id CU] -> [id P]
for{cpu,scan_fn} [id F] ''
 >Elemwise{mul,no_inplace} [id CS] ''
for{cpu,scan_fn} [id F] ''
 >Elemwise{mul,no_inplace} [id CS] ''
for{cpu,scan_fn} [id BO] ''
 >Elemwise{mul,no_inplace} [id CS] ''
for{cpu,scan_fn} [id BO] ''
 >Elemwise{mul,no_inplace} [id CS] ''"""
    for truth, out in zip(expected_output.split("\n"), lines):
        assert truth.strip() == out.strip()
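This commit rewrites every old-style debugprint marker (``[@A]`` in the docs, ``[#A]`` in the tests) to the new ``[id A]`` form. For anyone updating similar expected-output strings elsewhere, the mechanical part of the change can be sketched with a small regex helper (the name ``update_debugprint_ids`` is ours, not part of Theano):

```python
import re

# Matches an old-style debugprint identifier such as "[#BA]" or "[@A]":
# an opening bracket, "#" or "@", one or more capital letters, a closing bracket.
_OLD_ID = re.compile(r"\[[@#]([A-Z]+)\]")

def update_debugprint_ids(text):
    """Rewrite old-style debugprint markers to the new "[id A]" form,
    e.g. "|x [#C]" -> "|x [id C]" and "Sum [@A] '' 1" -> "Sum [id A] '' 1"."""
    return _OLD_ID.sub(r"[id \1]", text)
```

Applying such a helper to each expected-output string reproduces the bulk of this diff; lines without an old-style marker pass through unchanged.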