Commit 23369bc4 authored by Pascal Lamblin

Merge pull request #1953 from nouiz/debugprint

Change @ character to #, to prevent collision with github convention to ...
...@@ -400,10 +400,10 @@ such that: ``var.clients[*][0].inputs[index]`` or
>>> v = theano.tensor.vector()
>>> f = theano.function([v], (v+1).sum())
>>> theano.printing.debugprint(f)
Sum{acc_dtype=float64} [id A] '' 1
|Elemwise{add,no_inplace} [id B] '' 0
|TensorConstant{(1,) of 1.0} [id C]
|<TensorType(float64, vector)> [id D]
>>> # Sorted list of all nodes in the compiled graph.
>>> topo = f.maker.fgraph.toposort()
>>> topo[0].outputs[0].clients
...
...@@ -81,19 +81,19 @@ iteration number or other kinds of information in the name.
2) The second function to print a graph is :func:`theano.printing.debugprint`
>>> theano.printing.debugprint(f.maker.fgraph.outputs[0]) # doctest: +NORMALIZE_WHITESPACE
Elemwise{mul,no_inplace} [id A] ''
|TensorConstant{2.0} [id B]
|x [id C]
Each line printed represents a Variable in the graph.
The line ``|x [id C]`` means the variable named ``x`` with debugprint identifier
[id C] is an input of the Elemwise. If you accidentally have two variables called ``x`` in
your graph, their different debugprint identifiers will be your clue.
The line ``|TensorConstant{2.0} [id B]`` means that there is a constant 2.0
with this debugprint identifier.
The line ``Elemwise{mul,no_inplace} [id A] ''`` is indented less than
the other ones, because it means there is a variable computed by multiplying
the other (more indented) ones together.
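The identifier scheme described above can be mimicked in plain Python. The sketch below is only an illustration (the ``Node`` class and ``debug_print`` helper are hypothetical, not part of Theano): it walks an expression tree, assigns each distinct node a capital-letter id on first visit, and reuses that id when the node reappears, which is exactly how a shared variable shows up in a debugprint dump.

```python
class Node:
    """A hypothetical expression-tree node: an op name plus input nodes."""
    def __init__(self, name, *inputs):
        self.name = name
        self.inputs = inputs

def debug_print(node, ids=None, depth=0, lines=None):
    """One line per node; a node seen twice reuses its first identifier."""
    if ids is None:
        ids, lines = {}, []
    if id(node) not in ids:
        # Assign A, B, C, ... in order of first appearance (up to 26 nodes).
        ids[id(node)] = chr(ord('A') + len(ids))
    prefix = '' if depth == 0 else '| ' * (depth - 1) + '|'
    lines.append('%s%s [id %s]' % (prefix, node.name, ids[id(node)]))
    for child in node.inputs:
        debug_print(child, ids, depth + 1, lines)
    return lines

x = Node('x')
graph = Node('Elemwise{mul,no_inplace}', Node('TensorConstant{2.0}'), x)
print('\n'.join(debug_print(graph)))
```

Running this prints three lines mirroring the example above, with ``x`` receiving id ``C``; feeding it a graph that uses ``x`` twice would print ``[id C]`` on both occurrences.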
...@@ -106,25 +106,26 @@ printed? Look for debugprint identifier using the Find feature of your text
editor.
>>> theano.printing.debugprint(gy) # doctest: +NORMALIZE_WHITESPACE
Elemwise{mul} [id A] ''
|Elemwise{mul} [id B] ''
| |Elemwise{second,no_inplace} [id C] ''
| | |Elemwise{pow,no_inplace} [id D] ''
| | | |x [id E]
| | | |TensorConstant{2} [id F]
| | |TensorConstant{1.0} [id G]
| |TensorConstant{2} [id F]
|Elemwise{pow} [id H] ''
|x [id E]
|Elemwise{sub} [id I] ''
|TensorConstant{2} [id F]
|DimShuffle{} [id J] ''
|TensorConstant{1} [id K]
>>> theano.printing.debugprint(gy, depth=2) # doctest: +NORMALIZE_WHITESPACE
Elemwise{mul} [id A] ''
|Elemwise{mul} [id B] ''
|Elemwise{pow} [id C] ''
If the depth parameter is provided, it limits the number of levels that are
shown.
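The effect of the ``depth`` parameter can be reproduced in a few lines of plain Python. This is a hedged sketch, not Theano code: ``print_limited`` is a hypothetical helper doing a pre-order walk over nested tuples that stops recursing once the current level reaches the limit (``depth=-1`` meaning unlimited, as in debugprint).

```python
def print_limited(node, depth=-1, level=0):
    """Pre-order listing of a ('name', child, ...) tuple tree,
    cut off once `level` reaches `depth` (-1 means no limit)."""
    if depth != -1 and level >= depth:
        return []
    lines = ['| ' * level + node[0]]
    for child in node[1:]:
        lines += print_limited(child, depth, level + 1)
    return lines

tree = ('mul', ('mul', ('second', ('pow',))), ('pow', ('x',)))
print('\n'.join(print_limited(tree, depth=2)))
```

With ``depth=2`` only the root and its direct inputs are listed, just as in the ``debugprint(gy, depth=2)`` example above.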
...
...@@ -74,11 +74,11 @@ message becomes:
z = z + y
Debugprint of the apply node:
Elemwise{add,no_inplace} [id A] <TensorType(float64, vector)> ''
|Elemwise{add,no_inplace} [id B] <TensorType(float64, vector)> ''
| |<TensorType(float64, vector)> [id C] <TensorType(float64, vector)>
| |<TensorType(float64, vector)> [id C] <TensorType(float64, vector)>
|<TensorType(float64, vector)> [id D] <TensorType(float64, vector)>
Here we can see that the error can be traced back to the line ``z = z + y``.
For this example, using ``optimizer=fast_compile`` worked. If it did not,
...@@ -150,12 +150,12 @@ Running the above code generates the following error message:
Inputs scalar values: ['not scalar', 'not scalar']
Debugprint of the apply node:
Dot22 [id A] <TensorType(float64, matrix)> ''
|x [id B] <TensorType(float64, matrix)>
|DimShuffle{1,0} [id C] <TensorType(float64, matrix)> ''
|Flatten{2} [id D] <TensorType(float64, matrix)> ''
|DimShuffle{2,0,1} [id E] <TensorType(float64, 3D)> ''
|W1 [id F] <TensorType(float64, 3D)>
HINT: Re-running with most Theano optimizations disabled could give you a back-trace showing when this node was created. This can be done by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
...@@ -392,8 +392,8 @@ can be achieved as follows:
:options: +NORMALIZE_WHITESPACE
*** NaN detected ***
Elemwise{Composite{(log(i0) * i0)}} [id A] ''
|x [id B]
Inputs : [array(0.0)]
Outputs: [array(nan)]
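The NaN-guard idea shown above — inspect an op's inputs and outputs and report the first NaN together with the values that produced it — can be sketched without Theano. The ``nan_guard`` decorator below is a hypothetical illustration, not Theano's ``MonitorMode`` API; it reproduces the ``log(i0) * i0`` case, where evaluating at 0 yields the indeterminate form ``0 * -inf``.

```python
import math

def nan_guard(fn):
    """Wrap a scalar function; report inputs/outputs when a NaN appears."""
    def wrapped(*inputs):
        out = fn(*inputs)
        if any(math.isnan(v) for v in list(inputs) + [out]):
            print('*** NaN detected ***')
            print('Inputs :', list(inputs))
            print('Outputs:', [out])
        return out
    return wrapped

@nan_guard
def x_log_x(x):
    # log(0) * 0 is the 0 * -inf indeterminate form and evaluates to NaN,
    # matching what the compiled Composite op produces at x = 0.
    return x * math.log(x) if x > 0 else x * float('-inf')

x_log_x(0.0)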
...
...@@ -67,39 +67,39 @@ Debug Print
The pre-compilation graph:
>>> theano.printing.debugprint(prediction) # doctest: +NORMALIZE_WHITESPACE
Elemwise{gt,no_inplace} [id A] ''
|Elemwise{true_div,no_inplace} [id B] ''
| |DimShuffle{x} [id C] ''
| | |TensorConstant{1} [id D]
| |Elemwise{add,no_inplace} [id E] ''
| |DimShuffle{x} [id F] ''
| | |TensorConstant{1} [id D]
| |Elemwise{exp,no_inplace} [id G] ''
| |Elemwise{sub,no_inplace} [id H] ''
| |Elemwise{neg,no_inplace} [id I] ''
| | |dot [id J] ''
| | |x [id K]
| | |w [id L]
| |DimShuffle{x} [id M] ''
| |b [id N]
|DimShuffle{x} [id O] ''
|TensorConstant{0.5} [id P]
The post-compilation graph:
>>> theano.printing.debugprint(predict) # doctest: +NORMALIZE_WHITESPACE
Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [id A] '' 4
|CGemv{inplace} [id B] '' 3
| |AllocEmpty{dtype='float64'} [id C] '' 2
| | |Shape_i{0} [id D] '' 1
| | |x [id E]
| |TensorConstant{1.0} [id F]
| |x [id E]
| |w [id G]
| |TensorConstant{0.0} [id H]
|InplaceDimShuffle{x} [id I] '' 0
| |b [id J]
|TensorConstant{(1,) of 0.5} [id K]
Picture Printing of Graphs
...
...@@ -24,11 +24,11 @@ Currently, information regarding shape is used in two ways in Theano:
>>> x = theano.tensor.matrix('x')
>>> f = theano.function([x], (x ** 2).shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
MakeVector{dtype='int64'} [id A] '' 2
|Shape_i{0} [id B] '' 1
| |x [id C]
|Shape_i{1} [id D] '' 0
|x [id C]
The output of this compiled function does not contain any multiplication
...@@ -51,24 +51,24 @@ can lead to errors. Consider this example:
>>> f = theano.function([x, y], z.shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
MakeVector{dtype='int64'} [id A] '' 4
|Elemwise{Add}[(0, 0)] [id B] '' 3
| |Shape_i{0} [id C] '' 1
| | |x [id D]
| |Shape_i{0} [id E] '' 2
| |y [id F]
|Shape_i{1} [id G] '' 0
|x [id D]
>>> f(xv, yv) # DOES NOT RAISE AN ERROR AS IT SHOULD.
array([8, 4])
>>> f = theano.function([x, y], z)  # Do not take the shape.
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
Join [id A] '' 0
|TensorConstant{0} [id B]
|x [id C]
|y [id D]
>>> f(xv, yv) # doctest: +ELLIPSIS
Traceback (most recent call last):
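The pitfall above — shape inference computing an answer that the real operation would reject — can be illustrated without Theano. In this hedged sketch, ``join_shape`` and ``join`` are hypothetical stand-ins for the graph ops: the shape-only path happily adds the leading dimensions, while actually joining checks that the remaining dimensions agree.

```python
def join_shape(shape_x, shape_y, axis=0):
    """Infer the shape of joining two 2-D arrays along `axis`,
    WITHOUT validating the other dimensions (like the optimized shape graph)."""
    out = list(shape_x)
    out[axis] += shape_y[axis]
    return tuple(out)

def join(x, y):
    """Actually join two lists-of-rows along axis 0, validating row widths."""
    if any(len(r) != len(x[0]) for r in x + y):
        raise ValueError('all rows must have the same length')
    return x + y

# Shape-only computation does not notice the mismatch:
print(join_shape((5, 4), (3, 7)))  # no error, like f(xv, yv) above
# The real join does:
x = [[0] * 4 for _ in range(5)]
y = [[0] * 7 for _ in range(3)]
try:
    join(x, y)
except ValueError as e:
    print('Join raised:', e)
```

This mirrors the document's example: compiling only ``z.shape`` lets the optimizer answer from the input shapes alone, so the inconsistency surfaces only when ``z`` itself is computed.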
...@@ -121,8 +121,8 @@ upgrade. Here is the current state of what can be done:
>>> x_specify_shape = theano.tensor.specify_shape(x, (2, 2))
>>> f = theano.function([x], (x_specify_shape ** 2).shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
DeepCopyOp [id A] '' 0
|TensorConstant{(2,) of 2} [id B]
Future Plans
============
...
...@@ -647,11 +647,11 @@ def debugprint(r, prefix='', depth=-1, done=None, print_type=False,
    if obj in done:
        id_str = done[obj]
    elif ids == "id":
        id_str = "[id %s]" % str(id(r))
    elif ids == "int":
        id_str = "[id %s]" % str(len(done))
    elif ids == "CHAR":
        id_str = "[id %s]" % char_from_number(len(done))
    elif ids == "":
        id_str = ""
    done[obj] = id_str
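The ``ids == "CHAR"`` branch relies on ``char_from_number`` to turn a running counter into the ``A``, ``B``, ... labels seen throughout the output. A minimal sketch of that helper, assuming a base-26 capital-letter encoding (the exact implementation lives in ``theano/printing.py`` and should be checked there):

```python
def char_from_number(number):
    """Render a non-negative integer in base 26 with capital-letter digits,
    so 0 -> 'A', 1 -> 'B', 25 -> 'Z', 26 -> 'BA', and so on."""
    base = 26
    if number == 0:
        return 'A'
    chars = []
    while number != 0:
        chars.append(chr(ord('A') + number % base))
        number //= base
    return ''.join(reversed(chars))

print(char_from_number(0), char_from_number(25), char_from_number(26))
```

Because the counter is ``len(done)``, identifiers are handed out in the order nodes are first printed, which is why the root of a graph is always ``[id A]``.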
...
...@@ -383,16 +383,16 @@ class TestMergeOptimizer:
        g = FunctionGraph([x1, x2], [e])
        MergeOptimizer().optimize(g)
        strg = theano.printing.debugprint(g, file='str')
        strref = '''Elemwise{add,no_inplace} [id A] '' 4
|dot [id B] '' 3
| |Assert{msg='Theano Assert failed!'} [id C] '' 2
| | |x1 [id D]
| | |All [id E] '' 1
| | |Elemwise{gt,no_inplace} [id F] '' 0
| | |x1 [id D]
| | |x2 [id G]
| |x2 [id G]
|dot [id B] '' 3
'''
        assert strg == strref, (strg, strref)
...@@ -407,35 +407,35 @@ class TestMergeOptimizer:
        g = FunctionGraph([x1, x2, x3], [e])
        MergeOptimizer().optimize(g)
        strg = theano.printing.debugprint(g, file='str')
        strref1 = '''Elemwise{add,no_inplace} [id A] '' 6
|dot [id B] '' 5
| |Assert{msg='Theano Assert failed!'} [id C] '' 4
| | |x1 [id D]
| | |All [id E] '' 3
| | | |Elemwise{gt,no_inplace} [id F] '' 1
| | | |x1 [id D]
| | | |x3 [id G]
| | |All [id H] '' 2
| | |Elemwise{gt,no_inplace} [id I] '' 0
| | |x1 [id D]
| | |x2 [id J]
| |x2 [id J]
|dot [id B] '' 5
'''
        strref2 = '''Elemwise{add,no_inplace} [id A] '' 6
|dot [id B] '' 5
| |Assert{msg='Theano Assert failed!'} [id C] '' 4
| | |x1 [id D]
| | |All [id E] '' 3
| | | |Elemwise{gt,no_inplace} [id F] '' 1
| | | |x1 [id D]
| | | |x2 [id G]
| | |All [id H] '' 2
| | |Elemwise{gt,no_inplace} [id I] '' 0
| | |x1 [id D]
| | |x3 [id J]
| |x2 [id G]
|dot [id B] '' 5
'''
        # print(strg)
        assert strg == strref1 or strg == strref2, (strg, strref1, strref2)
...@@ -450,21 +450,21 @@ class TestMergeOptimizer:
        g = FunctionGraph([x1, x2, x3], [e])
        MergeOptimizer().optimize(g)
        strg = theano.printing.debugprint(g, file='str')
        strref = '''Elemwise{add,no_inplace} [id A] '' 7
|dot [id B] '' 6
| |Assert{msg='Theano Assert failed!'} [id C] '' 5
| | |x1 [id D]
| | |All [id E] '' 3
| | |Elemwise{gt,no_inplace} [id F] '' 1
| | |x1 [id D]
| | |x3 [id G]
| |Assert{msg='Theano Assert failed!'} [id H] '' 4
| |x2 [id I]
| |All [id J] '' 2
| |Elemwise{gt,no_inplace} [id K] '' 0
| |x2 [id I]
| |x3 [id G]
|dot [id B] '' 6
'''
        # print(strg)
        assert strg == strref, (strg, strref)
...@@ -479,21 +479,21 @@ class TestMergeOptimizer:
        g = FunctionGraph([x1, x2, x3], [e])
        MergeOptimizer().optimize(g)
        strg = theano.printing.debugprint(g, file='str')
        strref = '''Elemwise{add,no_inplace} [id A] '' 7
|dot [id B] '' 6
| |Assert{msg='Theano Assert failed!'} [id C] '' 5
| | |x1 [id D]
| | |All [id E] '' 3
| | |Elemwise{gt,no_inplace} [id F] '' 1
| | |x1 [id D]
| | |x3 [id G]
| |Assert{msg='Theano Assert failed!'} [id H] '' 4
| |x2 [id I]
| |All [id J] '' 2
| |Elemwise{gt,no_inplace} [id K] '' 0
| |x2 [id I]
| |x3 [id G]
|dot [id B] '' 6
'''
        print(strg)
        assert strg == strref, (strg, strref)
...
...@@ -3150,26 +3150,26 @@ class Test_local_useless_elemwise_comparison(unittest.TestCase):
        theano.printing.debugprint(Z)
        # here is the output for the debug print:
        """
Elemwise{add,no_inplace} [id A] ''
|for{cpu,scan_fn} [id B] ''
| |Subtensor{int64} [id C] ''
| | |Shape [id D] ''
| | | |Subtensor{int64::} [id E] 'X[0:]'
| | | |X [id F]
| | | |Constant{0} [id G]
| | |Constant{0} [id H]
| |Subtensor{:int64:} [id I] ''
| | |Subtensor{int64::} [id E] 'X[0:]'
| | |ScalarFromTensor [id J] ''
| | |Subtensor{int64} [id C] ''
| |Subtensor{int64} [id C] ''
|Y [id K]
Inner graphs of the scan ops:
for{cpu,scan_fn} [id B] ''
 >Sum{acc_dtype=float64} [id L] ''
 > |X[t] [id M] -> [id I]
        """
        mode = theano.compile.get_default_mode().excluding('fusion')
...@@ -3177,30 +3177,30 @@ class Test_local_useless_elemwise_comparison(unittest.TestCase):
        theano.printing.debugprint(f, print_type=True)
        # here is the output for the debug print:
        """
Elemwise{Add}[(0, 0)] [id A] <TensorType(float64, vector)> '' 7
|for{cpu,scan_fn} [id B] <TensorType(float64, vector)> '' 6
| |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
| | |X [id D] <TensorType(float64, matrix)>
| |Subtensor{int64:int64:int8} [id E] <TensorType(float64, matrix)> '' 5
| | |X [id D] <TensorType(float64, matrix)>
| | |ScalarFromTensor [id F] <int64> '' 4
| | | |Elemwise{switch,no_inplace} [id G] <TensorType(int64, scalar)> '' 3
| | | |Elemwise{le,no_inplace} [id H] <TensorType(int8, scalar)> '' 2
| | | | |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
| | | | |TensorConstant{0} [id I] <TensorType(int8, scalar)>
| | | |TensorConstant{0} [id I] <TensorType(int8, scalar)>
| | | |TensorConstant{0} [id J] <TensorType(int64, scalar)>
| | |ScalarFromTensor [id K] <int64> '' 1
| | | |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
| | |Constant{1} [id L] <int8>
| |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
|Y [id M] <TensorType(float64, vector)>
Inner graphs of the scan ops:
for{cpu,scan_fn} [id B] <TensorType(float64, vector)> ''
 >Sum{acc_dtype=float64} [id N] <TensorType(float64, scalar)> ''
 > |X[t] [id O] <TensorType(float64, vector)> -> [id E]
        """
    def assert_eqs_const(self, f, val):
...