Commit 90023285 authored by Frederic

Change debugprint id

Parent 138944d2
@@ -400,10 +400,10 @@ such that: ``var.clients[*][0].inputs[index]`` or
 >>> v = theano.tensor.vector()
 >>> f = theano.function([v], (v+1).sum())
 >>> theano.printing.debugprint(f)
-Sum{acc_dtype=float64} [@A] '' 1
-|Elemwise{add,no_inplace} [@B] '' 0
-|TensorConstant{(1,) of 1.0} [@C]
-|<TensorType(float64, vector)> [@D]
+Sum{acc_dtype=float64} [id A] '' 1
+|Elemwise{add,no_inplace} [id B] '' 0
+|TensorConstant{(1,) of 1.0} [id C]
+|<TensorType(float64, vector)> [id D]
 >>> # Sorted list of all nodes in the compiled graph.
 >>> topo = f.maker.fgraph.toposort()
 >>> topo[0].outputs[0].clients
......
@@ -81,19 +81,19 @@ iteration number or other kinds of information in the name.
 2) The second function to print a graph is :func:`theano.printing.debugprint`
 >>> theano.printing.debugprint(f.maker.fgraph.outputs[0]) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{mul,no_inplace} [#A] ''
-|TensorConstant{2.0} [#B]
-|x [#C]
+Elemwise{mul,no_inplace} [id A] ''
+|TensorConstant{2.0} [id B]
+|x [id C]
 Each line printed represents a Variable in the graph.
-The line ``|x [#C]`` means the variable named ``x`` with debugprint identifier
-[#C] is an input of the Elemwise. If you accidentally have two variables called ``x`` in
+The line ``|x [id C]`` means the variable named ``x`` with debugprint identifier
+[id C] is an input of the Elemwise. If you accidentally have two variables called ``x`` in
 your graph, their different debugprint identifier will be your clue.
-The line ``|TensorConstant{2.0} [#B]`` means that there is a constant 2.0
+The line ``|TensorConstant{2.0} [id B]`` means that there is a constant 2.0
 with this debugprint identifier.
-The line ``Elemwise{mul,no_inplace} [#A] ''`` is indented less than
+The line ``Elemwise{mul,no_inplace} [id A] ''`` is indented less than
 the other ones, because it means there is a variable computed by multiplying
 the other (more indented) ones together.
@@ -106,25 +106,25 @@ printed? Look for debugprint identifier using the Find feature of your text
 editor.
 >>> theano.printing.debugprint(gy) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{mul} [#A] ''
-|Elemwise{mul} [#B] ''
-| |Elemwise{second,no_inplace} [#C] ''
-| | |Elemwise{pow,no_inplace} [#D] ''
-| | | |x [#E]
-| | | |TensorConstant{2} [#F]
-| | |TensorConstant{1.0} [#G]
-| |TensorConstant{2} [#F]
-|Elemwise{pow} [#H] ''
-|x [#E]
-|Elemwise{sub} [#I] ''
-|TensorConstant{2} [#F]
-|DimShuffle{} [#J] ''
-|TensorConstant{1} [#K]
+Elemwise{mul} [id A] ''
+|Elemwise{mul} [id B] ''
+| |Elemwise{second,no_inplace} [id C] ''
+| | |Elemwise{pow,no_inplace} [id D] ''
+| | | |x [id E]
+| | | |TensorConstant{2} [id F]
+| | |TensorConstant{1.0} [id G]
+| |TensorConstant{2} [id F]
+|Elemwise{pow} [id H] ''
+|x [id E]
+|Elemwise{sub} [id I] ''
+|TensorConstant{2} [id F]
+|DimShuffle{} [id J] ''
+|TensorConstant{1} [id K]
 >>> theano.printing.debugprint(gy, depth=2) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{mul} [#A] ''
-|Elemwise{mul} [#B] ''
-|Elemwise{pow} [#C] ''
+Elemwise{mul} [id A] ''
+|Elemwise{mul} [id B] ''
+|Elemwise{pow} [id C] ''
 If the depth parameter is provided, it limits the number of levels that are
......
@@ -74,11 +74,11 @@ message becomes :
 z = z + y
 Debugprint of the apply node:
-Elemwise{add,no_inplace} [@A] <TensorType(float64, vector)> ''
-|Elemwise{add,no_inplace} [@B] <TensorType(float64, vector)> ''
-| |<TensorType(float64, vector)> [@C] <TensorType(float64, vector)>
-| |<TensorType(float64, vector)> [@C] <TensorType(float64, vector)>
-|<TensorType(float64, vector)> [@D] <TensorType(float64, vector)>
+Elemwise{add,no_inplace} [id A] <TensorType(float64, vector)> ''
+|Elemwise{add,no_inplace} [id B] <TensorType(float64, vector)> ''
+| |<TensorType(float64, vector)> [id C] <TensorType(float64, vector)>
+| |<TensorType(float64, vector)> [id C] <TensorType(float64, vector)>
+|<TensorType(float64, vector)> [id D] <TensorType(float64, vector)>
 We can here see that the error can be traced back to the line ``z = z + y``.
 For this example, using ``optimizer=fast_compile`` worked. If it did not,
@@ -150,12 +150,12 @@ Running the above code generates the following error message:
 Inputs scalar values: ['not scalar', 'not scalar']
 Debugprint of the apply node:
-Dot22 [@A] <TensorType(float64, matrix)> ''
-|x [@B] <TensorType(float64, matrix)>
-|DimShuffle{1,0} [@C] <TensorType(float64, matrix)> ''
-|Flatten{2} [@D] <TensorType(float64, matrix)> ''
-|DimShuffle{2,0,1} [@E] <TensorType(float64, 3D)> ''
-|W1 [@F] <TensorType(float64, 3D)>
+Dot22 [id A] <TensorType(float64, matrix)> ''
+|x [id B] <TensorType(float64, matrix)>
+|DimShuffle{1,0} [id C] <TensorType(float64, matrix)> ''
+|Flatten{2} [id D] <TensorType(float64, matrix)> ''
+|DimShuffle{2,0,1} [id E] <TensorType(float64, 3D)> ''
+|W1 [id F] <TensorType(float64, 3D)>
 HINT: Re-running with most Theano optimization disabled could give you a back-traces when this node was created. This can be done with by setting the Theano flags 'optimizer=fast_compile'. If that does not work, Theano optimization can be disabled with 'optimizer=None'.
@@ -392,8 +392,8 @@ can be achieved as follows:
 :options: +NORMALIZE_WHITESPACE
 *** NaN detected ***
-Elemwise{Composite{(log(i0) * i0)}} [#A] ''
-|x [#B]
+Elemwise{Composite{(log(i0) * i0)}} [id A] ''
+|x [id B]
 Inputs : [array(0.0)]
 Outputs: [array(nan)]
......
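The hunk above comes from the NaN-detection section of the debugging tutorial, where a monitoring hook inspects each Apply node's outputs as they are computed. As a rough illustration only (the `detect_nan` name and its arguments here are hypothetical, not Theano's MonitorMode API), the idea can be sketched in plain Python:

```python
import math

def detect_nan(node_label, output_values):
    # Scan a node's freshly computed outputs and report the node as soon
    # as one of them is NaN, like the '*** NaN detected ***' message above.
    for value in output_values:
        if isinstance(value, float) and math.isnan(value):
            return "*** NaN detected ***\n%s" % node_label
    return None

# log(0) * 0 yields NaN in IEEE floating point, as in the doc's example.
msg = detect_nan("Elemwise{Composite{(log(i0) * i0)}} [id A] ''", [float("nan")])
print(msg)
```

The real hook additionally prints the inputs and a debugprint of the offending node, which is exactly where the `[id X]` identifiers changed by this commit show up.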
@@ -67,39 +67,39 @@ Debug Print
 The pre-compilation graph:
 >>> theano.printing.debugprint(prediction) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{gt,no_inplace} [#A] ''
-|Elemwise{true_div,no_inplace} [#B] ''
-| |DimShuffle{x} [#C] ''
-| | |TensorConstant{1} [#D]
-| |Elemwise{add,no_inplace} [#E] ''
-| |DimShuffle{x} [#F] ''
-| | |TensorConstant{1} [#D]
-| |Elemwise{exp,no_inplace} [#G] ''
-| |Elemwise{sub,no_inplace} [#H] ''
-| |Elemwise{neg,no_inplace} [#I] ''
-| | |dot [#J] ''
-| | |x [#K]
-| | |w [#L]
-| |DimShuffle{x} [#M] ''
-| |b [#N]
-|DimShuffle{x} [#O] ''
-|TensorConstant{0.5} [#P]
+Elemwise{gt,no_inplace} [id A] ''
+|Elemwise{true_div,no_inplace} [id B] ''
+| |DimShuffle{x} [id C] ''
+| | |TensorConstant{1} [id D]
+| |Elemwise{add,no_inplace} [id E] ''
+| |DimShuffle{x} [id F] ''
+| | |TensorConstant{1} [id D]
+| |Elemwise{exp,no_inplace} [id G] ''
+| |Elemwise{sub,no_inplace} [id H] ''
+| |Elemwise{neg,no_inplace} [id I] ''
+| | |dot [id J] ''
+| | |x [id K]
+| | |w [id L]
+| |DimShuffle{x} [id M] ''
+| |b [id N]
+|DimShuffle{x} [id O] ''
+|TensorConstant{0.5} [id P]
 The post-compilation graph:
 >>> theano.printing.debugprint(predict) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [#A] '' 4
-|CGemv{inplace} [#B] '' 3
-| |AllocEmpty{dtype='float64'} [#C] '' 2
-| | |Shape_i{0} [#D] '' 1
-| | |x [#E]
-| |TensorConstant{1.0} [#F]
-| |x [#E]
-| |w [#G]
-| |TensorConstant{0.0} [#H]
-|InplaceDimShuffle{x} [#I] '' 0
-| |b [#J]
-|TensorConstant{(1,) of 0.5} [#K]
+Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [id A] '' 4
+|CGemv{inplace} [id B] '' 3
+| |AllocEmpty{dtype='float64'} [id C] '' 2
+| | |Shape_i{0} [id D] '' 1
+| | |x [id E]
+| |TensorConstant{1.0} [id F]
+| |x [id E]
+| |w [id G]
+| |TensorConstant{0.0} [id H]
+|InplaceDimShuffle{x} [id I] '' 0
+| |b [id J]
+|TensorConstant{(1,) of 0.5} [id K]
 Picture Printing of Graphs
......
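The documentation hunks above all show the same `|`-indented layout, where a variable that appears more than once keeps the same identifier everywhere (note `[id E]` reused for `x` in the post-compilation graph). A toy sketch of that layout, not Theano's implementation (the `Node` class and `debugprint_sketch` function are invented for illustration, and only the first 26 identifiers are handled):

```python
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def debugprint_sketch(node, done=None, depth=0):
    # Assign [id A], [id B], ... in first-visit order, so a node that
    # appears twice in the graph prints with the same identifier.
    if done is None:
        done = {}
    if node not in done:
        done[node] = "[id %s]" % chr(ord("A") + len(done))
    lines = ["%s%s %s" % (" |" * depth, node.label, done[node])]
    for child in node.children:
        lines.extend(debugprint_sketch(child, done, depth + 1))
    return lines

x = Node("x")
graph = Node("Elemwise{mul,no_inplace}", [Node("TensorConstant{2.0}"), x])
print("\n".join(debugprint_sketch(graph)))
```

Sharing the `done` dictionary across the recursion is what makes repeated subgraphs print with a stable identifier, which is the whole point of the `[id X]` labels this commit renames.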
@@ -24,11 +24,11 @@ Currently, information regarding shape is used in two ways in Theano:
 >>> x = theano.tensor.matrix('x')
 >>> f = theano.function([x], (x ** 2).shape)
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-MakeVector{dtype='int64'} [#A] '' 2
-|Shape_i{0} [#B] '' 1
-| |x [#C]
-|Shape_i{1} [#D] '' 0
-|x [#C]
+MakeVector{dtype='int64'} [id A] '' 2
+|Shape_i{0} [id B] '' 1
+| |x [id C]
+|Shape_i{1} [id D] '' 0
+|x [id C]
 The output of this compiled function does not contain any multiplication
@@ -51,24 +51,24 @@ can lead to errors. Consider this example:
 >>> f = theano.function([x, y], z.shape)
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-MakeVector{dtype='int64'} [#A] '' 4
-|Elemwise{Add}[(0, 0)] [#B] '' 3
-| |Shape_i{0} [#C] '' 1
-| | |x [#D]
-| |Shape_i{0} [#E] '' 2
-| |y [#F]
-|Shape_i{1} [#G] '' 0
-|x [#D]
+MakeVector{dtype='int64'} [id A] '' 4
+|Elemwise{Add}[(0, 0)] [id B] '' 3
+| |Shape_i{0} [id C] '' 1
+| | |x [id D]
+| |Shape_i{0} [id E] '' 2
+| |y [id F]
+|Shape_i{1} [id G] '' 0
+|x [id D]
 >>> f(xv, yv) # DOES NOT RAISE AN ERROR AS SHOULD BE.
 array([8, 4])
 >>> f = theano.function([x,y], z)# Do not take the shape.
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-Join [#A] '' 0
-|TensorConstant{0} [#B]
-|x [#C]
-|y [#D]
+Join [id A] '' 0
+|TensorConstant{0} [id B]
+|x [id C]
+|y [id D]
 >>> f(xv, yv) # doctest: +ELLIPSIS
 Traceback (most recent call last):
@@ -121,8 +121,8 @@ upgrade. Here is the current state of what can be done:
 >>> x_specify_shape = theano.tensor.specify_shape(x, (2, 2))
 >>> f = theano.function([x], (x_specify_shape ** 2).shape)
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-DeepCopyOp [#A] '' 0
-|TensorConstant{(2,) of 2} [#B]
+DeepCopyOp [id A] '' 0
+|TensorConstant{(2,) of 2} [id B]
 Future Plans
 ============
......
@@ -647,11 +647,11 @@ def debugprint(r, prefix='', depth=-1, done=None, print_type=False,
     if obj in done:
         id_str = done[obj]
     elif ids == "id":
-        id_str = "[#%s]" % str(id(r))
+        id_str = "[id %s]" % str(id(r))
     elif ids == "int":
-        id_str = "[#%s]" % str(len(done))
+        id_str = "[id %s]" % str(len(done))
     elif ids == "CHAR":
-        id_str = "[#%s]" % char_from_number(len(done))
+        id_str = "[id %s]" % char_from_number(len(done))
     elif ids == "":
         id_str = ""
     done[obj] = id_str
......
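The hunk above changes the three `id_str` formats from `[#%s]` to `[id %s]`; which branch runs depends on debugprint's `ids` argument (a Python `id()`, a running integer, or a letter code via `char_from_number`). The helper below is a guess at how such a letter code can be produced, rendering the count in base 26 with capital letters as digits; it is not copied from Theano's source:

```python
def char_from_number(number):
    # Render a non-negative integer in base 26 using 'A'..'Z' as digits,
    # so consecutive nodes get the identifiers A, B, ..., Z, BA, BB, ...
    if number == 0:
        return "A"
    digits = []
    while number:
        number, rem = divmod(number, 26)
        digits.append(chr(ord("A") + rem))
    return "".join(reversed(digits))

# The new debugprint format wraps the letter code as "[id %s]":
print("[id %s]" % char_from_number(0))   # [id A]
print("[id %s]" % char_from_number(27))  # [id BB]
```

Keying `done` on the object and reusing the stored `id_str` is what keeps a shared variable's label stable across every line of the printout.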
@@ -383,16 +383,16 @@ class TestMergeOptimizer:
 g = FunctionGraph([x1, x2], [e])
 MergeOptimizer().optimize(g)
 strg = theano.printing.debugprint(g, file='str')
-strref = '''Elemwise{add,no_inplace} [@A] '' 4
-|dot [@B] '' 3
-| |Assert{msg='Theano Assert failed!'} [@C] '' 2
-| | |x1 [@D]
-| | |All [@E] '' 1
-| | |Elemwise{gt,no_inplace} [@F] '' 0
-| | |x1 [@D]
-| | |x2 [@G]
-| |x2 [@G]
-|dot [@B] '' 3
+strref = '''Elemwise{add,no_inplace} [id A] '' 4
+|dot [id B] '' 3
+| |Assert{msg='Theano Assert failed!'} [id C] '' 2
+| | |x1 [id D]
+| | |All [id E] '' 1
+| | |Elemwise{gt,no_inplace} [id F] '' 0
+| | |x1 [id D]
+| | |x2 [id G]
+| |x2 [id G]
+|dot [id B] '' 3
 '''
 assert strg == strref, (strg, strref)
@@ -407,35 +407,35 @@ class TestMergeOptimizer:
 g = FunctionGraph([x1, x2, x3], [e])
 MergeOptimizer().optimize(g)
 strg = theano.printing.debugprint(g, file='str')
-strref1 = '''Elemwise{add,no_inplace} [@A] '' 6
-|dot [@B] '' 5
-| |Assert{msg='Theano Assert failed!'} [@C] '' 4
-| | |x1 [@D]
-| | |All [@E] '' 3
-| | | |Elemwise{gt,no_inplace} [@F] '' 1
-| | | |x1 [@D]
-| | | |x3 [@G]
-| | |All [@H] '' 2
-| | |Elemwise{gt,no_inplace} [@I] '' 0
-| | |x1 [@D]
-| | |x2 [@J]
-| |x2 [@J]
-|dot [@B] '' 5
+strref1 = '''Elemwise{add,no_inplace} [id A] '' 6
+|dot [id B] '' 5
+| |Assert{msg='Theano Assert failed!'} [id C] '' 4
+| | |x1 [id D]
+| | |All [id E] '' 3
+| | | |Elemwise{gt,no_inplace} [id F] '' 1
+| | | |x1 [id D]
+| | | |x3 [id G]
+| | |All [id H] '' 2
+| | |Elemwise{gt,no_inplace} [id I] '' 0
+| | |x1 [id D]
+| | |x2 [id J]
+| |x2 [id J]
+|dot [id B] '' 5
 '''
-strref2 = '''Elemwise{add,no_inplace} [@A] '' 6
-|dot [@B] '' 5
-| |Assert{msg='Theano Assert failed!'} [@C] '' 4
-| | |x1 [@D]
-| | |All [@E] '' 3
-| | | |Elemwise{gt,no_inplace} [@F] '' 1
-| | | |x1 [@D]
-| | | |x2 [@G]
-| | |All [@H] '' 2
-| | |Elemwise{gt,no_inplace} [@I] '' 0
-| | |x1 [@D]
-| | |x3 [@J]
-| |x2 [@G]
-|dot [@B] '' 5
+strref2 = '''Elemwise{add,no_inplace} [id A] '' 6
+|dot [id B] '' 5
+| |Assert{msg='Theano Assert failed!'} [id C] '' 4
+| | |x1 [id D]
+| | |All [id E] '' 3
+| | | |Elemwise{gt,no_inplace} [id F] '' 1
+| | | |x1 [id D]
+| | | |x2 [id G]
+| | |All [id H] '' 2
+| | |Elemwise{gt,no_inplace} [id I] '' 0
+| | |x1 [id D]
+| | |x3 [id J]
+| |x2 [id G]
+|dot [id B] '' 5
 '''
 # print(strg)
 assert strg == strref1 or strg == strref2, (strg, strref1, strref2)
@@ -450,21 +450,21 @@ class TestMergeOptimizer:
 g = FunctionGraph([x1, x2, x3], [e])
 MergeOptimizer().optimize(g)
 strg = theano.printing.debugprint(g, file='str')
-strref = '''Elemwise{add,no_inplace} [@A] '' 7
-|dot [@B] '' 6
-| |Assert{msg='Theano Assert failed!'} [@C] '' 5
-| | |x1 [@D]
-| | |All [@E] '' 3
-| | |Elemwise{gt,no_inplace} [@F] '' 1
-| | |x1 [@D]
-| | |x3 [@G]
-| |Assert{msg='Theano Assert failed!'} [@H] '' 4
-| |x2 [@I]
-| |All [@J] '' 2
-| |Elemwise{gt,no_inplace} [@K] '' 0
-| |x2 [@I]
-| |x3 [@G]
-|dot [@B] '' 6
+strref = '''Elemwise{add,no_inplace} [id A] '' 7
+|dot [id B] '' 6
+| |Assert{msg='Theano Assert failed!'} [id C] '' 5
+| | |x1 [id D]
+| | |All [id E] '' 3
+| | |Elemwise{gt,no_inplace} [id F] '' 1
+| | |x1 [id D]
+| | |x3 [id G]
+| |Assert{msg='Theano Assert failed!'} [id H] '' 4
+| |x2 [id I]
+| |All [id J] '' 2
+| |Elemwise{gt,no_inplace} [id K] '' 0
+| |x2 [id I]
+| |x3 [id G]
+|dot [id B] '' 6
 '''
 # print(strg)
 assert strg == strref, (strg, strref)
@@ -479,21 +479,21 @@ class TestMergeOptimizer:
 g = FunctionGraph([x1, x2, x3], [e])
 MergeOptimizer().optimize(g)
 strg = theano.printing.debugprint(g, file='str')
-strref = '''Elemwise{add,no_inplace} [@A] '' 7
-|dot [@B] '' 6
-| |Assert{msg='Theano Assert failed!'} [@C] '' 5
-| | |x1 [@D]
-| | |All [@E] '' 3
-| | |Elemwise{gt,no_inplace} [@F] '' 1
-| | |x1 [@D]
-| | |x3 [@G]
-| |Assert{msg='Theano Assert failed!'} [@H] '' 4
-| |x2 [@I]
-| |All [@J] '' 2
-| |Elemwise{gt,no_inplace} [@K] '' 0
-| |x2 [@I]
-| |x3 [@G]
-|dot [@B] '' 6
+strref = '''Elemwise{add,no_inplace} [id A] '' 7
+|dot [id B] '' 6
+| |Assert{msg='Theano Assert failed!'} [id C] '' 5
+| | |x1 [id D]
+| | |All [id E] '' 3
+| | |Elemwise{gt,no_inplace} [id F] '' 1
+| | |x1 [id D]
+| | |x3 [id G]
+| |Assert{msg='Theano Assert failed!'} [id H] '' 4
+| |x2 [id I]
+| |All [id J] '' 2
+| |Elemwise{gt,no_inplace} [id K] '' 0
+| |x2 [id I]
+| |x3 [id G]
+|dot [id B] '' 6
 '''
 print(strg)
 assert strg == strref, (strg, strref)
......
@@ -3150,26 +3150,26 @@ class Test_local_useless_elemwise_comparison(unittest.TestCase):
 theano.printing.debugprint(Z)
 # here is the output for the debug print:
 """
-Elemwise{add,no_inplace} [@A] ''
-|for{cpu,scan_fn} [@B] ''
-| |Subtensor{int64} [@C] ''
-| | |Shape [@D] ''
-| | | |Subtensor{int64::} [@E] 'X[0:]'
-| | | |X [@F]
-| | | |Constant{0} [@G]
-| | |Constant{0} [@H]
-| |Subtensor{:int64:} [@I] ''
-| | |Subtensor{int64::} [@E] 'X[0:]'
-| | |ScalarFromTensor [@J] ''
-| | |Subtensor{int64} [@C] ''
-| |Subtensor{int64} [@C] ''
-|Y [@K]
+Elemwise{add,no_inplace} [id A] ''
+|for{cpu,scan_fn} [id B] ''
+| |Subtensor{int64} [id C] ''
+| | |Shape [id D] ''
+| | | |Subtensor{int64::} [id E] 'X[0:]'
+| | | |X [id F]
+| | | |Constant{0} [id G]
+| | |Constant{0} [id H]
+| |Subtensor{:int64:} [id I] ''
+| | |Subtensor{int64::} [id E] 'X[0:]'
+| | |ScalarFromTensor [id J] ''
+| | |Subtensor{int64} [id C] ''
+| |Subtensor{int64} [id C] ''
+|Y [id K]
 Inner graphs of the scan ops:
-for{cpu,scan_fn} [@B] ''
->Sum{acc_dtype=float64} [@L] ''
-> |X[t] [@M] -> [@I]
+for{cpu,scan_fn} [id B] ''
+>Sum{acc_dtype=float64} [id L] ''
+> |X[t] [id M] -> [id I]
 """
 mode = theano.compile.get_default_mode().excluding('fusion')
@@ -3177,30 +3177,30 @@ class Test_local_useless_elemwise_comparison(unittest.TestCase):
 theano.printing.debugprint(f, print_type=True)
 # here is the output for the debug print:
 """
-Elemwise{Add}[(0, 0)] [@A] <TensorType(float64, vector)> '' 7
-|for{cpu,scan_fn} [@B] <TensorType(float64, vector)> '' 6
-| |Shape_i{0} [@C] <TensorType(int64, scalar)> '' 0
-| | |X [@D] <TensorType(float64, matrix)>
-| |Subtensor{int64:int64:int8} [@E] <TensorType(float64, matrix)> '' 5
-| | |X [@D] <TensorType(float64, matrix)>
-| | |ScalarFromTensor [@F] <int64> '' 4
-| | | |Elemwise{switch,no_inplace} [@G] <TensorType(int64, scalar)> '' 3
-| | | |Elemwise{le,no_inplace} [@H] <TensorType(int8, scalar)> '' 2
-| | | | |Shape_i{0} [@C] <TensorType(int64, scalar)> '' 0
-| | | | |TensorConstant{0} [@I] <TensorType(int8, scalar)>
-| | | |TensorConstant{0} [@I] <TensorType(int8, scalar)>
-| | | |TensorConstant{0} [@J] <TensorType(int64, scalar)>
-| | |ScalarFromTensor [@K] <int64> '' 1
-| | | |Shape_i{0} [@C] <TensorType(int64, scalar)> '' 0
-| | |Constant{1} [@L] <int8>
-| |Shape_i{0} [@C] <TensorType(int64, scalar)> '' 0
-|Y [@M] <TensorType(float64, vector)>
+Elemwise{Add}[(0, 0)] [id A] <TensorType(float64, vector)> '' 7
+|for{cpu,scan_fn} [id B] <TensorType(float64, vector)> '' 6
+| |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
+| | |X [id D] <TensorType(float64, matrix)>
+| |Subtensor{int64:int64:int8} [id E] <TensorType(float64, matrix)> '' 5
+| | |X [id D] <TensorType(float64, matrix)>
+| | |ScalarFromTensor [id F] <int64> '' 4
+| | | |Elemwise{switch,no_inplace} [id G] <TensorType(int64, scalar)> '' 3
+| | | |Elemwise{le,no_inplace} [id H] <TensorType(int8, scalar)> '' 2
+| | | | |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
+| | | | |TensorConstant{0} [id I] <TensorType(int8, scalar)>
+| | | |TensorConstant{0} [id I] <TensorType(int8, scalar)>
+| | | |TensorConstant{0} [id J] <TensorType(int64, scalar)>
+| | |ScalarFromTensor [id K] <int64> '' 1
+| | | |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
+| | |Constant{1} [id L] <int8>
+| |Shape_i{0} [id C] <TensorType(int64, scalar)> '' 0
+|Y [id M] <TensorType(float64, vector)>
 Inner graphs of the scan ops:
-for{cpu,scan_fn} [@B] <TensorType(float64, vector)> ''
->Sum{acc_dtype=float64} [@N] <TensorType(float64, scalar)> ''
-> |X[t] [@O] <TensorType(float64, vector)> -> [@E]
+for{cpu,scan_fn} [id B] <TensorType(float64, vector)> ''
+>Sum{acc_dtype=float64} [id N] <TensorType(float64, scalar)> ''
+> |X[t] [id O] <TensorType(float64, vector)> -> [id E]
 """
 def assert_eqs_const(self, f, val):
......