Commit 138944d2 authored by Frederic

Change @ to # in debugprint

Parent 296293c8
......
@@ -81,19 +81,19 @@ iteration number or other kinds of information in the name.
2) The second function to print a graph is :func:`theano.printing.debugprint`
>>> theano.printing.debugprint(f.maker.fgraph.outputs[0]) # doctest: +NORMALIZE_WHITESPACE
Elemwise{mul,no_inplace} [@A] ''
|TensorConstant{2.0} [@B]
|x [@C]
Elemwise{mul,no_inplace} [#A] ''
|TensorConstant{2.0} [#B]
|x [#C]
Each line printed represents a Variable in the graph.
The line ``|x [@C`` means the variable named ``x`` with debugprint identifier
[@C] is an input of the Elemwise. If you accidentally have two variables called ``x`` in
The line ``|x [#C]`` means the variable named ``x`` with debugprint identifier
[#C] is an input of the Elemwise. If you accidentally have two variables called ``x`` in
your graph, their different debugprint identifiers will be your clue.
The line ``|TensorConstant{2.0} [@B]`` means that there is a constant 2.0
The line ``|TensorConstant{2.0} [#B]`` means that there is a constant 2.0
with this debugprint identifier.
The line ``Elemwise{mul,no_inplace} [@A] ''`` is indented less than
The line ``Elemwise{mul,no_inplace} [#A] ''`` is indented less than
the other ones, because it means there is a variable computed by multiplying
the other (more indented) ones together.
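The identifier bookkeeping described above can be mimicked in plain Python. The following is a toy sketch, not Theano's implementation: ``sketch_debugprint`` and the tuple-based graph are invented for illustration, and it only shows why a variable that is reached twice prints the same tag.

```python
def sketch_debugprint(node, prefix="", ids=None, out=None):
    # Toy version of the identifier scheme: the first time a node is
    # seen it is assigned [#A], [#B], ...; a node reached a second time
    # keeps the tag from its first visit.  (chr() here only supports
    # up to 26 distinct nodes -- enough for a sketch.)
    if ids is None:
        ids, out = {}, []
    if id(node) not in ids:
        ids[id(node)] = "#" + chr(ord("A") + len(ids))
    out.append("%s%s [%s]" % (prefix, node[0], ids[id(node)]))
    for child in node[1:]:
        sketch_debugprint(child, prefix + " |", ids, out)
    return out

x = ("x",)                                   # one leaf used twice below
printed = sketch_debugprint(("mul", ("pow", x, ("2",)), x))
# The leaf x prints with the same [#C] tag at both of its occurrences.
```

Because the tag is keyed on node identity rather than on the name, two different variables that both happen to be called ``x`` would get two different tags.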
......
@@ -106,25 +106,26 @@ printed? Look for debugprint identifier using the Find feature of your text
editor.
>>> theano.printing.debugprint(gy) # doctest: +NORMALIZE_WHITESPACE
Elemwise{mul} [@A] ''
|Elemwise{mul} [@B] ''
| |Elemwise{second,no_inplace} [@C] ''
| | |Elemwise{pow,no_inplace} [@D] ''
| | | |x [@E]
| | | |TensorConstant{2} [@F]
| | |TensorConstant{1.0} [@G]
| |TensorConstant{2} [@F]
|Elemwise{pow} [@H] ''
|x [@E]
|Elemwise{sub} [@I] ''
|TensorConstant{2} [@F]
|DimShuffle{} [@J] ''
|TensorConstant{1} [@K]
Elemwise{mul} [#A] ''
|Elemwise{mul} [#B] ''
| |Elemwise{second,no_inplace} [#C] ''
| | |Elemwise{pow,no_inplace} [#D] ''
| | | |x [#E]
| | | |TensorConstant{2} [#F]
| | |TensorConstant{1.0} [#G]
| |TensorConstant{2} [#F]
|Elemwise{pow} [#H] ''
|x [#E]
|Elemwise{sub} [#I] ''
|TensorConstant{2} [#F]
|DimShuffle{} [#J] ''
|TensorConstant{1} [#K]
>>> theano.printing.debugprint(gy, depth=2) # doctest: +NORMALIZE_WHITESPACE
Elemwise{mul} [@A] ''
|Elemwise{mul} [@B] ''
|Elemwise{pow} [@C] ''
Elemwise{mul} [#A] ''
|Elemwise{mul} [#B] ''
|Elemwise{pow} [#C] ''
If the depth parameter is provided, it limits the number of levels that are
shown.
......
......
@@ -392,8 +392,8 @@ can be achieved as follows:
:options: +NORMALIZE_WHITESPACE
*** NaN detected ***
Elemwise{Composite{(log(i0) * i0)}} [@A] ''
|x [@B]
Elemwise{Composite{(log(i0) * i0)}} [#A] ''
|x [#B]
Inputs : [array(0.0)]
Outputs: [array(nan)]
......
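The NaN in the example above follows from ordinary IEEE-754 rules rather than anything Theano-specific, which can be checked in plain Python:

```python
import math

# x * log(x) at x = 0.0: in IEEE-754 arithmetic log(0) evaluates to
# -inf (Python's math.log raises instead, so the value is spelled out
# here), and -inf * 0.0 is defined to be NaN -- the value the monitor
# hook reports for input array(0.0).
log_of_zero = float("-inf")
result = log_of_zero * 0.0
assert math.isnan(result)
```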
......
@@ -67,39 +67,39 @@ Debug Print
The pre-compilation graph:
>>> theano.printing.debugprint(prediction) # doctest: +NORMALIZE_WHITESPACE
Elemwise{gt,no_inplace} [@A] ''
|Elemwise{true_div,no_inplace} [@B] ''
| |DimShuffle{x} [@C] ''
| | |TensorConstant{1} [@D]
| |Elemwise{add,no_inplace} [@E] ''
| |DimShuffle{x} [@F] ''
| | |TensorConstant{1} [@D]
| |Elemwise{exp,no_inplace} [@G] ''
| |Elemwise{sub,no_inplace} [@H] ''
| |Elemwise{neg,no_inplace} [@I] ''
| | |dot [@J] ''
| | |x [@K]
| | |w [@L]
| |DimShuffle{x} [@M] ''
| |b [@N]
|DimShuffle{x} [@O] ''
|TensorConstant{0.5} [@P]
Elemwise{gt,no_inplace} [#A] ''
|Elemwise{true_div,no_inplace} [#B] ''
| |DimShuffle{x} [#C] ''
| | |TensorConstant{1} [#D]
| |Elemwise{add,no_inplace} [#E] ''
| |DimShuffle{x} [#F] ''
| | |TensorConstant{1} [#D]
| |Elemwise{exp,no_inplace} [#G] ''
| |Elemwise{sub,no_inplace} [#H] ''
| |Elemwise{neg,no_inplace} [#I] ''
| | |dot [#J] ''
| | |x [#K]
| | |w [#L]
| |DimShuffle{x} [#M] ''
| |b [#N]
|DimShuffle{x} [#O] ''
|TensorConstant{0.5} [#P]
The post-compilation graph:
>>> theano.printing.debugprint(predict) # doctest: +NORMALIZE_WHITESPACE
Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [@A] '' 4
|CGemv{inplace} [@B] '' 3
| |AllocEmpty{dtype='float64'} [@C] '' 2
| | |Shape_i{0} [@D] '' 1
| | |x [@E]
| |TensorConstant{1.0} [@F]
| |x [@E]
| |w [@G]
| |TensorConstant{0.0} [@H]
|InplaceDimShuffle{x} [@I] '' 0
| |b [@J]
|TensorConstant{(1,) of 0.5} [@K]
Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [#A] '' 4
|CGemv{inplace} [#B] '' 3
| |AllocEmpty{dtype='float64'} [#C] '' 2
| | |Shape_i{0} [#D] '' 1
| | |x [#E]
| |TensorConstant{1.0} [#F]
| |x [#E]
| |w [#G]
| |TensorConstant{0.0} [#H]
|InplaceDimShuffle{x} [#I] '' 0
| |b [#J]
|TensorConstant{(1,) of 0.5} [#K]
Picture Printing of Graphs
......
......
@@ -24,11 +24,11 @@ Currently, information regarding shape is used in two ways in Theano:
>>> x = theano.tensor.matrix('x')
>>> f = theano.function([x], (x ** 2).shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
MakeVector{dtype='int64'} [@A] '' 2
|Shape_i{0} [@B] '' 1
| |x [@C]
|Shape_i{1} [@D] '' 0
|x [@C]
MakeVector{dtype='int64'} [#A] '' 2
|Shape_i{0} [#B] '' 1
| |x [#C]
|Shape_i{1} [#D] '' 0
|x [#C]
The output of this compiled function does not contain any multiplication
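The claim that the shape can be reported without performing the squaring rests on elementwise ops preserving shape; a minimal NumPy check, independent of Theano:

```python
import numpy as np

# An elementwise op cannot change the shape, so the shape of x ** 2 can
# be read straight off x -- which is why the compiled graph above
# contains only Shape_i nodes and no multiplication.
x = np.arange(6.0).reshape(2, 3)
assert (x ** 2).shape == x.shape == (2, 3)
```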
......
@@ -51,24 +51,24 @@ can lead to errors. Consider this example:
>>> f = theano.function([x, y], z.shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
MakeVector{dtype='int64'} [@A] '' 4
|Elemwise{Add}[(0, 0)] [@B] '' 3
| |Shape_i{0} [@C] '' 1
| | |x [@D]
| |Shape_i{0} [@E] '' 2
| |y [@F]
|Shape_i{1} [@G] '' 0
|x [@D]
MakeVector{dtype='int64'} [#A] '' 4
|Elemwise{Add}[(0, 0)] [#B] '' 3
| |Shape_i{0} [#C] '' 1
| | |x [#D]
| |Shape_i{0} [#E] '' 2
| |y [#F]
|Shape_i{1} [#G] '' 0
|x [#D]
>>> f(xv, yv) # DOES NOT RAISE AN ERROR AS IT SHOULD.
array([8, 4])
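The values of ``xv`` and ``yv`` are not shown in this hunk; assuming hypothetical shapes such as (5, 4) and (3, 3), which reproduce the ``[8, 4]`` above, the blind spot can be replayed in NumPy: the inferred shape adds the sizes along the join axis but takes every other dimension from the first input only, so the mismatch goes unnoticed until the join is actually performed.

```python
import numpy as np

# Hypothetical inputs chosen to reproduce the [8, 4] result above:
# the join-axis sizes add up to 8, but the second dimensions disagree.
xv = np.ones((5, 4))
yv = np.ones((3, 3))

# Shape inference as in the compiled graph: sum along the join axis,
# take the remaining dimension from the FIRST input only.
inferred = (xv.shape[0] + yv.shape[0], xv.shape[1])
assert inferred == (8, 4)          # reported without any error

# Actually performing the join is what catches the mismatch:
raised = False
try:
    np.concatenate([xv, yv], axis=0)
except ValueError:
    raised = True
assert raised
```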
>>> f = theano.function([x, y], z)  # Do not take the shape.
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
Join [@A] '' 0
|TensorConstant{0} [@B]
|x [@C]
|y [@D]
Join [#A] '' 0
|TensorConstant{0} [#B]
|x [#C]
|y [#D]
>>> f(xv, yv) # doctest: +ELLIPSIS
Traceback (most recent call last):
......
@@ -121,8 +121,8 @@ upgrade. Here is the current state of what can be done:
>>> x_specify_shape = theano.tensor.specify_shape(x, (2, 2))
>>> f = theano.function([x], (x_specify_shape ** 2).shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
DeepCopyOp [@A] '' 0
|TensorConstant{(2,) of 2} [@B]
DeepCopyOp [#A] '' 0
|TensorConstant{(2,) of 2} [#B]
Future Plans
============
......
......
@@ -647,11 +647,11 @@ def debugprint(r, prefix='', depth=-1, done=None, print_type=False,
if obj in done:
id_str = done[obj]
elif ids == "id":
id_str = "[@%s]" % str(id(r))
id_str = "[#%s]" % str(id(r))
elif ids == "int":
id_str = "[@%s]" % str(len(done))
id_str = "[#%s]" % str(len(done))
elif ids == "CHAR":
id_str = "[@%s]" % char_from_number(len(done))
id_str = "[#%s]" % char_from_number(len(done))
elif ids == "":
id_str = ""
done[obj] = id_str
......
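The ``CHAR`` branch above calls a ``char_from_number`` helper. A standalone sketch consistent with the identifiers in the expected test outputs below (``Z`` is followed by ``BA``, not ``AA``, i.e. plain base-26 with digits A..Z):

```python
def char_from_number(number):
    # Plain base-26 with digits A..Z: 0 -> "A", 25 -> "Z",
    # 26 -> "BA" (26 is written "10" in base 26), 27 -> "BB", ...
    # This matches the [#BA], [#BB] tags in the outputs below.
    rval = "" if number else "A"
    while number:
        number, remainder = divmod(number, 26)
        rval = chr(remainder + ord("A")) + rval
    return rval
```

Note this differs from spreadsheet column naming, where ``Z`` is followed by ``AA``.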
......
@@ -176,13 +176,13 @@ def test_debugprint():
s = s.getvalue()
# The additional white space is needed!
reference = '\n'.join([
"Elemwise{add,no_inplace} [@0] '' ",
" |Elemwise{add,no_inplace} [@1] 'C' ",
" | |A [@2]",
" | |B [@3]",
" |Elemwise{add,no_inplace} [@4] '' ",
" |D [@5]",
" |E [@6]",
"Elemwise{add,no_inplace} [#0] '' ",
" |Elemwise{add,no_inplace} [#1] 'C' ",
" | |A [#2]",
" | |B [#3]",
" |Elemwise{add,no_inplace} [#4] '' ",
" |D [#5]",
" |E [#6]",
]) + '\n'
if s != reference:
......
@@ -197,13 +197,13 @@ def test_debugprint():
s = s.getvalue()
# The additional white space is needed!
reference = "\n".join([
"Elemwise{add,no_inplace} [@A] '' ",
" |Elemwise{add,no_inplace} [@B] 'C' ",
" | |A [@C]",
" | |B [@D]",
" |Elemwise{add,no_inplace} [@E] '' ",
" |D [@F]",
" |E [@G]",
"Elemwise{add,no_inplace} [#A] '' ",
" |Elemwise{add,no_inplace} [#B] 'C' ",
" | |A [#C]",
" | |B [#D]",
" |Elemwise{add,no_inplace} [#E] '' ",
" |D [#F]",
" |E [#G]",
]) + '\n'
if s != reference:
......
@@ -218,11 +218,11 @@ def test_debugprint():
s = s.getvalue()
# The additional white space is needed!
reference = '\n'.join([
"Elemwise{add,no_inplace} [@A] '' ",
" |Elemwise{add,no_inplace} [@B] 'C' ",
" |Elemwise{add,no_inplace} [@C] '' ",
" |D [@D]",
" |E [@E]",
"Elemwise{add,no_inplace} [#A] '' ",
" |Elemwise{add,no_inplace} [#B] 'C' ",
" |Elemwise{add,no_inplace} [#C] '' ",
" |D [#D]",
" |E [#E]",
]) + '\n'
if s != reference:
......
@@ -286,40 +286,40 @@ def test_scan_debugprint1():
for line in output_str.split('\n'):
lines += [line]
expected_output = """Subtensor{int64} [@A] ''
|Subtensor{int64::} [@B] ''
| |for{cpu,scan_fn} [@C] ''
| | |k [@D]
| | |IncSubtensor{Set;:int64:} [@E] ''
| | | |AllocEmpty{dtype='float64'} [@F] ''
| | | | |Elemwise{add,no_inplace} [@G] ''
| | | | | |k [@D]
| | | | | |Subtensor{int64} [@H] ''
| | | | | |Shape [@I] ''
| | | | | | |Rebroadcast{0} [@J] ''
| | | | | | |DimShuffle{x,0} [@K] ''
| | | | | | |Elemwise{second,no_inplace} [@L] ''
| | | | | | |A [@M]
| | | | | | |DimShuffle{x} [@N] ''
| | | | | | |TensorConstant{1.0} [@O]
| | | | | |Constant{0} [@P]
| | | | |Subtensor{int64} [@Q] ''
| | | | |Shape [@R] ''
| | | | | |Rebroadcast{0} [@J] ''
| | | | |Constant{1} [@S]
| | | |Rebroadcast{0} [@J] ''
| | | |ScalarFromTensor [@T] ''
| | | |Subtensor{int64} [@H] ''
| | |A [@M]
| |Constant{1} [@U]
|Constant{-1} [@V]
expected_output = """Subtensor{int64} [#A] ''
|Subtensor{int64::} [#B] ''
| |for{cpu,scan_fn} [#C] ''
| | |k [#D]
| | |IncSubtensor{Set;:int64:} [#E] ''
| | | |AllocEmpty{dtype='float64'} [#F] ''
| | | | |Elemwise{add,no_inplace} [#G] ''
| | | | | |k [#D]
| | | | | |Subtensor{int64} [#H] ''
| | | | | |Shape [#I] ''
| | | | | | |Rebroadcast{0} [#J] ''
| | | | | | |DimShuffle{x,0} [#K] ''
| | | | | | |Elemwise{second,no_inplace} [#L] ''
| | | | | | |A [#M]
| | | | | | |DimShuffle{x} [#N] ''
| | | | | | |TensorConstant{1.0} [#O]
| | | | | |Constant{0} [#P]
| | | | |Subtensor{int64} [#Q] ''
| | | | |Shape [#R] ''
| | | | | |Rebroadcast{0} [#J] ''
| | | | |Constant{1} [#S]
| | | |Rebroadcast{0} [#J] ''
| | | |ScalarFromTensor [#T] ''
| | | |Subtensor{int64} [#H] ''
| | |A [#M]
| |Constant{1} [#U]
|Constant{-1} [#V]
Inner graphs of the scan ops:
for{cpu,scan_fn} [@C] ''
>Elemwise{mul,no_inplace} [@W] ''
> |<TensorType(float64, vector)> [@X] -> [@E]
> |A_copy [@Y] -> [@M]"""
for{cpu,scan_fn} [#C] ''
>Elemwise{mul,no_inplace} [#W] ''
> |<TensorType(float64, vector)> [#X] -> [#E]
> |A_copy [#Y] -> [#M]"""
for truth, out in zip(expected_output.split("\n"), lines):
assert truth.strip() == out.strip()
......
@@ -349,43 +349,43 @@ def test_scan_debugprint2():
for line in output_str.split('\n'):
lines += [line]
expected_output = """Sum{acc_dtype=float64} [@A] ''
|for{cpu,scan_fn} [@B] ''
|Elemwise{minimum,no_inplace} [@C] ''
| |Subtensor{int64} [@D] ''
| | |Shape [@E] ''
| | | |Subtensor{int64::} [@F] 'coefficients[0:]'
| | | |coefficients [@G]
| | | |Constant{0} [@H]
| | |Constant{0} [@I]
| |Subtensor{int64} [@J] ''
| |Shape [@K] ''
| | |Subtensor{int64::} [@L] ''
| | |ARange{dtype='int64'} [@M] ''
| | | |TensorConstant{0} [@N]
| | | |TensorConstant{10000} [@O]
| | | |TensorConstant{1} [@P]
| | |Constant{0} [@Q]
| |Constant{0} [@R]
|Subtensor{:int64:} [@S] ''
| |Subtensor{int64::} [@F] 'coefficients[0:]'
| |ScalarFromTensor [@T] ''
| |Elemwise{minimum,no_inplace} [@C] ''
|Subtensor{:int64:} [@U] ''
| |Subtensor{int64::} [@L] ''
| |ScalarFromTensor [@V] ''
| |Elemwise{minimum,no_inplace} [@C] ''
|Elemwise{minimum,no_inplace} [@C] ''
|x [@W]
expected_output = """Sum{acc_dtype=float64} [#A] ''
|for{cpu,scan_fn} [#B] ''
|Elemwise{minimum,no_inplace} [#C] ''
| |Subtensor{int64} [#D] ''
| | |Shape [#E] ''
| | | |Subtensor{int64::} [#F] 'coefficients[0:]'
| | | |coefficients [#G]
| | | |Constant{0} [#H]
| | |Constant{0} [#I]
| |Subtensor{int64} [#J] ''
| |Shape [#K] ''
| | |Subtensor{int64::} [#L] ''
| | |ARange{dtype='int64'} [#M] ''
| | | |TensorConstant{0} [#N]
| | | |TensorConstant{10000} [#O]
| | | |TensorConstant{1} [#P]
| | |Constant{0} [#Q]
| |Constant{0} [#R]
|Subtensor{:int64:} [#S] ''
| |Subtensor{int64::} [#F] 'coefficients[0:]'
| |ScalarFromTensor [#T] ''
| |Elemwise{minimum,no_inplace} [#C] ''
|Subtensor{:int64:} [#U] ''
| |Subtensor{int64::} [#L] ''
| |ScalarFromTensor [#V] ''
| |Elemwise{minimum,no_inplace} [#C] ''
|Elemwise{minimum,no_inplace} [#C] ''
|x [#W]
Inner graphs of the scan ops:
for{cpu,scan_fn} [@B] ''
>Elemwise{mul,no_inplace} [@X] ''
> |coefficients[t] [@Y] -> [@S]
> |Elemwise{pow,no_inplace} [@Z] ''
> |x_copy [@BA] -> [@W]
> |<TensorType(int64, scalar)> [@BB] -> [@U]"""
for{cpu,scan_fn} [#B] ''
>Elemwise{mul,no_inplace} [#X] ''
> |coefficients[t] [#Y] -> [#S]
> |Elemwise{pow,no_inplace} [#Z] ''
> |x_copy [#BA] -> [#W]
> |<TensorType(int64, scalar)> [#BB] -> [#U]"""
for truth, out in zip(expected_output.split("\n"), lines):
assert truth.strip() == out.strip()
......
@@ -432,77 +432,77 @@ def test_scan_debugprint3():
for line in output_str.split('\n'):
lines += [line]
expected_output = """Sum{acc_dtype=float64} [@A] ''
|for{cpu,scan_fn} [@B] ''
|Elemwise{minimum,no_inplace} [@C] ''
| |Subtensor{int64} [@D] ''
| | |Shape [@E] ''
| | | |Subtensor{int64::} [@F] 'coefficients[0:]'
| | | |coefficients [@G]
| | | |Constant{0} [@H]
| | |Constant{0} [@I]
| |Subtensor{int64} [@J] ''
| |Shape [@K] ''
| | |Subtensor{int64::} [@L] ''
| | |ARange{dtype='int64'} [@M] ''
| | | |TensorConstant{0} [@N]
| | | |TensorConstant{10} [@O]
| | | |TensorConstant{1} [@P]
| | |Constant{0} [@Q]
| |Constant{0} [@R]
|Subtensor{:int64:} [@S] ''
| |Subtensor{int64::} [@F] 'coefficients[0:]'
| |ScalarFromTensor [@T] ''
| |Elemwise{minimum,no_inplace} [@C] ''
|Subtensor{:int64:} [@U] ''
| |Subtensor{int64::} [@L] ''
| |ScalarFromTensor [@V] ''
| |Elemwise{minimum,no_inplace} [@C] ''
|Elemwise{minimum,no_inplace} [@C] ''
|A [@W]
|k [@X]
expected_output = """Sum{acc_dtype=float64} [#A] ''
|for{cpu,scan_fn} [#B] ''
|Elemwise{minimum,no_inplace} [#C] ''
| |Subtensor{int64} [#D] ''
| | |Shape [#E] ''
| | | |Subtensor{int64::} [#F] 'coefficients[0:]'
| | | |coefficients [#G]
| | | |Constant{0} [#H]
| | |Constant{0} [#I]
| |Subtensor{int64} [#J] ''
| |Shape [#K] ''
| | |Subtensor{int64::} [#L] ''
| | |ARange{dtype='int64'} [#M] ''
| | | |TensorConstant{0} [#N]
| | | |TensorConstant{10} [#O]
| | | |TensorConstant{1} [#P]
| | |Constant{0} [#Q]
| |Constant{0} [#R]
|Subtensor{:int64:} [#S] ''
| |Subtensor{int64::} [#F] 'coefficients[0:]'
| |ScalarFromTensor [#T] ''
| |Elemwise{minimum,no_inplace} [#C] ''
|Subtensor{:int64:} [#U] ''
| |Subtensor{int64::} [#L] ''
| |ScalarFromTensor [#V] ''
| |Elemwise{minimum,no_inplace} [#C] ''
|Elemwise{minimum,no_inplace} [#C] ''
|A [#W]
|k [#X]
Inner graphs of the scan ops:
for{cpu,scan_fn} [@B] ''
>Elemwise{mul,no_inplace} [@Y] ''
> |DimShuffle{x} [@Z] ''
> | |coefficients[t] [@BA] -> [@S]
> |Elemwise{pow,no_inplace} [@BB] ''
> |Subtensor{int64} [@BC] ''
> | |Subtensor{int64::} [@BD] ''
> | | |for{cpu,scan_fn} [@BE] ''
> | | | |k_copy [@BF] -> [@X]
> | | | |IncSubtensor{Set;:int64:} [@BG] ''
> | | | | |AllocEmpty{dtype='float64'} [@BH] ''
> | | | | | |Elemwise{add,no_inplace} [@BI] ''
> | | | | | | |k_copy [@BF] -> [@X]
> | | | | | | |Subtensor{int64} [@BJ] ''
> | | | | | | |Shape [@BK] ''
> | | | | | | | |Rebroadcast{0} [@BL] ''
> | | | | | | | |DimShuffle{x,0} [@BM] ''
> | | | | | | | |Elemwise{second,no_inplace} [@BN] ''
> | | | | | | | |A_copy [@BO] -> [@W]
> | | | | | | | |DimShuffle{x} [@BP] ''
> | | | | | | | |TensorConstant{1.0} [@BQ]
> | | | | | | |Constant{0} [@BR]
> | | | | | |Subtensor{int64} [@BS] ''
> | | | | | |Shape [@BT] ''
> | | | | | | |Rebroadcast{0} [@BL] ''
> | | | | | |Constant{1} [@BU]
> | | | | |Rebroadcast{0} [@BL] ''
> | | | | |ScalarFromTensor [@BV] ''
> | | | | |Subtensor{int64} [@BJ] ''
> | | | |A_copy [@BO] -> [@W]
> | | |Constant{1} [@BW]
> | |Constant{-1} [@BX]
> |DimShuffle{x} [@BY] ''
> |<TensorType(int64, scalar)> [@BZ] -> [@U]
for{cpu,scan_fn} [@BE] ''
>Elemwise{mul,no_inplace} [@CA] ''
> |<TensorType(float64, vector)> [@CB] -> [@BG]
> |A_copy [@CC] -> [@BO]"""
for{cpu,scan_fn} [#B] ''
>Elemwise{mul,no_inplace} [#Y] ''
> |DimShuffle{x} [#Z] ''
> | |coefficients[t] [#BA] -> [#S]
> |Elemwise{pow,no_inplace} [#BB] ''
> |Subtensor{int64} [#BC] ''
> | |Subtensor{int64::} [#BD] ''
> | | |for{cpu,scan_fn} [#BE] ''
> | | | |k_copy [#BF] -> [#X]
> | | | |IncSubtensor{Set;:int64:} [#BG] ''
> | | | | |AllocEmpty{dtype='float64'} [#BH] ''
> | | | | | |Elemwise{add,no_inplace} [#BI] ''
> | | | | | | |k_copy [#BF] -> [#X]
> | | | | | | |Subtensor{int64} [#BJ] ''
> | | | | | | |Shape [#BK] ''
> | | | | | | | |Rebroadcast{0} [#BL] ''
> | | | | | | | |DimShuffle{x,0} [#BM] ''
> | | | | | | | |Elemwise{second,no_inplace} [#BN] ''
> | | | | | | | |A_copy [#BO] -> [#W]
> | | | | | | | |DimShuffle{x} [#BP] ''
> | | | | | | | |TensorConstant{1.0} [#BQ]
> | | | | | | |Constant{0} [#BR]
> | | | | | |Subtensor{int64} [#BS] ''
> | | | | | |Shape [#BT] ''
> | | | | | | |Rebroadcast{0} [#BL] ''
> | | | | | |Constant{1} [#BU]
> | | | | |Rebroadcast{0} [#BL] ''
> | | | | |ScalarFromTensor [#BV] ''
> | | | | |Subtensor{int64} [#BJ] ''
> | | | |A_copy [#BO] -> [#W]
> | | |Constant{1} [#BW]
> | |Constant{-1} [#BX]
> |DimShuffle{x} [#BY] ''
> |<TensorType(int64, scalar)> [#BZ] -> [#U]
for{cpu,scan_fn} [#BE] ''
>Elemwise{mul,no_inplace} [#CA] ''
> |<TensorType(float64, vector)> [#CB] -> [#BG]
> |A_copy [#CC] -> [#BO]"""
for truth, out in zip(expected_output.split("\n"), lines):
assert truth.strip() == out.strip()
......
@@ -527,54 +527,54 @@ def test_scan_debugprint4():
for line in output_str.split('\n'):
lines += [line]
expected_output = """Elemwise{add,no_inplace} [@A] ''
|Subtensor{int64::} [@B] ''
| |for{cpu,scan_fn}.0 [@C] ''
| | |TensorConstant{5} [@D]
| | |IncSubtensor{Set;:int64:} [@E] ''
| | | |AllocEmpty{dtype='int64'} [@F] ''
| | | | |Elemwise{add,no_inplace} [@G] ''
| | | | |TensorConstant{5} [@D]
| | | | |Subtensor{int64} [@H] ''
| | | | |Shape [@I] ''
| | | | | |Subtensor{:int64:} [@J] ''
| | | | | |<TensorType(int64, vector)> [@K]
| | | | | |Constant{2} [@L]
| | | | |Constant{0} [@M]
| | | |Subtensor{:int64:} [@J] ''
| | | |ScalarFromTensor [@N] ''
| | | |Subtensor{int64} [@H] ''
| | |IncSubtensor{Set;:int64:} [@O] ''
| | |AllocEmpty{dtype='int64'} [@P] ''
| | | |Elemwise{add,no_inplace} [@Q] ''
| | | |TensorConstant{5} [@D]
| | | |Subtensor{int64} [@R] ''
| | | |Shape [@S] ''
| | | | |Subtensor{:int64:} [@T] ''
| | | | |<TensorType(int64, vector)> [@U]
| | | | |Constant{2} [@V]
| | | |Constant{0} [@W]
| | |Subtensor{:int64:} [@T] ''
| | |ScalarFromTensor [@X] ''
| | |Subtensor{int64} [@R] ''
| |Constant{2} [@Y]
|Subtensor{int64::} [@Z] ''
|for{cpu,scan_fn}.1 [@C] ''
|Constant{2} [@BA]
expected_output = """Elemwise{add,no_inplace} [#A] ''
|Subtensor{int64::} [#B] ''
| |for{cpu,scan_fn}.0 [#C] ''
| | |TensorConstant{5} [#D]
| | |IncSubtensor{Set;:int64:} [#E] ''
| | | |AllocEmpty{dtype='int64'} [#F] ''
| | | | |Elemwise{add,no_inplace} [#G] ''
| | | | |TensorConstant{5} [#D]
| | | | |Subtensor{int64} [#H] ''
| | | | |Shape [#I] ''
| | | | | |Subtensor{:int64:} [#J] ''
| | | | | |<TensorType(int64, vector)> [#K]
| | | | | |Constant{2} [#L]
| | | | |Constant{0} [#M]
| | | |Subtensor{:int64:} [#J] ''
| | | |ScalarFromTensor [#N] ''
| | | |Subtensor{int64} [#H] ''
| | |IncSubtensor{Set;:int64:} [#O] ''
| | |AllocEmpty{dtype='int64'} [#P] ''
| | | |Elemwise{add,no_inplace} [#Q] ''
| | | |TensorConstant{5} [#D]
| | | |Subtensor{int64} [#R] ''
| | | |Shape [#S] ''
| | | | |Subtensor{:int64:} [#T] ''
| | | | |<TensorType(int64, vector)> [#U]
| | | | |Constant{2} [#V]
| | | |Constant{0} [#W]
| | |Subtensor{:int64:} [#T] ''
| | |ScalarFromTensor [#X] ''
| | |Subtensor{int64} [#R] ''
| |Constant{2} [#Y]
|Subtensor{int64::} [#Z] ''
|for{cpu,scan_fn}.1 [#C] ''
|Constant{2} [#BA]
Inner graphs of the scan ops:
for{cpu,scan_fn}.0 [@C] ''
>Elemwise{add,no_inplace} [@BB] ''
> |<TensorType(int64, scalar)> [@BC] -> [@E]
> |<TensorType(int64, scalar)> [@BD] -> [@E]
>Elemwise{add,no_inplace} [@BE] ''
> |<TensorType(int64, scalar)> [@BF] -> [@O]
> |<TensorType(int64, scalar)> [@BG] -> [@O]
for{cpu,scan_fn}.0 [#C] ''
>Elemwise{add,no_inplace} [#BB] ''
> |<TensorType(int64, scalar)> [#BC] -> [#E]
> |<TensorType(int64, scalar)> [#BD] -> [#E]
>Elemwise{add,no_inplace} [#BE] ''
> |<TensorType(int64, scalar)> [#BF] -> [#O]
> |<TensorType(int64, scalar)> [#BG] -> [#O]
for{cpu,scan_fn}.1 [@C] ''
>Elemwise{add,no_inplace} [@BB] ''
>Elemwise{add,no_inplace} [@BE] ''"""
for{cpu,scan_fn}.1 [#C] ''
>Elemwise{add,no_inplace} [#BB] ''
>Elemwise{add,no_inplace} [#BE] ''"""
for truth, out in zip(expected_output.split("\n"), lines):
assert truth.strip() == out.strip()
......
@@ -598,122 +598,122 @@ def test_scan_debugprint5():
for line in output_str.split('\n'):
lines += [line]
expected_output = """Subtensor{int64} [@A] ''
|for{cpu,grad_of_scan_fn}.1 [@B] ''
| |Elemwise{sub,no_inplace} [@C] ''
| | |Subtensor{int64} [@D] ''
| | | |Shape [@E] ''
| | | | |for{cpu,scan_fn} [@F] ''
| | | | |k [@G]
| | | | |IncSubtensor{Set;:int64:} [@H] ''
| | | | | |AllocEmpty{dtype='float64'} [@I] ''
| | | | | | |Elemwise{add,no_inplace} [@J] ''
| | | | | | | |k [@G]
| | | | | | | |Subtensor{int64} [@K] ''
| | | | | | | |Shape [@L] ''
| | | | | | | | |Rebroadcast{0} [@M] ''
| | | | | | | | |DimShuffle{x,0} [@N] ''
| | | | | | | | |Elemwise{second,no_inplace} [@O] ''
| | | | | | | | |A [@P]
| | | | | | | | |DimShuffle{x} [@Q] ''
| | | | | | | | |TensorConstant{1.0} [@R]
| | | | | | | |Constant{0} [@S]
| | | | | | |Subtensor{int64} [@T] ''
| | | | | | |Shape [@U] ''
| | | | | | | |Rebroadcast{0} [@M] ''
| | | | | | |Constant{1} [@V]
| | | | | |Rebroadcast{0} [@M] ''
| | | | | |ScalarFromTensor [@W] ''
| | | | | |Subtensor{int64} [@K] ''
| | | | |A [@P]
| | | |Constant{0} [@X]
| | |TensorConstant{1} [@Y]
| |Subtensor{:int64:} [@Z] ''
| | |Subtensor{::int64} [@BA] ''
| | | |Subtensor{:int64:} [@BB] ''
| | | | |for{cpu,scan_fn} [@F] ''
| | | | |Constant{-1} [@BC]
| | | |Constant{-1} [@BD]
| | |ScalarFromTensor [@BE] ''
| | |Elemwise{sub,no_inplace} [@C] ''
| |Subtensor{:int64:} [@BF] ''
| | |Subtensor{:int64:} [@BG] ''
| | | |Subtensor{::int64} [@BH] ''
| | | | |for{cpu,scan_fn} [@F] ''
| | | | |Constant{-1} [@BI]
| | | |Constant{-1} [@BJ]
| | |ScalarFromTensor [@BK] ''
| | |Elemwise{sub,no_inplace} [@C] ''
| |Subtensor{::int64} [@BL] ''
| | |IncSubtensor{Inc;int64::} [@BM] ''
| | | |Elemwise{second,no_inplace} [@BN] ''
| | | | |for{cpu,scan_fn} [@BO] ''
| | | | | |k [@G]
| | | | | |IncSubtensor{Set;:int64:} [@H] ''
| | | | | |A [@P]
| | | | |DimShuffle{x,x} [@BP] ''
| | | | |TensorConstant{0.0} [@BQ]
| | | |IncSubtensor{Inc;int64} [@BR] ''
| | | | |Elemwise{second,no_inplace} [@BS] ''
| | | | | |Subtensor{int64::} [@BT] ''
| | | | | | |for{cpu,scan_fn} [@BO] ''
| | | | | | |Constant{1} [@BU]
| | | | | |DimShuffle{x,x} [@BV] ''
| | | | | |TensorConstant{0.0} [@BQ]
| | | | |Elemwise{second} [@BW] ''
| | | | | |Subtensor{int64} [@BX] ''
| | | | | | |Subtensor{int64::} [@BT] ''
| | | | | | |Constant{-1} [@BY]
| | | | | |DimShuffle{x} [@BZ] ''
| | | | | |Elemwise{second,no_inplace} [@CA] ''
| | | | | |Sum{acc_dtype=float64} [@CB] ''
| | | | | | |Subtensor{int64} [@BX] ''
| | | | | |TensorConstant{1.0} [@R]
| | | | |Constant{-1} [@BY]
| | | |Constant{1} [@BU]
| | |Constant{-1} [@CC]
| |Alloc [@CD] ''
| | |TensorConstant{0.0} [@BQ]
| | |Elemwise{add,no_inplace} [@CE] ''
| | | |Elemwise{sub,no_inplace} [@C] ''
| | | |TensorConstant{1} [@Y]
| | |Subtensor{int64} [@CF] ''
| | |Shape [@CG] ''
| | | |A [@P]
| | |Constant{0} [@CH]
| |A [@P]
|Constant{-1} [@CI]
expected_output = """Subtensor{int64} [#A] ''
|for{cpu,grad_of_scan_fn}.1 [#B] ''
| |Elemwise{sub,no_inplace} [#C] ''
| | |Subtensor{int64} [#D] ''
| | | |Shape [#E] ''
| | | | |for{cpu,scan_fn} [#F] ''
| | | | |k [#G]
| | | | |IncSubtensor{Set;:int64:} [#H] ''
| | | | | |AllocEmpty{dtype='float64'} [#I] ''
| | | | | | |Elemwise{add,no_inplace} [#J] ''
| | | | | | | |k [#G]
| | | | | | | |Subtensor{int64} [#K] ''
| | | | | | | |Shape [#L] ''
| | | | | | | | |Rebroadcast{0} [#M] ''
| | | | | | | | |DimShuffle{x,0} [#N] ''
| | | | | | | | |Elemwise{second,no_inplace} [#O] ''
| | | | | | | | |A [#P]
| | | | | | | | |DimShuffle{x} [#Q] ''
| | | | | | | | |TensorConstant{1.0} [#R]
| | | | | | | |Constant{0} [#S]
| | | | | | |Subtensor{int64} [#T] ''
| | | | | | |Shape [#U] ''
| | | | | | | |Rebroadcast{0} [#M] ''
| | | | | | |Constant{1} [#V]
| | | | | |Rebroadcast{0} [#M] ''
| | | | | |ScalarFromTensor [#W] ''
| | | | | |Subtensor{int64} [#K] ''
| | | | |A [#P]
| | | |Constant{0} [#X]
| | |TensorConstant{1} [#Y]
| |Subtensor{:int64:} [#Z] ''
| | |Subtensor{::int64} [#BA] ''
| | | |Subtensor{:int64:} [#BB] ''
| | | | |for{cpu,scan_fn} [#F] ''
| | | | |Constant{-1} [#BC]
| | | |Constant{-1} [#BD]
| | |ScalarFromTensor [#BE] ''
| | |Elemwise{sub,no_inplace} [#C] ''
| |Subtensor{:int64:} [#BF] ''
| | |Subtensor{:int64:} [#BG] ''
| | | |Subtensor{::int64} [#BH] ''
| | | | |for{cpu,scan_fn} [#F] ''
| | | | |Constant{-1} [#BI]
| | | |Constant{-1} [#BJ]
| | |ScalarFromTensor [#BK] ''
| | |Elemwise{sub,no_inplace} [#C] ''
| |Subtensor{::int64} [#BL] ''
| | |IncSubtensor{Inc;int64::} [#BM] ''
| | | |Elemwise{second,no_inplace} [#BN] ''
| | | | |for{cpu,scan_fn} [#BO] ''
| | | | | |k [#G]
| | | | | |IncSubtensor{Set;:int64:} [#H] ''
| | | | | |A [#P]
| | | | |DimShuffle{x,x} [#BP] ''
| | | | |TensorConstant{0.0} [#BQ]
| | | |IncSubtensor{Inc;int64} [#BR] ''
| | | | |Elemwise{second,no_inplace} [#BS] ''
| | | | | |Subtensor{int64::} [#BT] ''
| | | | | | |for{cpu,scan_fn} [#BO] ''
| | | | | | |Constant{1} [#BU]
| | | | | |DimShuffle{x,x} [#BV] ''
| | | | | |TensorConstant{0.0} [#BQ]
| | | | |Elemwise{second} [#BW] ''
| | | | | |Subtensor{int64} [#BX] ''
| | | | | | |Subtensor{int64::} [#BT] ''
| | | | | | |Constant{-1} [#BY]
| | | | | |DimShuffle{x} [#BZ] ''
| | | | | |Elemwise{second,no_inplace} [#CA] ''
| | | | | |Sum{acc_dtype=float64} [#CB] ''
| | | | | | |Subtensor{int64} [#BX] ''
| | | | | |TensorConstant{1.0} [#R]
| | | | |Constant{-1} [#BY]
| | | |Constant{1} [#BU]
| | |Constant{-1} [#CC]
| |Alloc [#CD] ''
| | |TensorConstant{0.0} [#BQ]
| | |Elemwise{add,no_inplace} [#CE] ''
| | | |Elemwise{sub,no_inplace} [#C] ''
| | | |TensorConstant{1} [#Y]
| | |Subtensor{int64} [#CF] ''
| | |Shape [#CG] ''
| | | |A [#P]
| | |Constant{0} [#CH]
| |A [#P]
|Constant{-1} [#CI]
Inner graphs of the scan ops:
for{cpu,grad_of_scan_fn}.1 [@B] ''
>Elemwise{add,no_inplace} [@CJ] ''
> |Elemwise{mul} [@CK] ''
> | |<TensorType(float64, vector)> [@CL] -> [@BL]
> | |A_copy [@CM] -> [@P]
> |<TensorType(float64, vector)> [@CN] -> [@BL]
>Elemwise{add,no_inplace} [@CO] ''
> |Elemwise{mul} [@CP] ''
> | |<TensorType(float64, vector)> [@CL] -> [@BL]
> | |<TensorType(float64, vector)> [@CQ] -> [@Z]
> |<TensorType(float64, vector)> [@CR] -> [@CD]
for{cpu,scan_fn} [@F] ''
>Elemwise{mul,no_inplace} [@CS] ''
> |<TensorType(float64, vector)> [@CT] -> [@H]
> |A_copy [@CU] -> [@P]
for{cpu,scan_fn} [@F] ''
>Elemwise{mul,no_inplace} [@CS] ''
for{cpu,scan_fn} [@F] ''
>Elemwise{mul,no_inplace} [@CS] ''
for{cpu,scan_fn} [@BO] ''
>Elemwise{mul,no_inplace} [@CS] ''
for{cpu,scan_fn} [@BO] ''
>Elemwise{mul,no_inplace} [@CS] ''"""
for{cpu,grad_of_scan_fn}.1 [#B] ''
>Elemwise{add,no_inplace} [#CJ] ''
> |Elemwise{mul} [#CK] ''
> | |<TensorType(float64, vector)> [#CL] -> [#BL]
> | |A_copy [#CM] -> [#P]
> |<TensorType(float64, vector)> [#CN] -> [#BL]
>Elemwise{add,no_inplace} [#CO] ''
> |Elemwise{mul} [#CP] ''
> | |<TensorType(float64, vector)> [#CL] -> [#BL]
> | |<TensorType(float64, vector)> [#CQ] -> [#Z]
> |<TensorType(float64, vector)> [#CR] -> [#CD]
for{cpu,scan_fn} [#F] ''
>Elemwise{mul,no_inplace} [#CS] ''
> |<TensorType(float64, vector)> [#CT] -> [#H]
> |A_copy [#CU] -> [#P]
for{cpu,scan_fn} [#F] ''
>Elemwise{mul,no_inplace} [#CS] ''
for{cpu,scan_fn} [#F] ''
>Elemwise{mul,no_inplace} [#CS] ''
for{cpu,scan_fn} [#BO] ''
>Elemwise{mul,no_inplace} [#CS] ''
for{cpu,scan_fn} [#BO] ''
>Elemwise{mul,no_inplace} [#CS] ''"""
for truth, out in zip(expected_output.split("\n"), lines):
assert truth.strip() == out.strip()
......