Commit 138944d2 authored by Frederic

Change @ to # in debugprint

Parent 296293c8
@@ -81,19 +81,19 @@ iteration number or other kinds of information in the name.
 2) The second function to print a graph is :func:`theano.printing.debugprint`
 >>> theano.printing.debugprint(f.maker.fgraph.outputs[0]) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{mul,no_inplace} [@A] ''
- |TensorConstant{2.0} [@B]
- |x [@C]
+Elemwise{mul,no_inplace} [#A] ''
+ |TensorConstant{2.0} [#B]
+ |x [#C]
 Each line printed represents a Variable in the graph.
-The line ``|x [@C`` means the variable named ``x`` with debugprint identifier
-[@C] is an input of the Elemwise. If you accidentally have two variables called ``x`` in
+The line ``|x [#C]`` means the variable named ``x`` with debugprint identifier
+[#C] is an input of the Elemwise. If you accidentally have two variables called ``x`` in
 your graph, their different debugprint identifiers will be your clue.
-The line ``|TensorConstant{2.0} [@B]`` means that there is a constant 2.0
+The line ``|TensorConstant{2.0} [#B]`` means that there is a constant 2.0
 with this debugprint identifier.
-The line ``Elemwise{mul,no_inplace} [@A] ''`` is indented less than
+The line ``Elemwise{mul,no_inplace} [#A] ''`` is indented less than
 the other ones, because it means there is a variable computed by multiplying
 the other (more indented) ones together.
@@ -106,25 +106,26 @@ printed? Look for debugprint identifier using the Find feature of your text
 editor.
 >>> theano.printing.debugprint(gy) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{mul} [@A] ''
- |Elemwise{mul} [@B] ''
- | |Elemwise{second,no_inplace} [@C] ''
- | | |Elemwise{pow,no_inplace} [@D] ''
- | | | |x [@E]
- | | | |TensorConstant{2} [@F]
- | | |TensorConstant{1.0} [@G]
- | |TensorConstant{2} [@F]
- |Elemwise{pow} [@H] ''
-   |x [@E]
-   |Elemwise{sub} [@I] ''
-     |TensorConstant{2} [@F]
-     |DimShuffle{} [@J] ''
-       |TensorConstant{1} [@K]
+Elemwise{mul} [#A] ''
+ |Elemwise{mul} [#B] ''
+ | |Elemwise{second,no_inplace} [#C] ''
+ | | |Elemwise{pow,no_inplace} [#D] ''
+ | | | |x [#E]
+ | | | |TensorConstant{2} [#F]
+ | | |TensorConstant{1.0} [#G]
+ | |TensorConstant{2} [#F]
+ |Elemwise{pow} [#H] ''
+   |x [#E]
+   |Elemwise{sub} [#I] ''
+     |TensorConstant{2} [#F]
+     |DimShuffle{} [#J] ''
+       |TensorConstant{1} [#K]
 >>> theano.printing.debugprint(gy, depth=2) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{mul} [@A] ''
- |Elemwise{mul} [@B] ''
- |Elemwise{pow} [@C] ''
+Elemwise{mul} [#A] ''
+ |Elemwise{mul} [#B] ''
+ |Elemwise{pow} [#C] ''
 If the depth parameter is provided, it limits the number of levels that are
 shown.
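The identifier and depth behaviour described in this hunk can be sketched in plain Python. This is an illustrative toy, not Theano's implementation: the `(name, children)` node encoding and the function name `debug_print` are invented here, and the `ids="CHAR"` lettering is approximated for fewer than 26 nodes.

```python
# Toy sketch of debugprint-style output (NOT Theano's code): nodes are
# (name, children) tuples; each distinct node gets a "[#X]" identifier,
# shared nodes reuse theirs, and `depth` limits how far we recurse.
def debug_print(node, depth=-1, done=None, prefix='', out=None):
    done = {} if done is None else done
    out = [] if out is None else out
    if depth == 0:              # depth exhausted, like debugprint(depth=N)
        return out
    if id(node) in done:        # shared node: reuse its identifier
        id_str = done[id(node)]
    else:                       # fresh node: next letter, 'A', 'B', ...
        id_str = '[#%s]' % chr(ord('A') + len(done))
        done[id(node)] = id_str
    name, children = node
    out.append('%s%s %s' % (prefix, name, id_str))
    for child in children:
        debug_print(child, depth - 1, done, prefix + ' |', out)
    return out

x = ('x', [])
two = ('TensorConstant{2.0}', [])
mul = ('Elemwise{mul,no_inplace}', [two, x])
print('\n'.join(debug_print(mul)))
```

With `depth=1` only the root line is emitted, mirroring the truncated `debugprint(gy, depth=2)` output above.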
@@ -392,8 +392,8 @@ can be achieved as follows:
 :options: +NORMALIZE_WHITESPACE
 *** NaN detected ***
-Elemwise{Composite{(log(i0) * i0)}} [@A] ''
- |x [@B]
+Elemwise{Composite{(log(i0) * i0)}} [#A] ''
+ |x [#B]
 Inputs : [array(0.0)]
 Outputs: [array(nan)]
@@ -67,39 +67,39 @@ Debug Print
 The pre-compilation graph:
 >>> theano.printing.debugprint(prediction) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{gt,no_inplace} [@A] ''
- |Elemwise{true_div,no_inplace} [@B] ''
- | |DimShuffle{x} [@C] ''
- | | |TensorConstant{1} [@D]
- | |Elemwise{add,no_inplace} [@E] ''
- |   |DimShuffle{x} [@F] ''
- |   | |TensorConstant{1} [@D]
- |   |Elemwise{exp,no_inplace} [@G] ''
- |     |Elemwise{sub,no_inplace} [@H] ''
- |       |Elemwise{neg,no_inplace} [@I] ''
- |       | |dot [@J] ''
- |       |   |x [@K]
- |       |   |w [@L]
- |       |DimShuffle{x} [@M] ''
- |         |b [@N]
- |DimShuffle{x} [@O] ''
-   |TensorConstant{0.5} [@P]
+Elemwise{gt,no_inplace} [#A] ''
+ |Elemwise{true_div,no_inplace} [#B] ''
+ | |DimShuffle{x} [#C] ''
+ | | |TensorConstant{1} [#D]
+ | |Elemwise{add,no_inplace} [#E] ''
+ |   |DimShuffle{x} [#F] ''
+ |   | |TensorConstant{1} [#D]
+ |   |Elemwise{exp,no_inplace} [#G] ''
+ |     |Elemwise{sub,no_inplace} [#H] ''
+ |       |Elemwise{neg,no_inplace} [#I] ''
+ |       | |dot [#J] ''
+ |       |   |x [#K]
+ |       |   |w [#L]
+ |       |DimShuffle{x} [#M] ''
+ |         |b [#N]
+ |DimShuffle{x} [#O] ''
+   |TensorConstant{0.5} [#P]
 The post-compilation graph:
 >>> theano.printing.debugprint(predict) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [@A] '' 4
- |CGemv{inplace} [@B] '' 3
- | |AllocEmpty{dtype='float64'} [@C] '' 2
- | | |Shape_i{0} [@D] '' 1
- | |   |x [@E]
- | |TensorConstant{1.0} [@F]
- | |x [@E]
- | |w [@G]
- | |TensorConstant{0.0} [@H]
- |InplaceDimShuffle{x} [@I] '' 0
- |   |b [@J]
- |TensorConstant{(1,) of 0.5} [@K]
+Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [#A] '' 4
+ |CGemv{inplace} [#B] '' 3
+ | |AllocEmpty{dtype='float64'} [#C] '' 2
+ | | |Shape_i{0} [#D] '' 1
+ | |   |x [#E]
+ | |TensorConstant{1.0} [#F]
+ | |x [#E]
+ | |w [#G]
+ | |TensorConstant{0.0} [#H]
+ |InplaceDimShuffle{x} [#I] '' 0
+ |   |b [#J]
+ |TensorConstant{(1,) of 0.5} [#K]
 Picture Printing of Graphs
@@ -24,11 +24,11 @@ Currently, information regarding shape is used in two ways in Theano:
 >>> x = theano.tensor.matrix('x')
 >>> f = theano.function([x], (x ** 2).shape)
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-MakeVector{dtype='int64'} [@A] '' 2
- |Shape_i{0} [@B] '' 1
- | |x [@C]
- |Shape_i{1} [@D] '' 0
-   |x [@C]
+MakeVector{dtype='int64'} [#A] '' 2
+ |Shape_i{0} [#B] '' 1
+ | |x [#C]
+ |Shape_i{1} [#D] '' 0
+   |x [#C]
 The output of this compiled function does not contain any multiplication
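The optimization shown in this hunk, computing the shape of ``x ** 2`` straight from ``x``'s shape without performing the power, rests on per-Op shape-inference rules. A hedged pure-Python sketch of the idea (the rule table and the name `infer_shape` are invented for illustration, not Theano's API):

```python
# Sketch of shape inference (illustrative only): output shapes are
# derived from input shapes alone, so no element of x ** 2 is computed.
def infer_shape(op, input_shapes):
    if op == 'elemwise':           # e.g. x ** 2: same shape as its input
        return input_shapes[0]
    if op == 'dot':                # (m, k) @ (k, n) -> (m, n)
        (m, _k1), (_k2, n) = input_shapes
        return (m, n)
    raise NotImplementedError(op)

print(infer_shape('elemwise', [(3, 4)]))   # shape of x ** 2 for a 3x4 x
```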
@@ -51,24 +51,24 @@ can lead to errors. Consider this example:
 >>> f = theano.function([x, y], z.shape)
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-MakeVector{dtype='int64'} [@A] '' 4
- |Elemwise{Add}[(0, 0)] [@B] '' 3
- | |Shape_i{0} [@C] '' 1
- | | |x [@D]
- | |Shape_i{0} [@E] '' 2
- |   |y [@F]
- |Shape_i{1} [@G] '' 0
-   |x [@D]
+MakeVector{dtype='int64'} [#A] '' 4
+ |Elemwise{Add}[(0, 0)] [#B] '' 3
+ | |Shape_i{0} [#C] '' 1
+ | | |x [#D]
+ | |Shape_i{0} [#E] '' 2
+ |   |y [#F]
+ |Shape_i{1} [#G] '' 0
+   |x [#D]
 >>> f(xv, yv) # DOES NOT RAISE AN ERROR AS IT SHOULD.
 array([8, 4])
 >>> f = theano.function([x, y], z)  # Do not take the shape.
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-Join [@A] '' 0
- |TensorConstant{0} [@B]
- |x [@C]
- |y [@D]
+Join [#A] '' 0
+ |TensorConstant{0} [#B]
+ |x [#C]
+ |y [#D]
 >>> f(xv, yv) # doctest: +ELLIPSIS
 Traceback (most recent call last):
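Why the shape-only graph above returns ``array([8, 4])`` without raising can be sketched in plain Python. This is an invented illustration, not Theano code, and the concrete shapes are assumptions chosen to be consistent with the printed output (``xv`` and ``yv`` are defined outside this hunk): the inferred shape of a join along axis 0 sums the first dimensions and copies every other dimension from ``x`` alone, so ``y``'s mismatched second dimension is never compared against ``x``'s.

```python
# Sketch (invented shapes, not Theano code) of shape inference for Join:
# the joined axis is summed, the remaining axes are taken from x only,
# so an incompatible y is never detected in the shape-only graph.
def join_infer_shape(axis, x_shape, y_shape):
    out = list(x_shape)                        # non-joined dims: from x only
    out[axis] = x_shape[axis] + y_shape[axis]  # joined dim: summed
    return tuple(out)

# Assumed inputs with incompatible second dimensions (4 vs 6):
print(join_infer_shape(0, (3, 4), (5, 6)))     # (8, 4) -- no error raised
```

Compiling ``z`` itself (the second function above) actually performs the ``Join``, which is why only that version raises on mismatched inputs.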
@@ -121,8 +121,8 @@ upgrade. Here is the current state of what can be done:
 >>> x_specify_shape = theano.tensor.specify_shape(x, (2, 2))
 >>> f = theano.function([x], (x_specify_shape ** 2).shape)
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-DeepCopyOp [@A] '' 0
- |TensorConstant{(2,) of 2} [@B]
+DeepCopyOp [#A] '' 0
+ |TensorConstant{(2,) of 2} [#B]
 Future Plans
 ============
@@ -647,11 +647,11 @@ def debugprint(r, prefix='', depth=-1, done=None, print_type=False,
     if obj in done:
         id_str = done[obj]
     elif ids == "id":
-        id_str = "[@%s]" % str(id(r))
+        id_str = "[#%s]" % str(id(r))
     elif ids == "int":
-        id_str = "[@%s]" % str(len(done))
+        id_str = "[#%s]" % str(len(done))
     elif ids == "CHAR":
-        id_str = "[@%s]" % char_from_number(len(done))
+        id_str = "[#%s]" % char_from_number(len(done))
     elif ids == "":
         id_str = ""
     done[obj] = id_str
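The four ``ids`` modes touched by this hunk can be exercised with a small standalone sketch. It mirrors the branch structure above but is not the real function: ``make_id_str`` is an invented name, and ``char_from_number`` is approximated for fewer than 26 entries.

```python
# Standalone sketch of the identifier modes in debugprint (illustrative;
# char_from_number is approximated here for fewer than 26 entries).
def make_id_str(ids, r, done):
    obj = id(r)
    if obj in done:
        return done[obj]                      # shared nodes reuse their id
    if ids == "id":
        id_str = "[#%s]" % str(id(r))         # Python object id
    elif ids == "int":
        id_str = "[#%s]" % str(len(done))     # order of first appearance
    elif ids == "CHAR":
        id_str = "[#%s]" % chr(ord('A') + len(done))  # 'A', 'B', ...
    elif ids == "":
        id_str = ""                           # identifiers suppressed
    done[obj] = id_str
    return id_str

done = {}
a, b = object(), object()
print(make_id_str("CHAR", a, done), make_id_str("CHAR", b, done),
      make_id_str("CHAR", a, done))           # [#A] [#B] [#A]
```

The cache in ``done`` is what lets a repeated input such as ``TensorConstant{2} [#F]`` keep the same identifier everywhere it appears in the printed tree.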