Commit 138944d2 authored by Frederic

Change @ to # in debugprint

Parent 296293c8
@@ -81,19 +81,19 @@ iteration number or other kinds of information in the name.
 2) The second function to print a graph is :func:`theano.printing.debugprint`
 >>> theano.printing.debugprint(f.maker.fgraph.outputs[0]) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{mul,no_inplace} [@A] ''
- |TensorConstant{2.0} [@B]
- |x [@C]
+Elemwise{mul,no_inplace} [#A] ''
+ |TensorConstant{2.0} [#B]
+ |x [#C]
 Each line printed represents a Variable in the graph.
-The line ``|x [@C]`` means the variable named ``x`` with debugprint identifier
-[@C] is an input of the Elemwise. If you accidentally have two variables called ``x`` in
+The line ``|x [#C]`` means the variable named ``x`` with debugprint identifier
+[#C] is an input of the Elemwise. If you accidentally have two variables called ``x`` in
 your graph, their different debugprint identifiers will be your clue.
-The line ``|TensorConstant{2.0} [@B]`` means that there is a constant 2.0
+The line ``|TensorConstant{2.0} [#B]`` means that there is a constant 2.0
 with this debugprint identifier.
-The line ``Elemwise{mul,no_inplace} [@A] ''`` is indented less than
+The line ``Elemwise{mul,no_inplace} [#A] ''`` is indented less than
 the other ones, because it means there is a variable computed by multiplying
 the other (more indented) ones together.
@@ -106,25 +106,26 @@ printed? Look for debugprint identifier using the Find feature of your text
 editor.
 >>> theano.printing.debugprint(gy) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{mul} [@A] ''
- |Elemwise{mul} [@B] ''
- | |Elemwise{second,no_inplace} [@C] ''
- | | |Elemwise{pow,no_inplace} [@D] ''
- | | | |x [@E]
- | | | |TensorConstant{2} [@F]
- | | |TensorConstant{1.0} [@G]
- | |TensorConstant{2} [@F]
- |Elemwise{pow} [@H] ''
-   |x [@E]
-   |Elemwise{sub} [@I] ''
-     |TensorConstant{2} [@F]
-     |DimShuffle{} [@J] ''
-       |TensorConstant{1} [@K]
+Elemwise{mul} [#A] ''
+ |Elemwise{mul} [#B] ''
+ | |Elemwise{second,no_inplace} [#C] ''
+ | | |Elemwise{pow,no_inplace} [#D] ''
+ | | | |x [#E]
+ | | | |TensorConstant{2} [#F]
+ | | |TensorConstant{1.0} [#G]
+ | |TensorConstant{2} [#F]
+ |Elemwise{pow} [#H] ''
+   |x [#E]
+   |Elemwise{sub} [#I] ''
+     |TensorConstant{2} [#F]
+     |DimShuffle{} [#J] ''
+       |TensorConstant{1} [#K]
 >>> theano.printing.debugprint(gy, depth=2) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{mul} [@A] ''
- |Elemwise{mul} [@B] ''
- |Elemwise{pow} [@C] ''
+Elemwise{mul} [#A] ''
+ |Elemwise{mul} [#B] ''
+ |Elemwise{pow} [#C] ''
 If the depth parameter is provided, it limits the number of levels that are
 shown.
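The effect of the ``depth`` parameter can be illustrated with a small stand-alone sketch of a depth-limited tree printer. This is a hypothetical toy (the ``Node`` class and the letter-identifier scheme are invented for this sketch), not Theano's actual implementation:

```python
# Toy depth-limited graph printer in the spirit of theano.printing.debugprint.
# Node and the identifier scheme are invented for illustration only.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def debug_lines(node, depth=-1, done=None, level=0):
    """Return the printed lines; depth=-1 means unlimited, as in debugprint."""
    if depth == 0:          # depth budget exhausted: show nothing below here
        return []
    if done is None:
        done = {}           # node -> identifier string such as "[#A]"
    if node not in done:    # first sighting: assign the next letter
        done[node] = "[#%s]" % chr(ord("A") + len(done))
    prefix = " " * level + "|" if level else ""
    lines = ["%s%s %s" % (prefix, node.name, done[node])]
    for child in node.children:
        lines += debug_lines(child, depth - 1, done, level + 1)
    return lines

x = Node("x")
two = Node("TensorConstant{2.0}")
mul = Node("Elemwise{mul,no_inplace}", [two, x])
print("\n".join(debug_lines(mul, depth=2)))  # two levels: the mul and its inputs
```

Passing ``depth=1`` would print only the root line, mirroring how ``debugprint(gy, depth=2)`` cuts the gradient graph off after two levels.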
@@ -392,8 +392,8 @@ can be achieved as follows:
 :options: +NORMALIZE_WHITESPACE
 *** NaN detected ***
-Elemwise{Composite{(log(i0) * i0)}} [@A] ''
- |x [@B]
+Elemwise{Composite{(log(i0) * i0)}} [#A] ''
+ |x [#B]
 Inputs : [array(0.0)]
 Outputs: [array(nan)]
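The idea behind this NaN monitor can be mimicked outside Theano with a plain wrapper that inspects each result. This is a hedged sketch of the concept only: it uses NumPy rather than Theano's ``MonitorMode`` machinery, and the function names are invented:

```python
import numpy as np

def detect_nan(fn):
    """Wrap fn and report whenever its output contains a NaN (sketch only)."""
    def wrapped(*inputs):
        out = fn(*inputs)
        if np.isnan(out).any():
            print("*** NaN detected ***")
            print("Inputs :", [np.asarray(i) for i in inputs])
            print("Outputs:", [np.asarray(out)])
        return out
    return wrapped

# log(0) * 0 evaluates to -inf * 0, which is nan -- the same trap the
# Elemwise{Composite{(log(i0) * i0)}} node above falls into at x = 0.
f = detect_nan(lambda x: np.log(x) * x)
with np.errstate(divide="ignore", invalid="ignore"):
    result = f(0.0)
```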
@@ -67,39 +67,39 @@ Debug Print
 The pre-compilation graph:
 >>> theano.printing.debugprint(prediction) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{gt,no_inplace} [@A] ''
- |Elemwise{true_div,no_inplace} [@B] ''
- | |DimShuffle{x} [@C] ''
- | | |TensorConstant{1} [@D]
- | |Elemwise{add,no_inplace} [@E] ''
- |   |DimShuffle{x} [@F] ''
- |   | |TensorConstant{1} [@D]
- |   |Elemwise{exp,no_inplace} [@G] ''
- |     |Elemwise{sub,no_inplace} [@H] ''
- |       |Elemwise{neg,no_inplace} [@I] ''
- |       | |dot [@J] ''
- |       | | |x [@K]
- |       | | |w [@L]
- |       |DimShuffle{x} [@M] ''
- |         |b [@N]
- |DimShuffle{x} [@O] ''
-   |TensorConstant{0.5} [@P]
+Elemwise{gt,no_inplace} [#A] ''
+ |Elemwise{true_div,no_inplace} [#B] ''
+ | |DimShuffle{x} [#C] ''
+ | | |TensorConstant{1} [#D]
+ | |Elemwise{add,no_inplace} [#E] ''
+ |   |DimShuffle{x} [#F] ''
+ |   | |TensorConstant{1} [#D]
+ |   |Elemwise{exp,no_inplace} [#G] ''
+ |     |Elemwise{sub,no_inplace} [#H] ''
+ |       |Elemwise{neg,no_inplace} [#I] ''
+ |       | |dot [#J] ''
+ |       | | |x [#K]
+ |       | | |w [#L]
+ |       |DimShuffle{x} [#M] ''
+ |         |b [#N]
+ |DimShuffle{x} [#O] ''
+   |TensorConstant{0.5} [#P]
 The post-compilation graph:
 >>> theano.printing.debugprint(predict) # doctest: +NORMALIZE_WHITESPACE
-Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [@A] '' 4
- |CGemv{inplace} [@B] '' 3
- | |AllocEmpty{dtype='float64'} [@C] '' 2
- | | |Shape_i{0} [@D] '' 1
- | |   |x [@E]
- | |TensorConstant{1.0} [@F]
- | |x [@E]
- | |w [@G]
- | |TensorConstant{0.0} [@H]
- |InplaceDimShuffle{x} [@I] '' 0
- | |b [@J]
- |TensorConstant{(1,) of 0.5} [@K]
+Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [#A] '' 4
+ |CGemv{inplace} [#B] '' 3
+ | |AllocEmpty{dtype='float64'} [#C] '' 2
+ | | |Shape_i{0} [#D] '' 1
+ | |   |x [#E]
+ | |TensorConstant{1.0} [#F]
+ | |x [#E]
+ | |w [#G]
+ | |TensorConstant{0.0} [#H]
+ |InplaceDimShuffle{x} [#I] '' 0
+ | |b [#J]
+ |TensorConstant{(1,) of 0.5} [#K]
 Picture Printing of Graphs
@@ -24,11 +24,11 @@ Currently, information regarding shape is used in two ways in Theano:
 >>> x = theano.tensor.matrix('x')
 >>> f = theano.function([x], (x ** 2).shape)
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-MakeVector{dtype='int64'} [@A] '' 2
- |Shape_i{0} [@B] '' 1
- | |x [@C]
- |Shape_i{1} [@D] '' 0
-   |x [@C]
+MakeVector{dtype='int64'} [#A] '' 2
+ |Shape_i{0} [#B] '' 1
+ | |x [#C]
+ |Shape_i{1} [#D] '' 0
+   |x [#C]
 The output of this compiled function does not contain any multiplication
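Shape-only compilation like this works because shape inference propagates shapes symbolically: an elementwise op such as ``x ** 2`` has exactly its input's shape, so the squared values never need computing. A minimal pure-Python sketch of that rule (the helper name is invented for illustration):

```python
import numpy as np

def infer_elemwise_shape(input_shape):
    # Elementwise ops preserve shape, so shape inference is the identity:
    # no element of the output is ever evaluated to answer a shape query.
    return tuple(input_shape)

x = np.ones((3, 4))
# Agrees with the shape of the actually-computed result:
assert infer_elemwise_shape(x.shape) == (x ** 2).shape == (3, 4)
```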
@@ -51,24 +51,24 @@ can lead to errors. Consider this example:
 >>> f = theano.function([x, y], z.shape)
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-MakeVector{dtype='int64'} [@A] '' 4
- |Elemwise{Add}[(0, 0)] [@B] '' 3
- | |Shape_i{0} [@C] '' 1
- | | |x [@D]
- | |Shape_i{0} [@E] '' 2
- |   |y [@F]
- |Shape_i{1} [@G] '' 0
-   |x [@D]
+MakeVector{dtype='int64'} [#A] '' 4
+ |Elemwise{Add}[(0, 0)] [#B] '' 3
+ | |Shape_i{0} [#C] '' 1
+ | | |x [#D]
+ | |Shape_i{0} [#E] '' 2
+ |   |y [#F]
+ |Shape_i{1} [#G] '' 0
+   |x [#D]
 >>> f(xv, yv) # DOES NOT RAISE AN ERROR AS SHOULD BE.
 array([8, 4])
 >>> f = theano.function([x, y], z)  # Do not take the shape.
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-Join [@A] '' 0
- |TensorConstant{0} [@B]
- |x [@C]
- |y [@D]
+Join [#A] '' 0
+ |TensorConstant{0} [#B]
+ |x [#C]
+ |y [#D]
 >>> f(xv, yv) # doctest: +ELLIPSIS
 Traceback (most recent call last):
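The mismatch that the shape-only function above silently accepts does get caught once the values are actually joined; plain NumPy applies the same check as Theano's ``Join``. The concrete shapes below are invented for illustration (they reproduce the ``[8, 4]`` prediction: 5 + 3 rows, with mismatched second dimensions):

```python
import numpy as np

xv = np.ones((5, 4))
yv = np.ones((3, 5))   # second dimension disagrees with xv's

# Shape inference alone would predict (5 + 3, 4) == (8, 4) for the join,
# but actually concatenating validates the non-joined dimensions too:
raised = False
try:
    np.concatenate([xv, yv], axis=0)
except ValueError:
    raised = True
print(raised)  # the mismatched trailing dimension is rejected
```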
@@ -121,8 +121,8 @@ upgrade. Here is the current state of what can be done:
 >>> x_specify_shape = theano.tensor.specify_shape(x, (2, 2))
 >>> f = theano.function([x], (x_specify_shape ** 2).shape)
 >>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-DeepCopyOp [@A] '' 0
- |TensorConstant{(2,) of 2} [@B]
+DeepCopyOp [#A] '' 0
+ |TensorConstant{(2,) of 2} [#B]
 Future Plans
 ============
@@ -647,11 +647,11 @@ def debugprint(r, prefix='', depth=-1, done=None, print_type=False,
     if obj in done:
         id_str = done[obj]
     elif ids == "id":
-        id_str = "[@%s]" % str(id(r))
+        id_str = "[#%s]" % str(id(r))
     elif ids == "int":
-        id_str = "[@%s]" % str(len(done))
+        id_str = "[#%s]" % str(len(done))
     elif ids == "CHAR":
-        id_str = "[@%s]" % char_from_number(len(done))
+        id_str = "[#%s]" % char_from_number(len(done))
     elif ids == "":
        id_str = ""
     done[obj] = id_str
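The ``char_from_number`` helper referenced in this hunk turns the running count of ``done`` into the letter identifiers seen in the output ([#A], [#B], ...). A plausible stand-alone reimplementation is sketched below; it uses Excel-style bijective base 26, which matches the single-letter cases but may diverge from Theano's real helper past ``Z``:

```python
def char_from_number(number):
    """Map 0 -> 'A', 1 -> 'B', ..., 25 -> 'Z', 26 -> 'AA' (bijective base 26).

    Hypothetical reimplementation for illustration; Theano's own helper in
    theano/printing.py may format multi-letter identifiers differently.
    """
    chars = ""
    number += 1                     # shift to 1-based bijective numbering
    while number > 0:
        number, rem = divmod(number - 1, 26)
        chars = chr(ord("A") + rem) + chars
    return chars

# Identifier for the n-th newly encountered variable, as in the CHAR branch:
print("[#%s]" % char_from_number(0))  # [#A]
```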