Commit 1739dda0 authored by Mikhail Korobov

DOC fixed Python 3 compatibility issues in Tutorial and Library Reference

* use Python 2/3 compatible syntax for print
* use range instead of xrange (creating a list of 100..1000 ints in Python 2 is not a big deal)
Parent 12aa9519
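The two mechanical changes this commit applies throughout the docs can be sketched in one small standalone script; the values below are illustrative, not taken from the patched files:

```python
from __future__ import print_function  # on Python 2, makes print a function too

# 1. print() call syntax now runs identically on Python 2 and Python 3
print("sum of 0..9:", sum(range(10)))

# 2. range() replaces xrange(): on Python 3, range is already lazy;
#    on Python 2 it builds a list, which is cheap at the ~100..1000
#    element counts used in these docs.
squares = [i * i for i in range(5)]
print(squares)
```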
@@ -90,7 +90,7 @@ Since we provided a ``value`` for ``s`` and ``x``, we can call it with just a va
 >>> inc(5)           # update s with 10+3*5
 []
->>> print inc[s]
+>>> print(inc[s])
 25.0
 The effect of this call is to increment the storage associated to ``s`` in ``inc`` by 15.
@@ -100,9 +100,9 @@ If we pass two arguments to ``inc``, then we override the value associated to
 >>> inc(3, 4)        # update s with 25 + 3*4
 []
->>> print inc[s]
+>>> print(inc[s])
 37.0
->>> print inc[x]     # the override value of 4 was only temporary
+>>> print(inc[x])    # the override value of 4 was only temporary
 3.0
 If we pass three arguments to ``inc``, then we override the value associated
@@ -111,7 +111,7 @@ Since ``s``'s value is updated on every call, the old value of ``s`` will be ign
 >>> inc(3, 4, 7)     # update s with 7 + 3*4
 []
->>> print inc[s]
+>>> print(inc[s])
 19.0
 We can also assign to ``inc[s]`` directly:
@@ -35,7 +35,7 @@ variables, type this from the command-line:
 .. code-block:: bash
-    python -c 'import theano; print theano.config' | less
+    python -c 'import theano; print(theano.config)' | less
 Environment Variables
 =====================
@@ -98,7 +98,7 @@ import theano and print the config variable, as in:
 .. code-block:: bash
-    python -c 'import theano; print theano.config' | less
+    python -c 'import theano; print(theano.config)' | less
 .. attribute:: device
@@ -525,7 +525,7 @@ import theano and print the config variable, as in:
 This is a Python format string that specifies the subdirectory
 of ``config.base_compiledir`` in which to store platform-dependent
 compiled modules. To see a list of all available substitution keys,
-run ``python -c "import theano; print theano.config"``, and look
+run ``python -c "import theano; print(theano.config)"``, and look
 for compiledir_format.
 This flag's value cannot be modified during the program execution.
@@ -24,7 +24,7 @@ More precisely, if *A* is a tensor you want to compute
 .. code-block:: python
     result = 1
-    for i in xrange(k):
+    for i in range(k):
         result = result * A
 There are three things here that we need to handle: the initial value
@@ -57,8 +57,8 @@ The equivalent Theano code would be:
 # compiled function that returns A**k
 power = theano.function(inputs=[A,k], outputs=final_result, updates=updates)
-print power(range(10),2)
-print power(range(10),4)
+print(power(range(10),2))
+print(power(range(10),4))
 .. testoutput::
@@ -121,8 +121,8 @@ from a list of its coefficients:
 # Test
 test_coefficients = numpy.asarray([1, 0, 2], dtype=numpy.float32)
 test_value = 3
-print calculate_polynomial(test_coefficients, test_value)
-print 1.0 * (3 ** 0) + 0.0 * (3 ** 1) + 2.0 * (3 ** 2)
+print(calculate_polynomial(test_coefficients, test_value))
+print(1.0 * (3 ** 0) + 0.0 * (3 ** 1) + 2.0 * (3 ** 2))
 .. testoutput::
@@ -513,7 +513,7 @@ value ``max_value``.
 f = theano.function([max_value], values)
-print f(45)
+print(f(45))
 .. testoutput::
@@ -97,7 +97,7 @@ The second step is to combine *x* and *y* into their sum *z*:
 function to pretty-print out the computation associated to *z*.
 >>> from theano import pp
->>> print pp(z)
+>>> print(pp(z))
 (x + y)
@@ -279,22 +279,24 @@ For GPU graphs, this borrowing can have a major speed impact. See the following
             Out(sandbox.cuda.basic_ops.gpu_from_host(tensor.exp(x)),
                 borrow=True))
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f1()
 t1 = time.time()
 no_borrow = t1 - t0
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f2()
 t1 = time.time()
-print 'Looping', iters, 'times took', no_borrow, 'seconds without borrow',
-print 'and', t1 - t0, 'seconds with borrow.'
+print(
+    "Looping %s times took %s seconds without borrow "
+    "and %s seconds with borrow" % (iters, no_borrow, (t1 - t0))
+)
 if numpy.any([isinstance(x.op, tensor.Elemwise) and
               ('Gpu' not in type(x.op).__name__)
               for x in f1.maker.fgraph.toposort()]):
-    print 'Used the cpu'
+    print('Used the cpu')
 else:
-    print 'Used the gpu'
+    print('Used the gpu')
 Which produces this output:
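The borrow-timing hunk above also shows the standard rewrite for a Python 2 print that was split across two statements with a trailing comma: fold it into one print() call using %-formatting over implicitly concatenated string literals. A minimal sketch with made-up timing values:

```python
from __future__ import print_function

# hypothetical timing numbers, just to exercise the format string
iters, no_borrow, with_borrow = 1000, 1.25, 0.75

# adjacent string literals concatenate, so the message can wrap
# across source lines while remaining a single print() call
message = (
    "Looping %s times took %s seconds without borrow "
    "and %s seconds with borrow" % (iters, no_borrow, with_borrow)
)
print(message)
```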
@@ -43,14 +43,14 @@ IfElse vs Switch
 n_times = 10
 tic = time.clock()
-for i in xrange(n_times):
+for i in range(n_times):
     f_switch(val1, val2, big_mat1, big_mat2)
-print 'time spent evaluating both values %f sec' % (time.clock() - tic)
+print('time spent evaluating both values %f sec' % (time.clock() - tic))
 tic = time.clock()
-for i in xrange(n_times):
+for i in range(n_times):
     f_lazyifelse(val1, val2, big_mat1, big_mat2)
-print 'time spent evaluating one value %f sec' % (time.clock() - tic)
+print('time spent evaluating one value %f sec' % (time.clock() - tic))
 .. testoutput::
     :hide:
@@ -328,13 +328,15 @@ shows how to print all inputs and outputs:
 .. testcode::
+    from __future__ import print_function
     import theano
     def inspect_inputs(i, node, fn):
-        print i, node, "input(s) value(s):", [input[0] for input in fn.inputs],
+        print(i, node, "input(s) value(s):", [input[0] for input in fn.inputs],
+              end='')
     def inspect_outputs(i, node, fn):
-        print "output(s) value(s):", [output[0] for output in fn.outputs]
+        print("output(s) value(s):", [output[0] for output in fn.outputs])
     x = theano.tensor.dscalar('x')
     f = theano.function([x], [5 * x],
@@ -376,10 +378,10 @@ can be achieved as follows:
     for output in fn.outputs:
         if (not isinstance(output[0], numpy.random.RandomState) and
                 numpy.isnan(output[0]).any()):
-            print '*** NaN detected ***'
+            print('*** NaN detected ***')
             theano.printing.debugprint(node)
-            print 'Inputs : %s' % [input[0] for input in fn.inputs]
-            print 'Outputs: %s' % [output[0] for output in fn.outputs]
+            print('Inputs : %s' % [input[0] for input in fn.inputs])
+            print('Outputs: %s' % [output[0] for output in fn.outputs])
             break
 x = theano.tensor.dscalar('x')
@@ -277,7 +277,7 @@ The full documentation can be found in the library: :ref:`Scan <lib_scan>`.
 x = np.eye(5, dtype=theano.config.floatX)[0]
 w = np.eye(5, 3, dtype=theano.config.floatX)
 w[2] = np.ones((3), dtype=theano.config.floatX)
-print compute_jac_t(w, x)[0]
+print(compute_jac_t(w, x)[0])
 # compare with numpy
 print(((1 - np.tanh(x.dot(w)) ** 2) * w).T)
@@ -412,7 +412,7 @@ Note that if you want to use a random variable ``d`` that will not be updated th
                                        outputs=polynomial)
 test_coeff = numpy.asarray([1, 0, 2], dtype=numpy.float32)
-print calculate_polynomial(test_coeff, 3)
+print(calculate_polynomial(test_coeff, 3))
 .. testoutput::
@@ -31,7 +31,7 @@ variables, type this from the command-line:
 .. code-block:: bash
-    python -c 'import theano; print theano.config' | less
+    python -c 'import theano; print(theano.config)' | less
 For more detail, see :ref:`Configuration <libdoc_config>` in the library.
@@ -138,11 +138,11 @@ a ``csr`` one.
 >>> y = sparse.CSR(data, indices, indptr, shape)
 >>> f = theano.function([x], y)
 >>> a = sp.csc_matrix(np.asarray([[0, 1, 1], [0, 0, 0], [1, 0, 0]]))
->>> print a.toarray()
+>>> print(a.toarray())
 [[0 1 1]
  [0 0 0]
  [1 0 0]]
->>> print f(a).toarray()
+>>> print(f(a).toarray())
 [[0 0 1]
  [1 0 0]
  [1 0 0]]
@@ -165,11 +165,11 @@ provide a structured gradient. More explication below.
 >>> y = sparse.structured_add(x, 2)
 >>> f = theano.function([x], y)
 >>> a = sp.csc_matrix(np.asarray([[0, 0, -1], [0, -2, 1], [3, 0, 0]], dtype='float32'))
->>> print a.toarray()
+>>> print(a.toarray())
 [[ 0.  0. -1.]
  [ 0. -2.  1.]
  [ 3.  0.  0.]]
->>> print f(a).toarray()
+>>> print(f(a).toarray())
 [[ 0.  0.  1.]
  [ 0.  0.  3.]
  [ 5.  0.  0.]]
@@ -158,7 +158,7 @@ as we apply it. Consider the following example of optimization:
 >>> a = theano.tensor.vector("a")    # declare symbolic variable
 >>> b = a + a ** 10                  # build symbolic expression
 >>> f = theano.function([a], b)      # compile function
->>> print f([0, 1, 2])               # prints `array([0,2,1026])`
+>>> print(f([0, 1, 2]))              # prints `array([0,2,1026])`
 [ 0. 2. 1026.]
 >>> theano.printing.pydotprint(b, outfile="./pics/symbolic_graph_unopt.png", var_with_name_simple=True)  # doctest: +SKIP
 The output file is available at ./pics/symbolic_graph_unopt.png
@@ -48,7 +48,7 @@ file and run it.
 f = function([], T.exp(x))
 print(f.maker.fgraph.toposort())
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f()
 t1 = time.time()
 print("Looping %d times took %f seconds" % (iters, t1 - t0))
@@ -124,7 +124,7 @@ after the ``T.exp(x)`` is replaced by a GPU version of ``exp()``.
 f = function([], sandbox.cuda.basic_ops.gpu_from_host(T.exp(x)))
 print(f.maker.fgraph.toposort())
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f()
 t1 = time.time()
 print("Looping %d times took %f seconds" % (iters, t1 - t0))
@@ -405,7 +405,7 @@ into a file and run it.
 f = function([], tensor.exp(x))
 print(f.maker.fgraph.toposort())
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f()
 t1 = time.time()
 print("Looping %d times took %f seconds" % (iters, t1 - t0))
@@ -473,7 +473,7 @@ the GPU object directly. The following code is modifed to do just that.
 f = function([], sandbox.gpuarray.basic_ops.gpu_from_host(tensor.exp(x)))
 print(f.maker.fgraph.toposort())
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f()
 t1 = time.time()
 print("Looping %d times took %f seconds" % (iters, t1 - t0))
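Several hunks (the `inspect_inputs` one, for instance) replace Python 2's trailing-comma print, which suppresses the newline, with `print(..., end='')`. A minimal sketch of that equivalence, using illustrative function names and values rather than the real Theano callbacks:

```python
from __future__ import print_function  # needed on Python 2 for the end= keyword

def show_inputs(i, values):
    # Python 2 original:  print i, "input(s) value(s):", values,
    print(i, "input(s) value(s):", values, end='')

def show_outputs(values):
    # continues on the same line, then ends it with the usual newline
    print("  output(s) value(s):", values)

show_inputs(0, [1.0, 2.0])
show_outputs([3.0])
```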