testgroup / pytensor · Commits
Commit 0d69ea0a
Authored Jan 05, 2016 by Frédéric Bastien

Merge pull request #3831 from kmike/py3-fixes

Assorted Python 3 fixes

Parents: 87c4e01f 1739dda0
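The common thread in these fixes is Python 3's removal of the print statement. A minimal standalone sketch of the pattern applied throughout this merge (the variable and values are illustrative, not taken from the diff):

```python
# Python 2 only (the form the docs used to show):
#     print some_value
# Portable form used by the fixes below: print as a function.
from __future__ import print_function  # no-op on Python 3, enables print() on Python 2

some_value = 25.0
print(some_value)         # works identically on Python 2 and 3
print("a", "b", end="")   # keyword arguments such as end= become available
```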
Showing 14 changed files with 50 additions and 54 deletions
doc/library/compile/io.txt            +4  -4
doc/library/config.txt                +3  -3
doc/library/scan.txt                  +6  -6
doc/tutorial/adding.txt               +1  -1
doc/tutorial/aliasing.txt             +8  -6
doc/tutorial/conditions.txt           +4  -4
doc/tutorial/debug_faq.txt            +7  -5
doc/tutorial/loop.txt                 +2  -2
doc/tutorial/modes.txt                +1  -1
doc/tutorial/sparse.txt               +4  -4
doc/tutorial/symbolic_graphs.txt      +1  -1
doc/tutorial/using_gpu.txt            +4  -4
theano/gof/opt.py                     +4  -12
theano/tensor/tests/test_sharedvar.py +1  -1
doc/library/compile/io.txt

@@ -90,7 +90,7 @@ Since we provided a ``value`` for ``s`` and ``x``, we can call it with just a va
 >>> inc(5)         # update s with 10+3*5
 []
->>> print inc[s]
+>>> print(inc[s])
 25.0
 The effect of this call is to increment the storage associated to ``s`` in ``inc`` by 15.
@@ -100,9 +100,9 @@ If we pass two arguments to ``inc``, then we override the value associated to
 >>> inc(3, 4)      # update s with 25 + 3*4
 []
->>> print inc[s]
+>>> print(inc[s])
 37.0
->>> print inc[x]   # the override value of 4 was only temporary
+>>> print(inc[x])  # the override value of 4 was only temporary
 3.0
 If we pass three arguments to ``inc``, then we override the value associated
@@ -111,7 +111,7 @@ Since ``s``'s value is updated on every call, the old value of ``s`` will be ign
 >>> inc(3, 4, 7)   # update s with 7 + 3*4
 []
->>> print inc[s]
+>>> print(inc[s])
 19.0
 We can also assign to ``inc[s]`` directly:

doc/library/config.txt

@@ -35,7 +35,7 @@ variables, type this from the command-line:
 .. code-block:: bash
-    python -c 'import theano; print theano.config' | less
+    python -c 'import theano; print(theano.config)' | less
 Environment Variables
 =====================
@@ -98,7 +98,7 @@ import theano and print the config variable, as in:
 .. code-block:: bash
-    python -c 'import theano; print theano.config' | less
+    python -c 'import theano; print(theano.config)' | less
 .. attribute:: device
@@ -525,7 +525,7 @@ import theano and print the config variable, as in:
     This is a Python format string that specifies the subdirectory
     of ``config.base_compiledir`` in which to store platform-dependent
     compiled modules. To see a list of all available substitution keys,
-    run ``python -c "import theano; print theano.config"``, and look
+    run ``python -c "import theano; print(theano.config)"``, and look
     for compiledir_format.
     This flag's value cannot be modified during the program execution.

doc/library/scan.txt

@@ -24,7 +24,7 @@ More precisely, if *A* is a tensor you want to compute
 .. code-block:: python
     result = 1
-    for i in xrange(k):
+    for i in range(k):
         result = result * A
 There are three things here that we need to handle: the initial value
@@ -57,8 +57,8 @@ The equivalent Theano code would be:
 # compiled function that returns A**k
 power = theano.function(inputs=[A,k], outputs=final_result, updates=updates)
-print power(range(10),2)
-print power(range(10),4)
+print(power(range(10),2))
+print(power(range(10),4))
 .. testoutput::
@@ -121,8 +121,8 @@ from a list of its coefficients:
 # Test
 test_coefficients = numpy.asarray([1, 0, 2], dtype=numpy.float32)
 test_value = 3
-print calculate_polynomial(test_coefficients, test_value)
-print 1.0 * (3 ** 0) + 0.0 * (3 ** 1) + 2.0 * (3 ** 2)
+print(calculate_polynomial(test_coefficients, test_value))
+print(1.0 * (3 ** 0) + 0.0 * (3 ** 1) + 2.0 * (3 ** 2))
 .. testoutput::
@@ -513,7 +513,7 @@ value ``max_value``.
 f = theano.function([max_value], values)
-print f(45)
+print(f(45))
 .. testoutput::

doc/tutorial/adding.txt

@@ -97,7 +97,7 @@ The second step is to combine *x* and *y* into their sum *z*:
 function to pretty-print out the computation associated to *z*.
 >>> from theano import pp
->>> print pp(z)
+>>> print(pp(z))
 (x + y)

doc/tutorial/aliasing.txt

@@ -279,22 +279,24 @@ For GPU graphs, this borrowing can have a major speed impact. See the following
          Out(sandbox.cuda.basic_ops.gpu_from_host(tensor.exp(x)),
              borrow=True))
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f1()
 t1 = time.time()
 no_borrow = t1 - t0
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f2()
 t1 = time.time()
-print 'Looping', iters, 'times took', no_borrow, 'seconds without borrow',
-print 'and', t1 - t0, 'seconds with borrow.'
+print(
+    "Looping %s times took %s seconds without borrow "
+    "and %s seconds with borrow" % (iters, no_borrow, (t1 - t0))
+)
 if numpy.any([isinstance(x.op, tensor.Elemwise) and
               ('Gpu' not in type(x.op).__name__)
               for x in f1.maker.fgraph.toposort()]):
-    print 'Used the cpu'
+    print('Used the cpu')
 else:
-    print 'Used the gpu'
+    print('Used the gpu')
 Which produces this output:

doc/tutorial/conditions.txt

@@ -43,14 +43,14 @@ IfElse vs Switch
 n_times = 10
 tic = time.clock()
-for i in xrange(n_times):
+for i in range(n_times):
     f_switch(val1, val2, big_mat1, big_mat2)
-print 'time spent evaluating both values %f sec' % (time.clock() - tic)
+print('time spent evaluating both values %f sec' % (time.clock() - tic))
 tic = time.clock()
-for i in xrange(n_times):
+for i in range(n_times):
     f_lazyifelse(val1, val2, big_mat1, big_mat2)
-print 'time spent evaluating one value %f sec' % (time.clock() - tic)
+print('time spent evaluating one value %f sec' % (time.clock() - tic))
 .. testoutput::
    :hide:

doc/tutorial/debug_faq.txt

@@ -328,13 +328,15 @@ shows how to print all inputs and outputs:
 .. testcode::
+    from __future__ import print_function
     import theano
     def inspect_inputs(i, node, fn):
-        print i, node, "input(s) value(s):", [input[0] for input in fn.inputs],
+        print(i, node, "input(s) value(s):", [input[0] for input in fn.inputs],
+              end='')
     def inspect_outputs(i, node, fn):
-        print "output(s) value(s):", [output[0] for output in fn.outputs]
+        print("output(s) value(s):", [output[0] for output in fn.outputs])
     x = theano.tensor.dscalar('x')
     f = theano.function([x], [5 * x],
@@ -376,10 +378,10 @@ can be achieved as follows:
     for output in fn.outputs:
         if (not isinstance(output[0], numpy.random.RandomState) and
                 numpy.isnan(output[0]).any()):
-            print '*** NaN detected ***'
+            print('*** NaN detected ***')
             theano.printing.debugprint(node)
-            print 'Inputs : %s' % [input[0] for input in fn.inputs]
-            print 'Outputs: %s' % [output[0] for output in fn.outputs]
+            print('Inputs : %s' % [input[0] for input in fn.inputs])
+            print('Outputs: %s' % [output[0] for output in fn.outputs])
             break
 x = theano.tensor.dscalar('x')

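The ``inspect_inputs`` fix above relies on ``print_function`` giving ``print`` an ``end=`` keyword, which replaces the Python 2 trailing-comma trick for suppressing the newline. A minimal standalone illustration (the printed values are made up):

```python
from __future__ import print_function  # makes print() available on Python 2 too

# Python 2: a trailing comma (`print x,`) suppressed the newline.
# Python 3 / print_function: pass end='' instead, so both halves share a line.
print("input(s) value(s):", [1.0, 2.0], end='')
print(" output(s) value(s):", [3.0])
```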
doc/tutorial/loop.txt

@@ -277,7 +277,7 @@ The full documentation can be found in the library: :ref:`Scan <lib_scan>`.
 x = np.eye(5, dtype=theano.config.floatX)[0]
 w = np.eye(5, 3, dtype=theano.config.floatX)
 w[2] = np.ones((3), dtype=theano.config.floatX)
-print compute_jac_t(w, x)[0]
+print(compute_jac_t(w, x)[0])
 # compare with numpy
 print(((1 - np.tanh(x.dot(w)) ** 2) * w).T)
@@ -412,7 +412,7 @@ Note that if you want to use a random variable ``d`` that will not be updated th
                                        outputs=polynomial)
 test_coeff = numpy.asarray([1, 0, 2], dtype=numpy.float32)
-print calculate_polynomial(test_coeff, 3)
+print(calculate_polynomial(test_coeff, 3))
 .. testoutput::

doc/tutorial/modes.txt

@@ -31,7 +31,7 @@ variables, type this from the command-line:
 .. code-block:: bash
-    python -c 'import theano; print theano.config' | less
+    python -c 'import theano; print(theano.config)' | less
 For more detail, see :ref:`Configuration <libdoc_config>` in the library.

doc/tutorial/sparse.txt

@@ -138,11 +138,11 @@ a ``csr`` one.
 >>> y = sparse.CSR(data, indices, indptr, shape)
 >>> f = theano.function([x], y)
 >>> a = sp.csc_matrix(np.asarray([[0, 1, 1], [0, 0, 0], [1, 0, 0]]))
->>> print a.toarray()
+>>> print(a.toarray())
 [[0 1 1]
  [0 0 0]
  [1 0 0]]
->>> print f(a).toarray()
+>>> print(f(a).toarray())
 [[0 0 1]
  [1 0 0]
  [1 0 0]]
@@ -165,11 +165,11 @@ provide a structured gradient. More explication below.
 >>> y = sparse.structured_add(x, 2)
 >>> f = theano.function([x], y)
 >>> a = sp.csc_matrix(np.asarray([[0, 0, -1], [0, -2, 1], [3, 0, 0]], dtype='float32'))
->>> print a.toarray()
+>>> print(a.toarray())
 [[ 0.  0. -1.]
  [ 0. -2.  1.]
  [ 3.  0.  0.]]
->>> print f(a).toarray()
+>>> print(f(a).toarray())
 [[ 0.  0.  1.]
  [ 0.  0.  3.]
 [ 5.  0.  0.]]

doc/tutorial/symbolic_graphs.txt

@@ -158,7 +158,7 @@ as we apply it. Consider the following example of optimization:
 >>> a = theano.tensor.vector("a")  # declare symbolic variable
 >>> b = a + a ** 10                # build symbolic expression
 >>> f = theano.function([a], b)    # compile function
->>> print f([0, 1, 2])  # prints `array([0,2,1026])`
+>>> print(f([0, 1, 2]))  # prints `array([0,2,1026])`
 [ 0. 2. 1026.]
 >>> theano.printing.pydotprint(b, outfile="./pics/symbolic_graph_unopt.png", var_with_name_simple=True)  # doctest: +SKIP
 The output file is available at ./pics/symbolic_graph_unopt.png

doc/tutorial/using_gpu.txt

@@ -48,7 +48,7 @@ file and run it.
 f = function([], T.exp(x))
 print(f.maker.fgraph.toposort())
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f()
 t1 = time.time()
 print("Looping %d times took %f seconds" % (iters, t1 - t0))
@@ -124,7 +124,7 @@ after the ``T.exp(x)`` is replaced by a GPU version of ``exp()``.
 f = function([], sandbox.cuda.basic_ops.gpu_from_host(T.exp(x)))
 print(f.maker.fgraph.toposort())
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f()
 t1 = time.time()
 print("Looping %d times took %f seconds" % (iters, t1 - t0))
@@ -405,7 +405,7 @@ into a file and run it.
 f = function([], tensor.exp(x))
 print(f.maker.fgraph.toposort())
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f()
 t1 = time.time()
 print("Looping %d times took %f seconds" % (iters, t1 - t0))
@@ -473,7 +473,7 @@ the GPU object directly. The following code is modifed to do just that.
 f = function([], sandbox.gpuarray.basic_ops.gpu_from_host(tensor.exp(x)))
 print(f.maker.fgraph.toposort())
 t0 = time.time()
-for i in xrange(iters):
+for i in range(iters):
     r = f()
 t1 = time.time()
 print("Looping %d times took %f seconds" % (iters, t1 - t0))

theano/gof/opt.py

@@ -292,15 +292,7 @@ class SeqOptimizer(Optimizer, list):
             else:
                 ll.append((opt.name, opt.__class__.__name__, opts.index(opt)))
-        lll = list(zip(prof, ll))
-
-        def cmp(a, b):
-            if a[0] == b[0]:
-                return 0
-            elif a[0] < b[0]:
-                return -1
-            return 1
-        lll.sort(cmp)
+        lll = sorted(zip(prof, ll), key=lambda a: a[0])
         for (t, opt) in lll[::-1]:
             # if t < 1:
@@ -2361,7 +2353,7 @@ class EquilibriumOptimizer(NavigatorOptimizer):
                   t, count, n_created, o), file=stream)
         print(blanc, '  %.3fs - in %d optimization that where not used (display only those with a runtime > 0)' % (not_used_time, len(not_used)), file=stream)
-        not_used.sort()
+        not_used.sort(key=lambda nu: (nu[0], str(nu[1])))
         for (t, o) in not_used[::-1]:
             if t > 0:
                 # Skip opt that have 0 times, they probably wasn't even tried.
@@ -2370,8 +2362,8 @@ class EquilibriumOptimizer(NavigatorOptimizer):
         gf_opts = [o for o in (opt.global_optimizers +
                                list(opt.final_optimizers) +
                                list(opt.cleanup_optimizers))
-                   if o.print_profile.func_code is not
-                   Optimizer.print_profile.func_code]
+                   if o.print_profile.__code__ is not
+                   Optimizer.print_profile.__code__]
         if not gf_opts:
             return
         print(blanc, "Global, final and clean up optimizers",
              file=stream)

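The ``SeqOptimizer`` hunk above exists because Python 3 removed ``list.sort``'s ``cmp`` argument: a three-way comparator has to become a ``key`` function. A standalone sketch of the two styles, using made-up profiling data in place of the real ``prof``/``ll`` lists:

```python
# Made-up (time, name) profiling data, standing in for zip(prof, ll).
prof = [0.5, 0.1, 0.9]
ll = ["opt_a", "opt_b", "opt_c"]

# Python 2 style (rejected by Python 3):
#     def cmp(a, b):
#         if a[0] == b[0]: return 0
#         elif a[0] < b[0]: return -1
#         return 1
#     lll.sort(cmp)

# Python 3 compatible replacement, as in the diff: sort by an extracted key.
lll = sorted(zip(prof, ll), key=lambda a: a[0])

for (t, opt) in lll[::-1]:  # walk from slowest to fastest, as the profiler does
    pass
```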
theano/tensor/tests/test_sharedvar.py

@@ -316,7 +316,7 @@ def makeSharedTester(shared_constructor_,
             if dtype is None:
                 dtype = theano.config.floatX
-            shp = (100 / 4, 1024)  # 100KB
+            shp = (100 // 4, 1024)  # 100KB
             x = numpy.zeros(shp, dtype=dtype)
             x = self.cast_value(x)

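The ``test_sharedvar.py`` hunk swaps ``/`` for ``//`` because Python 3's ``/`` is true division: ``100 / 4`` yields the float ``25.0``, which is not a valid array dimension, while ``//`` floors to an int on both versions. A minimal illustration:

```python
# Python 3: `/` always returns a float, `//` floors to an int.
assert 100 / 4 == 25.0 and isinstance(100 / 4, float)
assert 100 // 4 == 25 and isinstance(100 // 4, int)

# Floor division keeps the shape tuple integer-valued, as numpy.zeros requires.
shp = (100 // 4, 1024)  # 100KB of 4-byte elements
```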