Commit 1a02b91d authored by Olivier Delalleau

Typo fix: slower then -> slower than

Parent 46a85a15
@@ -1229,7 +1229,7 @@ Linear Algebra
     >>> result = batched_dot(first, second)
 :note: This is a subset of numpy.einsum, but we do not provide it for now.
-    But numpy einsum is slower then dot or tensordot:
+    But numpy einsum is slower than dot or tensordot:
     http://mail.scipy.org/pipermail/numpy-discussion/2012-October/064259.html
 :param X: left term
......
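The `:note:` in the hunk above says `batched_dot` covers a subset of `numpy.einsum`. As a hedged illustration in plain NumPy (not Theano; the shapes here are made up for the example), the batched contraction can be written either as a loop of `np.dot` calls or as a single einsum:

```python
import numpy as np

# Hypothetical shapes: a batch of 2 matrix pairs, (3, 4) x (4, 5).
first = np.arange(2 * 3 * 4, dtype="float64").reshape(2, 3, 4)
second = np.arange(2 * 4 * 5, dtype="float64").reshape(2, 4, 5)

# batched_dot semantics: one matrix product per batch element...
loop_result = np.stack([np.dot(first[i], second[i]) for i in range(2)])

# ...which is the einsum subset the note refers to.
einsum_result = np.einsum("bij,bjk->bik", first, second)

assert np.allclose(loop_result, einsum_result)
print(einsum_result.shape)  # (2, 3, 5)
```

The linked mailing-list thread is the basis for the claim that the specialized `dot`/`tensordot` paths were faster than `einsum` at the time; this sketch only shows the equivalence, not the timing.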
@@ -671,7 +671,7 @@ Test them first, as they are not guaranteed to always provide a speedup."""
     if not config.lib.amdlibm and any([exp_float32_op(a.op) and
                                        a.inputs[0].dtype == 'float32'
                                        for i, a in apply_time]):
-        print " - With the default gcc libm, exp in float32 is slower then in float64! Try Theano flag floatX=float64, or install amdlibm and set the theano flags lib.amdlibm=True"
+        print " - With the default gcc libm, exp in float32 is slower than in float64! Try Theano flag floatX=float64, or install amdlibm and set the theano flags lib.amdlibm=True"
         printed_tip = True
     #tip 4
......
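The flags mentioned in this tip are normally set through the `THEANO_FLAGS` environment variable (or a `.theanorc` file). A minimal sketch, assuming Theano is installed and `my_script.py` stands in for your own program:

```shell
# Avoid the slow float32 exp in the default gcc libm by computing in float64:
THEANO_FLAGS='floatX=float64' python my_script.py

# Or, if amdlibm is installed, route elementwise math through it:
THEANO_FLAGS='lib.amdlibm=True' python my_script.py
```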
@@ -824,7 +824,7 @@ if 0: # old code still to be ported from ProfileMode
     #tip 3
     if not config.lib.amdlibm and any([exp_float32_op(a.op) and a.inputs[0].dtype=='float32' for i,a in apply_time]):
-        print " - With the default gcc libm, exp in float32 is slower then in float64! Try Theano flags floatX=float64 or install amdlibm and set the theano flags lib.amdlibm=True"
+        print " - With the default gcc libm, exp in float32 is slower than in float64! Try Theano flags floatX=float64 or install amdlibm and set the theano flags lib.amdlibm=True"
     #tip 4
     for a, t in apply_time.iteritems():
......
@@ -162,7 +162,7 @@ class ConvOp(OpenMPOp):
     #It is an Intel(R) Xeon(R) CPU E5430 @ 2.66GHz. It is computer with theano/tensor/nnet/tests/speed_test_conv.py
     # and took 5 minutes to run.
     #TODO: we should compute this table for each computer/os as this can change.
-    # I saw on one computer that the speed with the shape can be slower then without!
+    # I saw on one computer that the speed with the shape can be slower than without!
     # using the real shape and the same dtype could also help.
     #unroll_batch, unroll_kern, valid time, full time
......