Commit ae33d372 authored by David Warde-Farley

Merge pull request #1129 from delallea/minor

Minor fixes (typo / doc)
@@ -133,9 +133,11 @@ Community
* Ask/view questions/answers at `metaoptimize/qa/tags/theano`_ (it's like stack overflow for machine learning)
* We try to stay organized with `Assembla's tickets <http://www.assembla.com/spaces/theano/tickets>`__
* We use `Github tickets <http://github.com/Theano/Theano/issues>`__ to keep track of issues
(however, some old tickets can still be found on
`Assembla <http://www.assembla.com/spaces/theano/tickets>`__).
* Come visit us in Montreal! Most of the developers are students in the LISA_ group at the `University of Montreal`_.
* Come visit us in Montreal! Most developers are students in the LISA_ group at the `University of Montreal`_.
.. toctree::
:maxdepth: 1
......
@@ -1229,7 +1229,7 @@ Linear Algebra
>>> result = batched_dot(first, second)
:note: This is a subset of numpy.einsum, but we do not provide it for now.
But numpy einsum is slower then dot or tensordot:
But numpy einsum is slower than dot or tensordot:
http://mail.scipy.org/pipermail/numpy-discussion/2012-October/064259.html
:param X: left term
......
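The note above says `batched_dot` covers only a subset of `numpy.einsum`. As an illustration in plain NumPy (not Theano), the following sketch shows the batched product that `batched_dot` computes, written both as an einsum and as an explicit loop of `dot` calls:

```python
import numpy as np

# Batched matrix multiply: for each batch index b, result[b] = first[b] @ second[b].
# This is the operation Theano's batched_dot computes; in NumPy it is the einsum below.
rng = np.random.RandomState(0)
first = rng.rand(4, 2, 3)   # 4 stacked (2, 3) matrices
second = rng.rand(4, 3, 5)  # 4 stacked (3, 5) matrices

via_einsum = np.einsum('bij,bjk->bik', first, second)

# Same result with an explicit loop of np.dot calls, one per batch entry.
via_loop = np.array([np.dot(a, b) for a, b in zip(first, second)])

assert np.allclose(via_einsum, via_loop)
```

Both paths produce a `(4, 2, 5)` result; the einsum spelling is the one the note refers to as the more general (but, per the linked thread, often slower) formulation.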
@@ -671,7 +671,7 @@ Test them first, as they are not guaranteed to always provide a speedup."""
if not config.lib.amdlibm and any([exp_float32_op(a.op) and
a.inputs[0].dtype == 'float32'
for i, a in apply_time]):
print " - With the default gcc libm, exp in float32 is slower then in float64! Try Theano flag floatX=float64, or install amdlibm and set the theano flags lib.amdlibm=True"
print " - With the default gcc libm, exp in float32 is slower than in float64! Try Theano flag floatX=float64, or install amdlibm and set the theano flags lib.amdlibm=True"
printed_tip = True
#tip 4
......
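The tip above concerns `exp` on float32 being slower than on float64 under the default gcc libm. A rough, hedged way to check whether this applies on a given machine is to time NumPy's `exp` at both precisions; note that NumPy's inner loops are not necessarily the same libm code path as Theano's generated C code, so treat this only as an indicator:

```python
import timeit
import numpy as np

# Hedged sketch: time np.exp on float32 vs float64 inputs. Whether float32 is
# slower depends on the libm / SIMD backend in use, so results vary by machine.
x32 = np.linspace(-1.0, 1.0, 1_000_000).astype(np.float32)
x64 = x32.astype(np.float64)

t32 = timeit.timeit(lambda: np.exp(x32), number=20)
t64 = timeit.timeit(lambda: np.exp(x64), number=20)
print("float32: %.4fs  float64: %.4fs" % (t32, t64))

# Sanity check: both precisions compute the same function (up to float32 rounding).
assert np.allclose(np.exp(x32), np.exp(x64).astype(np.float32), rtol=1e-5)
```

If float32 turns out slower here, the flags suggested by the tip (`floatX=float64`, or `lib.amdlibm=True` with amdlibm installed) are the remedies the profiler recommends.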
@@ -824,7 +824,7 @@ if 0: # old code still to be ported from ProfileMode
#tip 3
if not config.lib.amdlibm and any([exp_float32_op(a.op) and a.inputs[0].dtype=='float32' for i,a in apply_time]):
print " - With the default gcc libm, exp in float32 is slower then in float64! Try Theano flags floatX=float64 or install amdlibm and set the theano flags lib.amdlibm=True"
print " - With the default gcc libm, exp in float32 is slower than in float64! Try Theano flags floatX=float64 or install amdlibm and set the theano flags lib.amdlibm=True"
#tip 4
for a, t in apply_time.iteritems():
......
@@ -736,7 +736,7 @@ class TensorType(Type):
except AttributeError:
msg = ""
raise TypeError("The numpy.ndarray object is not aligned."
" Theano c code do not support that.",
" Theano C code does not support that.",
msg,
"object shape", data.shape,
"object strides", data.strides)
......
@@ -162,7 +162,7 @@ class ConvOp(OpenMPOp):
#It is an Intel(R) Xeon(R) CPU E5430 @ 2.66GHz. It is computed with theano/tensor/nnet/tests/speed_test_conv.py
# and took 5 minutes to run.
#TODO: we should compute this table for each computer/os as this can change.
# I saw on one computer that the speed with the shape can be slower then without!
# I saw on one computer that the speed with the shape can be slower than without!
# using the real shape and the same dtype could also help.
#unroll_batch, unroll_kern, valid time, full time
......
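The TODO above notes that the unroll table should be recomputed per computer/OS. A hedged sketch of what such a per-machine measurement could look like, using a naive NumPy stand-in for the convolution (this is not Theano's ConvOp or `speed_test_conv.py`, only an illustration of the timing loop one would run per configuration):

```python
import timeit
import numpy as np

def conv2d_valid(image, kern):
    # Naive 'valid'-mode 2-D correlation; a stand-in for the real op so the
    # timing harness below is self-contained (illustration only).
    ih, iw = image.shape
    kh, kw = kern.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kern)
    return out

# Per-machine measurement in the spirit of speed_test_conv.py: time a shape of
# interest; a real table would repeat this over (unroll_batch, unroll_kern)
# candidates and keep the fastest per host.
image = np.random.rand(32, 32)
kern = np.random.rand(5, 5)
t = timeit.timeit(lambda: conv2d_valid(image, kern), number=5)
print("valid conv 32x32 * 5x5: %.4fs" % t)
```

Recording `t` per configuration and per machine is the measurement the TODO asks for; the table in the source hard-codes the results from one Xeon host.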