Commit c7e88a00 authored by Frédéric Bastien

Merge pull request #2150 from delallea/minor

Minor fixes
@@ -576,5 +576,5 @@ the elements of the shape).
 The C code works as the ViewOp. Shape_i has the additional ``i`` parameter
 that you can use with ``%(i)s``.
-In your CHECK_INPUT, you must check that the input have enough ndim to
-be able to get the ith shapes.
+In your CHECK_INPUT, you must check that the input has enough dimensions to
+be able to access the i-th one.
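The CHECK_INPUT requirement this hunk describes can be sketched in plain Python/NumPy (an illustrative analogue, not Theano's actual C template; the function name `shape_i` is hypothetical):

```python
import numpy as np

def shape_i(x, i):
    # Analogue of Shape_i's CHECK_INPUT: before reading the i-th entry
    # of the shape, verify the input has more than i dimensions.
    if x.ndim <= i:
        raise ValueError(
            "input has %d dimension(s); cannot take shape[%d]" % (x.ndim, i))
    return x.shape[i]
```

In the real op the same test would be emitted in C, with ``%(i)s`` substituted for the axis index.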
@@ -140,12 +140,12 @@ default values.
 .. method:: may_share_memory(a, b)

-    Optional to run, but mandatory for DebugMode. Return True if the python
+    Optional to run, but mandatory for DebugMode. Return True if the Python
     objects `a` and `b` could share memory. Return False
-    otherwise. It is used to debug when Ops didn't declare memory
-    aliaing between variables. Can be a static method.
-    It is highly recommande to use and is mandatory for Type in Theano
-    as our buildbot run in DebugMode.
+    otherwise. It is used to debug when Ops did not declare memory
+    aliasing between variables. Can be a static method.
+    It is highly recommended to use and is mandatory for Type in Theano
+    as our buildbot runs in DebugMode.

 For each method, the *default* is what ``Type`` defines
 for you. So, if you create an instance of ``Type`` or an
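The corrected docstring above can be illustrated with a minimal Type sketch; the numpy-backed body and the class name `MyType` are assumptions for illustration (Theano's own tensor types implement this differently):

```python
import numpy as np

class MyType:
    # Sketch of the optional may_share_memory method described above.
    @staticmethod
    def may_share_memory(a, b):
        # Return True if `a` and `b` could alias the same buffer; DebugMode
        # uses this to catch Ops that forgot to declare memory aliasing.
        if isinstance(a, np.ndarray) and isinstance(b, np.ndarray):
            return bool(np.may_share_memory(a, b))
        return a is b

x = np.arange(10)
assert MyType.may_share_memory(x, x[2:])         # a view aliases its base
assert not MyType.may_share_memory(x, x.copy())  # a copy does not
```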
@@ -97,7 +97,7 @@ TODO: Give examples on how to use these things! They are pretty complicated.
 - :func:`GpuDnnConv <theano.sandbox.cuda.dnn.GpuDnnConv>` GPU-only
   convolution using NVIDIA's cuDNN library. To enable it (and
-  other cudnn-acclerated ops), set
+  other cudnn-accelerated ops), set
   ``THEANO_FLAGS=optimizer_including=cudnn`` in your environment.
   This requires that you have cuDNN installed and available. It
   also requires a GPU with compute capability 3.0 or more.
@@ -444,8 +444,8 @@ signature:
 .. note::
-    Not providing the `infer_shape` method cause shapes-related
-    optimization to not work with that op. For example
+    Not providing the `infer_shape` method prevents shape-related
+    optimizations from working with this op. For example
     `your_op(inputs, ...).shape` will need the op to be executed just
     to get the shape.
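The note's point, that a missing `infer_shape` forces execution just to learn a shape, can be mimicked without Theano (a conceptual sketch; `op_compute` and `op_infer_shape` are hypothetical stand-ins for an op's perform and infer_shape methods):

```python
import numpy as np

def op_compute(x):
    # Stand-in for the op's actual (possibly expensive) computation.
    return np.tanh(x) * 2.0

def op_infer_shape(input_shape):
    # Elementwise op: the output shape equals the input shape, so the
    # shape question can be answered without running op_compute at all.
    return input_shape

x = np.zeros((1000, 1000))
assert op_infer_shape(x.shape) == (1000, 1000)  # cheap: no execution
assert op_compute(x).shape == (1000, 1000)      # fallback: run just for shape
```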
@@ -456,7 +456,7 @@ signature:
 .. note::
-    It converts the python function to a callable object that takes as
+    It converts the Python function to a callable object that takes as
     inputs Theano variables that were declared.
@@ -20,10 +20,10 @@ The most frequent way to control the number of threads used is via the
 threads you want to use before starting the Python process. Some BLAS
 implementations support other environment variables.

-To test if you BLAS support OpenMP/Multiple cores, you can use the theano/misc/check_blas.py scripts from the command line like this::
+To test if your BLAS supports OpenMP/multiple cores, you can use the theano/misc/check_blas.py script from the command line like this::

-    OMP_NUM_THREAD=1 python theano/misc/check_blas.py -q
-    OMP_NUM_THREAD=2 python theano/misc/check_blas.py -q
+    OMP_NUM_THREADS=1 python theano/misc/check_blas.py -q
+    OMP_NUM_THREADS=2 python theano/misc/check_blas.py -q
@@ -57,7 +57,7 @@ threads you want to use before starting the Python process. You can
 test this with this command::

-    $OMP_NUM_THREADS=2 python theano/misc/elemwise_openmp_speedup.py
+    OMP_NUM_THREADS=2 python theano/misc/elemwise_openmp_speedup.py
     #The output
     Fast op time without openmp 0.000533s with openmp 0.000474s speedup 1.12
@@ -11,6 +11,6 @@ tutorials/exercises if you need to learn it or only need a refresher:
 * `Python Challenge <http://www.pythonchallenge.com/>`__
 * `Dive into Python <http://diveintopython.net/>`__
 * `Google Python Class <http://code.google.com/edu/languages/google-python-class/index.html>`__
-* `Enthought python course <https://training.enthought.com/?utm_source=academic&utm_medium=email&utm_campaign=EToD-Launch#/courses>`__ (free for academics)
+* `Enthought Python course <https://training.enthought.com/?utm_source=academic&utm_medium=email&utm_campaign=EToD-Launch#/courses>`__ (free for academics)

 We have a tutorial on how :ref:`Python manages its memory <python-memory-management>`.
@@ -435,10 +435,10 @@ AddConfigVar('warn.reduce_join',
             'might have given an incorrect result. '
             'To disable this warning, set the Theano flag '
             'warn.reduce_join to False. The problem was an '
-            'optimization that modify the pattern '
+            'optimization, that modified the pattern '
             '"Reduce{scalar.op}(Join(axis=0, a, b), axis=0)", '
-            'did not checked the reduction axis. So if the '
-            'reduction axis is not 0, you got wrong answer.'),
+            'did not check the reduction axis. So if the '
+            'reduction axis was not 0, you got a wrong answer.'),
             BoolParam(warn_default('0.7')),
             in_c_key=False)
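The pattern named in this warning can be reproduced with NumPy to show why the reduction axis matters (an illustrative sketch of the graph rewrite, not Theano code):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(6, 12).reshape(2, 3)

# When the reduction axis matches the join axis (both 0), rewriting
# Reduce(Join(axis=0, a, b), axis=0) into an elementwise combination
# of the joined parts is valid:
assert np.array_equal(
    np.concatenate([a, b], axis=0).sum(axis=0),
    a.sum(axis=0) + b.sum(axis=0))

# With a different reduction axis the rewrite is wrong, the bug this
# flag warns about: the results do not even have the same shape.
joined = np.concatenate([a, b], axis=0).sum(axis=1)  # shape (4,)
parts = a.sum(axis=1) + b.sum(axis=1)                # shape (2,)
assert joined.shape != parts.shape
```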
@@ -534,7 +534,7 @@ AddConfigVar('openmp_elemwise_minsize',
 AddConfigVar('check_input',
              "Specify if types should check their input in their C code. "
-             "It can be used to speed up compilation, reduce overhead"
-             "(particularly for scalars) and reduce the number of generated C"
+             "It can be used to speed up compilation, reduce overhead "
+             "(particularly for scalars) and reduce the number of generated C "
              "files.",
              BoolParam(True))
@@ -52,7 +52,7 @@ class CudaNdarrayConstant(_operators, Constant):
         try:
             data = str(numpy.asarray(self.data))
         except Exception, e:
-            data = "error while transfering the value:" + str(e)
+            data = "error while transferring the value: " + str(e)
         return "CudaNdarrayConstant{"+data+"}"
 CudaNdarrayType.Constant = CudaNdarrayConstant
@@ -3544,10 +3544,10 @@ def local_reduce_join(node):
             'might have given an incorrect result for this code. '
             'To disable this warning, set the Theano flag '
             'warn.reduce_join to False. The problem was an '
-            'optimization that modify the pattern '
+            'optimization, that modified the pattern '
             '"Reduce{scalar.op}(Join(axis=0, a, b), axis=0)", '
-            'did not checked the reduction axis. So if the '
-            'reduction axis is not 0, you got wrong answer.'
+            'did not check the reduction axis. So if the '
+            'reduction axis was not 0, you got a wrong answer.'
             ))
         return