Commit 48f4b192 authored by Frédéric Bastien

Merge pull request #1812 from delallea/minor

Minor fixes
@@ -558,9 +558,9 @@ default, it will recompile the c code for each process.
 Shape and Shape_i
 =================
-We have 2 generic Ops Shape and Shape_i that return the shape of any
-Theano Variable that have a shape attribute and Shape_i return only of
-the element of the shape.
+We have 2 generic Ops, Shape and Shape_i, that return the shape of any
+Theano Variable that has a shape attribute (Shape_i returns only one of
+the elements of the shape).
 .. code-block:: python
@@ -568,5 +568,5 @@ the element of the shape.
     theano.compile.ops.register_shape_c_code(YOUR_TYPE_CLASS, THE_C_CODE, version=())
     theano.compile.ops.register_shape_i_c_code(YOUR_TYPE_CLASS, THE_C_CODE, version=())
-The c code work as the ViewOp. Shape_i have the additional i parameter
-that you can use with %(i)s.
+The C code works as the ViewOp. Shape_i has the additional ``i`` parameter
+that you can use with ``%(i)s``.
@@ -234,8 +234,8 @@ From here, the easiest way to get started is (this requires setuptools_ or distr
 .. note::
-    "python setup.py develop ..." don't work on Python 3 as it don't call
-    the converter from Python2 code to Python 3 code.
+    "python setup.py develop ..." does not work on Python 3 as it does not call
+    the converter from Python 2 code to Python 3 code.
 This will install a ``.pth`` file in your ``site-packages`` directory that
 tells Python where to look for your Theano installation (i.e. in the
......
@@ -167,7 +167,7 @@ yourself. Here is some code that will help you.
     make FC=gfortran
     sudo make PREFIX=/usr/local/ install
     # Tell Theano to use OpenBLAS.
-    # This work only for the current user.
+    # This works only for the current user.
     # Each Theano user on that computer should run that line.
     echo -e "\n[blas]\nldflags = -lopenblas\n" >> ~/.theanorc
......
@@ -216,7 +216,7 @@ import theano and print the config variable, as in:
     Positive int value, default: 200000.
     This specifies the vectors minimum size for which elemwise ops
-    use openmp, if openmp is enable.
+    use openmp, if openmp is enabled.
 .. attribute:: cast_policy
......
@@ -17,8 +17,8 @@ those operations will run in parallel in Theano.
 The most frequent way to control the number of threads used is via the
 ``OMP_NUM_THREADS`` environment variable. Set it to the number of
-threads you want to use before starting the python process. Some BLAS
-implementations support other enviroment variables.
+threads you want to use before starting the Python process. Some BLAS
+implementations support other environment variables.
 Parallel element wise ops with OpenMP
@@ -35,9 +35,9 @@ tensor size for which the operation is parallelized because for short
 tensors using OpenMP can slow down the operation. The default value is
 ``200000``.
-For simple(fast) operation you can obtain a speed up with very large
-tensors while for more complex operation you can obtain a good speed
-up also for smaller tensor.
+For simple (fast) operations you can obtain a speed-up with very large
+tensors while for more complex operations you can obtain a good speed-up
+also for smaller tensors.
 There is a script ``elemwise_openmp_speedup.py`` in ``theano/misc/``
 which you can use to tune the value of ``openmp_elemwise_minsize`` for
@@ -47,4 +47,4 @@ without OpenMP and shows the time difference between the cases.
 The only way to control the number of threads used is via the
 ``OMP_NUM_THREADS`` environment variable. Set it to the number of threads
-you want to use before starting the python process.
+you want to use before starting the Python process.
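As a sketch of the advice above (an illustration, not part of this diff): the variable must be in the environment before the Python process starts, which from Python itself means setting it for a child process:

```python
# Sketch: OMP_NUM_THREADS must be set *before* the Python process that
# will use OpenMP starts. One way is to launch that process as a child
# with the variable added to its environment.
import os
import subprocess
import sys

env = dict(os.environ, OMP_NUM_THREADS="2")
out = subprocess.check_output(
    [sys.executable, "-c",
     "import os; print(os.environ['OMP_NUM_THREADS'])"],
    env=env,
)
```

Setting `os.environ` after the heavy libraries are already imported is often too late, which is why the docs insist on "before starting the Python process".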
"""This file contain auxiliary Ops, used during the compilation phase """This file contains auxiliary Ops, used during the compilation phase
and Ops building class (:class:`FromFunctionOp`) and decorator and Ops building class (:class:`FromFunctionOp`) and decorator
(:func:`as_op`) that help make new Ops more rapidly. (:func:`as_op`) that help make new Ops more rapidly.
...@@ -374,7 +374,7 @@ class FromFunctionOp(gof.Op): ...@@ -374,7 +374,7 @@ class FromFunctionOp(gof.Op):
Build a basic Theano Op around a function. Build a basic Theano Op around a function.
Since the resulting Op is very basic and is missing most of the Since the resulting Op is very basic and is missing most of the
optional functionality, some optimization may not apply. If you optional functionalities, some optimizations may not apply. If you
want to help, you can supply an infer_shape function that computes want to help, you can supply an infer_shape function that computes
the shapes of the output given the shapes of the inputs. the shapes of the output given the shapes of the inputs.
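The idea behind `as_op`/`FromFunctionOp` can be sketched in plain Python. This is a simplified stand-in, not Theano's API: the names `as_op_sketch` and `FromFunctionOpSketch` are invented here, and the real Op protocol involves `make_node`/`perform` rather than a plain `__call__`.

```python
# Hypothetical simplified sketch of wrapping a function into an
# Op-like object, with an optional infer_shape attached so that
# shape-based reasoning can still be applied.
class FromFunctionOpSketch(object):
    def __init__(self, fn, infer_shape=None):
        self.fn = fn
        self.infer_shape = infer_shape  # maps input shapes -> output shape

    def __call__(self, *inputs):
        return self.fn(*inputs)

def as_op_sketch(infer_shape=None):
    def decorator(fn):
        return FromFunctionOpSketch(fn, infer_shape)
    return decorator

@as_op_sketch(infer_shape=lambda input_shapes: input_shapes[0])
def double(x):
    # elementwise doubling: output has the same shape as the input
    return [2 * v for v in x]
```

Without the `infer_shape` argument the wrapper still computes values, but anything that needs shapes without running the function (as the docstring above notes) cannot optimize it.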
......
@@ -417,8 +417,8 @@ AddConfigVar('compute_test_value_opt',
             in_c_key=False)
 AddConfigVar('unpickle_function',
-             ("Replace unpickled Theano function with None",
-              "This is useful to unpickle old graph that pickled"
+             ("Replace unpickled Theano functions with None. "
+              "This is useful to unpickle old graphs that pickled"
               " them when it shouldn't"),
              BoolParam(True),
              in_c_key=False)
@@ -483,9 +483,9 @@ AddConfigVar('openmp',
             )
 AddConfigVar('openmp_elemwise_minsize',
-             "If OpenMP is enable, this is the minimum size of vector "
-             "for which the openmp parallel for is enable."
-             "Used in element wise ops",
+             "If OpenMP is enabled, this is the minimum size of vectors "
+             "for which the openmp parallelization is enabled "
+             "in element wise ops.",
              IntParam(200000),
              in_c_key=False,
             )
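Assuming the standard `.theanorc` layout with a `[global]` section (an illustration, not taken from this diff), a user could override these config variables like so:

```ini
# Hypothetical ~/.theanorc fragment: enable OpenMP and raise the
# minimum vector size for parallel elemwise ops above the 200000 default.
[global]
openmp = True
openmp_elemwise_minsize = 400000
```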
@@ -1808,7 +1808,7 @@ class GCC_compiler(object):
         # Python3 compatibility: try to cast Py3 strings as Py2 strings
         try:
             src_code = b(src_code)
-        except:
+        except Exception:
             pass
         os.write(fd, src_code)
         os.close(fd)
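The change above narrows a bare `except:` (which would also swallow `KeyboardInterrupt` and `SystemExit`) to `except Exception:`. A stand-alone sketch of the same defensive cast, with an invented helper name `to_bytes` in place of Theano's compat `b()`:

```python
# Hedged sketch: encode text to bytes before os.write(); if the value
# is already bytes (so .encode() raises), keep it unchanged.
def to_bytes(src_code):
    try:
        return src_code.encode("utf-8")
    except Exception:  # already bytes, or otherwise not encodable
        return src_code
```

Catching `Exception` instead of everything keeps Ctrl-C and interpreter shutdown working while still tolerating either `str` or `bytes` input.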
......
@@ -9,7 +9,7 @@ parser = OptionParser(usage='%prog <options>\n Compute time for'
                       ' fast and slow elemwise operations')
 parser.add_option('-N', '--N', action='store', dest='N',
                   default=theano.config.openmp_elemwise_minsize, type="int",
-                  help="Number of vector element")
+                  help="Number of vector elements")
 def runScript(N):
......
@@ -11,7 +11,7 @@ parser = OptionParser(usage='%prog <options>\n Compute time for'
                       ' fast and slow elemwise operations')
 parser.add_option('-N', '--N', action='store', dest='N',
                   default=theano.config.openmp_elemwise_minsize, type="int",
-                  help="Number of vector element")
+                  help="Number of vector elements")
 parser.add_option('--script', action='store_true', dest='script',
                   default=False,
                   help="Run program as script and print results on stdoutput")
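The option definition above can be exercised stand-alone. A sketch with a hard-coded default in place of `theano.config.openmp_elemwise_minsize` (the `200000` value is taken from the docs earlier in this diff):

```python
# Sketch of the -N option from elemwise_openmp_speedup.py, with the
# Theano config default replaced by a constant for illustration.
from optparse import OptionParser

DEFAULT_MINSIZE = 200000  # assumed default, per the docs above

parser = OptionParser(usage='%prog <options>\n Compute time for'
                            ' fast and slow elemwise operations')
parser.add_option('-N', '--N', action='store', dest='N',
                  default=DEFAULT_MINSIZE, type="int",
                  help="Number of vector elements")

options, args = parser.parse_args(['-N', '300000'])
```

With `type="int"`, optparse converts the string argument to an integer before storing it in `options.N`.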
......
@@ -71,9 +71,9 @@ def upcast(dtype, *dtypes):
 def get_scalar_type(dtype):
     """
-    Return an Scalar(dtype) object.
-    This cache objects to save allocation and run time.
+    Return a Scalar(dtype) object.
+    This caches objects to save allocation and run time.
     """
     if dtype not in get_scalar_type.cache:
         get_scalar_type.cache[dtype] = Scalar(dtype=dtype)
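The caching pattern in `get_scalar_type` can be sketched stand-alone; the `Scalar` class here is a trivial stand-in for Theano's:

```python
# Sketch of the function-attribute cache used by get_scalar_type:
# one Scalar instance per dtype, reused across calls.
class Scalar(object):  # stand-in for theano.scalar.Scalar
    def __init__(self, dtype):
        self.dtype = dtype

def get_scalar_type(dtype):
    """Return a Scalar(dtype) object, cached per dtype."""
    if dtype not in get_scalar_type.cache:
        get_scalar_type.cache[dtype] = Scalar(dtype=dtype)
    return get_scalar_type.cache[dtype]

get_scalar_type.cache = {}
```

Storing the cache as an attribute on the function keeps it out of the module namespace while still giving every call access to the same dict, so `get_scalar_type('float64')` always returns the identical object.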
......