Commit 974bd517 authored by slefrancois

Doc corrections for new gpu backend

Parent 007b9db2
@@ -524,8 +524,8 @@ You can also set these options in the .theanorc file's ``[global]`` section:
 Note that:
 
 * If your computer has multiple GPUs and you use 'device=cuda', the driver
-  selects the one to use (usually gpu0).
-* You can use the program nvida-smi to change this policy.
+  selects the one to use (usually cuda0).
+* You can use the program ``nvidia-smi`` to change this policy.
 * You can choose one specific GPU by specifying 'device=cudaX', with X the
   corresponding GPU index (0, 1, 2, ...)
 * By default, when ``device`` indicates preference for GPU computations,
...
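As the hunk header above notes, these options can also live in the ``.theanorc`` file's ``[global]`` section. A minimal sketch (the GPU index ``1`` and the ``floatX`` line are illustrative, not taken from this diff):

.. code-block:: ini

   [global]
   device = cuda1
   floatX = float32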
@@ -103,13 +103,16 @@ import theano and print the config variable, as in:
 .. attribute:: device
 
     String value: either ``'cpu'``, ``'cuda'``, ``'cuda0'``, ``'cuda1'``,
-    ``'opencl0:0'``, or ``'opencl0:1'`` ...
+    ``'opencl0:0'``, ``'opencl0:1'``, ``'gpu'``, ``'gpu0'`` ...
 
     Default device for computations. If ``'cuda*'``, change the default to try
     to move computation to the GPU using CUDA libraries. If ``'opencl*'``,
     the openCL libraries will be used. To let the driver select the device,
-    use ``'cuda'`` or ``'opencl'``. If we are not able to use the GPU,
-    either we fall back on the CPU, or an error is raised, depending on the :attr:`force_device` flag.
+    use ``'cuda'`` or ``'opencl'``. If ``'gpu*'``, the old gpu backend will
+    be used, although users are encouraged to migrate to the new GpuArray
+    backend. If we are not able to use the GPU,
+    either we fall back on the CPU, or an error is raised, depending
+    on the :attr:`force_device` flag.
 
     This flag's value cannot be modified during the program execution.
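As the section's own hunk header says, you can check the effective value by importing theano and printing the config variable. A minimal sketch (the flag value is illustrative):

.. code-block:: python

   # Run as: THEANO_FLAGS=device=cuda python this_script.py
   import theano
   print(theano.config.device)  # e.g. 'cuda'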
@@ -135,11 +138,11 @@ import theano and print the config variable, as in:
 .. attribute:: init_gpu_device
 
     String value: either ``''``, ``'cuda'``, ``'cuda0'``, ``'cuda1'``,
-    ``'opencl0:0'``, or ``'opencl0:1'`` ...
+    ``'opencl0:0'``, ``'opencl0:1'``, ``'gpu'``, ``'gpu0'`` ...
 
     Initialize the gpu device to use.
-    When its value is cuda* or opencl*, the theano flag :attr:`device` must
-    be ``"cpu"``.
+    When its value is ``'cuda*'``, ``'opencl*'`` or ``'gpu*'``, the theano
+    flag :attr:`device` must be ``'cpu'``.
     Unlike :attr:`device`, setting this flag to a specific GPU will not
     try to use this device by default, in particular it will **not** move
     computations, nor shared variables, to the specified GPU.
...
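A minimal sketch combining the two flags as described above, so the GPU is initialized without becoming the default device (``my_script.py`` is a placeholder name):

.. code-block:: none

   $ THEANO_FLAGS=device=cpu,init_gpu_device=cuda0 python my_script.py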
@@ -133,7 +133,7 @@ the GPU object directly. The following code is modified to do just that.
     rng = numpy.random.RandomState(22)
     x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
-    f = function([], tensor.exp(x).transfer('dev0'))
+    f = function([], tensor.exp(x).transfer(None))
     print(f.maker.fgraph.toposort())
     t0 = time.time()
     for i in range(iters):
@@ -148,7 +148,7 @@ the GPU object directly. The following code is modified to do just that.
     else:
         print('Used the gpu')
 
-Here ``tensor.exp(x).transfer('None')`` means "copy ``exp(x)`` to the GPU",
+Here ``tensor.exp(x).transfer(None)`` means "copy ``exp(x)`` to the GPU",
 with ``None`` the default GPU context when not explicitly given.
 For information on how to set GPU contexts, see :ref:`tut_using_multi_gpu`.
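Since the hunks above only show fragments of the tutorial script, here is a self-contained sketch of the surrounding program for context (the ``vlen`` and ``iters`` values are assumptions, not taken from this diff):

.. code-block:: python

   import time

   import numpy
   from theano import function, config, shared, tensor

   vlen = 10 * 30 * 768  # assumed size: 10 x #cores x #threads per core
   iters = 1000          # assumed number of timing iterations

   rng = numpy.random.RandomState(22)
   x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
   # transfer(None) copies exp(x) to the default GPU context
   f = function([], tensor.exp(x).transfer(None))
   print(f.maker.fgraph.toposort())
   t0 = time.time()
   for i in range(iters):
       r = f()
   t1 = time.time()
   print("Looping %d times took %f seconds" % (iters, t1 - t0))
   print("Result is %s" % (numpy.asarray(r),))
   # If any Elemwise op in the graph is not a Gpu op, the CPU was used.
   if numpy.any([isinstance(node.op, tensor.Elemwise) and
                 ('Gpu' not in type(node.op).__name__)
                 for node in f.maker.fgraph.toposort()]):
       print('Used the cpu')
   else:
       print('Used the gpu')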
@@ -158,12 +158,15 @@ The output is
    :hide:
    :options: +ELLIPSIS, +SKIP
 
-   Using device cuda0: ...
-   [GpuElemwise{exp,no_inplace}(<GpuArray<float64>>)]
-   Looping 1000 times took ... seconds
-   Result is ...
+   $ THEANO_FLAGS=device=cuda0 python gpu_tutorial2.py
+   Mapped name None to device cuda0: GeForce GTX 680 (cuDNN version 5004)
+   [GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float64, (False,))>)]
+   Looping 1000 times took 0.088381 seconds
+   Result is [ 1.23178032  1.61879341  1.52278065 ...,  2.20771815  2.29967753
+     1.62323285]
    Used the gpu
 
 .. code-block:: none
 
    $ THEANO_FLAGS=device=cuda0 python gpu_tutorial2.py
@@ -212,7 +215,7 @@ double (float64) or small (less than 32 bits like int16) data types.
 You will get an error at compile time or runtime if this is the case.
 
 By default all inputs will get transferred to GPU. You can prevent an
-input from getting transferred by setting its tag.target attribute to
+input from getting transferred by setting its ``tag.target`` attribute to
 'cpu'.
 
 Complex support is untested and most likely completely broken.
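A minimal sketch of the ``tag.target`` mechanism described above (the variable name is illustrative); the tagged input stays on the CPU while the rest of the graph may run on the GPU:

.. code-block:: python

   import theano.tensor as T

   v = T.vector('v')
   v.tag.target = 'cpu'  # prevent this input from being moved to the GPU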
@@ -225,9 +228,10 @@ Tips for Improving Performance on GPU
 * The GPU backend supports *float64* variables, but they are still slower
   to compute than *float32*. The more *float32*, the better GPU performance
   you will get.
-* Prefer constructors like ``matrix``, ``vector`` and ``scalar`` to
-  ``dmatrix``, ``dvector`` and ``dscalar`` because the former will give
-  you *float32* variables and ignore the type given to ``floatX``.
+* Prefer constructors like ``matrix``, ``vector`` and ``scalar`` (which
+  follow the type set in ``floatX``) to ``dmatrix``, ``dvector`` and
+  ``dscalar``. The latter enforce double precision (*float64* on most
+  machines), which slows down GPU computations on current hardware.
 * Minimize transfers to the GPU device by using ``shared`` variables
   to store frequently-accessed data (see :func:`shared()<shared.shared>`).
   When using the GPU, tensor ``shared`` variables are stored on
...
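A short sketch contrasting the constructor and ``shared``-variable tips above (shapes and values are illustrative):

.. code-block:: python

   import numpy
   import theano
   import theano.tensor as T

   x = T.matrix('x')   # dtype follows theano.config.floatX (float32 recommended)
   y = T.dmatrix('y')  # always float64: slower on current GPUs

   # Keep frequently-accessed data on the GPU via a shared variable.
   data = theano.shared(numpy.ones((1000, 1000), dtype=theano.config.floatX))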