Commit cdfbcbfa authored by Simon Lefrancois, committed by GitHub

Merge pull request #5231 from chinnadhurai/ccw_5186

Clarify GPU memory pre-allocation in new and old backend
@@ -162,6 +162,12 @@ but requires that all nodes in the graph have a C implementation:

    f = function([x], (x + 1.) * 2, mode=theano.Mode(linker='c'))
    f(10.)
New GPU backend using libgpuarray
---------------------------------

The new Theano GPU backend (:ref:`gpuarray`) uses ``config.gpuarray.preallocate`` for GPU memory allocation.
Likewise, the old backend uses ``config.lib.cnmem`` for GPU memory allocation.
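Either flag is typically supplied through the ``THEANO_FLAGS`` environment variable before Theano is imported; a minimal sketch (the ``0.4`` fraction is an arbitrary illustrative value):

```python
import os

# Set the preallocation fraction for the new gpuarray backend.
# THEANO_FLAGS must be set before `import theano`, since config flags
# are read at import time. 0.4 here means 40% of total GPU memory
# (an arbitrary example value).
os.environ["THEANO_FLAGS"] = "device=cuda,gpuarray.preallocate=0.4"
```

The same mechanism works for the old backend, e.g. ``THEANO_FLAGS=device=gpu,lib.cnmem=0.4``.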
Related Projects
----------------

...
@@ -416,32 +416,36 @@ import theano and print the config variable, as in:

`amdlibm <http://developer.amd.com/cpu/libraries/libm/>`__
library, which is faster than the standard libm.
.. attribute:: config.gpuarray.preallocate

    Float value

    Default: 0 (Preallocation of size 0, only cache the allocation)

    Controls the preallocation of memory with the gpuarray backend.

    The value represents the start size (either in MB or the fraction
    of total GPU memory) of the memory pool. If more memory is needed,
    Theano will try to obtain more, but this can cause memory
    fragmentation.

    A negative value will completely disable the allocation cache.
    This can have a severe impact on performance and so should not be
    done outside of debugging.

    * < 0: disabled
    * 0 <= N <= 1: use this fraction of the total GPU memory (clipped to .95 for driver memory).
    * > 1: use this number in megabytes (MB) of memory.

    .. note::

        This value allocates GPU memory ONLY when using the new GPU
        backend (:ref:`gpuarray`). For the old backend, please see
        ``config.lib.cnmem``.

    .. note::

        This could cause memory fragmentation. So if you have a memory
        error while using the cache, try to allocate more memory at
        the start or disable it. If you try this, report your result
        on :ref:`theano-dev`.
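The value ranges above can be summarized in a small helper. This is an illustrative Python sketch of the documented semantics (the function name and the 8192 MB card are hypothetical), not Theano's actual allocator code:

```python
def interpret_preallocate(value, total_gpu_mb):
    """Illustrative sketch of config.gpuarray.preallocate semantics.

    Returns the starting pool size in MB, or None when the allocation
    cache is completely disabled (negative values).
    """
    if value < 0:
        return None                      # cache disabled entirely
    if value <= 1:
        # fraction of total GPU memory, clipped to 95% to leave room
        # for driver memory
        return min(value, 0.95) * total_gpu_mb
    return value                         # interpreted directly as MB

# On a hypothetical 8 GB (8192 MB) card:
print(interpret_preallocate(-1, 8192))   # None: cache disabled
print(interpret_preallocate(0.5, 8192))  # 4096.0: half of the card
print(interpret_preallocate(512, 8192))  # 512: absolute size in MB
```

Note that a value of 0 yields a size-0 starting pool, matching the default: nothing is preallocated, but allocations are still cached.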
@@ -452,31 +456,38 @@ import theano and print the config variable, as in:

    automatically to get more memory. But this can cause
    fragmentation, see note above.
.. attribute:: config.lib.cnmem

    .. note::

        This value allocates GPU memory ONLY when using the old backend
        (:ref:`cuda`) and has no effect when the GPU backend is
        :ref:`gpuarray`. For the new backend, please see
        ``config.gpuarray.preallocate``.

    Float value: >= 0

    Controls the use of `CNMeM <https://github.com/NVIDIA/cnmem>`_ (a
    faster CUDA memory allocator). Applies to the old GPU backend
    :ref:`cuda` up to Theano release 0.8.

    The CNMeM library is included in Theano and does not need to be
    separately installed.

    The value represents the start size (either in MB or the fraction
    of total GPU memory) of the memory pool. If more memory is needed,
    Theano will try to obtain more, but this can cause memory
    fragmentation.

    * 0: not enabled.
    * 0 < N <= 1: use this fraction of the total GPU memory (clipped to .95 for driver memory).
    * > 1: use this number in megabytes (MB) of memory.

    Default: 0

    .. note::

        This could cause memory fragmentation. So if you have a
        memory error while using CNMeM, try to allocate more memory at
        the start or disable it. If you try this, report your result
        on :ref:`theano-dev`.
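``config.lib.cnmem`` interprets its value slightly differently from ``config.gpuarray.preallocate``: 0 means CNMeM is not enabled at all, rather than a size-0 pool, and negative values are not accepted. An illustrative sketch of the documented ranges (the helper name and card size are hypothetical, not Theano's actual code):

```python
def interpret_cnmem(value, total_gpu_mb):
    """Illustrative sketch of config.lib.cnmem value semantics.

    Returns the CNMeM starting pool size in MB, or None when CNMeM
    is not enabled (value == 0).
    """
    if value < 0:
        raise ValueError("config.lib.cnmem must be >= 0")
    if value == 0:
        return None                      # CNMeM not enabled
    if value <= 1:
        # fraction of total GPU memory, clipped to 95% for driver memory
        return min(value, 0.95) * total_gpu_mb
    return value                         # absolute size in MB

# On a hypothetical 8 GB (8192 MB) card:
print(interpret_cnmem(0, 8192))     # None: CNMeM disabled
print(interpret_cnmem(1, 8192))     # 7782.4: full fraction, clipped to .95
print(interpret_cnmem(2048, 8192))  # 2048: absolute size in MB
```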
...
@@ -144,11 +144,11 @@ Could speed up and lower memory usage:

Could raise memory usage but speed up computation:
- :attr:`config.gpuarray.preallocate` =1 # Preallocates the GPU memory for the new backend (:ref:`gpuarray`)
  and then manages it in a smart way. Does not raise memory usage much, but if
  you are at the limit of GPU memory available you might need to specify a
  lower value. GPU only.
- :attr:`config.lib.cnmem` =1 # Equivalent on the old backend (:ref:`cuda`). GPU only.
- :attr:`config.allow_gc` =False
- :attr:`config.optimizer_excluding` =low_memory , GPU only for now.
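The flags in this list can be combined into a single ``THEANO_FLAGS`` string; a sketch (flag names taken from the list above, values illustrative and speed-oriented):

```python
# Combine the speed-oriented flags above into one THEANO_FLAGS string
# (trades higher GPU memory usage for faster computation).
flags = ",".join([
    "device=cuda",                      # new gpuarray backend
    "gpuarray.preallocate=1",           # preallocate (clipped) GPU memory
    "allow_gc=False",                   # keep intermediate buffers allocated
    "optimizer_excluding=low_memory",   # GPU only for now
])
print(flags)
# device=cuda,gpuarray.preallocate=1,allow_gc=False,optimizer_excluding=low_memory
```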
...
@@ -64,6 +64,10 @@ While all types of devices are supported if using OpenCL, for the

remainder of this section, whatever compute device you are using will
be referred to as GPU.
.. note::

    The GpuArray backend uses ``config.gpuarray.preallocate`` for GPU memory
    allocation. For the old backend, please see ``config.lib.cnmem``.
.. warning::

    If you want to use the new GpuArray backend, make sure to have the
@@ -283,6 +287,9 @@ Tips for Improving Performance on GPU

a value to `assert_no_cpu_op` flag, i.e. `warn`, for warning, `raise` for
raising an error or `pdb` for putting a breakpoint in the computational
graph if there is a CPU Op.
* Please note that ``config.lib.cnmem`` and ``config.gpuarray.preallocate``
  control GPU memory allocation for the old (:ref:`cuda`) and new
  (:ref:`gpuarray`) Theano backends, respectively.
.. _gpu_async:

@@ -409,8 +416,8 @@ We provide installation instructions for :ref:`Linux <gpu_linux>`,

The old CUDA backend can be activated using the flags ``device=gpu`` or
``device=gpu{0,1,...}``
.. note::

    * The CUDA backend uses ``config.lib.cnmem`` for GPU memory allocation.
      For the new backend (:ref:`gpuarray`), please see ``config.gpuarray.preallocate``.
    * Only 32 bit floats are supported.
    * ``Shared`` variables with *float32* dtype are by default moved to the GPU memory space.
...