Commit d06feefd authored by Frederic Bastien

Doc init_gpu_device and make it not move float32 shared variables to the gpu.

Parent f8768d67
@@ -95,13 +95,15 @@ Config Attributes
.. attribute:: device
String value: either 'cpu', gpu, 'gpu0', 'gpu1', 'gpu2', or 'gpu3'
String value: either 'cpu', 'gpu', 'gpu0', 'gpu1', 'gpu2', or 'gpu3'
Default device for computations. If gpu*, change the default to try to move computation to it and to put shared variable of float32 on it.
Default device for computations. Setting this to a gpu* string makes
theano try, by default, to move computation to that device and to put
float32 shared variables on it. 'gpu' lets the driver select the gpu to
use, while a specific string such as 'gpu0' makes theano try to use that
device. If we are not able to use the gpu, we fall back to the cpu.
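For context (not part of this diff), the device flag is normally set through Theano's standard configuration mechanism, e.g. the THEANO_FLAGS environment variable; a sketch, where the script name is a placeholder:

```shell
# Ask theano to default computation and float32 shared variables to gpu0;
# if the gpu cannot be used (and force_device is off), theano falls back
# to the cpu. 'my_script.py' is a hypothetical user script.
THEANO_FLAGS='device=gpu0,floatX=float32' python my_script.py
```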
.. attribute:: force_device
@@ -112,6 +114,16 @@ Config Attributes
If True, we raise an error if we can't use the specified device. If False, we fall back to the cpu.
It takes precedence over the device flag.
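The fallback rule described above can be sketched as a small pure function (a hypothetical helper named `pick_device`, not Theano code):

```python
def pick_device(requested, force_device, gpu_available):
    """Sketch of the rule above: fall back to the cpu when the gpu
    cannot be used, unless force_device demands an error instead."""
    if requested.startswith('gpu') and not gpu_available:
        if force_device:
            raise EnvironmentError(
                "You forced use of device %s, but it is unusable"
                % requested)
        return 'cpu'
    return requested
```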
.. attribute:: init_gpu_device
String value: either '', 'gpu0', 'gpu1', 'gpu2', or 'gpu3'
Initialize the gpu device to use. This does not change anything else: by
default we still do computation on the cpu and keep shared variables in
cpu memory.
When its value is gpu*, the theano flag device must be 'cpu'.
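The constraint in the last line can be sketched as a small validation helper (hypothetical name `check_flags`, not part of Theano):

```python
def check_flags(device, init_gpu_device):
    """Sketch of the consistency rule described above: init_gpu_device
    may only name a gpu when the device flag is left at 'cpu'."""
    if init_gpu_device.startswith('gpu') and device != 'cpu':
        raise ValueError("init_gpu_device can only be used when "
                         "device=='cpu', got device=%r" % device)
    return True
```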
.. attribute:: floatX
String value: either 'float64' or 'float32'.
......
@@ -18,17 +18,20 @@ AddConfigVar('floatX',
#gpu means let the driver select the gpu. Needed in case of gpus in exclusive mode.
#gpuX means use the gpu number X.
AddConfigVar('device',
"Default device for computations. If gpu, try to move computation to it when possible.",
EnumStr('cpu', 'gpu',*['gpu%i'%i for i in range(4)])
"Default device for computations. If gpu*, change the default to try to move computation to it and to put float32 shared variables on it.",
EnumStr('cpu', 'gpu', *['gpu%i' % i for i in range(4)]),
allow_override=False
)
AddConfigVar('init_gpu_device',
"Gpu device to use for computations, but don't automatically try to move the computation to this device. Usefull to run the test on a specific gpu.",
EnumStr('', *['gpu%i'%i for i in range(4)])
"Initialize the gpu device to use. This does not change the default behavior: we do not try by default to move computation to it, and we do not put float32 shared variables on it by default. Useful to run the tests on a specific gpu.",
EnumStr('', *['gpu%i' % i for i in range(4)]),
allow_override=False
)
AddConfigVar('force_device',
"Raise an error if we can't use the specified device",
BoolParam(False),
allow_override=False
)
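To make the registration calls above concrete, here is a minimal sketch of what an EnumStr-style parameter does (an illustrative stand-in, not Theano's actual implementation):

```python
class EnumStr(object):
    """Illustrative stand-in for theano's enumerated string parameter:
    the first argument is the default value, the rest are the other
    values the flag may take."""
    def __init__(self, default, *options):
        self.default = default
        self.all = (default,) + options

    def filter(self, val):
        # Reject anything outside the declared set of values.
        if val not in self.all:
            raise ValueError('Invalid value %r, expected one of %r'
                             % (val, self.all))
        return val

# Mirrors the 'device' registration: 'cpu' (default), 'gpu', 'gpu0'..'gpu3'.
device_param = EnumStr('cpu', 'gpu', *['gpu%i' % i for i in range(4)])
```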
......
@@ -130,7 +130,8 @@ if cuda_available:
import cuda_ndarray
def use(device, force=False, move_to_gpu_automatically = True):
def use(device, force=False, default_to_move_computation_to_gpu=True,
        move_shared_float32_to_gpu=True):
global cuda_enabled, cuda_initialization_error_message
if force and not cuda_available and device.startswith('gpu'):
raise EnvironmentError("You forced use of device %s, but CUDA initialization failed "
@@ -160,7 +161,8 @@ def use(device, force=False, move_to_gpu_automatically = True):
#warning To let people see that the gpu will be used.
_logger.warn("We let the driver select the gpu device to use")
handle_shared_float32(True)
if move_shared_float32_to_gpu:
handle_shared_float32(True)
use.device_number = device
cuda_enabled = True
except (EnvironmentError, ValueError), e:
@@ -172,7 +174,8 @@ def use(device, force=False, move_to_gpu_automatically = True):
elif use.device_number != device:
_logger.warning("WARNING: ignoring call to use(%s), GPU number %i is already in use." %(str(device), use.device_number))
if move_to_gpu_automatically:
if default_to_move_computation_to_gpu:
optdb.add_tags('gpu',
'fast_run',
'inplace')
@@ -203,5 +206,6 @@ def handle_shared_float32(tf):
if config.device.startswith('gpu'):
use(config.device, config.force_device)
elif config.init_gpu_device:
print "Will init the gpu to use a specific gpu device. This don't move automatically cpu code to gpu. For that try the theano flags device."
use(config.init_gpu_device, config.force_device, False)
assert config.device == "cpu", "The theano flag init_gpu_device can only be used when the theano flag device=='cpu'"
print "Will init the gpu to use a specific gpu device. This does not, by default, move computation or float32 shared variables to this device. For that, use the theano flag device."
use(config.init_gpu_device, config.force_device, False, False)
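Putting the two flags together, the import-time dispatch in this hunk can be sketched as a pure function (a hypothetical helper returning the positional arguments that would be handed to use()):

```python
def init_device(device, init_gpu_device, force_device):
    """Sketch of the dispatch above: returns (device, force,
    default_to_move_computation_to_gpu, move_shared_float32_to_gpu),
    or None when no gpu initialization happens."""
    if device.startswith('gpu'):
        # device=gpu*: move computation and float32 shared variables.
        return (device, force_device, True, True)
    elif init_gpu_device:
        # init_gpu_device only initializes the device; nothing is
        # moved to it by default.
        assert device == "cpu", \
            "init_gpu_device can only be used when device=='cpu'"
        return (init_gpu_device, force_device, False, False)
    return None
```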