Commit 01532093 authored by Igor Varfolomeev

Update the documentation on gpuarray.sched

Parent 076be887
@@ -482,7 +482,7 @@ import theano and print the config variable, as in:
 The sched parameter passed for context creation to pygpu. With
 CUDA, using "multi" mean using the parameter
-cudaDeviceScheduleYield. This is useful to lower the CPU overhead
+cudaDeviceScheduleBlockingSync. This is useful to lower the CPU overhead
 when waiting for GPU. One user found that it speeds up his other
 processes that was doing data augmentation.
@@ -222,7 +222,7 @@ AddConfigVar('gpuarray.preallocate',
 AddConfigVar('gpuarray.sched',
     """The sched parameter passed for context creation to pygpu.
     With CUDA, using "multi" is equivalent to using the parameter
-    cudaDeviceScheduleYield. This is useful to lower the
+    cudaDeviceScheduleBlockingSync. This is useful to lower the
     CPU overhead when waiting for GPU. One user found that it
     speeds up his other processes that was doing data augmentation.
    """,
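The correspondence this commit documents (sched="multi" means the context is created with cudaDeviceScheduleBlockingSync) can be sketched as a small lookup. The constant values below are the real cudaDeviceSchedule* flags from the CUDA runtime API, but the mapping and helper are a hypothetical illustration, not pygpu's actual implementation:

```python
# Real cudaDeviceSchedule* flag values from the CUDA runtime API.
CUDA_DEVICE_SCHEDULE_AUTO = 0x00
CUDA_DEVICE_SCHEDULE_SPIN = 0x01
CUDA_DEVICE_SCHEDULE_YIELD = 0x02
CUDA_DEVICE_SCHEDULE_BLOCKING_SYNC = 0x04

# Hypothetical sketch of the mapping described in the docs above:
# sched="multi" corresponds to cudaDeviceScheduleBlockingSync, so the
# CPU thread blocks instead of spinning while waiting for the GPU,
# which lowers CPU overhead for other processes on the machine.
SCHED_TO_CUDA_FLAG = {
    "default": CUDA_DEVICE_SCHEDULE_AUTO,
    "multi": CUDA_DEVICE_SCHEDULE_BLOCKING_SYNC,
}

def cuda_flag_for_sched(sched):
    """Return the CUDA scheduling flag for a gpuarray.sched value."""
    return SCHED_TO_CUDA_FLAG[sched]
```

In practice the setting is passed through Theano's configuration, e.g. `THEANO_FLAGS="gpuarray.sched=multi"`, and forwarded to pygpu at context-creation time.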