Commit 19f583c6 authored by Pascal Lamblin, committed by GitHub

Merge pull request #5447 from ReyhaneAskari/fix_doc

Fix doc
@@ -19,11 +19,34 @@
The user-friendly constructor is :func:`shared`
-.. attribute:: value
+.. method:: get_value(self, borrow=False, return_internal_type=False)

-    Read/write access to the [non-symbolic] value/data associated with this SharedVariable.
-    Changes to this value will be visible to all functions using this SharedVariable.
+    :param borrow: True to permit returning of an object aliased to internal memory.
+    :type borrow: bool
+    :param return_internal_type: True to permit the returning of an arbitrary type object used
+        internally to store the shared variable.
+    :type return_internal_type: bool
+
+    By default, return a copy of the data. Even with ``borrow=True`` (and
+    ``return_internal_type=False``), a copy may still be returned. For
+    tensors, an ndarray is returned by default, so if the data is on the
+    GPU it will be a copy, but if the data is on the CPU it will be the
+    original data. With ``borrow=True`` and ``return_internal_type=True``,
+    the original data is always returned, not a copy, but it may be a
+    GPU object.
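The copy-versus-alias behavior described above can be sketched with a toy stand-in (a hypothetical `ToyShared` class over plain Python lists, not Theano's actual implementation):

```python
import copy

class ToyShared:
    """Hypothetical stand-in mimicking SharedVariable's borrow semantics."""
    def __init__(self, value):
        self._container = list(value)  # internal storage

    def get_value(self, borrow=False):
        # borrow=False: always hand back a copy, so callers cannot
        # mutate the internal storage by accident.
        # borrow=True: permit returning an object aliased to internal memory.
        return self._container if borrow else copy.copy(self._container)

sv = ToyShared([1.0, 2.0, 3.0])
safe = sv.get_value()              # a copy: mutating it is harmless
alias = sv.get_value(borrow=True)  # aliased to internal memory...
alias[0] = 99.0                    # ...so this changes the shared value
```

Note that even this sketch shows why `borrow=True` is only a permission, not a guarantee: the caller must be prepared for either a copy or an alias.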
.. method:: set_value(self, new_value, borrow=False)
:param new_value: The new value.
:type new_value: A compatible type for this shared variable.
:param borrow: True to use the new_value directly, potentially creating problems
related to aliased memory.
:type borrow: bool
The new value will be seen by all functions using this SharedVariable.
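The aliasing problem that ``borrow=True`` can create on assignment can be illustrated the same way (hypothetical toy class, not Theano's implementation):

```python
import copy

class ToyShared:
    """Hypothetical stand-in mimicking set_value's borrow semantics."""
    def __init__(self, value):
        self._container = list(value)  # internal storage

    def set_value(self, new_value, borrow=False):
        # borrow=False: copy new_value into internal storage (safe).
        # borrow=True: adopt new_value directly; later external mutation
        # of new_value silently changes the shared state (aliased memory).
        self._container = new_value if borrow else copy.copy(new_value)

    def get_value(self):
        return copy.copy(self._container)

data = [1.0, 2.0]
sv = ToyShared([0.0, 0.0])
sv.set_value(data, borrow=True)
data[0] = -1.0  # aliased memory: the shared variable sees this change
```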
.. method:: __init__(self, name, type, value, strict, container=None)
......
@@ -120,6 +120,31 @@ class SharedVariable(Variable):
Changes to this value will be visible to all functions using
this SharedVariable.
Notes
-----
``set_value`` will work in-place on the GPU if
the following conditions are met:
* The destination on the GPU must be c_contiguous.
* The source is on the CPU.
* The old value must have the same dtype as the new value
(which is a given for now, since only float32 is
supported).
* The old and new value must have the same shape.
* The old value is being completely replaced by the new
value (not partially modified, e.g. by replacing some
subtensor of it).
* You change the value of the shared variable via
set_value, not via the .value accessors. You should not
use the .value accessors anyway, since they will soon be
deprecated and removed.
It is also worth mentioning that, for efficient transfer to the GPU,
Theano will make the new data ``c_contiguous``. This can require an
extra copy of the data on the host.
The in-place update of GPU memory works whether ``borrow`` is True or False.
""" """
if borrow: if borrow:
self.container.value = new_value self.container.value = new_value
......
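The conditions listed in the Notes above can be summarized as a single predicate. This is a hypothetical sketch over plain metadata dicts, not Theano's actual decision code:

```python
def can_update_gpu_inplace(old, new, via_set_value=True):
    """Mirror of the in-place conditions listed in the Notes above.

    `old` describes the existing GPU value, `new` the incoming value;
    both are plain dicts (a hypothetical representation).
    """
    return (via_set_value                      # set_value, not the .value accessor
            and old["c_contiguous"]            # GPU destination is c_contiguous
            and new["device"] == "cpu"         # source lives on the CPU
            and old["dtype"] == new["dtype"]   # same dtype (float32 only, for now)
            and old["shape"] == new["shape"])  # same shape: full replacement

old = {"c_contiguous": True, "dtype": "float32",
       "shape": (128, 64), "device": "gpu"}
new = {"c_contiguous": True, "dtype": "float32",
       "shape": (128, 64), "device": "cpu"}
```

Any single failed condition forces a fresh GPU allocation before the old memory is released, which is why these conditions matter when running near the GPU memory limit.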
@@ -79,24 +79,6 @@ class CudaNdarraySharedVariable(_operators, SharedVariable):
""" """
Return the value of this SharedVariable's internal array. Return the value of this SharedVariable's internal array.
Parameters
----------
borrow
Permit the return of internal storage, when used in conjunction with
``return_internal_type=True``.
return_internal_type
True to return the internal ``cuda_ndarray`` instance rather than a
``numpy.ndarray`` (Default False).
By default ``get_value()`` copies from the GPU to a ``numpy.ndarray``
and returns that host-allocated array.
``get_value(False,True)`` will return a GPU-allocated copy of the
original GPU array.
``get_value(True,True)`` will return the original GPU-allocated array
without any copying.
""" """
if return_internal_type or not self.get_value_return_ndarray: if return_internal_type or not self.get_value_return_ndarray:
# return a cuda_ndarray # return a cuda_ndarray
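The three argument combinations described in that docstring can be tabulated with a small helper (a hypothetical summary of the documented behavior, not the implementation):

```python
def gpu_get_value_result(borrow, return_internal_type):
    """Summarize what CudaNdarraySharedVariable.get_value returns
    for each flag combination (descriptive sketch only)."""
    if not return_internal_type:
        # default: copy from the GPU into a host numpy.ndarray
        return "host-allocated numpy.ndarray copy"
    if borrow:
        # get_value(True, True): the original GPU array, no copying
        return "original GPU-allocated array"
    # get_value(False, True): a GPU-allocated copy of the original
    return "GPU-allocated copy"
```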
@@ -111,42 +93,6 @@ class CudaNdarraySharedVariable(_operators, SharedVariable):
""" """
Assign `value` to the GPU-allocated array. Assign `value` to the GPU-allocated array.
Parameters
----------
borrow : bool
``True`` permits reusing `value` itself, ``False`` requires that
this function copies `value` into internal storage.
Notes
-----
Prior to Theano 0.3.1, set_value did not work in-place on the GPU. This
meant that sometimes, GPU memory for the new value would be allocated
before the old memory was released. If you're running near the limits of
GPU memory, this could cause you to run out of GPU memory.
Beginning with Theano 0.3.1, set_value will work in-place on the GPU, if
the following conditions are met:
* The destination on the GPU must be c_contiguous.
* The source is on the CPU.
* The old value must have the same dtype as the new value
(which is a given for now, since only float32 is
supported).
* The old and new value must have the same shape.
* The old value is being completely replaced by the new
value (not partially modified, e.g. by replacing some
subtensor of it).
* You change the value of the shared variable via
set_value, not via the .value accessors. You should not
use the .value accessors anyway, since they will soon be
deprecated and removed.
It is also worth mentioning that, for efficient transfer to the GPU,
Theano will make the new data ``c_contiguous``. This can require an
extra copy of the data on the host.
The in-place update of GPU memory works whether ``borrow`` is True or False.
""" """
if not borrow: if not borrow:
# TODO: check for cuda_ndarray type # TODO: check for cuda_ndarray type
......