Commit a081a56f authored by abergeron

Merge pull request #2912 from nouiz/doc

Doc
@@ -325,9 +325,10 @@ Here's a brief example. The setup code is:
Here, 'rv_u' represents a random stream of 2x2 matrices of draws from a uniform
distribution. Likewise, 'rv_n' represents a random stream of 2x2 matrices of
draws from a normal distribution. The distributions that are implemented are
defined in :class:`RandomStreams` and, at a lower level,
in :ref:`raw_random<libdoc_tensor_raw_random>`. They only work on the CPU.
See `Other Implementations`_ for a GPU version.
.. TODO: repair the latter reference on RandomStreams
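The behavior described above, a seeded generator whose internal state advances with each draw, can be sketched with NumPy as a stand-in for Theano's RandomStreams (this is not the RandomStreams API itself, and the seed is arbitrary):

```python
import numpy as np

# CPU-only stand-in for the RandomStreams behavior described above:
# a seeded generator produces 2x2 uniform and normal draws, and its
# internal state advances automatically with each call.
rng = np.random.RandomState(234)   # arbitrary seed, for illustration only
u1 = rng.uniform(size=(2, 2))      # one 2x2 draw from the uniform stream
u2 = rng.uniform(size=(2, 2))      # state has advanced: a different draw
n1 = rng.normal(size=(2, 2))       # a 2x2 draw from the normal stream
```

Re-seeding a fresh generator with the same seed reproduces the same first draw, which is the property the stream relies on.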
Now let's use these objects. If we call f(), we get random uniform numbers.
The internal state of the random number generator is automatically updated,
@@ -459,10 +460,15 @@ Other Random Distributions
There are :ref:`other distributions implemented <libdoc_tensor_raw_random>`.
.. _example_other_random:
Other Implementations
---------------------
There are two other implementations, based on :class:`CURAND
<theano.sandbox.cuda.rng_curand>` and :ref:`MRG31k3p <libdoc_rng_mrg>`.
RandomStreams only works on the CPU, MRG31k3p works on both the CPU and
the GPU, and CURAND only works on the GPU.
.. _logistic_regression:
@@ -744,3 +744,9 @@ efficiency over the basic solution that is asked here, the two operations would
have to be jointly optimized explicitly in the code.)
Modify and execute to support *stride* (i.e. to avoid constraining the input to be *C-contiguous*).
Note
----
See :ref:`example_other_random` for how to handle random numbers
on the GPU.
@@ -1196,28 +1196,6 @@ def cast(x, dtype):
##########################
@constructor
def old_shape(a):
"""
Return the shape tuple of a TensorType Variable.
It may be either symbolic or nonsymbolic.
If the shape of the expression is not known at graph-construction time,
then a symbolic lvector will be returned, corresponding to the actual
shape at graph-execution time.
"""
va = as_tensor_variable(a)
# print 'HERE', va, va.type
if None in va.type.shape:
# Some shape components are unknown at this time
return _shape(va)
else:
# all shape components are known at compile time, so we return
# a tuple directly. This tuple is like the numpy.ndarray.shape tuple.
return va.type.shape
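The deleted `old_shape` above dispatches on whether every shape component is statically known. A minimal Python sketch of that decision (`shape_of` is a hypothetical stand-in; `ndarray.shape` plays the role of the symbolic shape resolved at graph-execution time):

```python
import numpy as np

def shape_of(static_shape, arr):
    # Hypothetical stand-in for old_shape's dispatch: static_shape plays
    # the role of va.type.shape, and arr.shape the role of the symbolic
    # shape that would only be known at graph-execution time.
    if None in static_shape:
        # Some shape components are unknown at graph-construction time,
        # so fall back to the runtime shape.
        return arr.shape
    # All components are known, so return a tuple directly, like
    # numpy.ndarray.shape.
    return tuple(static_shape)
```

For example, `shape_of((2, None), np.zeros((2, 5)))` resolves the unknown component at "execution" time and yields `(2, 5)`.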
class MaxAndArgmax(Op):
"""Calculate the max and argmax over a given axis or over all axes.
"""
@@ -3306,13 +3284,15 @@ def addbroadcast(x, *axes):
Input theano tensor.
axis : an int or an iterable object such as list or tuple of int values
The dimension along which the tensor x should be
broadcastable. If the length of x along these
dimensions is not 1, a ValueError will be raised.
returns:
----------
a theano tensor, which is broadcastable along the specified dimensions.
"""
rval = Rebroadcast(*[(axis, True) for axis in axes])(x)
return theano.tensor.opt.apply_rebroadcast_opt(rval)
@@ -3334,13 +3314,15 @@ def unbroadcast(x, *axes):
Input theano tensor.
axis : an int or an iterable object such as list or tuple of int values
The dimension along which the tensor x should be
unbroadcastable. If the length of x along these
dimensions is not 1, a ValueError will be raised.
returns:
----------
a theano tensor, which is unbroadcastable along the specified dimensions.
"""
rval = Rebroadcast(*[(axis, False) for axis in axes])(x)
return theano.tensor.opt.apply_rebroadcast_opt(rval)
@@ -3363,6 +3345,7 @@ def patternbroadcast(x, broadcastable):
Input theano tensor.
broadcastable : an iterable object such as list or tuple of bool values
A set of boolean values indicating whether a dimension
should be broadcastable or not.
If the length of x along these dimensions is not 1,
@@ -5468,8 +5451,6 @@ class Choose(Op):
"This case is not implemented yet. "
"To make it work, explicitly add dimensions "
"of size one for dimensions that will be broadcasted")
assert isinstance(node.inputs[1],
theano.typed_list.TypedListVariable)
bcast = [False] * out_ndim
for idx, (b1, b2) in enumerate(
@@ -456,6 +456,7 @@ def bincount(x, weights=None, minlength=None, assert_nonneg=False):
:param assert_nonneg: A flag that inserts an assert_op to check if
every input x is nonnegative.
Optional.

.. versionadded:: 0.6
"""
compatible_type = ('int8', 'int16', 'int32', 'int64',
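Theano's `bincount` mirrors `numpy.bincount`, so the `weights` and `minlength` semantics documented above can be illustrated with NumPy directly:

```python
import numpy as np

x = np.array([0, 1, 1, 3])
# Without weights: counts occurrences of each index value; minlength
# pads the result with trailing zeros up to the requested length.
counts = np.bincount(x, minlength=5)               # [1, 2, 0, 1, 0]

# With weights: sums the weight at each index instead of counting.
w = np.array([0.5, 1.0, 1.5, 2.0])
weighted = np.bincount(x, weights=w, minlength=5)  # [0.5, 2.5, 0.0, 2.0, 0.0]
```

The `assert_nonneg` flag exists because, as with NumPy, negative index values are invalid input.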
@@ -521,7 +522,7 @@ def compress(condition, x, axis=None):
:param x: Input data, tensor variable
:param condition: 1 dimensional array of non-zero and zero values
corresponding to indices of slices along a selected axis

:return: `x` with selected slices
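The "selected slices" behavior documented above matches `numpy.compress`, which Theano's version wraps; a short NumPy illustration:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
# [[0, 1, 2],
#  [3, 4, 5]]

# Non-zero entries of condition select slices along the given axis:
# here, keep columns 0 and 2, drop column 1.
kept = np.compress([1, 0, 1], x, axis=1)
# [[0, 2],
#  [3, 5]]
```

With `axis=None` the input is flattened first, again matching NumPy.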