Commit 1bc6c56f authored by abergeron

Merge pull request #2416 from nouiz/doc

small doc update and interface addition.
@@ -195,11 +195,13 @@ Here is the state of that vision as of December 3rd, 2013 (after Theano release
* How to have `DebugMode` check it? Right now, DebugMode checks the computation non-lazily.
* SIMD parallelism on the CPU comes from the compiler.
* Multi-core parallelism support is limited.
  If the external BLAS implementation supports it,
  many dot products are parallelized via gemm, gemv and ger.
  Element-wise operations are also supported. See :ref:`tut_multi_cores`.
* No multi-node support.
* Most, but not all, NumPy functions/aliases are implemented.

  * https://github.com/Theano/Theano/issues/1080
* Wrapping an existing Python function is easy and documented.
* We know how to separate the shared variable memory
...
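Since the gemm/gemv/ger parallelism above comes from the linked BLAS library rather than from Theano itself, the thread count is typically controlled from the environment. A hedged sketch, assuming an OpenMP-based BLAS such as OpenBLAS and a hypothetical script `my_script.py`:

```shell
# Thread count used by an OpenMP-based BLAS (affects gemm, gemv, ger):
export OMP_NUM_THREADS=4
python my_script.py   # hypothetical script whose dot products now run multi-threaded
```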
@@ -43,7 +43,7 @@ There are also some top-level imports that you might find more convenient:
.. function:: shared(...)

   Alias for :func:`theano.compile.sharedvalue.shared`

.. class:: Param
...
@@ -276,6 +276,15 @@ expression that evaluates to a tensor of same shape and dtype.
.. _using_random_numbers:
.. note::

   The broadcast pattern of a Theano shared variable defaults to False for
   each dimension. A shared variable's size can change over time, so its
   shape cannot be used to infer the broadcastable pattern. If you want a
   different pattern, pass it as a parameter:
   ``theano.shared(..., broadcastable=(True, False))``.
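For intuition about what a broadcastable dimension permits, NumPy's runtime broadcasting is a useful analogue (a NumPy-only sketch: NumPy decides broadcastability from the shape at each operation, whereas Theano must fix the pattern when the shared variable is created, since its shape can change later):

```python
import numpy

# NumPy treats any dimension of size 1 as broadcastable at runtime.
# Theano cannot do this for shared variables, whose shape may change,
# hence the explicit pattern, e.g. broadcastable=(True, False).
a = numpy.zeros((1, 3))   # the first dimension plays the role of broadcastable=True
b = numpy.ones((2, 3))
c = a + b                 # the size-1 dimension is stretched to length 2
print(c.shape)            # (2, 3)
```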
Using Random Numbers
====================
...
@@ -286,6 +286,20 @@ class SqueezeTester(utt.InferShapeTester):
        utt.verify_grad(self.op, [data])

    def test_var_interface(self):
        # Same as test_op, but use a Theano variable's squeeze() method.
        for shape, broadcast in zip(self.shape_list, self.broadcast_list):
            data = numpy.random.random(size=shape).astype(theano.config.floatX)
            variable = tensor.TensorType(theano.config.floatX, broadcast)()
            f = theano.function([variable], variable.squeeze())

            expected = numpy.squeeze(data)
            tested = f(data)

            assert tested.shape == expected.shape
            assert numpy.allclose(tested, expected)


class TestRepeatOp(utt.InferShapeTester):
    def _possible_axis(self, ndim):
...
@@ -585,7 +585,19 @@ class _tensor_py_operators:
    def choose(self, a, choices, out=None, mode='raise'):
        """Construct an array from an index array and a set of arrays to choose from."""
        # Forward the caller's out and mode rather than hard-coding the defaults.
        return theano.tensor.basic.choose(self, a, choices, out=out,
                                          mode=mode)
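The semantics mirror `numpy.choose`: element i of the index array selects which choice array supplies element i of the output. A small illustration using NumPy only, as the reference behavior:

```python
import numpy

index = numpy.array([0, 1, 0, 2])          # which row supplies each column
choices = numpy.array([[10, 11, 12, 13],
                       [20, 21, 22, 23],
                       [30, 31, 32, 33]])
result = numpy.choose(index, choices)      # out[i] = choices[index[i], i]
print(result)                              # [10 21 12 33]
```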
    def squeeze(self):
        """Remove the broadcastable dimensions from the shape of the array.

        Returns the input array, but with the broadcastable dimensions
        removed. The result is always `x` itself or a view into `x`.
        """
        return theano.tensor.extra_ops.squeeze(self)
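As a point of comparison, `numpy.squeeze` removes every dimension of size 1 and returns a view when it can; the Theano method above behaves analogously for broadcastable dimensions (NumPy-only sketch):

```python
import numpy

x = numpy.zeros((1, 3, 1, 2))
y = numpy.squeeze(x)                  # drops every size-1 dimension
print(y.shape)                        # (3, 2)
assert numpy.shares_memory(x, y)      # a view into x, not a copy
```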
class TensorVariable(_tensor_py_operators, Variable):
    """Subclass to add the tensor operators to the basic `Variable` class."""
...