testgroup / pytensor · Commits
Commit e2ab154a, authored Feb 24, 2014 by Frédéric Bastien

Merge pull request #28 from abergeron/doc_fix

Fix numerous grammar, syntax and stylistic problems.

Parents: ecfbdfd4, 911dbda9
Showing 8 changed files with 199 additions and 202 deletions.
- doc/library/sparse/index.txt (+42, -42)
- doc/library/tensor/raw_random.txt (+98, -97)
- doc/tutorial/examples.txt (+15, -14)
- doc/tutorial/multi_cores.txt (+24, -29)
- theano/sparse/basic.py (+7, -6)
- theano/tensor/nnet/Conv3D.py (+5, -5)
- theano/tensor/nnet/conv3d2d.py (+2, -2)
- theano/tensor/raw_random.py (+6, -7)
doc/library/sparse/index.txt
...
@@ -11,20 +11,20 @@ In the tutorial section, you can find a :ref:`sparse tutorial
The sparse submodule is not loaded when we import Theano. You must
import ``theano.sparse`` to enable it.

The sparse module provides the same functionality as the tensor
module. The difference lies under the covers because sparse matrices
do not store data in a contiguous array. Note that there are no GPU
implementations for sparse matrices in Theano. The sparse module has
been used in:

- NLP: Dense linear transformations of sparse vectors.
- Audio: Filterbank in the Fourier domain.

Compressed Sparse Format
========================

This section tries to explain how information is stored for the two
sparse formats of SciPy supported by Theano. There are more formats
that can be used with SciPy and some documentation about them may be
found `here
<http://deeplearning.net/software/theano/sandbox/sparse.html>`_.
...
@@ -50,14 +50,14 @@ attributes: ``data``, ``indices``, ``indptr`` and ``shape``.
CSC Matrix
----------

In the *Compressed Sparse Column* format, ``indices`` stands for
indexes inside the column vectors of the matrix and ``indptr`` tells
where the column starts in the ``data`` and in the ``indices``
attributes. ``indptr`` can be thought of as giving the slice which
must be applied to the other attribute in order to get each column of
the matrix. In other words, ``slice(indptr[i], indptr[i+1])``
corresponds to the slice needed to find the i-th column of the matrix
in the ``data`` and ``indices`` fields.

The following example builds a matrix and returns its columns. It
prints the i-th column, i.e. a list of indices in the column and their
corresponding value in the second list.
...
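The doctest itself is elided from this hunk; as a sketch of the ``indptr`` slicing described above, here is a plain SciPy illustration (the concrete ``data``/``indices``/``indptr`` values are invented for the example, not taken from the file):

```python
import numpy as np
import scipy.sparse as sp

# CSC: indptr[i]:indptr[i+1] slices column i out of data/indices.
data = np.asarray([7, 8, 9])
indices = np.asarray([0, 1, 2])     # row index of each stored value
indptr = np.asarray([0, 1, 2, 3])   # one stored value per column
m = sp.csc_matrix((data, indices, indptr), shape=(3, 3))

for i in range(3):
    sl = slice(m.indptr[i], m.indptr[i + 1])
    # prints the row indices and values of column i
    print(m.indices[sl], m.data[sl])
```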
@@ -84,18 +84,18 @@ corresponding value in the second list.
CSR Matrix
----------

In the *Compressed Sparse Row* format, ``indices`` stands for indexes
inside the row vectors of the matrix and ``indptr`` tells where the
row starts in the ``data`` and in the ``indices``
attributes. ``indptr`` can be thought of as giving the slice which
must be applied to the other attribute in order to get each row of the
matrix. In other words, ``slice(indptr[i], indptr[i+1])`` corresponds
to the slice needed to find the i-th row of the matrix in the ``data``
and ``indices`` fields.

The following example builds a matrix and returns its rows. It prints
the i-th row, i.e. a list of indices in the row and their
corresponding value in the second list.
>>> data = np.asarray([7, 8, 9])
>>> indices = np.asarray([0, 1, 2])
...
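The rest of the doctest (including ``indptr``) is truncated in this hunk; a SciPy-only sketch of the row slicing it describes follows, with an illustrative ``indptr`` of our own choosing:

```python
import numpy as np
import scipy.sparse as sp

data = np.asarray([7, 8, 9])
indices = np.asarray([0, 1, 2])
indptr = np.asarray([0, 2, 3, 3])  # row 0 -> 2 entries, row 1 -> 1, row 2 -> 0
m = sp.csr_matrix((data, indices, indptr), shape=(3, 3))

for i in range(3):
    sl = slice(m.indptr[i], m.indptr[i + 1])
    # prints the column indices and values of row i
    print(m.indices[sl], m.data[sl])
```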
@@ -120,7 +120,7 @@ List of Implemented Operations
- Moving from and to sparse

  - :class:`DenseFromSparse <theano.sparse.basic.DenseFromSparse>` and ``dense_from_sparse``.
    Both grads are implemented. Structured by default.
  - :class:`SparseFromDense <theano.sparse.basic.SparseFromDense>` and ``csr_from_dense``, ``csc_from_dense``.
    The grad implemented is structured.
  - Theano SparseVariable objects have a method ``toarray()`` that is the same as ``dense_from_sparse``.
...
@@ -201,16 +201,17 @@ List of Implemented Operations
  - One of the inputs must be sparse, the other sparse or dense.
  - The grad implemented is regular.
  - No C code for perform and no C code for grad.
  - Returns a dense for perform and a dense for grad.

- :class:`StructuredDot <theano.sparse.basic.StructuredDot>`
  and :func:`structured_dot <theano.sparse.basic.structured_dot>`.

  - The first input is sparse, the second can be sparse or dense.
  - The grad implemented is structured.
  - C code for perform and grad.
  - When not using C code, it returns a sparse output if both
    inputs are sparse and a dense one if one of the inputs is
    dense.
  - Returns a sparse grad for sparse inputs and dense grad for
    dense inputs.

- :class:`TrueDot <theano.sparse.basic.TrueDot>` and
  :func:`true_dot <theano.sparse.basic.true_dot>`.
...
@@ -218,39 +219,38 @@ List of Implemented Operations
  - The first input is sparse, the second can be sparse or dense.
  - The grad implemented is regular.
  - No C code for perform and no C code for grad.
  - Returns a Sparse.
  - The gradient returns a Sparse for sparse inputs and by
    default a dense for dense inputs. The parameter
    ``grad_preserves_dense`` can be set to False to return a
    sparse grad for dense inputs.

- :class:`SamplingDot <theano.sparse.basic.SamplingDot>` and
  ``sampling_dot``.

  - Both inputs must be dense.
  - The grad implemented is structured for `p`.
  - Sample of the dot and sample of the gradient.
  - C code for perform but not for grad.
  - Returns sparse for perform and grad.

- :class:`Usmm <theano.sparse.basic.Usmm>` and ``usmm``.

  - You *shouldn't* insert this op yourself!
  - There is an optimization that transforms a
    :class:`Dot <theano.sparse.basic.Dot>` to ``Usmm`` when possible.
  - This op is the equivalent of gemm for sparse dot.
  - There is no grad implemented for this op.
  - One of the inputs must be sparse, the other sparse or dense.
  - Returns a dense from perform.

- Slice Operations

  - sparse_variable[N, N], returns a tensor scalar.
    There is no grad implemented for this operation.
  - sparse_variable[M:N, O:P], returns a sparse matrix.
    There is no grad implemented for this operation.
  - Sparse variables don't support [M, N:O] and [M:N, O] as we don't
    support sparse vectors and returning a sparse matrix would break
    the numpy interface. Use [M:M+1, N:O] and [M:N, O:O+1] instead.

- :class:`Diag <theano.sparse.basic.Diag>` and ``diag``.
  The grad implemented is regular.
...
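The recommended 2-D slicing pattern can be illustrated with SciPy sparse matrices (an analogy only; the subject above is Theano's SparseVariable indexing):

```python
import numpy as np
import scipy.sparse as sp

m = sp.csr_matrix(np.arange(12.0).reshape(3, 4))

# Instead of m[0, 1:3] (which would be a sparse vector), keep the
# result 2-D with a length-1 slice, as the doc suggests:
row_block = m[0:1, 1:3]
assert row_block.shape == (1, 2)
assert np.allclose(row_block.toarray(), [[1.0, 2.0]])
```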
doc/library/tensor/raw_random.txt
...
@@ -22,132 +22,130 @@ Reference
:class:`theano.tensor.shared_randomstreams.RandomStreams` subclass and the
:class:`theano.tensor.randomstreams.RandomStreams` subclass.

.. method:: binomial(self, size=(), n=1, p=0.5, ndim=None):

    Sample ``n`` times with probability of success ``p`` for each
    trial and return the number of successes.

    If ``size`` is ambiguous on the number of dimensions, ``ndim``
    may be a plain integer to supplement the missing information.

    This wraps the numpy implementation, so it has the same
    behavior.

.. method:: uniform(self, size=(), low=0.0, high=1.0, ndim=None):

    Sample a tensor of the given size whose elements come from a
    uniform distribution between low and high.

    If ``size`` is ambiguous on the number of dimensions, ``ndim``
    may be a plain integer to supplement the missing information.

    This wraps the numpy implementation, so it has the same
    bounds: [``low``, ``high``\[.

.. method:: normal(self, size=(), avg=0.0, std=1.0, ndim=None):

    Sample from a normal distribution centered on ``avg`` with the
    specified standard deviation (``std``).

    If ``size`` is ambiguous on the number of dimensions, ``ndim``
    may be a plain integer to supplement the missing information.

    This wraps the numpy implementation, so it has the same behavior.

.. method:: random_integers(self, size=(), low=0, high=1, ndim=None):

    Sample a random integer between low and high, both inclusive.

    If ``size`` is ambiguous on the number of dimensions, ``ndim``
    may be a plain integer to supplement the missing information.

    This is a generalization of :py:func:`numpy.random.random_integers`
    to the case where low and high are tensors. Otherwise it
    behaves the same.

.. method:: choice(self, size=(), a=2, replace=True, p=None, ndim=None, dtype='int64'):

    Choose values from ``a`` with or without replacement. ``a``
    can be a 1-D array or a positive scalar. If ``a`` is a scalar,
    the samples are drawn from the range [0, ``a``\[.

    If ``size`` is ambiguous on the number of dimensions, ``ndim``
    may be a plain integer to supplement the missing information.

    This wraps the numpy implementation, so it has the same behavior.

.. method:: poisson(self, size=(), lam=None, ndim=None, dtype='int64'):

    Draw samples from a Poisson distribution.

    The Poisson distribution is the limit of the Binomial
    distribution for large N.

    If ``size`` is ambiguous on the number of dimensions, ``ndim``
    may be a plain integer to supplement the missing information.

    This wraps the numpy implementation, so it has the same behavior.

.. method:: permutation(self, size=(), n=1, ndim=None):

    Returns permutations of the integers between 0 and ``n-1``, as
    many times as required by ``size``. For instance, if
    ``size=(p,q)``, ``p*q`` permutations will be generated, and
    the output shape will be ``(p,q,n)``, because each permutation
    is of size ``n``.

    Theano tries to infer the number of dimensions from the length
    of ``size``, but you may always specify it with ``ndim``.

    .. note::
        The output will have ``ndim+1`` dimensions.

    This is a generalization of :py:func:`numpy.random.permutation`
    to tensors. Otherwise it behaves the same.

.. method:: multinomial(self, size=(), n=1, pvals=[0.5, 0.5], ndim=None):

    Sample n times from a multinomial distribution defined by
    probabilities ``pvals``, as many times as required by
    ``size``. For instance, if ``size=(p,q)``, ``p*q`` samples
    will be drawn, and the output shape will be
    ``(p,q,len(pvals))``.

    Theano tries to infer the number of dimensions from the length
    of ``size``, but you may always specify it with ``ndim``.

    .. note::
        The output will have ``ndim+1`` dimensions.

    This is a generalization of :py:func:`numpy.random.multinomial`
    to the case where ``n`` and ``pvals`` are tensors. Otherwise
    it behaves the same.

.. method:: shuffle_row_elements(self, input):

    Return a variable with every row (rightmost index) shuffled.

    This uses a permutation random variable internally, available
    via the ``.permutation`` attribute of the return value.

.. class:: RandomStateType(gof.Type)

    A `Type` for variables that will take ``numpy.random.RandomState``
    values.

.. function:: random_state_type(name=None)

    Return a new Variable whose ``.type`` is ``random_state_type``.

.. class:: RandomFunction(gof.Op)

    Op that draws random numbers from a numpy.RandomState object.
    This Op is parametrized to draw numbers from many possible
    distributions.

.. function:: uniform(random_state, size=None, low=0.0, high=1.0, ndim=None, dtype=None)

    Sample from a uniform distribution between low and high.
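Since these methods state that they wrap the numpy implementation, their sampling behavior can be sketched with ``numpy.random`` directly (illustrative sizes and parameters only, not Theano code):

```python
import numpy as np

rng = np.random.RandomState(42)

# binomial: number of successes in n trials with success probability p
b = rng.binomial(n=10, p=0.5, size=(2, 3))
assert b.shape == (2, 3)
assert ((0 <= b) & (b <= 10)).all()

# uniform: samples in [low, high[, i.e. low inclusive, high exclusive
u = rng.uniform(low=-1.0, high=1.0, size=(4,))
assert ((-1.0 <= u) & (u < 1.0)).all()
```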
...
@@ -157,59 +155,62 @@ Reference
:returns: :class:`RandomVariable`, NewRandomState

.. function:: binomial(random_state, size=None, n=1, p=0.5, ndim=None, dtype='int64')

    Sample ``n`` times with probability of success ``p`` for each
    trial and return the number of successes.

    If ``size`` is ambiguous on the number of dimensions, ``ndim`` may
    be a plain integer to supplement the missing information.

    :returns: :class:`RandomVariable`, NewRandomState

.. function:: normal(random_state, size=None, avg=0.0, std=1.0, ndim=None, dtype=None)

    Sample from a normal distribution centered on ``avg`` with the
    specified standard deviation (``std``).

    If ``size`` is ambiguous on the number of dimensions, ``ndim`` may
    be a plain integer to supplement the missing information.

    :returns: :class:`RandomVariable`, NewRandomState

.. function:: random_integers(random_state, size=None, low=0, high=1, ndim=None, dtype='int64')

    Sample random integers in [``low``, ``high``] to fill up ``size``.

    If ``size`` is ambiguous on the number of dimensions, ``ndim`` may
    be a plain integer to supplement the missing information.

    :returns: :class:`RandomVariable`, NewRandomState

.. function:: permutation(random_state, size=None, n=1, ndim=None, dtype='int64')

    Returns permutations of the integers in [0, ``n``\[, as many times
    as required by ``size``. For instance, if ``size=(p,q)``, ``p*q``
    permutations will be generated, and the output shape will be
    ``(p,q,n)``, because each permutation is of size ``n``.

    If ``size`` is ambiguous on the number of dimensions, ``ndim``
    may be a plain integer, which should correspond to ``len(size)``.

    .. note::
        The output will have ``ndim+1`` dimensions.

    :returns: :class:`RandomVariable`, NewRandomState

.. function:: multinomial(random_state, size=None, p_vals=[0.5, 0.5], ndim=None, dtype='int64')

    Sample from a multinomial distribution defined by probabilities
    ``pvals``, as many times as required by ``size``. For instance, if
    ``size=(p,q)``, ``p*q`` samples will be drawn, and the output
    shape will be ``(p,q,len(pvals))``.

    If ``size`` is ambiguous on the number of dimensions, ``ndim``
    may be a plain integer, which should correspond to ``len(size)``.

    .. note::
        The output will have ``ndim+1`` dimensions.

    :returns: :class:`RandomVariable`, NewRandomState
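The ``(p,q,n)`` output shape documented for ``permutation`` can be illustrated with plain numpy (a sketch of the described semantics, not the Theano implementation):

```python
import numpy as np

rng = np.random.RandomState(0)
p, q, n = 2, 3, 5

# p*q independent permutations of range(n), stacked to shape (p, q, n)
out = np.array([[rng.permutation(n) for _ in range(q)] for _ in range(p)])

assert out.shape == (p, q, n)
# every innermost vector is a permutation of 0..n-1
assert all(sorted(v.tolist()) == list(range(n)) for v in out.reshape(-1, n))
```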
doc/tutorial/examples.txt
...
@@ -5,13 +5,13 @@
More Examples
=============

At this point it would be wise to begin familiarizing yourself more
systematically with Theano's fundamental objects and operations by
browsing this section of the library: :ref:`libdoc_basic_tensor`.

As the tutorial unfolds, you should also gradually acquaint yourself
with the other relevant areas of the library and with the relevant
subjects of the documentation entrance page.

Logistic Function
...
@@ -30,13 +30,13 @@ the logistic curve, which is given by:
A plot of the logistic function, with x on the x-axis and s(x) on the
y-axis.

You want to compute the function :ref:`elementwise
<libdoc_tensor_elementwise>` on matrices of doubles, which means that
you want to apply this function to each individual element of the
matrix.

Well, what you do is this:

.. If you modify this code, also change :
.. theano/tests/test_tutorial.py:T_examples.test_examples_1
...
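The tutorial's Theano code block is elided from this hunk; the elementwise application it describes can be sketched in plain numpy (the function name here is ours):

```python
import numpy as np

def logistic(x):
    # s(x) = 1 / (1 + exp(-x)), applied to each element of x
    return 1.0 / (1.0 + np.exp(-x))

m = np.array([[0.0, 1.0], [-1.0, -2.0]])
s = logistic(m)
assert s.shape == m.shape
assert abs(s[0, 0] - 0.5) < 1e-12  # s(0) = 0.5
```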
@@ -450,10 +450,10 @@ Other Random Distributions
There are :ref:`other distributions implemented <libdoc_tensor_raw_random>`.

Other Implementations
---------------------

There are two other implementations based on :class:`CURAND <theano.sandbox.cuda.rng_curand>` and :ref:`MRG31k3p <libdoc_rng_mrg>`
.. _logistic_regression:

...

@@ -461,7 +461,8 @@ Their is 2 other implementation based on :class:`CURAND <theano.sandbox.cuda.rng
A Real Example: Logistic Regression
===================================

The preceding elements are featured in this more realistic example.
It will be used repeatedly.

.. code-block:: python
...
doc/tutorial/multi_cores.txt
...
@@ -5,45 +5,40 @@ Multi cores support in Theano
BLAS operation
==============

BLAS is an interface for some mathematical operations between two
vectors, a vector and a matrix or two matrices (e.g. the dot product
between vector/matrix and matrix/matrix). Many different
implementations of that interface exist and some of them are
parallelized.

Theano tries to use that interface as frequently as possible for
performance reasons. So if Theano links to a parallel implementation,
those operations will run in parallel in Theano.

The most frequent way to control the number of threads used is via the
``OMP_NUM_THREADS`` environment variable. Set it to the number of threads
you want to use before starting the python process.

Parallel element wise ops with OpenMP
=====================================

Because element wise ops work on every tensor entry independently they
can be easily parallelized using OpenMP.

To use OpenMP you must set the OpenMP flag in Theano configuration.

You can use the flag ``openmp_elemwise_minsize`` to set the minimum
tensor size for which the operation is parallelized because for short
tensors using OpenMP can slow down the operation. The default value is
``200000``.

For simple (fast) operations you can obtain a speed up with very large
tensors while for more complex operations you can obtain a good speed
up also for smaller tensors.

There is a script ``elemwise_openmp_speedup.py`` in ``theano/misc/``
which you can use to tune the value of ``openmp_elemwise_minsize`` for
your machine. The script runs two elemwise operations (a fast one and
a slow one) for a vector of size ``openmp_elemwise_minsize`` with and
without OpenMP and shows the time difference between the cases.
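Setting the thread count as described can be sketched as follows; the inline ``python -c`` line is only there to confirm the variable is visible to child processes:

```shell
# Limit BLAS/OpenMP kernels to 4 threads before starting Python;
# any process launched from this shell inherits the setting.
export OMP_NUM_THREADS=4
python -c 'import os; print(os.environ["OMP_NUM_THREADS"])'
```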
theano/sparse/basic.py  View file @ e2ab154a

...
@@ -2623,9 +2623,10 @@ class TrueDot(gof.op.Op):
         self.grad_preserves_dense = grad_preserves_dense

     def __eq__(self, other):
-        # The grad_preserves_dense attribute don't change the
-        # execution behavior. To have Theano merge optimizer merging
-        # them, we shouldn't compare it here.
+        # The grad_preserves_dense attribute doesn't change the
+        # execution behavior. To let the optimizer merge nodes with
+        # different values of this attribute we shouldn't compare it
+        # here.
         return type(self) == type(other)

     def __hash__(self):
...
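The pattern the corrected comment describes can be sketched outside Theano: a hypothetical op class whose equality and hash deliberately ignore an attribute that does not affect execution, so two instances differing only in that attribute can be merged by an optimizer.

```python
class MergeableOp(object):
    """Hypothetical op: `grad_preserves_dense` doesn't affect execution,
    so it is deliberately left out of __eq__ and __hash__."""

    def __init__(self, grad_preserves_dense=True):
        self.grad_preserves_dense = grad_preserves_dense

    def __eq__(self, other):
        # Compare only by type, not by grad_preserves_dense.
        return type(self) == type(other)

    def __hash__(self):
        # Hash must be consistent with __eq__.
        return hash(type(self))

# Instances that differ only in that attribute compare equal:
assert MergeableOp(True) == MergeableOp(False)
```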
...
@@ -2714,13 +2715,13 @@ class TrueDot(gof.op.Op):
 def true_dot(x, y, grad_preserves_dense=True):
     """
     Operation for efficiently calculating the dot product when
-    one or all operands is sparse. Supported format are CSC and CSR.
+    one or all operands are sparse. Supported formats are CSC and CSR.
     The output of the operation is sparse.

     :param x: Sparse matrix or 2d tensor variable.
     :param y: Sparse matrix or 2d tensor variable.

-    :param grad_preserves_dense: if True and one on the input is dense,
-        make the grad dense on that input.
+    :param grad_preserves_dense: if True (default), makes the grad of
+        dense inputs dense. Otherwise the grad is always sparse.

     :return: The dot product `x`.`y` in a sparse format.
...
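The semantics documented above (sparse operands, sparse result) can be illustrated with ``scipy.sparse``, which is where the CSC/CSR formats come from; this is an illustration of the formats, not a call into Theano's ``true_dot``:

```python
import numpy as np
import scipy.sparse as sp

# Two CSR matrices; their dot product is computed without densifying,
# and the result is itself a sparse matrix.
x = sp.csr_matrix(np.array([[1., 0.], [0., 2.]]))
y = sp.csr_matrix(np.array([[0., 3.], [4., 0.]]))
z = x.dot(y)

assert sp.issparse(z)
```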
theano/tensor/nnet/Conv3D.py  View file @ e2ab154a

...
@@ -562,12 +562,12 @@ conv3D = Conv3D()
     :note: The order of dimensions does not correspond to the one in `conv2d`.
            This is for optimization.

-    :note: The GPU implementation is very slow. You are better to use
-           :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>` that is faster
-           on GPU.
+    :note: The GPU implementation is very slow. You should use
+           :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>` for a GPU
+           graph instead.

-    :see: Someone made a script that show how to swap the axis between
-          both 3d convolution implementation in Theano. See the last
+    :see: Someone made a script that shows how to swap the axes between
+          both 3d convolution implementations in Theano. See the last
           `attachment <https://groups.google.com/d/msg/theano-users/1S9_bZgHxVw/0cQR9a4riFUJ>`_.
     """
...
theano/tensor/nnet/conv3d2d.py  View file @ e2ab154a

...
@@ -178,8 +178,8 @@ def conv3d(signals, filters,
     Another way to define signals: (batch, time, in channel, row, column)
     Another way to define filters: (out channel,time,in channel, row, column)

-    :see: Someone made a script that show how to swap the axis between
-          both 3d convolution implementation in Theano. See the last
+    :see: Someone made a script that shows how to swap the axes between
+          both 3d convolution implementations in Theano. See the last
           `attachment <https://groups.google.com/d/msg/theano-users/1S9_bZgHxVw/0cQR9a4riFUJ>`_.
     """
...
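The axis swap the ``:see:`` note refers to amounts to a plain transpose. The ``conv3d2d`` layout (batch, time, in channel, row, column) is taken from the docstring above; the target (batch, row, column, time, channel) layout below is an assumption for illustration only — check the linked script for the actual ordering:

```python
import numpy as np

# conv3d2d-style signals: (batch, time, in channel, row, column)
signals = np.zeros((2, 5, 3, 8, 8))

# Hypothetical target layout: (batch, row, column, time, channel)
swapped = signals.transpose(0, 3, 4, 1, 2)

assert swapped.shape == (2, 8, 8, 5, 3)
```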
theano/tensor/raw_random.py  View file @ e2ab154a

...
@@ -578,10 +578,9 @@ def random_integers(random_state, size=None, low=0, high=1, ndim=None,
 def choice_helper(random_state, a, replace, p, size):
     """Helper function to draw random numbers using numpy's choice function.

-    This is a generalization of numpy.random.choice that coerce
-    `replace` to a bool and replace `p` to None when p is a vector of
-    0 elements.
+    This is a generalization of numpy.random.choice that coerces
+    `replace` to a bool and replaces `p` with None when p is a vector
+    of 0 elements.
     """
     if a.ndim > 1:
         raise ValueError('a.ndim (%i) must be 0 or 1' % a.ndim)
...
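The coercion the corrected docstring describes can be sketched with plain numpy (a hypothetical re-implementation for illustration, not Theano's actual code):

```python
import numpy as np

def choice_sketch(random_state, a, replace, p, size):
    # Mirror the docstring: coerce `replace` to a bool and treat a
    # length-0 `p` as None (uniform sampling).
    if np.asarray(a).ndim > 1:
        raise ValueError('a.ndim (%i) must be 0 or 1' % np.asarray(a).ndim)
    replace = bool(replace)
    if p is not None and len(p) == 0:
        p = None
    return random_state.choice(a, size=size, replace=replace, p=p)

rng = np.random.RandomState(0)
draws = choice_sketch(rng, 5, 1, [], (3,))  # uniform draws from range(5)
```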
@@ -660,8 +659,8 @@ def permutation_helper(random_state, n, shape):
     If you wish to perform a permutation of the elements of an existing vector,
     see shuffle_row_elements.

-    This is a generalization of numpy.random.permutation to
-    the generate many permutation. Otherwise it behave the same.
+    This is a generalization of numpy.random.permutation to tensors.
+    Otherwise it behaves the same.
     """
     # n should be a 0-dimension array
     assert n.shape == ()
...
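The generalization to tensors mentioned above — one independent permutation per leading index — can be sketched as follows (a hypothetical equivalent in plain numpy, not the real implementation):

```python
import numpy as np

def permutation_sketch(random_state, n, shape):
    # Draw prod(shape) independent permutations of range(n) and stack
    # them into an output of shape `shape + (n,)`.
    count = int(np.prod(shape))
    perms = [random_state.permutation(n) for _ in range(count)]
    return np.asarray(perms).reshape(tuple(shape) + (n,))

out = permutation_sketch(np.random.RandomState(42), 4, (2, 3))
```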
@@ -863,7 +862,7 @@ class RandomStreamsBase(object):
     def binomial(self, size=None, n=1, p=0.5, ndim=None, dtype='int64',
                  prob=None):
         """
-        Sample n times with probability of success prob for each trial,
+        Sample n times with probability of success p for each trial and
        return the number of successes.

         If the size argument is ambiguous on the number of dimensions,
...
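The corrected docstring describes a standard binomial draw; with numpy directly it looks like this (for illustration — `RandomStreamsBase.binomial` wraps the same sampling in a symbolic graph):

```python
import numpy as np

rng = np.random.RandomState(0)
# Each entry counts the successes in n=10 trials with success
# probability p=0.5.
samples = rng.binomial(n=10, p=0.5, size=1000)
```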