testgroup / pytensor · Commits · 45d35515

Commit 45d35515, authored Jan 30, 2014 by Olivier Delalleau
Parent: 4b9e31ff

    A bunch of typo fixes in documentation

Showing 2 changed files with 25 additions and 25 deletions:

- doc/extending/other_ops.txt (+22, -22)
- doc/tutorial/extending_theano.txt (+3, -3)
doc/extending/other_ops.txt (+22, -22)

 .. _other_ops:

-=============================
-Implementing some specific Op
-=============================
+==============================
+Implementing some specific Ops
+==============================

 This page is a guide on the implementation of some specific types of Ops,
-and point to some examples of such implementations.
+and points to some examples of such implementations.

 For the random number generating Ops, it explains different possible
 implementation strategies.
@@ -18,10 +18,10 @@ Scalar/Elemwise/Reduction Ops

 Implementing a Theano scalar Op allows that scalar operation to be reused
 by our elemwise operations on tensors. If the scalar operation has C code, the
-elemwise implementation it will automaticaly have C code too. This
+elemwise implementation will automatically have C code too. This
 will enable the fusion of elemwise operations using your new scalar
 operation. It can also reuse the GPU elemwise code. It is similar for
-reduction operation.
+reduction operations.

 For examples of how to add new scalar operations, you can have a look at
 those 2 pull requests, that add `GammaLn and Psi
...
@@ -84,11 +84,11 @@ instead of ``as_tensor_variable(x)``.

 Another difference is that you need to use ``SparseVariable`` and
 ``SparseType`` instead of ``TensorVariable`` and ``TensorType``.
-Don't forget that we support only sparse matrices (so only 2 dimensions)
-and they don't support broadcasting operation by default, as SciPy sparse
-matrix class does (but a few Ops do it when called manually). Also, we support 2
+Do not forget that we support only sparse matrices (so only 2 dimensions)
+and they do not support broadcasting operations by default, whereas SciPy sparse
+matrix class does (but a few Ops do it when called manually). Also, we support only two
 formats for sparse type: ``csr`` and ``csc``. So in ``make_mode()``,
-you can create outputs variables like this:
+you can create output variables like this:

 .. code-block:: python
...
@@ -97,11 +97,11 @@ you can create outputs variables like this:

 See the sparse :class:`theano.sparse.basic.Cast` op `code
 <https://github.com/Theano/Theano/blob/master/theano/sparse/basic.py#L753>`_
-for a good example for a sparse op with Python code.
+for a good example of a sparse op with Python code.

 .. note::

-    From the definition of CSR and CSC format, CSR column indices are
+    From the definition of CSR and CSC formats, CSR column indices are
     not necessarily sorted. Likewise for CSC row indices. Use
     :class:`EnsureSortedIndices
     <theano.sparse.basic.EnsureSortedIndices>` if your code does not
...
@@ -129,7 +129,7 @@ Sparse C code

 -------------

 Theano does not have a native C code interface for sparse matrices. The
-reason is simple, we use the SciPy sparse matrix object and they don't
+reason is simple: we use the SciPy sparse matrix objects and they don't
 have a C object. So we use a simple trick: a sparse matrix is made of
 4 fields that are NumPy vector arrays: ``data``, ``indices``, ``indptr``
 and ``shape``. So to make
...
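The hunk above describes Theano's trick of representing a sparse matrix to C code as four NumPy arrays. As a minimal illustration (not part of the commit), here is how those four fields look on an actual SciPy CSR matrix:

```python
# Illustrative only: the four NumPy-backed fields of a SciPy CSR matrix,
# which the documented "trick" hands to C code in place of a C object.
import numpy as np
import scipy.sparse as sp

m = sp.csr_matrix(np.array([[1.0, 0.0],
                            [0.0, 2.0]]))
# data:    the nonzero values
# indices: the column index of each value in ``data``
# indptr:  offsets into data/indices where each row starts
# shape:   the dense dimensions of the matrix
print(m.data)     # [1. 2.]
print(m.indices)  # [0 1]
print(m.indptr)   # [0 1 2]
print(m.shape)    # (2, 2)
```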
@@ -183,17 +183,17 @@ distributions here::

 2) Extend MRG implementation by reusing existing Theano Op. Look into
    the ``theano/sandbox/rng_mrg.py`` file and grep for all code about
-   binomal(). This distribution uses the output of the uniform
+   binomial(). This distribution uses the output of the uniform
    distribution and converts it to a binomial distribution with
    existing Theano operations. The tests go in
    ``theano/sandbox/test_rng_mrg.py``

-3) Extend MRG implementation with a new Op that takes an uniform as
+3) Extend MRG implementation with a new Op that takes a uniform sample as
    input. Look in the ``theano/sandbox/{rng_mrg,multinomial}.py`` file
    and its test in ``theano/sandbox/test_multinomal.py``. This is
    recommended when current Theano ops aren't well suited to modify
    the uniform to the target distribution. This can happen in
-   particular is there is a loop or complicated condition.
+   particular if there is a loop or complicated condition.

 .. note::
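Strategy 2 in the hunk above, deriving binomial samples from uniform output with existing elementwise operations, can be sketched with NumPy standing in for the Theano ops; the parameters, sample count, and seed here are illustrative only:

```python
# Hypothetical sketch of strategy 2: turn uniform samples into
# Binomial(n, p) draws using only elementwise operations, with NumPy
# standing in for Theano's uniform generator and elemwise ops.
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 0.3
u = rng.uniform(size=(1000, n))   # one row of n uniforms per draw
# Each comparison is a Bernoulli(p) trial; summing n trials per row
# yields a Binomial(n, p) sample.
binom = (u < p).sum(axis=1)
# The empirical mean should be close to n * p = 3.0.
```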
@@ -214,16 +214,16 @@ the ``__init__()`` method, it must have an ``openmp=None`` parameter

 and must call ``super(MyOpClass, self).__init__(openmp=openmp)``.
 The ``OpenMPOp`` class also implements ``c_compile_args`` and
-``make_thunk``. This makes it add the correct g++ flag to compile with
+``make_thunk``. This makes it add the correct g++ flags to compile with
 OpenMP. It also disables OpenMP and prints a warning if the version of
-g++ don't support it.
+g++ does not support it.

-The Theano flag ``openmp`` is currently False by default as we don't
-have code that gets speed up with it. The only current implementation
+The Theano flag ``openmp`` is currently False by default as we do not
+have code that gets sped up with it. The only current implementation
 is ConvOp. It speeds up some cases, but slows down others. That is why
 we disable it by default. But we have all the code to have it enabled
-by default if there is more then 1 core and that the environment
-variable OMP_NUM_THREADS isn't 1. This allows Theano to respect the
+by default if there is more than 1 core and the environment
+variable OMP_NUM_THREADS is not 1. This allows Theano to respect the
 current convention.

 .. note:
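The ``__init__()`` pattern the hunk above requires of ``OpenMPOp`` subclasses can be sketched as follows; ``OpenMPBase`` is a hypothetical stand-in for the real ``OpenMPOp`` class so the snippet runs on its own:

```python
# Hypothetical sketch of the documented pattern: accept openmp=None and
# forward it to the parent. OpenMPBase is a stand-in for OpenMPOp; the
# real class resolves None against the Theano ``openmp`` flag and sets
# the g++ compile args accordingly.
class OpenMPBase:
    def __init__(self, openmp=None):
        # Stand-in resolution: treat None as "disabled" for illustration.
        self.openmp = False if openmp is None else openmp

class MyOpClass(OpenMPBase):
    def __init__(self, openmp=None):
        # Forward the parameter so the parent can configure OpenMP.
        super(MyOpClass, self).__init__(openmp=openmp)

op = MyOpClass(openmp=True)
print(op.openmp)  # True
```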
doc/tutorial/extending_theano.txt (+3, -3)

@@ -467,10 +467,10 @@ Final Note

 ==========

 A more extensive discussion of this section's content may be found in
-the advanced tutorial :ref:`Extending Theano<extending>`
+the advanced tutorial :ref:`Extending Theano<extending>`.

-The section :ref:`Other ops <other_ops>` include more instruction for
-specific case:
+The section :ref:`Other ops <other_ops>` includes more instructions for
+the following specific cases:

 - :ref:`scalar_ops`
 - :ref:`scipy_ops`
...