testgroup / pytensor

Commit a14c3e9b
Author: Pascal Lamblin
Authored: Oct 08, 2014

Merge pull request #2078 from abergeron/doc

Add a mode to docgen to run the code samples in the documentation.

Parents: eabfd16f e95d9b5a

Showing 9 changed files with 177 additions and 91 deletions (+177 -91)
doc/cifarSC2011/advanced_theano.txt   +24  -12
doc/cifarSC2011/pyCUDA.txt            +11  -6
doc/conf.py                           +1   -1
doc/glossary.txt                      +7   -0
doc/install.txt                       +1   -1
doc/library/tensor/basic.txt          +118 -65
doc/scripts/docgen.py                 +12  -5
theano/tensor/subtensor.py            +2   -0
theano/tests/test_tutorial.py         +1   -1
doc/cifarSC2011/advanced_theano.txt

@@ -16,7 +16,7 @@ Conditions
 **IfElse Example: Comparison with Switch**

-.. code-block:: python
+.. testcode::

    from theano import tensor as T
    from theano.ifelse import ifelse
@@ -50,12 +50,21 @@ Conditions
    f_lazyifelse(val1, val2, big_mat1, big_mat2)
    print 'time spent evaluating one value %f sec'%(time.clock()-tic)

+.. testoutput::
+   :hide:
+   :options: +ELLIPSIS
+
+   time spent evaluating both values ... sec
+   time spent evaluating one value ... sec
+
 IfElse Op spend less time (about an half) than Switch since it computes only
 one variable instead of both.

->>> python ifelse_switch.py
-time spent evaluating both values 0.6700 sec
-time spent evaluating one value 0.3500 sec
+.. code-block:: none
+
+   $ python ifelse_switch.py
+   time spent evaluating both values 0.6700 sec
+   time spent evaluating one value 0.3500 sec

 Note that IfElse condition is a boolean while Switch condition is a tensor, so
 Switch is more general.
@@ -112,7 +121,7 @@ Loops
 **Scan Example: Calculating a Polynomial**

-.. code-block:: python
+.. testcode::

    import theano
    import theano.tensor as T
@@ -133,7 +142,10 @@ Loops
    test_coeff = numpy.asarray([1, 0, 2], dtype=numpy.float32)
    print calculate_polynomial(test_coeff, 3)

-# 19.0
+.. testoutput::
+
+   19.0
@@ -267,7 +279,7 @@ Printing/Drawing Theano graphs
 ``theano.printing.pprint(variable)``

->>> theano.printing.pprint(prediction)
+>>> theano.printing.pprint(prediction) # doctest: +SKIP
 gt((TensorConstant{1} / (TensorConstant{1} + exp(((-(x \\dot w)) - b)))),TensorConstant{0.5})
@@ -275,7 +287,7 @@ gt((TensorConstant{1} / (TensorConstant{1} + exp(((-(x \\dot w)) - b)))),TensorConstant{0.5})
 ``theano.printing.debugprint({fct, variable, list of variables})``

->>> theano.printing.debugprint(prediction)
+>>> theano.printing.debugprint(prediction) # doctest: +SKIP
 Elemwise{gt,no_inplace} [@181772236] ''
  |Elemwise{true_div,no_inplace} [@181746668] ''
  | |InplaceDimShuffle{x} [@181746412] ''
@@ -293,7 +305,7 @@ Elemwise{gt,no_inplace} [@181772236] ''
  | | | | | |b [@181730156]
  |InplaceDimShuffle{x} [@181771788] ''
  | |TensorConstant{0.5} [@181771148]

->>> theano.printing.debugprint(predict)
+>>> theano.printing.debugprint(predict) # doctest: +SKIP
 Elemwise{Composite{neg,{sub,{{scalar_sigmoid,GT},neg}}}} [@183160204] '' 2
  |dot [@183018796] '' 1
  | |x [@183000780]
@@ -304,19 +316,19 @@ Elemwise{Composite{neg,{sub,{{scalar_sigmoid,GT},neg}}}} [@183160204] '' 2
 - Picture Printing of Graphs

->>> theano.printing.pydotprint_variables(prediction)
+>>> theano.printing.pydotprint_variables(prediction) # doctest: +SKIP

 .. image:: ../hpcs2011_tutorial/pics/logreg_pydotprint_prediction.png
    :width: 800 px

 All pydotprint* requires graphviz and pydot

->>> theano.printing.pydotprint(predict)
+>>> theano.printing.pydotprint(predict) # doctest: +SKIP

 .. image:: ../hpcs2011_tutorial/pics/logreg_pydotprint_predic.png
    :width: 800 px

->>> theano.printing.pydotprint(train) # This is a small train example!
+>>> theano.printing.pydotprint(train) # This is a small train example! # doctest: +SKIP

 .. image:: ../hpcs2011_tutorial/pics/logreg_pydotprint_train.png
    :width: 1500 px
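The `+ELLIPSIS` option added to the hidden `testoutput` block comes straight from Python's standard `doctest` machinery, which `sphinx.ext.doctest` reuses. A quick stdlib-only sketch (no Theano or Sphinx needed) of what the flag buys for the timing lines above:

```python
import doctest

# With ELLIPSIS, "..." in the expected output matches arbitrary text,
# so run-to-run timing differences don't fail the doc build.
checker = doctest.OutputChecker()
want = "time spent evaluating both values ... sec\n"
got = "time spent evaluating both values 0.6700 sec\n"

assert checker.check_output(want, got, doctest.ELLIPSIS)
assert not checker.check_output(want, got, 0)  # exact match fails without the flag
```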
doc/cifarSC2011/pyCUDA.txt

@@ -80,7 +80,7 @@ Exercise 6
 Theano + PyCUDA
 ---------------

-.. code-block:: python
+.. testcode::

    import numpy, theano
    import theano.misc.pycuda_init
@@ -119,14 +119,19 @@ Theano + PyCUDA
                block=(512,1,1), grid=grid)
        return thunk

+.. testoutput::
+   :hide:
+   :options: +SKIP
+
+   This contains GPU code so skip it
+
 Test it!

->>> x = theano.tensor.fmatrix()
->>> f = theano.function([x], PyCUDADoubleOp()(x))
->>> xv=numpy.ones((4,5), dtype="float32")
->>> assert numpy.allclose(f(xv), xv*2)
->>> print numpy.asarray(f(xv))
+>>> x = theano.tensor.fmatrix() # doctest: +SKIP
+>>> f = theano.function([x], PyCUDADoubleOp()(x)) # doctest: +SKIP
+>>> xv=numpy.ones((4,5), dtype="float32") # doctest: +SKIP
+>>> assert numpy.allclose(f(xv), xv*2) # doctest: +SKIP
+>>> print numpy.asarray(f(xv)) # doctest: +SKIP

 Exercises 7
 -----------
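The `+SKIP` markers keep the GPU-only snippets out of the test run without deleting them from the docs. Since Sphinx's doctest extension reuses the stdlib option flags, the effect can be sketched with `doctest` alone (the failing `1/0` example is hypothetical, purely to prove nothing executes):

```python
import doctest

def gpu_sample():
    """
    >>> 1/0  # doctest: +SKIP
    """

# Collect and run the docstring examples; the SKIP flag drops the
# example before it is ever attempted.
runner = doctest.DocTestRunner(verbose=False)
for test in doctest.DocTestFinder().find(gpu_sample):
    runner.run(test)

# The would-be failure was never executed: zero tries, zero failures.
assert runner.tries == 0 and runner.failures == 0
```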
doc/conf.py

@@ -23,7 +23,7 @@
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
-extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo']
+extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.doctest']
 todo_include_todos = True
doc/glossary.txt

@@ -3,6 +3,11 @@
 Glossary
 ========

+..
+   # This is for the doctests in the file
+   >>> import theano
+   >>> from theano import tensor
+
 .. glossary::

     Apply
@@ -25,8 +30,10 @@ Glossary
     Constant
         A variable with an immutable value.
        For example, when you type
+
        >>> x = tensor.ivector()
        >>> y = x + 3
+
        Then a `constant` is created to represent the ``3`` in the graph.

        See also: :class:`gof.Constant`
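The hidden `..` block added at the top of the glossary pre-imports names so the later `>>>` examples run standalone. In stdlib terms this is the `globs` mechanism: names injected into the doctest namespace before the examples execute. A small sketch (the `x = 4` setup value is invented for illustration):

```python
import doctest

# Run a doctest whose example relies on a name supplied by setup code,
# the same role the glossary's hidden import block plays.
source = ">>> y = x + 3\n>>> y\n7\n"
test = doctest.DocTestParser().get_doctest(source, {"x": 4}, "glossary", None, 0)

runner = doctest.DocTestRunner(verbose=False)
runner.run(test)

assert runner.failures == 0 and runner.tries == 2
```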
doc/install.txt

@@ -318,7 +318,7 @@ a Python (or IPython) interpreter,
 .. code-block:: python

    >>> import theano
-   >>> theano.test()
+   >>> theano.test() # doctest: +SKIP

 You can also run them in-place from the Git checkout directory by typing
doc/library/tensor/basic.txt

@@ -6,6 +6,14 @@
 Basic Tensor Functionality
 ===========================

+.. testsetup::
+
+   import theano.tensor as T
+   from theano.tensor import scalar, iscalar, TensorType, dmatrix, ivector
+   from theano.tensor import set_subtensor, inc_subtensor, batched_dot
+   from theano import shared
+   import numpy
+
 Theano supports any kind of Python object, but its focus is support for
 symbolic matrix expressions. When you type,
@@ -90,7 +98,7 @@ All Fully-Typed Constructors
 The following TensorType instances are provided in the theano.tensor module.
 They are all callable, and accept an optional ``name`` argument. So for example:

-.. code-block:: python
+.. testcode:: constructors

    from theano.tensor import *
@@ -195,7 +203,7 @@ will return that many Variables and if strings are provided, it will
 create one Variable for each string, using the string as the Variable's
 name. For example:

-.. code-block:: python
+.. testcode:: constructors

    from theano.tensor import *
@@ -221,7 +229,8 @@ correctly:
 >>> my_dmatrix = TensorType('float64', (False,)*2)
 >>> x = my_dmatrix() # allocate a matrix variable
->>> my_dmatrix == dmatrix # this compares True
+>>> my_dmatrix == dmatrix
+True

 See :class:`TensorType` for more information about creating new types of
 Tensor.
@@ -233,7 +242,7 @@ Converting from Python Objects
 Another way of creating a TensorVariable (a TensorSharedVariable to be
 precise) is by calling :func:`shared()`

-.. code-block:: python
+.. testcode::

    x = shared(numpy.random.randn(3,4))
@@ -695,7 +704,8 @@ Creating Tensor
 >>> x1 = T.scalar()
 >>> x2 = T.scalar()
 >>> x = T.stack(x0, x1, x2)
->>> # x.ndim == 1, is a vector of length 3.
+>>> x.ndim # x is a vector of length 3.
+1

 .. function:: concatenate(tensor_list, axis=0)
@@ -710,7 +720,8 @@ Creating Tensor
 >>> x1 = T.ftensor3()
 >>> x2 = T.fvector()
 >>> x = T.concatenate([x0, x1[0], T.shape_padright(x2)], axis=1)
->>> # x.ndim == 2
+>>> x.ndim
+2

 .. function:: stacklists(tensor_list)
@@ -729,7 +740,8 @@ Creating Tensor
 >>> X = stacklists([[a, b], [c, d]])
 >>> f = function([a, b, c, d], X)
 >>> f(1, 2, 3, 4)
->>> # array([[ 1., 2.], [ 3., 4.]], dtype=float32)
+array([[ 1.,  2.],
+       [ 3.,  4.]])

 We can also stack arbitrarily shaped tensors. Here we stack matrices into
 a 2 by 2 grid:
@@ -740,7 +752,7 @@ Creating Tensor
 >>> f = function([a, b, c, d], X)
 >>> x = ones((4, 4), 'float32')
 >>> f(x, x, x, x).shape
->>> # (2, 2, 4, 4)
+(2, 2, 4, 4)

 Reductions
 ==========
@@ -998,19 +1010,45 @@ Theano fully supports basic indexing
 <http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#integer>`_
 will be supported in 0.6rc4 (or the development version). We do not
 support boolean masks, as Theano does not have a boolean type (we use
-int8 for the output of logic operators). To imitate boolean advanced
-indexing, you can do::
+int8 for the output of logic operators).

-    # NumPy indexing with a mask
-    n = np.arange(9).reshape(3,3)
-    n[n > 4] # array([5, 6, 7, 8])
-
-    # Theano indexing with a "mask" (incorrect approach)
-    t = theano.tensor.arange(9).reshape((3,3))
-    t[t > 4].eval() # an array with shape (3, 3, 3)
-
-    # getting a Theano result like NumPy
-    t[(t > 4).nonzero()].eval() # array([5, 6, 7, 8])
+.. testsetup:: indexing
+
+   import theano
+   import numpy as np
+
+NumPy with a mask:
+
+.. doctest:: indexing
+
+   >>> n = np.arange(9).reshape(3,3)
+   >>> n[n > 4]
+   array([5, 6, 7, 8])
+
+Theano indexing with a "mask" (incorrect approach):
+
+.. doctest:: indexing
+
+   >>> t = theano.tensor.arange(9).reshape((3,3))
+   >>> t[t > 4].eval() # an array with shape (3, 3, 3)
+   array([[[0, 1, 2],
+           [0, 1, 2],
+           [0, 1, 2]],
+   <BLANKLINE>
+          [[0, 1, 2],
+           [0, 1, 2],
+           [3, 4, 5]],
+   <BLANKLINE>
+          [[3, 4, 5],
+           [3, 4, 5],
+           [3, 4, 5]]], dtype=int8)
+
+Getting a Theano result like NumPy:
+
+.. doctest:: indexing
+
+   >>> t[(t > 4).nonzero()].eval()
+   array([5, 6, 7, 8], dtype=int8)

 The gradient of Advanced indexing needs in many cases NumPy
 1.8. It is not released yet as of April 30th, 2013. You can use NumPy
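The distinction the indexing hunk draws is easy to verify in plain NumPy: indexing with the integer tuple from `nonzero()` selects the same elements a boolean mask does, which is why `t[(t > 4).nonzero()]` is the Theano-safe spelling. A quick check (NumPy only, no Theano required):

```python
import numpy as np

n = np.arange(9).reshape(3, 3)

# Boolean-mask indexing and nonzero()-tuple indexing select the same
# elements; only the latter avoids the mask being treated as integers.
masked = n[n > 4]
via_nonzero = n[(n > 4).nonzero()]

assert masked.tolist() == [5, 6, 7, 8]
assert (masked == via_nonzero).all()
```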
@@ -1036,21 +1074,27 @@ Many Python operators are supported.
 Arithmetic
 --------------

->>> a + 3 # T.add(a, 3) -> itensor3
->>> 3 - a # T.sub(3, a)
->>> a * 3.5 # T.mul(a, 3.5) -> ftensor3 or dtensor3 (depending on casting)
->>> 2.2 / a # T.truediv(2.2, a)
->>> 2.2 // a # T.intdiv(2.2, a)
->>> 2.2**a # T.pow(2.2, a)
->>> b % a # T.mod(b, a)
+.. doctest::
+   :options: +SKIP
+
+   >>> a + 3 # T.add(a, 3) -> itensor3
+   >>> 3 - a # T.sub(3, a)
+   >>> a * 3.5 # T.mul(a, 3.5) -> ftensor3 or dtensor3 (depending on casting)
+   >>> 2.2 / a # T.truediv(2.2, a)
+   >>> 2.2 // a # T.intdiv(2.2, a)
+   >>> 2.2**a # T.pow(2.2, a)
+   >>> b % a # T.mod(b, a)

 Bitwise
 -------------

->>> a & b # T.and_(a,b) bitwise and (alias T.bitwise_and)
->>> a ^ 1 # T.xor(a,1) bitwise xor (alias T.bitwise_xor)
->>> a | b # T.or_(a,b) bitwise or (alias T.bitwise_or)
->>> ~a # T.invert(a) bitwise invert (alias T.bitwise_not)
+.. doctest::
+   :options: +SKIP
+
+   >>> a & b # T.and_(a,b) bitwise and (alias T.bitwise_and)
+   >>> a ^ 1 # T.xor(a,1) bitwise xor (alias T.bitwise_xor)
+   >>> a | b # T.or_(a,b) bitwise or (alias T.bitwise_or)
+   >>> ~a # T.invert(a) bitwise invert (alias T.bitwise_not)

 Inplace
 -------------
@@ -1077,13 +1121,12 @@ Casting
 This is not a reinterpret cast, but a coersion cast, similar to
 ``numpy.asarray(x, dtype=dtype)``.

-.. code-block:: python
+.. testcode:: cast

    import theano.tensor as T
-   x_as_float = T.matrix()
+   x = T.matrix()
    x_as_int = T.cast(x, 'int32')

 Attempting to casting a complex value to a real value is ambiguous and
 will raise an exception. Use `real()`, `imag()`, `abs()`, or `angle()`.
@@ -1114,7 +1157,7 @@ The six usual equality and inequality operators share the same interface.
 Here is an example with the less-than operator.

-.. code-block:: python
+.. testcode:: oper

    import theano.tensor as T
    x,y = T.dmatrices('x','y')
@@ -1178,7 +1221,7 @@ Condition
 :Parameter: *iff* - symbolic Tensor (or compatible)
 :Return type: symbolic Tensor

-.. code-block:: python
+.. testcode:: switch

    import theano.tensor as T
    a,b = T.dmatrices('a','b')
@@ -1189,7 +1232,6 @@ Condition
 Alias for `switch`. where is the numpy name.

 .. function:: clip(x, min, max)

 Return a variable representing x, but with all elements greater than
@@ -1247,7 +1289,7 @@ The bitwise operators possess this interface:
 Here is an example using the bit-wise ``and_`` via the ``&`` operator:

-.. code-block:: python
+.. testcode:: bitwise

    import theano.tensor as T
    x,y = T.imatrices('x','y')
@@ -1474,7 +1516,9 @@ Linear Algebra
 are compatible. The resulting tensor will have shape (2, 5, 6) -- the
 dimensions that are not being summed:

-.. code-block:: python
+.. testcode:: tensordot
+
+   import numpy as np

    a = np.random.random((2,3,4))
    b = np.random.random((5,6,4,3))
@@ -1498,7 +1542,7 @@ Linear Algebra
    for m in range(a2):
        cloop[i,j,k] += a[i,l,m] * b[j,k,m,l]

-np.allclose(c, cloop) #true
+assert np.allclose(c, cloop)

 This specific implementation avoids a loop by transposing a and b such that
 the summed axes of a are last and the summed axes of b are first. The
@@ -1509,12 +1553,15 @@ Linear Algebra
 In an extreme case, no axes may be specified. The resulting tensor
 will have shape equal to the concatenation of the shapes of a and b:

-.. code-block:: python
-
-   c = np.tensordot(a, b, 0)
-   print(a.shape) #(2,3,4)
-   print(b.shape) #(5,6,4,3)
-   print(c.shape) #(2,3,4,5,6,4,3)
+.. doctest:: tensordot
+
+   >>> c = np.tensordot(a, b, 0)
+   >>> a.shape
+   (2, 3, 4)
+   >>> b.shape
+   (5, 6, 4, 3)
+   >>> print(c.shape)
+   (2, 3, 4, 5, 6, 4, 3)

 :note: See the documentation of `numpy.tensordot <http://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html>`_ for more examples.
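The explicit loop in the tensordot hunk pins down exactly which axes are summed. A compact check of that correspondence, using the same shapes as the docs (a sketch, seeded for reproducibility):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.random_sample((2, 3, 4))
b = rng.random_sample((5, 6, 4, 3))

# Sum a's axes (1, 2) against b's axes (3, 2):
# c[i,j,k] = sum over l, m of a[i,l,m] * b[j,k,m,l]
c = np.tensordot(a, b, axes=[[1, 2], [3, 2]])

cloop = np.zeros((2, 5, 6))
for i in range(2):
    for j in range(5):
        for k in range(6):
            for l in range(3):
                for m in range(4):
                    cloop[i, j, k] += a[i, l, m] * b[j, k, m, l]

assert c.shape == (2, 5, 6)   # the dimensions that are not being summed
assert np.allclose(c, cloop)
```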
@@ -1527,6 +1574,7 @@ Linear Algebra
 over the first dimension using scan.
 Returns a tensor of size e.g. if it is 3D: (dim1, dim3, dim4)

 Example:

 >>> first = T.tensor3('first')
 >>> second = T.tensor3('second')
 >>> result = batched_dot(first, second)
@@ -1629,28 +1677,33 @@ Gradient / Differentiation
 another subgraph_grad as `start` with any other `cost` (e.g. weight decay).

 In an MLP, we could use subgraph_grad to iteratively backpropagate:

->>> x, t = theano.tensor.fvector('x'), theano.tensor.fvector('t')
->>> w1 = theano.shared(np.random.randn(3,4))
->>> w2 = theano.shared(np.random.randn(4,2))
->>> a1 = theano.tensor.tanh(theano.tensor.dot(x,w1))
->>> a2 = theano.tensor.tanh(theano.tensor.dot(a1,w2))
->>> cost2 = theano.tensor.sqr(a2 - t).sum()
->>> cost2 += theano.tensor.sqr(w2.sum())
->>> cost1 = theano.tensor.sqr(w1.sum())
->>> params = [[w2],[w1]]
->>> costs = [cost2,cost1]
->>> grad_ends = [[a1], [x]]
->>> next_grad = None
->>> param_grads = []
->>> for i in xrange(2):
->>>     param_grad, next_grad = theano.subgraph_grad(
->>>         wrt=params[i], end=grad_ends[i],
->>>         start=next_grad, cost=costs[i]
->>>     )
->>>     next_grad = dict(zip(grad_ends[i], next_grad))
->>>     param_grads.extend(param_grad)
+.. testcode:: subgraph_grad
+
+   import theano
+   import numpy as np
+   x, t = theano.tensor.fvector('x'), theano.tensor.fvector('t')
+   w1 = theano.shared(np.random.randn(3,4))
+   w2 = theano.shared(np.random.randn(4,2))
+   a1 = theano.tensor.tanh(theano.tensor.dot(x,w1))
+   a2 = theano.tensor.tanh(theano.tensor.dot(a1,w2))
+   cost2 = theano.tensor.sqr(a2 - t).sum()
+   cost2 += theano.tensor.sqr(w2.sum())
+   cost1 = theano.tensor.sqr(w1.sum())
+
+   params = [[w2],[w1]]
+   costs = [cost2,cost1]
+   grad_ends = [[a1], [x]]
+
+   next_grad = None
+   param_grads = []
+   for i in xrange(2):
+       param_grad, next_grad = theano.subgraph_grad(
+           wrt=params[i], end=grad_ends[i],
+           start=next_grad, cost=costs[i]
+       )
+       next_grad = dict(zip(grad_ends[i], next_grad))
+       param_grads.extend(param_grad)

 :type wrt: list of variables
 :param wrt:
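The idea behind `subgraph_grad` — stop the chain rule at a boundary variable, then restart from the gradient collected there — can be illustrated without Theano using scalars (everything here is a hand-written sketch, not Theano API):

```python
# Staged backprop: differentiate the second "subgraph" down to the
# boundary variable a1, then push that gradient through the first.
x, w1, w2 = 3.0, 0.5, 2.0
a1 = w1 * x                # first subgraph (ends at a1)
cost = (w2 * a1) ** 2      # second subgraph (ends at cost)

# Stage 1: grads of cost wrt w2 and wrt the boundary variable a1
d_w2 = 2 * (w2 * a1) * a1
d_a1 = 2 * (w2 * a1) * w2

# Stage 2: restart from d_a1 (the `start` argument's role) to reach w1
d_w1 = d_a1 * x

# Check against differentiating cost = (w2*w1*x)**2 in one shot
assert abs(d_w1 - 2 * (w2 * w1 * x) * w2 * x) < 1e-12
```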
doc/scripts/docgen.py

@@ -65,7 +65,7 @@ if __name__ == '__main__':
-    options.update(dict([x, y or True] for x, y in getopt.getopt(
-        sys.argv[1:], 'o:', ['epydoc', 'rst', 'help', 'nopdf', 'cache'])[0]))
+    options.update(dict([x, y or True] for x, y in getopt.getopt(
+        sys.argv[1:], 'o:', ['epydoc', 'rst', 'help', 'nopdf', 'cache', 'test'])[0]))

     if options['--help']:
         print 'Usage: %s [OPTIONS]' % sys.argv[0]
         print ' -o <dir>: output the html files in the specified dir'
@@ -74,10 +74,11 @@ if __name__ == '__main__':
         print ' --nopdf: do not produce a PDF file from the doc, only HTML'
         print ' --epydoc: only compile the api documentation',
         print '(requires epydoc)'
+        print ' --test: run all the code samples in the documentaton'
         print ' --help: this help'
         sys.exit(0)

-    if not (options['--epydoc'] or options['--rst']):
+    if not (options['--epydoc'] or options['--rst'] or options['--test']):
         # Default is now rst
         options['--rst'] = True
@@ -113,9 +114,6 @@ if __name__ == '__main__':
     # Generate PDF doc
     # TODO

-    if options['--all'] or options['--rst']:
-        mkdir("doc")
-        sys.path[0:0] = [os.path.join(throot, 'doc')]

     def call_sphinx(builder, workdir, extraopts=None):
         import sphinx
         if extraopts is None:
@@ -124,6 +122,10 @@ if __name__ == '__main__':
             extraopts.append('-E')
         sphinx.main(['', '-b', builder] + extraopts +
                     [os.path.join(throot, 'doc'), workdir])

+    if options['--all'] or options['--rst']:
+        mkdir("doc")
+        sys.path[0:0] = [os.path.join(throot, 'doc')]
         call_sphinx('html', '.')
         if not options['--nopdf']:
@@ -142,3 +144,8 @@ if __name__ == '__main__':
             print 'OSError:', e
         except IOError, e:
             print 'IOError:', e
+
+    if options['--test']:
+        mkdir("doc")
+        sys.path[0:0] = [os.path.join(throot, 'doc')]
+        call_sphinx('doctest', '.')
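The dense `options.update(dict([x, y or True] ...))` line that this hunk extends is a getopt idiom: valueless long options map to `True`, while `'o:'` keeps its argument. A stdlib-only sketch with a made-up argv:

```python
import getopt

# Same parsing idiom as docgen.py: flags with no value become True,
# the 'o:' short option retains its argument string.
argv = ['--test', '-o', 'html_out']
opts = dict([x, y or True] for x, y in getopt.getopt(
    argv, 'o:', ['epydoc', 'rst', 'help', 'nopdf', 'cache', 'test'])[0])

assert opts == {'--test': True, '-o': 'html_out'}
```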
theano/tensor/subtensor.py

@@ -967,6 +967,7 @@ def set_subtensor(x, y, inplace=False,
    Example: To replicate the numpy expression "r[10:] = 5", type

+    >>> r = ivector()
     >>> new_r = set_subtensor(r[10:], 5)

     :param x: symbolic variable for the lvalue of = operation
@@ -991,6 +992,7 @@ def inc_subtensor(x, y, inplace=False, set_instead_of_inc=False,
    Example: To replicate the numpy expression "r[10:] += 5", type

+    >>> r = ivector()
     >>> new_r = inc_subtensor(r[10:], 5)
     """
     # First of all, y cannot have a higher dimension than x,
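The added `>>> r = ivector()` lines make both docstring examples self-contained for the new doctest run. For intuition: `set_subtensor` is functional, returning a new variable rather than mutating `r`. A NumPy analogue (the helper name is invented here, not Theano API):

```python
import numpy as np

def set_subtensor_np(x, index, y):
    # Functional update: copy, assign, return; the input stays untouched,
    # mirroring set_subtensor's semantics.
    out = x.copy()
    out[index] = y
    return out

r = np.zeros(15, dtype='int32')
new_r = set_subtensor_np(r, np.s_[10:], 5)

assert r.sum() == 0                      # original unchanged
assert new_r[10:].tolist() == [5] * 5    # replicates r[10:] = 5
```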
theano/tests/test_tutorial.py

@@ -912,7 +912,7 @@ class T_loading_and_saving(unittest.TestCase):
 class T_modes(unittest.TestCase):
-    # All tests here belog to
+    # All tests here belong to
     # http://deeplearning.net/software/theano/tutorial/modes.html
     # Theano/doc/tutorial/modes.txt
     # Any change you do here also add it to the tutorial !