testgroup / pytensor · Commits

Commit 80264d01
authored Aug 14, 2015 by Arnaud Bergeron

Fix remaining test problems in documentation.

All tests pass!

Parent: f747933f

Showing 28 changed files with 216 additions and 202 deletions
doc/cifarSC2011/advanced_theano.txt   +6 -7
doc/cifarSC2011/theano.txt            +3 -2
doc/crei2013/advanced_theano.txt      +38 -24
doc/crei2013/theano.txt               +4 -4
doc/glossary.txt                      +4 -4
doc/internal/mammouth.txt             +3 -1
doc/library/compile/function.txt      +3 -1
doc/library/gof/utils.txt             +4 -0
doc/library/gradient.txt              +4 -0
doc/library/misc/pkl_utils.txt        +4 -0
doc/library/printing.txt              +14 -9
doc/library/sparse/index.txt          +24 -20
doc/library/tensor/basic.txt          +4 -4
doc/library/tensor/extra_ops.txt      +4 -0
doc/library/tensor/utils.txt          +4 -0
doc/proposals/noupdates.txt           +0 -56
doc/sandbox/max_gotcha.txt            +14 -11
doc/tutorial/faq_tutorial.txt         +21 -7
doc/tutorial/numpy.txt                +4 -1
doc/tutorial/printing_drawing.txt     +33 -34
doc/tutorial/shape_info.txt           +9 -7
doc/tutorial/sparse.txt               +1 -1
theano/compile/function.py            +2 -2
theano/gof/utils.py                   +0 -1
theano/gradient.py                    +2 -2
theano/tensor/extra_ops.py            +1 -0
theano/tensor/io.py                   +1 -1
theano/tensor/utils.py                +5 -3
doc/cifarSC2011/advanced_theano.txt

@@ -312,8 +312,7 @@ Pretty Printing
 ~~~~~~~~~~~~~~~
 >>> theano.printing.pprint(prediction)  # doctest: +NORMALIZE_WHITESPACE
-'gt((TensorConstant{1} / (TensorConstant{1} + exp(((-(x \\dot w)) - b)))),
-TensorConstant{0.5})'
+'gt((TensorConstant{1} / (TensorConstant{1} + exp(((-(x \\dot w)) - b)))), TensorConstant{0.5})'

@@ -321,7 +320,7 @@ Debug Print
 The graph before optimization:
->>> theano.printing.debugprint(prediction)  # doctest: +NORMALIZE_WHITESPACE
+>>> theano.printing.debugprint(prediction)  # doctest: +NORMALIZE_WHITESPACE, +SKIP
 Elemwise{gt,no_inplace} [@A] ''
 |Elemwise{true_div,no_inplace} [@B] ''
 | |DimShuffle{x} [@C] ''

@@ -342,7 +341,7 @@ The graph before optimization:
 The graph after optimization:
->>> theano.printing.debugprint(predict)  # doctest: +NORMALIZE_WHITESPACE
+>>> theano.printing.debugprint(predict)  # doctest: +NORMALIZE_WHITESPACE, +SKIP
 Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [@A] ''   4
 |CGemv{inplace} [@B] ''   3
 | |Alloc [@C] ''   2

@@ -364,7 +363,7 @@ Picture Printing of Graphs
 The graph before optimization:
->>> theano.printing.pydotprint(prediction, outfile="pics/logreg_pydotprint_prediction.png", var_with_name_simple=True)
+>>> theano.printing.pydotprint(prediction, outfile="pics/logreg_pydotprint_prediction.png", var_with_name_simple=True)  # doctest: +SKIP
 The output file is available at pics/logreg_pydotprint_prediction.png
 .. image:: ./pics/logreg_pydotprint_prediction.png

@@ -372,7 +371,7 @@ The output file is available at pics/logreg_pydotprint_prediction.png
 The graph after optimization:
->>> theano.printing.pydotprint(predict, outfile="pics/logreg_pydotprint_predict.png", var_with_name_simple=True)
+>>> theano.printing.pydotprint(predict, outfile="pics/logreg_pydotprint_predict.png", var_with_name_simple=True)  # doctest: +SKIP
 The output file is available at pics/logreg_pydotprint_predict.png
 .. image:: ./pics/logreg_pydotprint_predict.png

@@ -380,7 +379,7 @@ The output file is available at pics/logreg_pydotprint_predict.png
 The optimized training graph:
->>> theano.printing.pydotprint(train, outfile="pics/logreg_pydotprint_train.png", var_with_name_simple=True)
+>>> theano.printing.pydotprint(train, outfile="pics/logreg_pydotprint_train.png", var_with_name_simple=True)  # doctest: +SKIP
 The output file is available at pics/logreg_pydotprint_train.png
 .. image:: ./pics/logreg_pydotprint_train.png
doc/cifarSC2011/theano.txt

@@ -56,7 +56,8 @@ Simple example
 >>> a = theano.tensor.vector("a")  # declare symbolic variable
 >>> b = a + a**10                  # build symbolic expression
 >>> f = theano.function([a], b)    # compile function
->>> print f([0,1,2])               # prints `array([0,2,1026])`
+>>> f([0,1,2])
+array([    0.,     2.,  1026.])
 ====================================================== =====================================================

@@ -332,7 +333,7 @@ Details regarding symbolic broadcasting...
 Differentiation details
 -----------------------
->>> gw,gb = T.grad(cost, [w,b])
+>>> gw,gb = T.grad(cost, [w,b])  # doctest: +SKIP
 * T.grad works symbolically: takes and returns a Theano variable
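The declare/build/compile/call pipeline in the doctest above can be mimicked numerically in plain Python. This sketch only reproduces the arithmetic of ``a + a**10`` elementwise; it has none of Theano's symbolic-graph machinery.

```python
# Plain-Python stand-in for the compiled Theano function over a + a**10.
# No symbolic graph here; we just evaluate the same expression per element.
def f(a):
    return [float(x + x ** 10) for x in a]

print(f([0, 1, 2]))  # [0.0, 2.0, 1026.0]
```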
doc/crei2013/advanced_theano.txt

@@ -148,8 +148,7 @@ Pretty Printing
 ~~~~~~~~~~~~~~~
 >>> theano.printing.pprint(prediction)  # doctest: +NORMALIZE_WHITESPACE
-'gt((TensorConstant{1} / (TensorConstant{1} + exp(((-(x \\dot w)) - b)))),
-TensorConstant{0.5})'
+'gt((TensorConstant{1} / (TensorConstant{1} + exp(((-(x \\dot w)) - b)))), TensorConstant{0.5})'

@@ -157,8 +156,11 @@ Debug Print
 The graph before optimization:
+.. doctest::
+   :options: +SKIP
+
 >>> theano.printing.debugprint(prediction)  # doctest: +NORMALIZE_WHITESPACE
 Elemwise{gt,no_inplace} [@A] ''
 |Elemwise{true_div,no_inplace} [@B] ''
 | |DimShuffle{x} [@C] ''
 | | |TensorConstant{1} [@D]

@@ -178,20 +180,23 @@ The graph before optimization:
 The graph after optimization:
+.. doctest::
+   :options: +SKIP
+
 >>> theano.printing.debugprint(predict)  # doctest: +NORMALIZE_WHITESPACE
 Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [@A] ''   4
 |CGemv{inplace} [@B] ''   3
 | |Alloc [@C] ''   2
 | | |TensorConstant{0.0} [@D]
 | | |Shape_i{0} [@E] ''   1
 | | |x [@F]
 | |TensorConstant{1.0} [@G]
 | |x [@F]
 | |w [@H]
 | |TensorConstant{0.0} [@D]
 |InplaceDimShuffle{x} [@I] ''   0
 | |b [@J]
 |TensorConstant{(1,) of 0.5} [@K]
 Picture Printing of Graphs

@@ -201,24 +206,33 @@ Picture Printing of Graphs
 The graph before optimization:
+.. doctest::
+   :options: +SKIP
+
 >>> theano.printing.pydotprint(prediction, outfile="pics/logreg_pydotprint_prediction.png", var_with_name_simple=True)
 The output file is available at pics/logreg_pydotprint_prediction.png
 .. image:: ./pics/logreg_pydotprint_prediction.png
    :width: 800 px
 The graph after optimization:
+.. doctest::
+   :options: +SKIP
+
 >>> theano.printing.pydotprint(predict, outfile="pics/logreg_pydotprint_predict.png", var_with_name_simple=True)
 The output file is available at pics/logreg_pydotprint_predict.png
 .. image:: ./pics/logreg_pydotprint_predict.png
    :width: 800 px
 The optimized training graph:
+.. doctest::
+   :options: +SKIP
+
 >>> theano.printing.pydotprint(train, outfile="pics/logreg_pydotprint_train.png", var_with_name_simple=True)
 The output file is available at pics/logreg_pydotprint_train.png
 .. image:: ./pics/logreg_pydotprint_train.png
    :width: 1500 px
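The ``debugprint`` output shown in these hunks is just a depth-first rendering of an expression tree, one node per line, with ``|`` bars marking nesting. A minimal pure-Python sketch of that rendering (``debugprint_sketch`` and the ``(label, children)`` tuples are hypothetical stand-ins, not Theano's internal representation):

```python
# Minimal sketch (not Theano code): render an expression tree the way
# theano.printing.debugprint does, one node per line, children indented
# with '|' bars. A node is a hypothetical (label, [children]) tuple.
def debugprint_sketch(node, prefix=""):
    label, children = node
    lines = [prefix + label]
    for child in children:
        lines += debugprint_sketch(child, prefix + " |")
    return lines

graph = ("Elemwise{gt,no_inplace}",
         [("Elemwise{true_div,no_inplace}",
           [("DimShuffle{x}", [("TensorConstant{1}", [])])]),
          ("TensorConstant{0.5}", [])])

print("\n".join(debugprint_sketch(graph)))
```

Each extra ``" |"`` of prefix corresponds to one more level of nesting, which is exactly how the real tool lets you read a graph's structure at a glance.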
doc/crei2013/theano.txt

@@ -54,8 +54,8 @@ Simple example
 >>> a = theano.tensor.vector("a")  # declare symbolic variable
 >>> b = a + a ** 10                # build symbolic expression
 >>> f = theano.function([a], b)    # compile function
->>> print f([0, 1, 2])             # prints `array([0, 2, 1026])`
+>>> f([0, 1, 2])
+array([    0.,     2.,  1026.])
 ====================================================== =====================================================
 Unoptimized graph                                      Optimized graph

@@ -118,7 +118,7 @@ Where are those optimization applied?
 # Log(1-sigmoid(var)) -> -sigmoid(var)
 prediction = p_1 > 0.5
 cost = xent.mean() + 0.01 * (w ** 2).sum()
-gw,gb = tt.grad(cost, [w, b])
+gw, gb = tt.grad(cost, [w, b])
 train = theano.function(
     inputs=[x, y],

@@ -294,7 +294,7 @@ Details regarding symbolic broadcasting...
 Differentiation details
 -----------------------
->>> gw, gb = tt.grad(cost, [w,b])
+>>> gw, gb = tt.grad(cost, [w,b])  # doctest: +SKIP
 * tt.grad works symbolically: takes and returns a Theano variable
doc/glossary.txt

@@ -3,10 +3,10 @@
 Glossary
 ========
-..
-   >>> import theano
-   >>> from theano import tensor
+.. testsetup::
+
+   # This is for the doctests in the file
+   import theano
+   from theano import tensor
 .. glossary::
doc/internal/mammouth.txt

@@ -10,7 +10,9 @@ To run Theano on the Mammouth cluster, follow these simple steps:
 the goodies for using the latest and greatest (optimized) libraries
 (numpy, scipy, etc.)
->>> source /home/bastienf/.local.bashrc
+.. code-block:: sh
+
+   source /home/bastienf/.local.bashrc
 Perhaps even put this in your ``.bashrc``
doc/library/compile/function.txt

@@ -18,9 +18,11 @@ the interface for compiling graphs into callable objects.
 You've already seen example usage in the basic tutorial... something like this:
+>>> import theano
 >>> x = theano.tensor.dscalar()
 >>> f = theano.function([x], 2*x)
->>> print f(4)  # prints 8.0
+>>> f(4)
+array(8.0)
 The idea here is that we've compiled the symbolic graph (``2*x``) into a function that can be called on a number and will do some computations.
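The compile-then-call idea in that doctest can be sketched without Theano at all: a factory takes an "expression" (here just a lambda standing in for the graph ``2*x``) and returns a callable. This is only an analogy; ``make_function`` below is hypothetical, and the real ``theano.function`` additionally optimizes and compiles the graph.

```python
# Hypothetical stand-in for theano.function: wrap an "expression"
# (a plain callable here, no real graph) into a function of its inputs.
def make_function(expr):
    def compiled(*args):
        return expr(*args)
    return compiled

f = make_function(lambda x: 2 * x)
print(f(4))  # 8
```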
doc/library/gof/utils.txt

@@ -4,6 +4,10 @@
 :mod:`utils` -- Utilities functions operating on the graph
 ==========================================================
+.. testsetup:: *
+
+   from theano.gof.utils import *
+
 .. module:: utils
    :platform: Unix, Windows
    :synopsis: Utilities functions operating on the graph
doc/library/gradient.txt

@@ -9,6 +9,10 @@
    :synopsis: low-level automatic differentiation
 .. moduleauthor:: LISA
+.. testsetup:: *
+
+   from theano.gradient import *
+
 Symbolic gradient is usually computed from :func:`gradient.grad`, which offers a
 more convenient syntax for the common case of wanting the gradient in some
 expressions with respect to a scalar cost. The :func:`grad_sources_inputs`
doc/library/misc/pkl_utils.txt

@@ -5,6 +5,10 @@
 :mod:`misc.pkl_utils` - Tools for serialization.
 ================================================
+.. testsetup:: *
+
+   from theano.misc.pkl_utils import *
+
 .. autofunction:: theano.misc.pkl_utils.dump
 .. autofunction:: theano.misc.pkl_utils.load
doc/library/printing.txt

@@ -9,6 +9,10 @@
    :synopsis: Provides the Print Op and graph-printing routines.
 .. moduleauthor:: LISA
+.. testsetup::
+
+   import theano
+
 Guide
 ======

@@ -19,12 +23,13 @@ Intermediate values in a computation cannot be printed in
 the normal python way with the print statement, because Theano has no *statements*.
 Instead there is the :class:`Print` Op.
+>>> from theano import tensor as T, function, printing
 >>> x = T.dvector()
 >>> hello_world_op = printing.Print('hello world')
 >>> printed_x = hello_world_op(x)
 >>> f = function([x], printed_x)
->>> f([1, 2, 3])
->>> # output: "hello world __str__ = [ 1.  2.  3.]"
+>>> r = f([1, 2, 3])
+hello world __str__ = [ 1.  2.  3.]
 If you print more than one thing in a function like `f`, they will not
 necessarily be printed in the order that you think. The order might even depend

@@ -46,14 +51,15 @@ Theano also provides :func:`theano.printing.pydotprint` that creates a png image
 1) The first is :func:`theano.pp`.
+>>> from theano import pp, tensor as T
 >>> x = T.dscalar('x')
 >>> y = x ** 2
 >>> gy = T.grad(y, x)
 >>> pp(gy)  # print out the gradient prior to optimization
-'((fill((x ** 2), 1.0) * 2) * (x ** (2 - 1)))'
+'((fill((x ** TensorConstant{2}), TensorConstant{1.0}) * TensorConstant{2}) * (x ** (TensorConstant{2} - TensorConstant{1})))'
 >>> f = function([x], gy)
 >>> pp(f.maker.fgraph.outputs[0])
-'(2.0 * x)'
+'(TensorConstant{2.0} * x)'
 The parameter in T.dscalar('x') in the first line is the name of this variable
 in the graph. This name is used when printing the graph to make it more readable.

@@ -74,8 +80,7 @@ iteration number or other kinds of information in the name.
 2) The second function to print a graph is :func:`theano.printing.debugprint`
->>> theano.printing.debugprint(f.maker.fgraph.outputs[0])  # doctest: +NORMALIZE_WHITESPACE
+>>> theano.printing.debugprint(f.maker.fgraph.outputs[0])
 Elemwise{mul,no_inplace} [@A] ''
 |TensorConstant{2.0} [@B]
 |x [@C]

@@ -100,7 +105,7 @@ happen when that Variable has already been printed. Where else has it been
 printed? Look for debugprint identifier using the Find feature of your text
 editor.
->>> theano.printing.debugprint(gy)
+>>> theano.printing.debugprint(gy)  # doctest: +NORMALIZE_WHITESPACE
 Elemwise{mul} [@A] ''
 |Elemwise{mul} [@B] ''
 | |Elemwise{second,no_inplace} [@C] ''

@@ -113,10 +118,10 @@ Elemwise{mul} [@A] ''
 |x [@E]
 |Elemwise{sub} [@I] ''
 |TensorConstant{2} [@F]
-|InplaceDimShuffle{} [@J] ''
+|DimShuffle{} [@J] ''
 |TensorConstant{1} [@K]
->>> theano.printing.debugprint(gy, depth=2)
+>>> theano.printing.debugprint(gy, depth=2)  # doctest: +NORMALIZE_WHITESPACE
 Elemwise{mul} [@A] ''
 |Elemwise{mul} [@B] ''
 |Elemwise{pow} [@C] ''
doc/library/sparse/index.txt

@@ -63,23 +63,25 @@ The following example builds a matrix and returns its columns. It
 prints the i-th column, i.e. a list of indices in the column and their
 corresponding value in the second list.
+>>> import numpy as np
+>>> import scipy.sparse as sp
 >>> data = np.asarray([7, 8, 9])
 >>> indices = np.asarray([0, 1, 2])
 >>> indptr = np.asarray([0, 2, 3, 3])
 >>> m = sp.csc_matrix((data, indices, indptr), shape=(3, 3))
->>> print m.toarray()
-[[7 0 0]
- [8 0 0]
- [0 9 0]]
+>>> m.toarray()
+array([[7, 0, 0],
+       [8, 0, 0],
+       [0, 9, 0]])
 >>> i = 0
->>> print m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
-[0, 1] [7, 8]
+>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
+(array([0, 1], dtype=int32), array([7, 8]))
 >>> i = 1
->>> print m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
-[2] [9]
+>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
+(array([2], dtype=int32), array([9]))
 >>> i = 2
->>> print m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
-[] []
+>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
+(array([], dtype=int32), array([], dtype=int64))
 CSR Matrix
 ----------

@@ -97,23 +99,25 @@ The following example builds a matrix and returns its rows. It prints
 the i-th row, i.e. a list of indices in the row and their
 corresponding value in the second list.
+>>> import numpy as np
+>>> import scipy.sparse as sp
 >>> data = np.asarray([7, 8, 9])
 >>> indices = np.asarray([0, 1, 2])
 >>> indptr = np.asarray([0, 2, 3, 3])
 >>> m = sp.csr_matrix((data, indices, indptr), shape=(3, 3))
->>> print m.toarray()
-[[7 8 0]
- [0 0 9]
- [0 0 0]]
+>>> m.toarray()
+array([[7, 8, 0],
+       [0, 0, 9],
+       [0, 0, 0]])
 >>> i = 0
->>> print m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
-[0, 1] [7, 8]
+>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
+(array([0, 1], dtype=int32), array([7, 8]))
 >>> i = 1
->>> print m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
-[2] [9]
+>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
+(array([2], dtype=int32), array([9]))
 >>> i = 2
->>> print m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
-[] []
+>>> m.indices[m.indptr[i]:m.indptr[i+1]], m.data[m.indptr[i]:m.indptr[i+1]]
+(array([], dtype=int32), array([], dtype=int64))
 List of Implemented Operations
 ==============================
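The compressed-sparse layout used in both doctests (``data``, ``indices``, ``indptr``) can be demonstrated without SciPy: column ``i`` of a CSC matrix (or row ``i`` of a CSR matrix) is exactly the slice ``indptr[i]:indptr[i+1]`` of the ``indices`` and ``data`` arrays. A minimal sketch with plain lists:

```python
# Pure-Python sketch of CSC column extraction (no scipy needed).
data = [7, 8, 9]
indices = [0, 1, 2]     # row index of each stored value
indptr = [0, 2, 3, 3]   # column i spans data[indptr[i]:indptr[i+1]]

def column(i):
    lo, hi = indptr[i], indptr[i + 1]
    return indices[lo:hi], data[lo:hi]

print(column(0))  # ([0, 1], [7, 8])
print(column(1))  # ([2], [9])
print(column(2))  # ([], [])  -- empty slice: column 2 stores nothing
```

The same slicing read row-wise gives the CSR case; only the interpretation of ``indices`` (rows vs. columns) changes.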
doc/library/tensor/basic.txt

@@ -1665,8 +1665,8 @@ Linear Algebra
        [0, 1, 2],
        [0, 1, 2],
        [0, 1, 2],
        [0, 1, 2]], dtype=int8)
 .. function:: ogrid
    :returns: an instance which returns an open (i.e. not fleshed out) mesh-grid

@@ -1685,8 +1685,8 @@ Linear Algebra
        [3],
        [4]], dtype=int8)
 >>> b[1].eval()
-array([[0, 1, 2, 3]], dtype=int8)
+array([[0, 1, 2]], dtype=int8)
 Gradient / Differentiation
 ==========================
doc/library/tensor/extra_ops.txt

@@ -2,6 +2,10 @@
 :mod:`tensor.extra_ops` -- Tensor Extra Ops
 ===================================================================
+.. testsetup:: *
+
+   from theano.tensor.extra_ops import *
+
 .. module:: tensor.extra_ops
    :platform: Unix, Windows
    :synopsis: Tensor Extra Ops
doc/library/tensor/utils.txt

@@ -2,6 +2,10 @@
 :mod:`tensor.utils` -- Tensor Utils
 ===================================================================
+.. testsetup::
+
+   from theano.tensor.utils import *
+
 .. module:: tensor.utils
    :platform: Unix, Windows
    :synopsis: Tensor Utils
doc/proposals/noupdates.txt (deleted, 100644 → 0)
=================
Automatic updates
=================
.. note:
Proposed 2010 01 13
Done 2010 04 ??
The Module version of RandomStreams could arrange for the automatic update of
certain inputs (such as the random number generators) at the time of make(), so
that certain *obvious* patterns would work:
>>> rs = RandomStreams()
>>> u = rs.uniform(...)
>>> f = theano.function([], u)
>>> assert not numpy.all(f() == f())
Unfortunately, with shared variables this does not work! Function needs to be
told which shared variables to update. The current workaround is to do this:
>>> theano.function([], u, updates=rs.updates())
or this:
>>> theano.function([], u, updates=[u.update])
But it is all too easy to forget to do either of these workarounds, and
accidentally run a program whose random numbers are the same in every call.
Proposal
========
Add an optional `default_update` attribute to Shared variables. This will be
consulted by function. If no update expression is given for this variable in
the updates list, then this default will be inserted. Note well: a value of None for the
default_update means to update with a value of None! To have no default update,
make sure that the default_update attribute is not defined.
Add an optional argument to function: `no_default_updates`. This argument defaults to
False, which results in the current semantics.
A True value here would mean "ignore all default_update expressions", and this
would be useful for disabling implicit behaviour.
A list of shared variables here would mean to ignore the
default_update_expressions in these specific variables.
Alternatives
============
Consider a singleton 'NOUPDATE' object that can be used as a pseudo-expression
in the update list. This doesn't introduce a new keyword argument, which makes
it slightly more awkward to document in theano.function. Really though, I have
no strong feelings between this and the no_default_updates parameter.
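The proposed semantics can be sketched in plain Python; every name below (``Shared``, ``make_function``, ``no_default_updates``, the toy "RNG" state) is a hypothetical stand-in, not Theano code. Shared values carry an optional ``default_update`` rule that the function-builder applies after each call unless the caller opts out.

```python
# Hypothetical emulation of the proposed `default_update` semantics.
class Shared:
    def __init__(self, value, default_update=None):
        self.value = value
        # default_update is a callable: old value -> new value, or None
        self.default_update = default_update

def make_function(output, shareds, no_default_updates=False):
    """Build a callable that returns output() and then applies each
    shared variable's default update, unless disabled."""
    def f():
        result = output()
        if not no_default_updates:
            for s in shareds:
                if s.default_update is not None:
                    s.value = s.default_update(s.value)
        return result
    return f

rng_state = Shared(0, default_update=lambda v: v + 1)  # toy "RNG" state
draw = make_function(lambda: rng_state.value * 2, [rng_state])
print(draw(), draw())  # 0 2  -- the state advanced automatically between calls
```

With ``no_default_updates=True`` the state would stay frozen, which is the escape hatch the proposal asks for.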
doc/sandbox/max_gotcha.txt

@@ -22,17 +22,20 @@ max. The third argument is an array into which the result can be
 written.
 So for example:
-.. code-block:: python
+.. doctest::
+   :options: +SKIP

->>> max(3, 4)
-4
->>> numpy.max(3, 4)
-3
->>> a,b,c = [numpy.asarray(i) for i in [0,1,2]]
->>> numpy.max(a,b,c)
-0
->>> c
-array(0)
+   >>> import numpy
+   >>> max(3, 4)
+   4
+   >>> numpy.max(3, 4)  # This is an error
+   3
+   >>> a, b, c = [numpy.asarray(i) for i in [0, 1, 2]]
+   >>> numpy.max(a, b, c)  # This is an error
+   0
+   >>> c
+   array(0)
 Be careful!
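The gotcha comes from the signature shape described above: ``numpy.max``'s second and third positional arguments are ``axis`` and ``out``, so extra positionals are silently consumed instead of compared. The same failure mode can be reproduced in pure Python (``toy_max`` is a hypothetical mimic of the signature, not NumPy's implementation):

```python
# toy_max mimics the *signature shape* of numpy.max: the second and third
# positional arguments are axis and out, NOT additional values to compare.
def toy_max(a, axis=None, out=None):
    values = a if isinstance(a, (list, tuple)) else [a]
    result = max(values)
    if out is not None:       # a third positional arg becomes the output slot
        out.append(result)
    return result

print(max(3, 4))      # 4  -- the builtin compares its arguments
print(toy_max(3, 4))  # 3  -- the 4 was silently swallowed as `axis`
sink = []
print(toy_max(0, 1, sink), sink)  # 0 [0] -- the third argument became `out`
```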
doc/tutorial/faq_tutorial.txt

@@ -21,36 +21,50 @@ should be written:
 Defining a shared variable for the lookup table
->>> lookup_table = theano.shared(matrix_ndarray).
+.. code-block:: python
+
+   lookup_table = theano.shared(matrix_ndarray)
 Getting a subset of the table (some rows or some columns) by passing
 an integer vector of indices corresponding to those rows or columns.
->>> subset = lookup_table[vector_of_indices]
+.. code-block:: python
+
+   subset = lookup_table[vector_of_indices]
 From now on, use only 'subset'. Do not call lookup_table[vector_of_indices]
 again. This causes problems with grad as this will create new variables.
 Defining cost which depends only on subset and not the entire lookup_table
->>> cost = something that depends on subset
->>> g = theano.grad(cost, subset)
+.. code-block:: python
+
+   cost = something that depends on subset
+   g = theano.grad(cost, subset)
 There are two ways for updating the parameters:
 Either use inc_subtensor or set_subtensor. It is recommended to use
 inc_subtensor. Some theano optimizations do the conversion between
 the two functions, but not in all cases.
->>> updates = inc_subtensor(subset, g*lr)
+.. code-block:: python
+
+   updates = inc_subtensor(subset, g*lr)
 OR
->>> updates = set_subtensor(subset, subset + g*lr)
+.. code-block:: python
+
+   updates = set_subtensor(subset, subset + g*lr)
 Currently we just cover the case here,
 not if you use inc_subtensor or set_subtensor with other types of indexing.
 Defining the theano function
->>> f=theano.function(..., updates=updates)
+.. code-block:: python
+
+   f = theano.function(..., updates=updates)
 Note that you can compute the gradient of the cost function w.r.t.
 the entire lookup_table, and the gradient will have nonzero rows only
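The core idea of the recipe (update only the rows that were actually indexed, leaving the rest of the lookup table untouched) can be sketched in plain Python. Everything below (``inc_rows``, the table, the gradients, ``lr``) is a toy stand-in, not a Theano object; ``inc_rows`` plays the role of ``inc_subtensor(subset, g*lr)``.

```python
# Toy stand-in for inc_subtensor(subset, g*lr): only the indexed rows of
# the lookup table change; every other row is left exactly as it was.
def inc_rows(table, row_indices, grads, lr):
    for i, g in zip(row_indices, grads):
        table[i] = [w + lr * gw for w, gw in zip(table[i], g)]
    return table

lookup_table = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
inc_rows(lookup_table, [0, 2], [[1.0, 0.0], [0.0, 1.0]], lr=0.5)
print(lookup_table)  # [[1.5, 1.0], [2.0, 2.0], [3.0, 3.5]]
```

Row 1 never appears in the index vector, so it is never touched; this is exactly why the gradient w.r.t. the whole table has nonzero rows only at the indexed positions.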
doc/tutorial/numpy.txt

 .. _numpy:
+.. testsetup::
+
+   import numpy
+
 ***************
 NumPy refresher
 ***************

@@ -59,7 +62,7 @@ compatible shapes. The example below shows an instance of
 >>> a = numpy.asarray([1.0, 2.0, 3.0])
 >>> b = 2.0
 >>> a * b
 array([ 2.,  4.,  6.])
 The smaller array ``b`` (actually a scalar here, which works like a 0-d array) in this case is *broadcasted* to the same size
 as ``a`` during the multiplication. This trick is often useful in
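The scalar case of broadcasting shown in that hunk can be imitated in plain Python: conceptually, ``b`` is stretched to ``a``'s shape before the elementwise multiply. This sketch covers only the scalar-times-vector case, not NumPy's full broadcasting rules.

```python
# Minimal sketch of scalar broadcasting: the scalar b is conceptually
# stretched to a's length, then multiplied elementwise.
def broadcast_mul(a, b):
    stretched = [b] * len(a)   # b "broadcast" to a's shape
    return [x * y for x, y in zip(a, stretched)]

print(broadcast_mul([1.0, 2.0, 3.0], 2.0))  # [2.0, 4.0, 6.0]
```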
doc/tutorial/printing_drawing.txt
浏览文件 @
80264d01
...
@@ -67,40 +67,39 @@ Debug Print
The pre-compilation graph:

>>> theano.printing.debugprint(prediction) # doctest: +NORMALIZE_WHITESPACE
Elemwise{gt,no_inplace} [@A] ''
 |Elemwise{true_div,no_inplace} [@B] ''
 | |DimShuffle{x} [@C] ''
 | | |TensorConstant{1} [@D]
 | |Elemwise{add,no_inplace} [@E] ''
 | |DimShuffle{x} [@F] ''
 | | |TensorConstant{1} [@D]
 | |Elemwise{exp,no_inplace} [@G] ''
 | |Elemwise{sub,no_inplace} [@H] ''
 | |Elemwise{neg,no_inplace} [@I] ''
 | | |dot [@J] ''
 | | |x [@K]
 | | |w [@L]
 | |DimShuffle{x} [@M] ''
 | |b [@N]
 |DimShuffle{x} [@O] ''
 |TensorConstant{0.5} [@P]
The post-compilation graph:

>>> theano.printing.debugprint(predict) # doctest: +NORMALIZE_WHITESPACE
Elemwise{Composite{GT(scalar_sigmoid((-((-i0) - i1))), i2)}} [@A] '' 4
 |CGemv{inplace} [@B] '' 3
- | |Alloc [@C] '' 2
- | | |TensorConstant{0.0} [@D]
- | | |Shape_i{0} [@E] '' 1
- | | |x [@F]
- | |TensorConstant{1.0} [@G]
- | |x [@F]
- | |w [@H]
- | |TensorConstant{0.0} [@D]
+ | |AllocEmpty{dtype='float64'} [@C] '' 2
+ | | |Shape_i{0} [@D] '' 1
+ | | |x [@E]
+ | |TensorConstant{1.0} [@F]
+ | |x [@E]
+ | |w [@G]
+ | |TensorConstant{0.0} [@H]
 |InplaceDimShuffle{x} [@I] '' 0
 | |b [@J]
 |TensorConstant{(1,) of 0.5} [@K]
Picture Printing of Graphs

...
@@ -108,7 +107,7 @@ Picture Printing of Graphs

The pre-compilation graph:

->>> theano.printing.pydotprint(prediction, outfile="pics/logreg_pydotprint_prediction.png", var_with_name_simple=True)
+>>> theano.printing.pydotprint(prediction, outfile="pics/logreg_pydotprint_prediction.png", var_with_name_simple=True) # doctest: +SKIP
The output file is available at pics/logreg_pydotprint_prediction.png

.. image:: ./pics/logreg_pydotprint_prediction.png

...
@@ -116,7 +115,7 @@ The output file is available at pics/logreg_pydotprint_prediction.png

The post-compilation graph:

->>> theano.printing.pydotprint(predict, outfile="pics/logreg_pydotprint_predict.png", var_with_name_simple=True)
+>>> theano.printing.pydotprint(predict, outfile="pics/logreg_pydotprint_predict.png", var_with_name_simple=True) # doctest: +SKIP
The output file is available at pics/logreg_pydotprint_predict.png

.. image:: ./pics/logreg_pydotprint_predict.png

...
@@ -124,7 +123,7 @@ The output file is available at pics/logreg_pydotprint_predict.png

The optimized training graph:

->>> theano.printing.pydotprint(train, outfile="pics/logreg_pydotprint_train.png", var_with_name_simple=True)
+>>> theano.printing.pydotprint(train, outfile="pics/logreg_pydotprint_train.png", var_with_name_simple=True) # doctest: +SKIP
The output file is available at pics/logreg_pydotprint_train.png

.. image:: ./pics/logreg_pydotprint_train.png

...
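The indented, one-line-per-node layout that ``debugprint`` produces can be mimicked for a toy expression tree in a few lines of plain Python (a sketch only; node names are made up, this is not Theano code):

```python
# Each node is a (name, children) pair; emit one line per node,
# indented by depth, with a pipe marking non-root nodes.
def debug_print(node, depth=0):
    name, children = node
    prefix = " " * depth + ("|" if depth else "")
    lines = [prefix + name]
    for child in children:
        lines.extend(debug_print(child, depth + 1))
    return lines

graph = ("Elemwise{gt}", [
    ("dot", [("x", []), ("w", [])]),
    ("TensorConstant{0.5}", []),
])
print("\n".join(debug_print(graph)))
```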
doc/tutorial/shape_info.txt
...
@@ -24,7 +24,7 @@ Currently, information regarding shape is used in two ways in Theano:
>>> x = theano.tensor.matrix('x')
>>> f = theano.function([x], (x ** 2).shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-MakeVector [@A] '' 2
+MakeVector{dtype='int64'} [@A] '' 2
 |Shape_i{0} [@B] '' 1
 | |x [@C]
 |Shape_i{1} [@D] '' 0
...
@@ -49,9 +49,9 @@ can lead to errors. Consider this example:
>>> xv = numpy.random.rand(5, 4)
>>> yv = numpy.random.rand(3, 3)
->>> f = theano.function([x,y], z.shape)
+>>> f = theano.function([x, y], z.shape)
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
-MakeVector [@A] '' 4
+MakeVector{dtype='int64'} [@A] '' 4
 |Elemwise{Add}[(0, 0)] [@B] '' 3
 | |Shape_i{0} [@C] '' 1
 | | |x [@D]
...
@@ -60,8 +60,8 @@ MakeVector [@A] '' 4
 |Shape_i{1} [@G] '' 0
 |x [@D]
-print f(xv,yv) # DOES NOT RAISE AN ERROR AS SHOULD BE.
-[8, 4]
+>>> f(xv, yv) # DOES NOT RAISE AN ERROR AS SHOULD BE.
+array([8, 4])
>>> f = theano.function([x,y], z)# Do not take the shape.
>>> theano.printing.debugprint(f) # doctest: +NORMALIZE_WHITESPACE
...
@@ -70,8 +70,10 @@ Join [@A] '' 0
 |x [@C]
 |y [@D]
->>> f(xv,yv) # doctest: +SKIP
->>> # Raises a dimensions mismatch error.
+>>> f(xv, yv) # doctest: +ELLIPSIS
+Traceback (most recent call last):
+  ...
+ValueError: ...

As you can see, when asking only for the shape of some computation (``join`` in the
example), an inferred shape is computed directly, without executing
...
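The pitfall above can be illustrated without Theano: when only shapes are propagated, a join's output shape is derived from the inputs' shapes along the join axis, and a mismatch on the other axes goes unnoticed. A hypothetical helper (not Theano's implementation) makes this concrete:

```python
def infer_join_shape(shapes, axis=0):
    # Shape-only inference for a join along `axis`: sum the joined axis
    # and take the remaining dims from the first input, unchecked.
    out = list(shapes[0])
    out[axis] = sum(s[axis] for s in shapes)
    return tuple(out)

# (5, 4) joined with (3, 3): executing the join would fail on the
# mismatched second dimension, but the inferred shape is reported anyway.
print(infer_join_shape([(5, 4), (3, 3)]))  # (8, 4)
```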
doc/tutorial/sparse.txt
...
@@ -104,7 +104,7 @@ does not provide any way to handle a number of dimensions different from two.
The set of all accepted ``dtype`` for the sparse matrices can be found in
``sparse.all_dtypes``.

->>> sparse.all_dtypes
+>>> sparse.all_dtypes # doctest: +SKIP
set(['int8', 'int16', 'int32', 'int64', 'uint8', 'uint16', 'uint32', 'uint64',
'float32', 'float64', 'complex64', 'complex128'])
...
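As a quick illustration with SciPy (which backs Theano's sparse types), the ``dtype`` is fixed when the matrix is built; the same structure can be stored under any of the accepted dtypes:

```python
import numpy as np
import scipy.sparse as sp

dense = np.asarray([[0, 1], [2, 0]])

# Build the same CSR matrix under two accepted dtypes:
m32 = sp.csr_matrix(dense, dtype='float32')
m64 = sp.csr_matrix(dense, dtype='float64')
print(m32.dtype, m64.dtype)
```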
theano/compile/function.py
...
@@ -46,8 +46,8 @@ def function_dump(filename, inputs, outputs=None, mode=None, updates=None,
To load such a dump and do the compilation:

>>> import cPickle, theano
->>> d=cPickle.load(open("func_dump.bin", "rb"))
->>> f=theano.function(**d)
+>>> d = cPickle.load(open("func_dump.bin", "rb")) # doctest: +SKIP
+>>> f = theano.function(**d) # doctest: +SKIP
"""
assert isinstance(filename, string_types)
...
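The dump-then-reload round trip itself can be sketched with an ordinary dict standing in for the compilation arguments (the file and key names here are illustrative):

```python
import os
import pickle
import tempfile

kwargs = {"inputs": [1, 2, 3], "outputs": "sum"}

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "func_dump.bin")
    with open(path, "wb") as fh:
        pickle.dump(kwargs, fh, protocol=pickle.HIGHEST_PROTOCOL)
    with open(path, "rb") as fh:
        d = pickle.load(fh)

# The loaded dict could now be unpacked, e.g. theano.function(**d).
print(d == kwargs)
```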
theano/gof/utils.py
...
@@ -456,7 +456,6 @@ def remove(predicate, coll):
Examples
--------
->>> from itertoolz import remove
>>> def even(x):
...     return x % 2 == 0
>>> remove(even, [1, 2, 3, 4])
...
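The behaviour documented for ``remove`` matches the standard library's ``itertools.filterfalse``, which keeps the items where the predicate is false:

```python
from itertools import filterfalse

def even(x):
    return x % 2 == 0

# Drop the even numbers, keep the rest:
result = list(filterfalse(even, [1, 2, 3, 4]))
print(result)  # [1, 3]
```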
theano/gradient.py
...
@@ -1525,8 +1525,8 @@ def verify_grad(fun, pt, n_tests=2, rng=None, eps=None,
Example:

>>> verify_grad(theano.tensor.tanh,
-               (numpy.asarray([[2,3,4], [-1, 3.3, 9.9]]),),
-               rng=numpy.random)
+...             (numpy.asarray([[2,3,4], [-1, 3.3, 9.9]]),),
+...             rng=numpy.random)

Raises an Exception if the difference between the analytic gradient and
numerical gradient (computed through the Finite Difference Method) of a
...
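The idea behind ``verify_grad`` can be sketched in plain NumPy: compare an analytic derivative against a central finite difference (an illustrative check, not Theano's implementation):

```python
import numpy as np

def numeric_grad(f, x, eps=1e-6):
    # Central finite difference of sum(f(x)) w.r.t. each entry of x.
    g = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    for _ in it:
        i = it.multi_index
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        g[i] = (f(xp).sum() - f(xm).sum()) / (2 * eps)
    return g

x = np.asarray([[2.0, 3.0, 4.0], [-1.0, 3.3, 9.9]])
analytic = 1.0 - np.tanh(x) ** 2      # d/dx tanh(x) = 1 - tanh(x)^2
numeric = numeric_grad(np.tanh, x)
assert np.allclose(analytic, numeric, atol=1e-5)
```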
theano/tensor/extra_ops.py
...
@@ -1092,6 +1092,7 @@ class Unique(theano.Op):
Examples
--------
>>> import numpy as np
+>>> import theano
>>> x = theano.tensor.vector()
>>> f = theano.function([x], Unique(True, True, False)(x))
...
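What ``Unique(True, True, False)`` computes corresponds to NumPy's ``np.unique`` with ``return_index`` and ``return_inverse``:

```python
import numpy as np

data = np.asarray([3.0, 1.0, 2.0, 1.0, 3.0])
values, index, inverse = np.unique(data, return_index=True, return_inverse=True)

# values:  the sorted unique elements        -> [1. 2. 3.]
# index:   first occurrence of each value    -> [1 2 0]
# inverse: maps each input back to `values`  -> [2 0 1 0 2]
assert (values[inverse] == data).all()  # inverse reconstructs the input
```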
theano/tensor/io.py
...
@@ -83,7 +83,7 @@ def load(path, dtype, broadcastable, mmap_mode=None):
>>> x = tensor.load(path, 'int64', (False,))
>>> y = x*2
>>> fn = function([path], y)
->>> fn("stored-array.npy")
+>>> fn("stored-array.npy") # doctest: +SKIP
array([0, 2, 4, 6, 8], dtype=int64)
"""
...
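A plain-NumPy sketch of the same save-then-load-and-double pattern (the file name is illustrative):

```python
import os
import tempfile
import numpy as np

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "stored-array.npy")
    np.save(path, np.arange(5, dtype='int64'))  # store [0, 1, 2, 3, 4]
    y = np.load(path) * 2                       # reload and double

print(y)  # [0 2 4 6 8]
```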
theano/tensor/utils.py
...
@@ -55,9 +55,11 @@ def shape_of_variables(fgraph, input_shapes):
>>> x = theano.tensor.matrix('x')
>>> y = x[512:]; y.name = 'y'
>>> fgraph = theano.FunctionGraph([x], [y], clone=False)
->>> shape_of_variables(fgraph, {x: (1024, 1024)})
-{y: (512, 1024), x: (1024, 1024)}
+>>> d = shape_of_variables(fgraph, {x: (1024, 1024)})
+>>> d[y]
+(array(512), array(1024))
+>>> d[x]
+(array(1024), array(1024))
"""
if not hasattr(fgraph, 'shape_feature'):
...
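The shape arithmetic behind this example is simple: for ``y = x[512:]``, only the first dimension changes. A hypothetical helper (not Theano's shape graph) makes the rule explicit:

```python
def slice_rows_shape(input_shape, start):
    # y = x[start:] keeps all columns and drops `start` leading rows;
    # clamp at zero when the slice starts past the end.
    rows, cols = input_shape
    return (max(rows - start, 0), cols)

print(slice_rows_shape((1024, 1024), 512))  # (512, 1024)
```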