testgroup / pytensor

Commit d4ac3d05, authored Dec 05, 2011 by nouiz

    Merge pull request #264 from delallea/minor

    Minor stuff

Parents: f2ed8c2f, d64fb51b
Showing 4 changed files with 34 additions and 29 deletions:

    doc/extending/cop.txt         +9  -7
    doc/library/tensor/basic.txt  +8  -8
    theano/tensor/basic.py        +12 -11
    theano/tensor/opt.py          +5  -3
doc/extending/cop.txt

@@ -67,13 +67,15 @@ There are less methods to define for an Op than for a Type:

 .. method:: infer_shape(node, (i0_shapes,i1_shapes,...))

-    Allow optimization to lift the Shape op over this op.
-    Example of why this is good is that we compute an op only to take its shape,
-    we will be able to have the shape without its computation.
-    must return a tuple with one tuple with the shape of each output.
-    Example of matrix-matrix product input_shapes will have as input
-    (node, ((x0,x1), (y0,y1))) and should return [(x0, y1)]. Both the
-    inputs and the return value may be theano variables.
+    Allow optimizations to lift the Shape op over this op.
+    An example of why this is good is when we only need the shape of a
+    variable: we will be able to obtain it without computing the variable
+    itself.
+    Must return a list where each element is a tuple representing the shape
+    of one output.
+    For example, for the matrix-matrix product ``infer_shape`` will have as
+    inputs (node, ((x0,x1), (y0,y1))) and should return [(x0, y1)]. Both the
+    inputs and the return value may be Theano variables.

 .. method:: c_code_cache_version()
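The shape rule the new docstring describes can be sketched in plain Python. The `matmul_infer_shape` helper below is a hypothetical illustration of the contract, not Theano's actual API:

```python
# Toy sketch of the infer_shape contract described above, for a
# matrix-matrix product: given one shape tuple per input, return a list
# with one shape tuple per output, without computing the product itself.
def matmul_infer_shape(input_shapes):
    (x0, x1), (y0, y1) = input_shapes
    return [(x0, y1)]  # one tuple per output

print(matmul_infer_shape([(3, 4), (4, 5)]))  # [(3, 5)]
```

The point of the method is exactly this: an optimization that only needs the output's shape can call it and skip the (possibly expensive) matrix multiplication.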
doc/library/tensor/basic.txt

@@ -454,14 +454,14 @@ TensorVariable

     A few examples of patterns and their effect:

-        ('x') -> make a 0d (scalar) into a 1d vector
-        (0, 1) -> identity for 2d vectors
-        (1, 0) -> inverts the first and second dimensions
-        ('x', 0) -> make a row out of a 1d vector (N to 1xN)
-        (0, 'x') -> make a column out of a 1d vector (N to Nx1)
-        (2, 0, 1) -> AxBxC to CxAxB
-        (0, 'x', 1) -> AxB to Ax1xB
-        (1, 'x', 0) -> AxB to Bx1xA
+        * ('x') -> make a 0d (scalar) into a 1d vector
+        * (0, 1) -> identity for 2d vectors
+        * (1, 0) -> inverts the first and second dimensions
+        * ('x', 0) -> make a row out of a 1d vector (N to 1xN)
+        * (0, 'x') -> make a column out of a 1d vector (N to Nx1)
+        * (2, 0, 1) -> AxBxC to CxAxB
+        * (0, 'x', 1) -> AxB to Ax1xB
+        * (1, 'x', 0) -> AxB to Bx1xA

 .. method:: flatten(ndim=1)
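The dimshuffle patterns in this list map closely onto numpy indexing with `np.newaxis` and `transpose`. A quick sanity check of a few of them, using numpy directly rather than Theano's `dimshuffle`:

```python
import numpy as np

# numpy equivalents of some of the dimshuffle patterns listed above.
v = np.arange(3)                 # shape (3,)
row = v[np.newaxis, :]           # ('x', 0): N -> 1xN
col = v[:, np.newaxis]           # (0, 'x'): N -> Nx1

b = np.zeros((2, 3))             # AxB
ax1b = b[:, np.newaxis, :]       # (0, 'x', 1): AxB -> Ax1xB

c = np.zeros((2, 3, 4))          # AxBxC
cab = c.transpose(2, 0, 1)       # (2, 0, 1): AxBxC -> CxAxB

print(row.shape, col.shape, ax1b.shape, cab.shape)
# (1, 3) (3, 1) (2, 1, 3) (4, 2, 3)
```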
theano/tensor/basic.py

@@ -1798,14 +1798,14 @@ pprint.assign(_shape, printing.MemberPrinter('shape'))

 class SpecifyShape(Op):
     """
-    L{Op} put into the graph the user provided shape
-    In the case where this op stay in the final graph, we assert the shape.
+    L{Op} that puts into the graph the user-provided shape.
+    In the case where this op stays in the final graph, we assert the shape.
     For this the output of this op must be used in the graph. This is not
     the case most of the time if we only take the shape of the output.
-    Maybe there is other optimization that will mess with this.
-    @note: Maybe in the futur we will never do the assert!
+    Maybe there are other optimizations that will mess with this.
+    @note: Maybe in the future we will never do the assert!
     @note: We currently don't support specifying partial shape information.
     """
     view_map = {0: [0]}
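A plain-Python sketch of the behavior this docstring describes (the `specify_shape` function here is illustrative, not Theano's API): pass the input through unchanged and assert the user-provided shape.

```python
import numpy as np

# Illustrative stand-in for the SpecifyShape op above: return the input
# as-is (a view of it, matching view_map = {0: [0]}) after asserting
# that it has the shape the user promised.
def specify_shape(x, shape):
    assert x.shape == tuple(shape), (x.shape, tuple(shape))
    return x

y = specify_shape(np.zeros((2, 3)), (2, 3))
print(y.shape)  # (2, 3)
```

As the docstring notes, the assertion only fires if this output is actually used; taking just `y.shape` would let shape inference bypass the check.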
@@ -1913,7 +1913,7 @@ class MaxAndArgmax(Op):

     def perform(self, node, inp, outs):
         x, axis = inp
         max, max_idx = outs
-        if len(axis) == 0 or python_all(axis == range(x.ndim)):
+        if python_all(axis == range(x.ndim)):
             axis = None
         max[0] = numpy.asarray(numpy.max(x, axis))
         max_idx[0] = theano._asarray(numpy.argmax(x, axis), dtype='int32')
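When `axis` covers every dimension, the condition collapses it to None and numpy reduces over the flattened array. The two reductions in `perform` then behave as follows (a minimal numpy-only check, without Theano's `_asarray` wrapper):

```python
import numpy as np

# What the body of perform() computes once axis has been collapsed to
# None: numpy.max / numpy.argmax over the flattened input, with argmax
# returning an index into the flattened array.
x = np.array([[1, 7], [3, 5]])
m = np.asarray(np.max(x, None))           # 0-d array holding 7
idx = np.argmax(x, None).astype('int32')  # 1: position of 7 in [1, 7, 3, 5]
print(int(m), int(idx))  # 7 1
```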
@@ -2945,18 +2945,19 @@ class Subtensor(Op):

     This class uses a relatively complex internal representation of the inputs
     to remember how the input tensor x should be sliced. The instance variable
-    idxlist is a list whose elements are either integers, or slices. The
+    idx_list is a list whose elements are either integers, or slices. The
     integers are indexes into the inputs array, and the start/stop/step members
     of each slice are also integer indexes into the inputs array (or None). The
     inputs array is the tensor x, followed by scalar integer variables.

     @todo: add support for advanced tensor indexing (in Subtensor_dx too).

-    The idx_list is a tuple similar in structure to the sort of key you might expect in numpy's
-    basic indexing mode. It has one element for each explicitly named dimension. In numpy, the elements
-    can be either integers or slices containing integers and None. In Subtensor, each element
-    can additionally be a Scalar instance, and slice components can also be Scalar instances
-    too.
+    The idx_list is a tuple similar in structure to the sort of key you might
+    expect in numpy's basic indexing mode. It has one element for each
+    explicitly named dimension. In numpy, the elements can be either integers
+    or slices containing integers and None. In Subtensor, each element can
+    additionally be a Scalar instance, and slice components can also be Scalar
+    instances too.
     """
     e_invalid = ('The index list is longer (size %d) than the number of '
                  'dimensions of the tensor(namely %d). You are asking for '
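A concrete example of the kind of numpy "basic indexing" key the docstring compares `idx_list` to: one element per explicitly named dimension, each either an integer or a slice built from integers and None.

```python
import numpy as np

# One element per explicitly named dimension: an integer for the first
# axis, a slice of ints/None for the second -- the same structure of key
# that Subtensor's idx_list mirrors.
x = np.arange(24).reshape(4, 6)
key = (2, slice(1, None, 2))      # equivalent to x[2, 1::2]
print(x[key].tolist())  # [13, 15, 17]
```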
theano/tensor/opt.py

@@ -11,6 +11,8 @@ import operator

 import itertools
 import sys
 import traceback

+from itertools import izip
+
 import numpy
 import numpy as N  # guys... please don't do this in the library :(

@@ -676,7 +678,7 @@ class ShapeFeature(object):

     add an optional Param() argument to promise that inputs will
     have a certain shape (or even to have certain shapes in
     certain dimensions). We can't automatically infer the shape of
-    shared variable as they can change of shape during the
+    shared variables as they can change of shape during the
     execution by default. (NOT IMPLEMENTED YET, BUT IS IN TRAC)
@@ -918,7 +920,7 @@ class ShapeFeature(object):

                 + ' != len(node.outputs) = '
                 + str(len(node.outputs)))
-        for r, s in zip(node.outputs, o_shapes):
+        for r, s in izip(node.outputs, o_shapes):
             self.set_shape(r, s)

     def on_change_input(self, env, node, i, r, new_r):
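The zip-to-izip change above avoids materializing an intermediate list under Python 2, where `zip` is eager and `itertools.izip` is lazy. In Python 3, `izip` no longer exists because the built-in `zip` is itself a lazy iterator:

```python
# Under Python 3, zip already returns a lazy iterator, so pairing an
# effectively unbounded range with a short sequence costs nothing:
pairs = zip(range(10**9), 'abc')   # no 10**9-element list is built
print(list(pairs))  # [(0, 'a'), (1, 'b'), (2, 'c')]
```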
@@ -1431,7 +1433,7 @@ def local_upcast_elemwise_constant_inputs(node):

 @gof.local_optimizer([T.Subtensor])
 def local_useless_subtensor(node):
     """
-    Remove Subtensor if it take the full input
+    Remove Subtensor if it takes the full input
     """
     if isinstance(node.op, T.Subtensor):
         # This optimization needs ShapeOpt and env.shape_feature
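The "useless Subtensor" this optimization removes is a slice that keeps the whole input. In numpy terms, such a slice returns a value equal to the original, so the op can be dropped from the graph:

```python
import numpy as np

# A Subtensor that takes the full input is a no-op: the sliced result
# equals the original array, so nothing is lost by removing the op.
x = np.arange(5)
full = x[0:5]                     # explicit full slice over axis 0
print(np.array_equal(full, x), np.array_equal(x[:], x))  # True True
```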