testgroup / pytensor · Commits

Commit dd668d17, authored May 31, 2013 by Frédéric Bastien

Merge pull request #1395 from delallea/minor

Minor fixes

Parents: dcb5e098, 3298dcf5

Showing 6 changed files with 47 additions and 45 deletions (+47 −45).
doc/library/tensor/nnet/nnet.txt   +7  −7
theano/gof/tests/test_cmodule.py   +2  −2
theano/scalar/basic.py             +6  −5
theano/tensor/basic.py             +16 −16
theano/tensor/blas.py              +2  −2
theano/tensor/nnet/sigm.py         +14 −13
doc/library/tensor/nnet/nnet.txt

@@ -15,8 +15,8 @@
 :Parameters: *x* - symbolic Tensor (or compatible)
 :Return type: same as x
 :Returns: element-wise sigmoid: :math:`sigmoid(x) = \frac{1}{1 + \exp(-x)}`.
-:note: see :func:`ultra_fast_sigmoid` or :func:`hard_sigmoid` for faster version.
-    Speed comparison for 100M float64 element
-    on a Core2 Duo @ 3.16 GHz.
+:note: see :func:`ultra_fast_sigmoid` or :func:`hard_sigmoid` for faster versions.
+    Speed comparison for 100M float64 elements
+    on a Core2 Duo @ 3.16 GHz:
 - hard_sigmoid: 1.0s
 - ultra_fast_sigmoid: 1.3s
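For reference, the exact sigmoid that this docstring describes can be sketched in plain Python (a minimal illustration of the formula, not Theano's element-wise implementation):

```python
import math

def sigmoid(x):
    # Exact element-wise sigmoid: 1 / (1 + exp(-x))
    return 1.0 / (1.0 + math.exp(-x))
```

The faster variants compared in the docstring trade accuracy against this exact form for speed.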
@@ -44,15 +44,15 @@
 :Parameters: *x* - symbolic Tensor (or compatible)
 :Return type: same as x
 :Returns: approximated element-wise sigmoid: :math:`sigmoid(x) = \frac{1}{1 + \exp(-x)}`.
-:note: To automatically change all :func:`sigmoid` op to this version, use
+:note: To automatically change all :func:`sigmoid` ops to this version, use
     the Theano optimization ``local_ultra_fast_sigmoid``. This can be done
     with the Theano flag ``optimizer_including=local_ultra_fast_sigmoid``.
-    This optimization is done late, so it shouldn't affect
+    This optimization is done late, so it should not affect
     stabilization optimization.

 .. note:: The underlying code will return 0.00247262315663 as the
     minimum value and 0.997527376843 as the maximum value. So it
-    never return 0 or 1.
+    never returns 0 or 1.
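The saturation behaviour described in that note can be illustrated with a hypothetical clamp (this is only a sketch of the documented bounds; the actual ultra_fast_sigmoid approximation formula is not shown in this diff):

```python
def saturate(s):
    # Hypothetical illustration: ultra_fast_sigmoid's output stays strictly
    # inside (0, 1); the bounds below are the ones quoted in the docstring.
    lo = 0.00247262315663
    hi = 0.997527376843
    return min(max(s, lo), hi)
```

So even for extreme inputs, the returned value is never exactly 0 or 1.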
@@ -63,10 +63,10 @@
 :Parameters: *x* - symbolic Tensor (or compatible)
 :Return type: same as x
 :Returns: approximated element-wise sigmoid: :math:`sigmoid(x) = \frac{1}{1 + \exp(-x)}`.
-:note: To automatically change all :func:`sigmoid` op to this version, use
+:note: To automatically change all :func:`sigmoid` ops to this version, use
     the Theano optimization ``local_hard_sigmoid``. This can be done
     with the Theano flag ``optimizer_including=local_hard_sigmoid``.
-    This optimization is done late, so it shouldn't affect
+    This optimization is done late, so it should not affect
     stabilization optimization.

 .. note:: The underlying code will return an exact 0 or 1 if an
theano/gof/tests/test_cmodule.py

-"""We don't have real test for the cache, but it would be great to make them!
-But this one test a current behavior that isn't good: the c_code isn't
+"""We don't have real tests for the cache, but it would be great to make them!
+But this one tests a current behavior that isn't good: the c_code isn't
 deterministic based on the input type and the op.
 """
theano/scalar/basic.py

@@ -847,14 +847,15 @@ class ScalarOp(Op):
     def c_code_contiguous(self, node, name, inp, out, sub):
         """This function is called by Elemwise when all inputs and
-        outputs are c_contiguous. This allow to use SIMD version
+        outputs are c_contiguous. This allows to use the SIMD version
         of this op.

-        The inputs are the same as c_code except:
-        - inp and out must be the variable name of the ndarray
-        - node must be the elemwise node. This is needed to know
-          the inputs/outputs type.
+        The inputs are the same as c_code except that:
+        - inp and out must be the names of the variables associated to the
+          ndarrays in the C code
+        - node must be the elemwise node (this is needed to know
+          the inputs/outputs types)
         """
         raise theano.gof.utils.MethodNotDefined()
theano/tensor/basic.py

@@ -622,7 +622,7 @@ def get_scalar_constant_value(v):
             isinstance(v.owner.op.idx_list[0], (int, long,
                                                 numpy.integer))):
-            # Python 2.4 don't support indexing with numpy.integer
+            # Python 2.4 does not support indexing with numpy.integer
             # So we cast it.
             idx = int(v.owner.op.idx_list[0])
             ret = v.owner.inputs[0].owner.inputs[idx]

@@ -1533,15 +1533,15 @@ class _tensor_py_operators:
             return True
         else:
             raise TypeError(
-                "Variable does not support boolean operations. This "
-                "can happen if you do logical operator (<, <=, >, <=, "
-                "==, !=) between numpy.ndarray and theano tensor"
-                "variable. Due NumPy implementation before NumPy 1.8, "
-                "we can't make the python syntax work when the ndarray "
-                "is on the left, and this end with this error. To work "
-                "around that, just call "
-                "theano.tensor.{lt,le,eq,ne,gt,ge}(ndarray, tensor) or "
-                "use the python syntax with the theano tensor on the "
+                "Variables do not support boolean operations. This "
+                "can happen if you do a logical operation (<, <=, >, <=, "
+                "==, !=) between a numpy.ndarray and a Theano tensor"
+                "variable. Due to NumPy implementation before NumPy 1.8, "
+                "we cannot make the Python syntax work when the ndarray "
+                "is on the left, and this results in this error. To work "
+                "around that, either call "
+                "theano.tensor.{lt,le,eq,ne,gt,ge}(ndarray, tensor), or "
+                "use the Python syntax with the Theano tensor on the "
                 "left. Or update to NumPy 1.8 or above."
             )
@@ -6436,11 +6436,11 @@ class Reshape(Op):
                      (x.shape, shp))
         if not out[0].flags.aligned:
             raise RuntimeError("numpy.reshape returned a not aligned tensor."
-                               " NumPy version 1.6.2, 1.7.0 and 1.7.1 have"
+                               " NumPy versions 1.6.2, 1.7.0 and 1.7.1 have"
                                " this problem for some input shape/new shape"
-                               " combination. Use another NumPy version."
-                               " Input shape: %s, input stride %s,"
-                               " new_shape %s new_strides %s." % (
+                               " combinations. Use another NumPy version."
+                               " Input shape: %s, input stride: %s,"
+                               " new_shape: %s, new_strides: %s." % (
                                    x.shape, x.strides, shp, out[0].strides))

     def connection_pattern(self, node):

@@ -6545,9 +6545,9 @@ class Reshape(Op):
                 PyErr_Format(
                     PyExc_RuntimeError,
                     "PyArray_Newshape returned an object that isn't aligned!"
-                    " NumPy version 1.6.2, 1.7.0 and 1.7.1 have"
+                    " NumPy versions 1.6.2, 1.7.0 and 1.7.1 have"
                     " this problem for some input shape/new shape"
-                    " combination. Use another NumPy version.");
+                    " combinations. Use another NumPy version.");
                 %(fail)s;
             }
         """ % locals()
theano/tensor/blas.py

@@ -251,8 +251,8 @@ except ImportError, e:
     # when theano.config.blas.ldflags is defined. So we don't need a
     # warning in that case.
     if not config.blas.ldflags:
-        _logger.warning('Failed to import scipy.linalg.blas and '
-                        'Theano flag blas.ldflags empty. '
+        _logger.warning('Failed to import scipy.linalg.blas, and '
+                        'Theano flag blas.ldflags is empty. '
                         'Falling back on slower implementations for '
                         'dot(matrix, vector), dot(vector, matrix) and '
                         'dot(vector, vector) (%s)',
theano/tensor/nnet/sigm.py

@@ -98,7 +98,7 @@ for i in xrange(750):
         // We block to keep the data in l1
         // normal l1 size = 32k: 32k/2(input + output)/8(nb bytes of double)=2k
         // We stay bellow the 2k limit to let space for
-        // This is faster then the not blocking version
+        // This is faster than the not blocking version
         for(int i=0;i<n;i+=2048){
             npy_intp nb = (n-i<2048)?n-i:2048;
             for(int j=0;j<nb;j++){
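The blocked loop in that hunk walks the array in L1-cache-sized chunks of 2048 doubles (32 KB of L1, split between input and output, at 8 bytes per double). The access pattern can be sketched in Python (an illustration of the blocking idea only, not the actual C kernel):

```python
def blocked_map(f, xs, block=2048):
    # Apply f over xs one cache-sized block at a time, mirroring the C loop:
    # 32k L1 / 2 arrays (input + output) / 8 bytes per double = 2048 elements.
    out = []
    for i in range(0, len(xs), block):
        out.extend(f(v) for v in xs[i:i + block])
    return out
```

In C the payoff is that each block stays resident in L1 between the read and the write; in pure Python the sketch only shows the iteration order.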
@@ -134,9 +134,9 @@ for i in xrange(750):
     import os
     fig = plt.figure()
     ax = fig.add_subplot(111)
     ax.plot(data, val)  # , 'o-')
     ax.plot(data, val_ultra)  # , '-')
     ax.plot(data, val_hard)  # , '-')
     ax.grid(True)
     ax.legend(("sigmoid", "ultra_fast", "hard"), "upper left")
     fname = os.path.join(os.path.dirname(theano.__file__), '..',

@@ -234,13 +234,13 @@ def local_ultra_fast_sigmoid(node):
     """
     When enabled, change all sigmoid to ultra_fast_sigmoid.
-    To example do mode.including('local_ultra_fast_sigmoid')
+    For example do mode.including('local_ultra_fast_sigmoid')
     or use the Theano flag optimizer_including=local_ultra_fast_sigmoid

-    This speed up the sigmoid op by using an approximation.
+    This speeds up the sigmoid op by using an approximation.

-    This is done after the stabilization and specialize phase to don't interact
+    This is done after the stabilization and specialize phases to avoid interacting
     with them.
     """
     if (isinstance(node.op, tensor.Elemwise) and
@@ -261,16 +261,16 @@ theano.compile.optdb['uncanonicalize'].register("local_ultra_fast_sigmoid",
 def hard_sigmoid(x):
     """An approximation of sigmoid.
-    More approximate and faster then ultra_fast_sigmoid.
+    More approximate and faster than ultra_fast_sigmoid.

     Approx in 3 parts: 0, scaled linear, 1

-    Removing the slop and shift don't make it faster.
+    Removing the slope and shift does not make it faster.
     """
-    slop = 0.2
+    slope = 0.2
     shift = 0.5
-    x = (x * 0.2) + shift
+    x = (x * slope) + shift
     x = tensor.clip(x, 0, 1)
     return x
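The patched hard_sigmoid is easy to check numerically. A pure-Python sketch of the same piecewise-linear rule (a scalar illustration, not Theano's symbolic version, which returns a clipped tensor expression):

```python
def hard_sigmoid(x):
    # Piecewise-linear sigmoid approximation in 3 parts: 0, scaled linear, 1.
    slope = 0.2
    shift = 0.5
    return min(max(x * slope + shift, 0.0), 1.0)
```

Unlike ultra_fast_sigmoid, this approximation does return exact 0 and 1 once the linear part leaves [0, 1].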
@@ -330,7 +330,8 @@ class ScalarSoftplus(scalar.UnaryScalarOp):
             return (2,) + v
         else:
             return v

 scalar_softplus = ScalarSoftplus(scalar.upgrade_to_float,
                                  name='scalar_softplus')
 softplus = elemwise.Elemwise(scalar_softplus, name='softplus')
 pprint.assign(softplus, printing.FunctionPrinter('softplus'))