testgroup / pytensor / Commits

Commit 5ef9acdb, authored Jul 22, 2015 by Frederic
Port some GpuConv changes to the new back-end (fixes a test error in the new back-end).
Parent: a25d68fa
Showing 2 changed files with 32 additions and 1 deletion:

theano/sandbox/gpuarray/conv.py   +28 -1
theano/sandbox/gpuarray/opt.py     +4 -0
theano/sandbox/gpuarray/conv.py
```diff
@@ -27,15 +27,23 @@ class GpuConv(gof.Op):
                  logical_kern_hw=None,
                  logical_kern_align_top=True,
                  version=-1,
+                 direction_hint=None,
                  verbose=0,
                  kshp=None,
                  imshp=None,
-                 max_threads_dim0=None):
+                 max_threads_dim0=None,
+                 nkern=None,
+                 bsize=None,
+                 fft_opt=True):
         """
         :param version: each version of c_code implements many kernels for the
                         convolution. By default we try to guess the best one.
                         You can force one version with this parameter. This
                         parameter is used by the tests.
+        :param direction_hint: 'forward', 'bprop weights' or 'bprop inputs'.
+                        Serves as a hint for graph optimizers replacing
+                        GpuConv by other implementations. If the GpuConv is
+                        inserted automatically, we take its value from ConvOp.
         :param verbose: for value of 1,2 and 3. Print more information during
                         the execution of the convolution. Mostly used for
                         optimization or debugging.
```
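For reference, the widened signature lets a caller pass the optimizer-facing metadata directly at construction time. A minimal sketch of such a call, with illustrative values only (the kshp/imshp/nkern/bsize numbers are made up, and the leading border_mode/subsample arguments are assumed to match the CUDA back-end's GpuConv, since they sit in the elided part of this hunk):

```python
# Illustrative construction only; every value below is made up.
from theano.sandbox.gpuarray.conv import GpuConv

op = GpuConv(border_mode='valid',       # assumed leading argument
             subsample=(1, 1),
             version=-1,                 # let c_code guess the best kernel
             direction_hint='forward',   # hint for optimizers that may replace this op
             verbose=0,
             kshp=(5, 5),                # kernel rows/cols
             imshp=(3, 32, 32),          # stack size, image rows/cols
             nkern=16,                   # metadata only; the op never reads it
             bsize=64,                   # metadata only; the op never reads it
             fft_opt=True)               # permit the (off-by-default) fft rewrite
```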
```diff
@@ -49,6 +57,19 @@ class GpuConv(gof.Op):
         :param max_threads_dim0: The maximum number of threads for the
                         block size dimensions 0 (blockDim.x) used by the
                         GPU function.
+        :param nkern: The number of kernels. Not used for this op, but can be
+                        used by graph optimizers to select a more optimal
+                        convolution implementation. If the GpuConv op is inserted
+                        automatically, we take its value from the Conv op.
+        :param bsize: The batch size. Not used for this op, but can be
+                        used by graph optimizers to select a more optimal
+                        convolution implementation. If the GpuConv op is inserted
+                        automatically, we take its value from the Conv op.
+        :param fft_opt: deactivate fft_opt optimization at the op level when
+                        set to False. Note that by default fft optimization
+                        aren't enabled. See
+                        :ref:`convolution documentation <libdoc_tensor_nnet_conv>`
+                        to enable them.
         """
         self.border_mode = border_mode
```
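Because nkern and bsize travel with the op as pure hints, a rewrite can branch on them without running shape inference. A hypothetical sketch of such a consumer (the optimizer name and the threshold are invented; only GpuConv and the attribute names come from the diff):

```python
# Hypothetical optimizer sketch: GpuConv and its attributes are real,
# everything else here is invented for illustration.
from theano.gof import local_optimizer
from theano.sandbox.gpuarray.conv import GpuConv

@local_optimizer([GpuConv])
def local_pick_conv_impl(node):
    op = node.op
    # The op itself never reads nkern/bsize; they exist so rewrites like
    # this one can choose an implementation from static metadata alone.
    if op.fft_opt and op.nkern is not None and op.bsize is not None:
        if op.nkern * op.bsize > 4096:  # invented threshold
            pass                        # e.g. return an FFT-based replacement here
    return None                         # no replacement proposed
```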
```diff
@@ -69,10 +90,14 @@ class GpuConv(gof.Op):
         self.logical_kern_hw = logical_kern_hw
         self.logical_kern_align_top = logical_kern_align_top
         self.version = version
+        self.direction_hint = direction_hint
         self.verbose = verbose
         self.kshp = kshp
         self.imshp = imshp
         self.max_threads_dim0 = max_threads_dim0
+        self.nkern = nkern
+        self.bsize = bsize
+        self.fft_opt = fft_opt

     def __eq__(self, other):
         return type(self) == type(other) \
```
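Theano merges apply nodes whose ops compare equal, so the value-based __eq__ above and the matching __hash__ are what allow two identically-parameterized GpuConv instances to be shared. Assuming the elided continuation of __eq__ also compares the new attributes, the intended contract looks like this (parameter values made up):

```python
# Illustrative: value-equal ops must also hash equal so Theano's merge
# optimizer can collapse them into a single graph node.
a = GpuConv(border_mode='valid', nkern=16, bsize=64)
b = GpuConv(border_mode='valid', nkern=16, bsize=64)
assert a == b and hash(a) == hash(b)
```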
```diff
@@ -93,6 +118,8 @@ class GpuConv(gof.Op):
             self.imshp = None
         if not hasattr(self, "max_threads_dim0"):
             self.max_threads_dim0 = None
+        if not hasattr(self, "direction_hint"):
+            self.direction_hint = None

     def __hash__(self):
         # don't use hash(self.version) as hash(-1)==-2 and
```
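Two details in this last hunk are worth spelling out. The hasattr guards give backward compatibility for pickled graphs: an op serialized before direction_hint existed simply gains the default None when loaded. And the comment under __hash__ refers to a CPython quirk: -1 is reserved as an error sentinel in C-level hash slots, so hash(-1) is silently remapped to -2, and hashing the raw attribute would collide the default version=-1 with version=-2:

```python
# CPython remaps hash(-1) to -2 because -1 signals an error in C hash slots.
assert hash(-1) == hash(-2) == -2
# Hence __hash__ avoids hash(self.version): it could not tell apart
# version=-1 (the default, "guess the best kernel") from version=-2.
```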
theano/sandbox/gpuarray/opt.py
```diff
@@ -669,8 +669,12 @@ def local_gpu_conv(node):
                          logical_kern_align_top=op.kshp_logical_top_aligned,
                          kshp=op.kshp,
                          version=op.version,
+                         direction_hint=op.direction_hint,
                          verbose=op.verbose,
                          imshp=op.imshp,
+                         nkern=op.nkern,
+                         bsize=op.bsize,
+                         fft_opt=op.fft_opt
                          )
         if op.imshp_logical is not None:
             logical_img_hw = op.imshp_logical[1:3]
```
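The opt.py side of the port is plumbing: when local_gpu_conv lifts a CPU convolution onto the GPU, it now forwards direction_hint plus the three new metadata fields so later rewrites see the same information on the GPU op. Condensed into a standalone helper for clarity (the function name is invented, and **leading_kwargs stands in for the arguments elided above):

```python
from theano.sandbox.gpuarray.conv import GpuConv

def gpu_conv_from_conv_op(op, **leading_kwargs):
    # Copy every optimizer-facing attribute of the CPU ConvOp `op` verbatim,
    # so the metadata added in this commit survives the CPU->GPU lift.
    return GpuConv(logical_kern_align_top=op.kshp_logical_top_aligned,
                   kshp=op.kshp,
                   version=op.version,
                   direction_hint=op.direction_hint,
                   verbose=op.verbose,
                   imshp=op.imshp,
                   nkern=op.nkern,
                   bsize=op.bsize,
                   fft_opt=op.fft_opt,
                   **leading_kwargs)
```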