Commit f96de3b3 authored by Frederic Bastien

add the local_cut_gpu_host_gpu optimizer into the canonicalize phase.

This allows all other optimizations to work correctly. In particular, it allows gemm to be inserted into the GPU code of the MLP deep learning tutorial, giving a 3x speed-up.
Parent bb96ebe8
@@ -70,6 +70,9 @@
 gpu_cut_copies.register('cut_gpu_host_transfers', local_cut_gpu_host_gpu,
                         'fast_run', 'inplace', 'gpu')
 gpu_cut_copies.register('cut_gpu_constant_transfers', tensor.opt.constant_folding,
                         'fast_run', 'gpu')
+# Register it into canonicalize to allow other optimizations to work without
+# bothering with this useless pattern.
+compile.optdb['canonicalize'].register('local_cut_gpu_host_gpu', local_cut_gpu_host_gpu, 'fast_run')
 @register_opt()
 @local_optimizer([])
 ...
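The pattern this optimizer removes is a useless round trip such as `gpu_from_host(host_from_gpu(x))`, which transfers a value off the GPU and straight back. Below is a minimal, self-contained sketch of that rewrite on a toy expression representation (tuples of op name and argument); it is not Theano's actual optimizer API, and the helper name `cut_transfers` is hypothetical.

```python
def cut_transfers(expr):
    """Recursively remove host<->GPU round trips from a toy expression.

    An expression is either a leaf variable name (str) or a tuple
    (op, inner) where op is 'gpu_from_host' or 'host_from_gpu'.
    A transfer applied directly on top of the opposite transfer is a
    no-op round trip, so the pair is replaced by the inner argument.
    """
    if isinstance(expr, str):
        return expr
    op, inner = expr
    inner = cut_transfers(inner)  # simplify the subtree first
    if not isinstance(inner, str):
        inner_op, inner_arg = inner
        # gpu_from_host(host_from_gpu(x)) -> x, and the reverse likewise
        if op != inner_op:
            return inner_arg
    return (op, inner)

# A GPU value copied to the host and back is just the original value:
print(cut_transfers(('gpu_from_host', ('host_from_gpu', 'x'))))  # 'x'
```

Registering such a rewrite in the canonicalize phase, as this commit does, means later optimizations (e.g. the gemm insertion mentioned above) see the simplified graph instead of the transfer pair.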