Commit 06962b6f authored by Olivier Delalleau

Fixed some typos

Parent dc921d8c
@@ -29,7 +29,7 @@ New Features
   (Frederic B.)
 * debugprint does not print anymore the "|" symbol in a column after the last input. (Frederic B.)
 * If you use Enthought Python Distribution (EPD) now we use its blas
-  implementation by default (Tested Linux, Windows)
+  implementation by default (tested on Linux and Windows)
   (Frederic B., Simon McGregor)
 Sparse Sandbox graduate
@@ -55,10 +55,10 @@ Crash Fix
 * Optimization printed a useless error when scipy was not available. (Frederic B.)
 * GPU conv crash/slowdown on newer hardware (James B.)
 * Better error handling in GPU conv (Frederic B.)
-* GPU optimization that move element-wise op to the gpu. It happen in
+* GPU optimization that moves element-wise Ops to the GPU. Crash happened in
   a particular execution order of this optimization and the
   element-wise fusion optimization when upcasting some inputs to
-  float32 (to compute them on the gpu).
+  float32 (to compute them on the GPU).
   (Frederic B., reported by Sander Dieleman)
 =============
@@ -243,8 +243,7 @@ def test_huge_elemwise_fusion():
 def test_local_gpu_elemwise_0():
     """
-    Test the test_local_gpu_elemwise_0 when there is dtype upcastable
-    to float32
+    Test local_gpu_elemwise_0 when there is a dtype upcastable to float32
     """
     a = tensor.bmatrix()
     b = tensor.fmatrix()
@@ -254,7 +253,7 @@ def test_local_gpu_elemwise_0():
     b_v = (numpy.random.rand(4, 5) * 10).astype("float32")
     c_v = (numpy.random.rand(4, 5) * 10).astype("float32")
-    # Due to order of optimization, this the composite is created when all
+    # Due to optimization order, this composite is created when all
     # the op are on the gpu.
     f = theano.function([a, b, c], [a + b + c], mode=mode_with_gpu)
     #theano.printing.debugprint(f)
@@ -263,7 +262,7 @@ def test_local_gpu_elemwise_0():
     assert sum(isinstance(node.op, tensor.Elemwise) for node in topo) == 1
     f(a_v, b_v, c_v)
-    # Not test with the composite already on the cpu before we move it
+    # Now test with the composite already on the cpu before we move it
     # to the gpu
     a_s = theano.scalar.int8()
     b_s = theano.scalar.float32()
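The bug fixed above is triggered when an int8 input must be upcast to float32 so the whole elemwise graph can run on the GPU. A minimal NumPy-only sketch of that upcasting rule (int8 operands combined with float32 operands promote to float32), mirroring the test's input setup but without requiring Theano or a GPU:

```python
import numpy

# int8 input, as produced by tensor.bmatrix() in the test
a_v = (numpy.random.rand(4, 5) * 10).astype("int8")
# float32 inputs, as produced by tensor.fmatrix()
b_v = (numpy.random.rand(4, 5) * 10).astype("float32")
c_v = (numpy.random.rand(4, 5) * 10).astype("float32")

# The element-wise sum upcasts the int8 operand to float32.
# This is the promotion the GPU elemwise optimization must
# insert explicitly before moving the computation to the GPU.
out = a_v + b_v + c_v
print(out.dtype)  # float32
```

This is only an illustration of the dtype promotion involved; the actual fix concerns the order in which the GPU-transfer and elemwise-fusion optimizations apply that cast.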