Commit 9517a606 authored by nouiz

Merge pull request #310 from delallea/minor

Minor stuff
@@ -123,8 +123,8 @@ Known bugs:
 * CAReduce with nan in inputs don't return the good output (`Ticket <https://www.assembla.com/spaces/theano/tickets/763>`_).
 * This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.
 * If you take the grad of a grad of scan, now we raise an error during the construction of the graph. In the past, you could have wrong results in some cases or an error at run time.
-* Scan can raise an IncSubtensor error at run time (no wrong result possible). The current work around is to disable an optimization with this Theano flags: "optimizer_excluding=scanOp_save_mem".
-* If you have more then 1 optimization to disable, you must separate them with ":".
+* Scan can raise an IncSubtensor error at run time (no wrong result possible). The current workaround is to disable an optimization with this Theano flag: "optimizer_excluding=scanOp_save_mem".
+* If you have multiple optimizations to disable, you must separate them with ":".
 Sandbox:
......
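The `optimizer_excluding` flag mentioned in the hunk above is normally passed through the `THEANO_FLAGS` environment variable. A minimal sketch, assuming a script named `my_script.py` and an illustrative second optimization name (both hypothetical):

```shell
# Disable the scanOp_save_mem optimization for one run:
THEANO_FLAGS="optimizer_excluding=scanOp_save_mem" python my_script.py

# Multiple optimizations to disable are separated with ":"
# ("some_other_opt" is illustrative only):
THEANO_FLAGS="optimizer_excluding=scanOp_save_mem:some_other_opt" python my_script.py
```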
@@ -35,8 +35,8 @@ Compiling with PyCUDA
 You can use PyCUDA to compile some CUDA function that work directly on
 CudaNdarray. There is an example in the function `test_pycuda_theano`
 in the file `theano/misc/tests/test_pycuda_theano_simple.py`. Also,
-there is an example that show how to make an op that call a pycuda
-function :ref:`here <pyCUDA_theano>`
+there is an example that shows how to make an op that calls a pycuda
+function :ref:`here <pyCUDA_theano>`.
 Theano op using PyCUDA function
 -------------------------------
......
...@@ -16,7 +16,7 @@ import theano.misc.pycuda_init ...@@ -16,7 +16,7 @@ import theano.misc.pycuda_init
if not theano.misc.pycuda_init.pycuda_available: if not theano.misc.pycuda_init.pycuda_available:
from nose.plugins.skip import SkipTest from nose.plugins.skip import SkipTest
raise SkipTest("Pycuda not installed." raise SkipTest("Pycuda not installed."
" We skip test of theano op with pycuda code.") " We skip tests of Theano Ops with pycuda code.")
if cuda_ndarray.cuda_available == False: if cuda_ndarray.cuda_available == False:
from nose.plugins.skip import SkipTest from nose.plugins.skip import SkipTest
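The guard in this hunk raises `SkipTest` when PyCUDA is missing so the whole test module is skipped rather than failing. A minimal sketch of the same pattern using only the standard library (the file above uses nose's `SkipTest`; `skip_unless_importable` is a hypothetical helper, not part of the diff):

```python
import importlib
import unittest

def skip_unless_importable(module_name):
    # Mirrors the guard in the diff: raise SkipTest when an optional
    # dependency is missing, using unittest.SkipTest instead of nose's.
    try:
        importlib.import_module(module_name)
    except ImportError:
        raise unittest.SkipTest("%s not installed."
                                " We skip tests that need it." % module_name)

# 'json' is in the standard library, so this call does not raise:
skip_unless_importable("json")
```

Test runners that understand `unittest.SkipTest` (including nose and pytest) report such tests as skipped instead of errored.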
@@ -28,7 +28,7 @@ import pycuda.gpuarray
 def test_pycuda_only():
-    """Run pycuda only example to test that pycuda work."""
+    """Run pycuda only example to test that pycuda works."""
     from pycuda.compiler import SourceModule
     mod = SourceModule("""
     __global__ void multiply_them(float *dest, float *a, float *b)
......
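The `multiply_them` kernel in this hunk computes an elementwise product, `dest[i] = a[i] * b[i]`, with one GPU thread per element. A pure-Python sketch of the same computation, useful as a reference result when no GPU or PyCUDA is available (the function name mirrors the kernel's):

```python
def multiply_them(a, b):
    # Same computation as the CUDA kernel: dest[i] = a[i] * b[i],
    # with one "thread" per list element.
    return [x * y for x, y in zip(a, b)]

print(multiply_them([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # [4.0, 10.0, 18.0]
```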
@@ -51,8 +51,9 @@ class CudaNdarrayType(Type):
     def __init__(self, broadcastable, name=None, dtype=None):
         if dtype != None and dtype != 'float32':
-            raise TypeError(self.__class__.__name__+' only support dtype float32 for now.'\
-                    'Tried using dtype %s for variable %s' % (dtype, name))
+            raise TypeError('%s only supports dtype float32 for now. Tried '
+                            'using dtype %s for variable %s' %
+                            (self.__class__.__name__, dtype, name))
         self.broadcastable = tuple(broadcastable)
         self.name = name
         self.dtype_specs()  # error checking is done there
......
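The old message in this hunk ran the two string literals together ("for now.Tried") and never interpolated the class name; the new version formats all three values. A stripped-down stand-in for the check (not the real `CudaNdarrayType` class, which has more machinery):

```python
class CudaNdarrayType:
    """Minimal sketch of the dtype check from the diff above."""
    def __init__(self, broadcastable, name=None, dtype=None):
        if dtype is not None and dtype != 'float32':
            # Fixed message: class name, offending dtype, variable name.
            raise TypeError('%s only supports dtype float32 for now. Tried '
                            'using dtype %s for variable %s' %
                            (self.__class__.__name__, dtype, name))
        self.broadcastable = tuple(broadcastable)
        self.name = name

# float32 is accepted; any other dtype raises a readable TypeError.
t = CudaNdarrayType((False, False), name='x', dtype='float32')
try:
    CudaNdarrayType((False,), name='y', dtype='int64')
except TypeError as e:
    print(e)
```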