Commit 726aeae5 authored by abalkin

Merge remote-tracking branch 'upstream/master' into issue-1080

@@ -43,13 +43,13 @@ Interface change:
 * Now we do not support unaligned ndarray in python code. (Frederic B.)
   We did not support it in c code and supporting it in python code made
   the detection harder.
-* Now we only support officialy scipy 0.7.2 and numpy 1.5.0 (Frederic B.)
-  We weren't and aren't testing with older version.
+* Now we only officially support scipy 0.7.2 and numpy 1.5.0 (Frederic B.)
+  We weren't and aren't testing with older versions.
 * The theano.sparse.SparseType is available even when scipy is not (Frederic B.)
 * Fixes issue where members of consider_constant grad parameter
   were treated differently from Constant variables. (Ian G.)
-* Remove the parameter g_cost to theano.grad(). (Ian G.)
-  Use the new more powerfull parameter known_grads instead.
+* Remove the parameter g_cost from theano.grad(). (Ian G.)
+  Use the new more powerful parameter known_grads instead.
 
 NumPy interface support:
 * theano.tensor.where is an alias for theano.tensor.switch to support NumPy semantic. (Ian G.)
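The `where` alias in the last line of the hunk above mirrors NumPy's three-argument `numpy.where`. A NumPy-only sketch of the shared elementwise-selection semantics (no Theano needed):

```python
import numpy as np

# theano.tensor.where(cond, a, b) is documented above as an alias for
# theano.tensor.switch(cond, a, b); both follow numpy.where's
# three-argument, elementwise selection semantics shown here.
cond = np.array([True, False, True])
a = np.array([1, 2, 3])
b = np.array([10, 20, 30])

result = np.where(cond, a, b)  # picks from a where cond is True, else from b
```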
@@ -75,23 +75,23 @@ New Feature:
 * Allow integer axes when keepdims==True (Jeremiah Lowin)
 * Add erfinv and erfcinv op. (Jey Kottalam)
 * Added tensor.batched_dot(). (Caglar Gulcehre)
-  It use scan behind the scene, but making doing this easier.
+  It uses scan behind the scenes, but makes doing this easier.
 * theano.get_constant_value(x) (Frederic B.)
-  This try to do have x as a constant int.
-  This do some constant folding to try to convert x into an int.
-  Used by some optimization.
+  This tries to have x as a constant int.
+  This does some constant folding to try to convert x into an int.
+  Used by some optimizations.
 * Add theano.tensor.io.{MPIRecv,MPIRecvWait,MPISend,MPISendWait} (Matthew Rocklin)
-  Theano do not automatically use them. It is up to you to use them and split your computation.
+  Theano does not automatically use them. It is up to you to use them and split your computation.
 * Added theano.sandbox.linalg.eig (abalkin)
 * Started some support for Python3 (abalkin)
-  setup.py support python3 now.
-  It call 2to3 during the setup.
-  Python3 not fully supported as we didn't update the c code.
+  setup.py supports python3 now.
+  It calls 2to3 during the setup.
+  Python3 is not fully supported as we didn't update the c code.
 
 Crash Fix:
 * Fix a crash related to scan.grad due to the new mechanism. (Ian G.)
-* Fix an optimization warning. Now it get optimized. (Frederic B.)
+* Fix an optimization warning. Now it gets optimized. (Frederic B.)
 * Fix crash introduced in 0.6rc1 in theano.grad (Ian G.)
 * Fix crash introduced in 0.6rc1 in the grad of scan (Razvan P.)
 * Fix crash introduced in 0.6rc1 in the grad of clip (Ian G.)
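Among the new features listed above, `tensor.batched_dot` is easy to illustrate. This NumPy-only sketch shows the intended semantics (one matrix product per leading batch index); it is not Theano's scan-based implementation, just an equivalent explicit loop:

```python
import numpy as np

# batched_dot(A, B): for A with shape (batch, n, m) and B with shape
# (batch, m, k), compute one matrix product per batch entry, yielding
# shape (batch, n, k).  Theano builds this with scan; the loop below
# computes the same result.
rng = np.random.RandomState(0)
A = rng.rand(4, 2, 3)
B = rng.rand(4, 3, 5)

out = np.array([np.dot(A[i], B[i]) for i in range(A.shape[0])])
```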
@@ -125,7 +125,7 @@ Theano 0.6rc1 (October 1st, 2012)
 Highlights:
 * Bug fixes, crash fixes, CPU and GPU speed up.
 * theano_var.eval({other_var: val[,...]} to simplify the usage of Theano (Ian G.)
-* New default linker `cvm`. This is the execution engine that tells what op to run in which order.
+* New default linker `cvm`. This is the execution engine that tells ops to run in certain orders.
   It is now implemented in C and enables lazy evaluation of ifelse op.
 * Faster theano.function compilation. (Pascal L., Ian G.)
 * Big sparse submodule update and documentation of it. (Nicolas Bouchard)

@@ -153,7 +153,7 @@ Bug fixes:
   with those defaults then stuck at old values if the config variables were
   changed during program execution. (David W-F)
 * Fixed many subtle bugs involving mutable default arguments which may have
-  led to unexpected behaviour, such as objects sharing instance variables
+  led to unexpected behavior, such as objects sharing instance variables
   they were not supposed to share. (David W-F)
 * Correctly record the GPU device number used when we let the driver select it.
   (Frederic B.)

@@ -408,7 +408,7 @@ Documentation:
   (Frederic B.)
 * New installation instructions for Windows using EPD (Pascal L.)
 * New installation on Windows by using a Linux VM from ContinuumIO (Frederic B.)
-* Revisions of Theano tutorial and addition of exercices to it. (Eric L.)
+* Revisions of Theano tutorial and addition of exercises to it. (Eric L.)
 * New tutorial on Sparse variable. (Nicolas B., Sebastien Lemieux, Frederic Bastien)
   http://www.deeplearning.net/software/theano/tutorial/sparse.html
 * Installation documentation for CentOS6 (Frederic B.)
@@ -769,34 +769,10 @@ You can then proceed to the :ref:`windows_basic` or the :ref:`windows_bleeding_e
 Alternative: Anaconda 0.8.3 (Linux VM on Windows)
 #################################################
 
-ContinuumIO_ provides a free Windows VM with Theano install. The VM is the CentOS6.2 64 bit OS.
-
-- If you do not have VMWare installed, install VMWare player (free): http://www.vmware.com/products/player/
-- Download the VM: http://continuum.io/downloads.html
-- Follow the instruction on the ContinuumIO website to start the VM
-- Configure Theano by executing this:
-
-  .. code-block:: bash
-
-      echo "[blas]" >> ~/.theanorc
-      echo "ldflags=" >> ~/.theanorc
-
-- [Optional] To enable the network, go into the VMWare setting for the vm and set the networking to NAT. Start the VM. In the VM, comment all lines in the file /etc/udev/rules.d/70-persistent-net.rules and restart the VM.
-- [Optional] to install easy_install:
-
-  .. code-block:: bash
-
-      wget -c http://pypi.python.org/packages/source/s/setuptools/setuptools-0.6c11.tar.gz#md5=7df2a529a074f613b509fb44feefe74e
-      tar -zxf setuptools-0.6c11.tar.gz
-      cd setuptools-0.6c11
-      sudo /opt/anaconda/bin/python setup.py install
-
-- [Optional] To install pip
-
-  .. code-block:: bash
-
-      #install setuptools
-      sudo /opt/anaconda/bin/easy_install pip
+ContinuumIO_ was providing a free VM with Theano installed. Now they
+provide a new installation system that install itself on Windows. We
+don't have the time now to update the docs, so we remove the old
+documentations that don't work.
 
 .. _ContinuumIO: http://continuum.io
@@ -147,7 +147,7 @@ class BadThunkOutput(DebugModeError):
     val2 = None
     """The value computed by `thunk2`"""
 
-    def __init__(self, r, thunk1, val1, thunk2, val2):
+    def __init__(self, r, thunk1, val1, thunk2, val2, inputs_val=()):
         """Initialize members"""
         DebugModeError.__init__(self)  # to be compatible with python2.4
         self.r = r

@@ -155,6 +155,7 @@ class BadThunkOutput(DebugModeError):
         self.val1 = val1
         self.thunk2 = thunk2
         self.val2 = val2
+        self.inputs_val = inputs_val
 
     def offending_op(self):
         """Return the Op class whose c_code and perform

@@ -171,7 +172,11 @@ class BadThunkOutput(DebugModeError):
         print >> sio, "BadThunkOutput"
         print >> sio, "  variable    :", self.r
         print >> sio, "  Outputs Type:", self.r.type
-        print >> sio, "  Inputs Type :", [i.type for i in self.r.owner.inputs]
+        print >> sio, "  Inputs Type :", [i.type for i in self.r.owner.inputs],
+        print >> sio, "  Inputs Shape:", [getattr(val, 'shape', None)
+                                          for val in self.inputs_val]
+        print >> sio, "  Inputs Strides:", [getattr(val, 'strides', None)
+                                            for val in self.inputs_val]
         print >> sio, "  Apply :", self.r.owner
         print >> sio, "  thunk1 :", self.thunk1
         print >> sio, "  thunk2 :", self.thunk2

@@ -1331,9 +1336,11 @@ def _check_preallocated_output(node, thunk, prealloc_modes, def_val,
         for r in node.outputs:
             if not r.type.values_eq_approx(r_vals[r], storage_map[r][0]):
                 # TODO: indicate it is not a C/Py problem
+                inputs_val = [storage_map[inp] for inp in r.owner.inputs]
                 raise BadThunkOutput(r,
                     thunk1='Reference value', val1=r_vals[r],
-                    thunk2=thunk_name, val2=storage_map[r][0])
+                    thunk2=thunk_name, val2=storage_map[r][0],
+                    inputs_val=inputs_val)
 
         # Clear storage_map
         for r in node.outputs:

@@ -1911,9 +1918,11 @@ class _Linker(gof.link.LocalLinker):
                 if not r.type.values_eq_approx(r_vals[r], storage_map[r][0]):
                     #import pdb; pdb.set_trace()
                     #r.type.values_eq_approx(r_vals[r], storage_map[r][0])
+                    inputs_val = [storage_map[inp] for inp in r.owner.inputs]
                     raise BadThunkOutput(r,
                         thunk1='perform', val1=r_vals[r],
-                        thunk2='c_code', val2=storage_map[r][0])
+                        thunk2='c_code', val2=storage_map[r][0],
+                        inputs_val=inputs_val)
                 else:
                     #print >> sys.stderr, i, "DEBUGMODE storing reference output %x" % id(storage_map[r][0])
                     #retrieve each output from the storage_map
@@ -2308,6 +2308,8 @@ class SpecifyShape(Op):
     @note:     Maybe in the future we will never do the assert!
     @note:     We currently don't support specifying partial shape information.
+
+    @todo:     test this op with sparse and cuda ndarray. Do c code for them too.
     """
     view_map = {0: [0]}

@@ -2324,11 +2326,16 @@ class SpecifyShape(Op):
         if not isinstance(x, Variable):
             x = as_tensor_variable(x)
         shape = as_tensor_variable(shape)
+        assert shape.ndim == 1
+        assert "int" in shape.dtype
+        if isinstance(shape, TensorConstant):
+            assert shape.data.size == x.ndim
         return Apply(self, [x, shape], [x.type()])
 
     def perform(self, node, inp, out_):
         x, shape = inp
         out, = out_
+        assert x.ndim == shape.size
         assert numpy.all(x.shape == shape), ("got shape", x.shape,
                                              "expected", shape)
         out[0] = x

@@ -2368,6 +2375,47 @@ class SpecifyShape(Op):
             return [None]
         return self.make_node(eval_points[0], *inputs[1:]).outputs
 
+    def c_code(self, node, nodename, inp, out, sub):
+        if not isinstance(node.inputs[0], TensorVariable):
+            # The c code bellow support only Tensor.  super.c_code
+            # will raise an exception to tell that there isn't c code
+            # for the other cases.
+            return super(SpecifyShape, self).c_code(node, nodename,
+                                                    inp, out, sub)
+        iname, shape = inp
+        oname, = out
+        fail = sub['fail']
+
+        return """
+        if (PyArray_NDIM(%(iname)s) != PyArray_DIMS(%(shape)s)[0]) {
+            PyErr_Format(PyExc_AssertionError,
+                         "SpecifyShape: vector of shape have %%d element,"
+                         " but the input have %%d dimensions.",
+                         PyArray_NDIM(%(iname)s),
+                         PyArray_DIMS(%(shape)s)[0]);
+            %(fail)s;
+        }
+        for(int i = 0; i < PyArray_NDIM(%(iname)s); i++){
+            dtype_%(shape)s shp = ((dtype_%(shape)s*)PyArray_GETPTR1(%(shape)s,
+                                                                     i))[0];
+            if (PyArray_DIMS(%(iname)s)[i] != shp) {
+                PyErr_Format(PyExc_AssertionError,
+                             "SpecifyShape: dim %%d of input have shape %%d,"
+                             " expected %%d.",
+                             i, PyArray_DIMS(%(iname)s)[i],
+                             shp);
+                %(fail)s;
+            }
+        }
+        Py_XDECREF(%(oname)s);
+        %(oname)s = %(iname)s;
+        Py_XINCREF(%(oname)s);
+        """ % locals()
+
+    def c_code_cache_version(self):
+        return (1,)
+
 specify_shape = SpecifyShape()
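The run-time contract the hunk above implements can be summarized in plain NumPy. The helper name below is hypothetical (it is not a Theano API); it mirrors the asserts added to `make_node` and `perform`:

```python
import numpy as np

# Hypothetical helper mirroring SpecifyShape's checks: the declared
# shape must be a 1-d integer vector with one entry per input
# dimension, and must match the value's actual shape exactly; on
# success the input is returned unchanged (the op is a view/identity).
def specify_shape_check(x, shape):
    shape = np.asarray(shape)
    assert shape.ndim == 1
    assert "int" in str(shape.dtype)
    assert x.ndim == shape.size
    assert np.all(np.asarray(x.shape) == shape), ("got shape", x.shape,
                                                  "expected", shape)
    return x

checked = specify_shape_check(np.zeros((2, 3)), [2, 3])
```

A wrong shape (e.g. `[2, 4]` for a `(2, 3)` array) or a wrong number of dimensions raises `AssertionError`, which is exactly what the new tests further down exercise.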
@@ -340,6 +340,7 @@ class DimShuffle(Op):
             #borrow only the writable flag from the base
             # the NPY_OWNDATA flag will default to 0.
             '(NPY_ARRAY_WRITEABLE*PyArray_ISWRITEABLE(%(basename)s)), NULL)'),
+            'if (%(res)s == NULL) %(fail)s;',
             #recalculate flags: CONTIGUOUS, FORTRAN, ALIGNED
             'PyArray_UpdateFlags(%(res)s, NPY_ARRAY_UPDATE_ALL)',
             #we are making a view in both inplace and non-inplace cases
@@ -6326,6 +6326,54 @@ def test_transpose():
     assert tensor.transpose(tensor.dmatrix()).name is None
 
+
+class TestSpecifyShape(unittest.TestCase):
+    def shortDescription(self):
+        return None
+
+    def test_bad_shape(self):
+        """Test that at run time we raise an exception when the shape
+        is not the one specified."""
+        specify_shape = SpecifyShape()
+
+        x = vector()
+        xval = numpy.random.rand(2).astype(floatX)
+        f = theano.function([x], specify_shape(x, [2]))
+        f(xval)
+        xval = numpy.random.rand(3).astype(floatX)
+        self.assertRaises(AssertionError, f, xval)
+
+        x = matrix()
+        xval = numpy.random.rand(2, 3).astype(floatX)
+        f = theano.function([x], specify_shape(x, [2, 3]))
+        f(xval)
+        for shape in [(1, 3), (2, 2), (5, 5)]:
+            xval = numpy.random.rand(*shape).astype(floatX)
+            self.assertRaises(AssertionError, f, xval)
+
+    def test_bad_number_of_shape(self):
+        """Test that the number of dimensions provided is good."""
+        specify_shape = SpecifyShape()
+
+        x = vector()
+        shape_vec = ivector()
+        xval = numpy.random.rand(2).astype(floatX)
+        self.assertRaises(AssertionError, specify_shape, x, [])
+        self.assertRaises(AssertionError, specify_shape, x, [2, 2])
+        f = theano.function([x, shape_vec], specify_shape(x, shape_vec))
+        self.assertRaises(AssertionError, f, xval, [])
+        self.assertRaises(AssertionError, f, xval, [2, 2])
+
+        x = matrix()
+        xval = numpy.random.rand(2, 3).astype(floatX)
+        for shape in [(),
+                      (1,),
+                      (2, 3, 4)]:
+            self.assertRaises(AssertionError, specify_shape, x, shape)
+            f = theano.function([x, shape_vec], specify_shape(x, shape_vec))
+            self.assertRaises(AssertionError, f, xval, shape)
+
+
 class TestInferShape(utt.InferShapeTester):
     def test_infer_shape(self):
@@ -95,6 +95,10 @@ class test_DimShuffle(unittest_tools.InferShapeTester):
                                 [DimShuffle(ib, shuffle)(adtens)],
                                 [adtens_val], DimShuffle)
 
+    def test_too_big_rank(self):
+        x = tensor.dscalar()
+        y = x.dimshuffle(('x',) * (numpy.MAXDIMS + 1))
+        self.assertRaises(ValueError, y.eval, {x: 0})
+
 class test_Broadcast(unittest.TestCase):
     def setUp(self):
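The `test_too_big_rank` hunk above leans on NumPy's hard cap on array rank (`numpy.MAXDIMS`, 32 in NumPy releases of this era). A standalone sketch of the underlying limit, using a dimension count large enough to exceed the cap regardless of NumPy version:

```python
import numpy as np

# NumPy arrays have a compiled-in maximum number of dimensions
# (32 in the NumPy releases contemporary with this change); asking
# for far more dimensions than the cap allows raises ValueError,
# which is the error the dimshuffle test above expects to propagate.
try:
    np.zeros((1,) * 100)
    exceeded = False
except ValueError:
    exceeded = True
```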