Commit e8c50c78 authored by fsavard

merge

@@ -27,7 +27,7 @@ Theano (current directory) is the distribution directory.
 * scalar depends upon core
 * tensor depends upon scalar
 * sparse depends upon tensor
-* sandbox can depends on everything else
+* sandbox can depend on everything else
 * Theano/examples are copies of the example on the wiki
 * Theano/benchmark, Theano/bin and Theano/examples are in the distribution,
   but not in the python package
...
@@ -99,7 +99,7 @@ The ``make_node`` method creates a node to be included in the expression graph.
 It runs when we apply our Op (``fibby``) to Variable (``x``), as in ``fibby(tensor.vector())``.
 When an Op has multiple inputs, their order in the inputs argument to ``Apply``
 is important: Theano will call ``make_node(*inputs)`` to copy the graph,
-so it is important to not change the semantics of the expression by doing changing the argument order.
+so it is important not to change the semantics of the expression by changing the argument order.
...
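The point about argument order can be shown with a small stand-in. This is plain Python, not Theano's real classes (`Apply`, `SubOp`, and `make_node` here are simplified hypothetical versions), sketching why re-applying an op with `make_node(*node.inputs)` preserves semantics only if the input order is preserved:

```python
class Apply(object):
    """Simplified stand-in for theano.gof.Apply: records an op and its inputs."""
    def __init__(self, op, inputs):
        self.op = op
        self.inputs = inputs

class SubOp(object):
    """A toy op computing inputs[0] - inputs[1]; input order matters."""
    def make_node(self, *inputs):
        return Apply(self, list(inputs))
    def perform(self, node):
        a, b = node.inputs
        return a - b

op = SubOp()
node = op.make_node(5, 3)
# When Theano copies a graph it re-applies the op to the node's inputs,
# in order: make_node(*node.inputs). Same order, same expression.
copied = node.op.make_node(*node.inputs)
assert op.perform(copied) == op.perform(node) == 2
# Swapping the inputs changes the expression: 3 - 5 == -2.
assert op.perform(op.make_node(*reversed(node.inputs))) == -2
```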
@@ -138,7 +138,7 @@ following methods:
 other criterion C with respect to the Op's input.
 If the outputs of your op are :math:`[ f_1, ... f_n]`, then
-``output_derivatives`` gives
+``output_gradients`` is
 :math:`[ grad_{f_1}(C), grad_{f_2}(C), ... , grad_{f_n}(C) ]`.
 If the inputs of your op are :math:`[x_1, ..., x_m]`, then your Op.grad
 should return :math:`[ grad_{x_1}(C), grad_{x_2}(C), ..., grad_{x_m}(C) ]`,
...
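The returned list is just the chain rule for the scalar criterion :math:`C`: each input gradient combines the incoming output gradients with the op's local partial derivatives,

```latex
grad_{x_i}(C) = \frac{\partial C}{\partial x_i}
             = \sum_{j=1}^{n} \frac{\partial C}{\partial f_j}\,
               \frac{\partial f_j}{\partial x_i}
             = \sum_{j=1}^{n} grad_{f_j}(C)\,\frac{\partial f_j}{\partial x_i}.
```

So ``Op.grad`` takes ``output_gradients`` (:math:`grad_{f_j}(C)`) and multiplies them through the op's Jacobian.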
@@ -14,7 +14,8 @@ Requirements
 ------------
 In order to use Theano, the following libraries and software will need
-to be installed:
+to be installed (MacOS and Windows users should refer to platform-specific
+instructions below for detailed installation steps):
 Linux, Mac OS X or Windows operating system
 We develop mainly on 64-bit Linux machines. 32-bit architectures are
@@ -394,7 +395,7 @@ Windows V1 (Installing from Scratch)
 You can keep the default install options (except for the installation directory).
 - Install Mercurial. You can download it
-  `here <http://mercurial.selenic.com/downloads>`_. You may get either the command
+  `here <http://mercurial.selenic.com/downloads>`__. You may get either the command
   line Windows version or the TortoiseHG GUI version: it does not matter as
   far as installing Theano is concerned.
@@ -450,7 +451,7 @@ compile GotoBLAS2 (ATLAS may work too, but was not tested, and is
 usually reported to be slower and more difficult to compile -- especially
 on Windows).
 GotoBLAS2 can be downloaded
-`here <http://www.tacc.utexas.edu/tacc-projects/gotoblas2/downloads>`_
+`here <http://www.tacc.utexas.edu/tacc-projects/gotoblas2/downloads>`__
 after registering on the website (we tested v1.13).
 To compile it, you will also need to install MSYS and Perl,
 as described below.
@@ -538,8 +539,7 @@ Windows: Using the GPU
 Please note that these are tentative instructions (we have not yet been able to
 get the GPU to work under Windows with Theano).
-Please report your own successes / failures on the
-`theano-users <http://groups.google.com/group/theano-users>`_ mailing list.
+Please report your own successes / failures on the `theano-users`_ mailing list.
 Those are instructions for the 32-bit version of Python (the one that comes
 with Python(x,y) is 32-bit).
@@ -555,14 +555,15 @@ use a compilation directory located somewhere else:
 [global]
 base_compiledir=path_to_a_directory_without_such_characters
-You also need to add in the configuration file those lines:
+You also need to add in the configuration file those lines (make sure this
+is the correct Python installation path):
 .. code-block:: cfg
 [cuda]
 nvccflags=-LC:\Python26\libs
 Then
 1) Install CUDA driver (32-bit on 32-bit Windows, idem for 64-bit).
...
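Taken together, the two fragments above would make the configuration file look like this (``C:\Python26\libs`` is only the example path used in the hunk; substitute your actual Python installation):

```cfg
[global]
base_compiledir=path_to_a_directory_without_such_characters

[cuda]
nvccflags=-LC:\Python26\libs
```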
@@ -128,16 +128,26 @@ Config Attributes
 Default 'Mode'
-This set the default compilation mode for theano functions. By default the
+This sets the default compilation mode for theano functions. By default the
 mode Mode is equivalent to FAST_RUN. See Config attribute linker and optimizer.
+.. attribute:: config.lib.amdlibm
+Bool value: either True or False
+Default False
+This makes the compilation use the
+`amdlibm <http://developer.amd.com/cpu/libraries/libm/>`__
+library, which is faster than the standard libm.
 .. attribute:: linker
 String value: 'c|py', 'py', 'c', 'c|py_nogc', 'c&py'
 Default: 'c|py'
-When the mode is Mode, it set the default linker used.
+When the mode is Mode, it sets the default linker used.
 .. attribute:: optimizer
@@ -145,7 +155,7 @@ Config Attributes
 Default: 'fast_run'
-When the mode is Mode, it set the default optimizer used.
+When the mode is Mode, it sets the default optimizer used.
 .. attribute:: warn.ignore_bug_before
...
@@ -46,7 +46,7 @@ AddConfigVar('DebugMode.check_strides',
     IntParam(1, lambda i: i in (0,1,2)))
 AddConfigVar('DebugMode.warn_input_not_reused',
-        ("Generate a warning when the destroy_map tells that an op works inplace, but the op did not reuse the input for its output."
+        ("Generate a warning when the destroy_map or view_map tells that an op works inplace, but the op did not reuse the input for its output."
         ),
         BoolParam(True))
@@ -519,6 +519,18 @@ def _check_inputs(node, storage_map, r_vals, dr_vals, active_nodes, clobber_dr_v
             if storage_map[node.outputs[oo]][0] is not storage_map[node.inputs[ii[0]]][0]:
                 warning("input idx %d marked as destroyed was not changed for node '%s'" % (ii[0], str(node)))
+    if warn_input_not_reused:
+        vmap = getattr(node.op, 'view_map', {})
+        for oo, ii in vmap.iteritems():
+            if hasattr(node.outputs[0].type, "may_share_memory"):
+                if not node.outputs[0].type.may_share_memory(storage_map[node.outputs[oo]][0], storage_map[node.inputs[ii[0]]][0]):
+                    # When a subtensor returns a tensor of ndim == 0, numpy seems to return a copy.
+                    # When we have an empty ndarray (happens with the output guard), it is not the same. Why?
+                    if storage_map[node.outputs[oo]][0].ndim > 0 and storage_map[node.outputs[oo]][0].size > 0:
+                        warning("input idx %d marked as viewed but new memory allocated by node '%s'" % (ii[0], str(node)))
+            elif storage_map[node.outputs[oo]][0] is not storage_map[node.inputs[ii[0]]][0]:
+                warning("input idx %d marked as viewed but new memory allocated by node '%s'" % (ii[0], str(node)))
     for r_idx, r in enumerate(node.inputs):
         if not r.type.values_eq(r_vals[r], storage_map[r][0]):
             # some input node 'r' got changed by running the node
...
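A stripped-down model of what this DebugMode check does, with plain Python lists standing in for storage cells and a `destroy_map` in Theano's format (output index → list of input indices the op claims to reuse). This is an illustrative sketch, not DebugMode's actual code:

```python
def check_input_reuse(destroy_map, storage_in, storage_out):
    """Return warning messages when an op declared it would work in place
    (destroy_map) but its output is not the very same object as the input
    it claimed to destroy."""
    msgs = []
    for out_idx, in_idxs in destroy_map.items():
        if storage_out[out_idx] is not storage_in[in_idxs[0]]:
            msgs.append(
                "input idx %d marked as destroyed was not changed" % in_idxs[0])
    return msgs

buf = [1.0, 2.0]
# Well-behaved in-place op: the output storage IS the input storage.
assert check_input_reuse({0: [0]}, [buf], [buf]) == []
# Misbehaving op: it allocated new memory instead of reusing the input.
assert check_input_reuse({0: [0]}, [buf], [list(buf)]) == [
    "input idx 0 marked as destroyed was not changed"]
```

The `view_map` branch added in the hunk is the same idea, except "reuse" is tested with `may_share_memory` rather than object identity.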
@@ -14,6 +14,8 @@ import tokenize
 import argparse
 import reindent
+SKIP_WHITESPACE_CHECK_FILENAME = ".hg/skip_whitespace_check"
 def get_parse_error(code):
     """
     Checks code for ambiguous tabs or other basic parsing issues.
@@ -128,6 +130,20 @@ def save_diffs(diffs, filename):
         diff_file.write(diff)
     diff_file.close()
+def should_skip_commit():
+    if not os.path.exists(SKIP_WHITESPACE_CHECK_FILENAME):
+        return False
+    whitespace_check_file = open(SKIP_WHITESPACE_CHECK_FILENAME, "r")
+    whitespace_check_changeset = whitespace_check_file.read()
+    whitespace_check_file.close()
+    return whitespace_check_changeset == parent_commit()
+def save_skip_next_commit():
+    whitespace_check_file = open(SKIP_WHITESPACE_CHECK_FILENAME, "w")
+    whitespace_check_file.write(parent_commit())
+    whitespace_check_file.close()
 def main(argv=None):
     if argv is None:
         argv = sys.argv[1:]
@@ -145,12 +161,32 @@ def main(argv=None):
         const=True,
         help="only check indentation if the file was previously correctly indented (or is new)"
         )
+    parser.add_argument("-s", "--skip-after-failure",
+        action="store_const",
+        default=False,
+        const=True,
+        help="when this pre-commit hook fails, don't run it on the next commit; "
+             "this lets you check in your changes and then check in "
+             "any necessary whitespace changes in the subsequent commit"
+        )
     args = parser.parse_args(argv)
+    # -i and -s are incompatible; if you skip checking, you end up with a not-correctly-indented
+    # file, which -i then causes you to ignore!
+    if args.skip_after_failure and args.incremental:
+        print >> sys.stderr, "*** check whitespace hook misconfigured! -i and -s are incompatible."
+        return 1
     if is_merge():
         # don't inspect merges: (a) they're complex and (b) they don't really introduce new code
         return 0
+    if args.skip_after_failure and should_skip_commit():
+        # we're set up to skip this one, so skip it, but
+        # first, make sure we don't skip the next one as well :)
+        os.remove(SKIP_WHITESPACE_CHECK_FILENAME)
+        return 0
     block_commit = False
     diffs = []
@@ -185,12 +221,15 @@ def main(argv=None):
         save_diffs(diffs, diffs_filename)
         print >> sys.stderr, "*** To fix all indentation issues, run: cd `hg root` && patch -p0 < %s" % diffs_filename
     if block_commit:
         save_filename = ".hg/commit_message.saved"
         save_commit_message(save_filename)
         print >> sys.stderr, "*** Commit message saved to %s" % save_filename
+        if args.skip_after_failure:
+            save_skip_next_commit()
+            print >> sys.stderr, "*** Next commit attempt will not be checked. To change this, rm %s" % SKIP_WHITESPACE_CHECK_FILENAME
     return int(block_commit)
...
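The skip mechanism in this hook boils down to: on failure, write the parent changeset id into a marker file; on the next run, skip (and delete the marker) only if the parent is still the same. A self-contained sketch of that logic, with a stubbed parent id and a temporary directory instead of `.hg/`:

```python
import os
import tempfile

def should_skip_commit(marker_path, parent_commit):
    # Skip only if a marker exists AND it names the same parent changeset.
    if not os.path.exists(marker_path):
        return False
    with open(marker_path) as f:
        return f.read() == parent_commit

def save_skip_next_commit(marker_path, parent_commit):
    # Remember which parent the failed commit attempt was based on.
    with open(marker_path, "w") as f:
        f.write(parent_commit)

tmpdir = tempfile.mkdtemp()
marker = os.path.join(tmpdir, "skip_whitespace_check")

assert not should_skip_commit(marker, "abc123")   # no marker yet: check runs
save_skip_next_commit(marker, "abc123")           # hook failed: remember parent
assert should_skip_commit(marker, "abc123")       # retry of same commit: skip
os.remove(marker)                                 # ...but only skip once
assert not should_skip_commit(marker, "abc123")
# A marker left over from a different parent must not cause a skip.
save_skip_next_commit(marker, "abc123")
assert not should_skip_commit(marker, "def456")
```

Tying the skip to the parent changeset is what makes `-s` safe: the free pass applies only to an immediate retry of the same commit.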
-import atexit, os, stat
+import atexit, gc, os, stat
 from theano.compile import optdb
 from theano import config
@@ -96,6 +96,9 @@ if cuda_available:
     cuda_initialization_error_message = ""
     # actively closing our gpu session prevents segfault-on-exit on some systems
     atexit.register(gpu_shutdown)
+    # do garbage collection before releasing the gpu to avoid releasing invalid pointers later
+    # note that atexit-registered calls are called in LIFO order
+    atexit.register(gc.collect)
 except EnvironmentError, e:
     cuda_available = False
     cuda_initialization_error_message = e.message
...
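The LIFO claim in that comment is easy to verify: `atexit` runs handlers in reverse registration order, so registering `gc.collect` *after* `gpu_shutdown` makes collection happen *before* the GPU context is torn down. A quick check in a subprocess:

```python
import subprocess
import sys

child = r"""
import atexit
atexit.register(lambda: print("shutdown"))   # registered first -> runs last
atexit.register(lambda: print("collect"))    # registered last -> runs first
"""
out = subprocess.run([sys.executable, "-c", child],
                     capture_output=True, text=True).stdout.split()
assert out == ["collect", "shutdown"]
```

Collecting first ensures CudaNdarray objects free their device pointers while the context still exists, instead of after `cudaThreadExit`.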
@@ -12,43 +12,12 @@
 //If true, we fill with NAN allocated device memory.
 #define ALLOC_MEMSET 0
-#define DEBUG_GPU_CONTEXT_REFCOUNT 0
-// g_gpu_context_refcount starts at one b/c the gpu context will be implicitly created
-// on the first successful cuda call. the matching decref is in CudaNdarray_gpu_shutdown.
-static int g_gpu_context_refcount = 1;
-///////////////////////////
-// cuda context management
-///////////////////////////
-void gpu_context_incref() {
-    g_gpu_context_refcount++;
-#if DEBUG_GPU_CONTEXT_REFCOUNT
-    fprintf(stderr, "gpu_context_incref, to %d\n", g_gpu_context_refcount);
-#endif
-}
-void gpu_context_decref() {
-    g_gpu_context_refcount--;
-#if DEBUG_GPU_CONTEXT_REFCOUNT
-    fprintf(stderr, "gpu_context_decref, to %d\n", g_gpu_context_refcount);
-#endif
-    if(g_gpu_context_refcount == 0) {
-        // we're now free to close the cuda context; if we don't explicitly
-        // exit our cuda context, some systems segfault on process exit
-        // for as-yet unknown reasons; see
-        // http://groups.google.com/group/theano-users/browse_thread/thread/c351846e5cebe35f
-        cudaThreadExit();
-#if DEBUG_GPU_CONTEXT_REFCOUNT
-        fprintf(stderr, "gpu_context_decref at 0, calling cudaThreadExit\n");
-#endif
-    }
-}
 /////////////////////////
 // Alloc and Free
 /////////////////////////
+static int g_gpu_context_active = 0;
 /**
  *
  * In the test program I'm using, the _outstanding_mallocs decreases with every call.
@@ -80,9 +49,6 @@ void * device_malloc(size_t size)
         return NULL;
     }
     _outstanding_mallocs[0] += (rval != NULL);
-    if(rval != NULL) {
-        gpu_context_incref(); // keep the gpu context around until we've free this memory
-    }
 #if COMPUTE_GPU_MEM_USED
     for(int i=0;i<TABLE_SIZE;i++){
         if(NULL==_alloc_size_table[i].ptr){
@@ -104,6 +70,10 @@ void * device_malloc(size_t size)
 }
 int device_free(void *ptr)
 {
+    // if there is no gpu context, the call to cudaFree will fail; skip it entirely
+    if(!g_gpu_context_active) {
+        return 0;
+    }
     cudaError_t err = cudaFree(ptr);
     if (cudaSuccess != err)
     {
@@ -116,9 +86,6 @@ int device_free(void *ptr)
         return -1;
     }
     _outstanding_mallocs[0] -= (ptr != NULL);
-    if(ptr != NULL) {
-        gpu_context_decref();
-    }
 #if COMPUTE_GPU_MEM_USED
     int i=0;
     for(;i<TABLE_SIZE;i++)
@@ -1883,6 +1850,11 @@ CudaNdarray_gpu_init(PyObject* _unused, PyObject* args)
             "Unable to get the number of gpus available: %s",
             cudaGetErrorString(cudaGetLastError()));
     }
+    // as soon as the first successful call to a cuda* function is made, a
+    // gpu context has been created
+    g_gpu_context_active = 1;
     if(deviceCount <= 0) {
         return PyErr_Format(PyExc_EnvironmentError,
             "Can't use the GPU, no devices support CUDA");
@@ -1926,7 +1898,8 @@ CudaNdarray_gpu_init(PyObject* _unused, PyObject* args)
 PyObject *
 CudaNdarray_gpu_shutdown(PyObject* _unused, PyObject* _unused_args) {
-    gpu_context_decref();
+    cudaThreadExit();
+    g_gpu_context_active = 0; // context has now been closed down
     Py_INCREF(Py_None);
     return Py_None;
 }
...
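This hunk replaces the per-allocation reference counting with a single "context active" flag: shutdown closes the context unconditionally, and any later free is skipped instead of failing. The control flow, modeled in Python (hypothetical stand-ins for `cudaFree`/`cudaThreadExit`, not the actual CUDA API):

```python
class GpuContext(object):
    """Models the g_gpu_context_active flag from cuda_ndarray.cu."""
    def __init__(self):
        self.active = False
        self.freed = []

    def init(self):
        # CudaNdarray_gpu_init: first successful cuda* call creates the context
        self.active = True

    def shutdown(self):
        # CudaNdarray_gpu_shutdown: cudaThreadExit(); context is gone
        self.active = False

    def device_free(self, ptr):
        # if there is no gpu context, cudaFree would fail; skip it entirely
        if not self.active:
            return 0
        self.freed.append(ptr)
        return 0

ctx = GpuContext()
ctx.init()
assert ctx.device_free("p1") == 0 and ctx.freed == ["p1"]
ctx.shutdown()
# After shutdown, frees become harmless no-ops instead of failing calls.
assert ctx.device_free("p2") == 0 and ctx.freed == ["p1"]
```

Combined with the `gc.collect` atexit hook above, most frees still happen before shutdown; the flag only guards the stragglers.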
@@ -213,7 +213,8 @@ class SparseType(gof.Type):
         # a FAST_RUN computation..
         return scipy.sparse.issparse(a) \
                 and scipy.sparse.issparse(b) \
-                and abs(a-b).sum() < (1e-6 * a.nnz)
+                and ((abs(a-b).sum() < (1e-6 * a.nnz))
+                     or (a.nnz == 0 and b.nnz == 0))  # in case a and b are empty
     def values_eq(self, a, b):
         #WARNING: equality comparison of sparse matrices is not fast or easy
@@ -789,6 +790,10 @@ class StructuredDot(gof.Op):
         dtype_out = scalar.upcast(a.type.dtype, b.type.dtype)
         if b.type.ndim != 2:
             raise NotImplementedError('non-matrix b')
+        if _is_sparse_variable(b):
+            return gof.Apply(self, [a,b], [SparseType(a.type.format, dtype_out)()])
+        else:
             return gof.Apply(self, [a,b], [tensor.tensor(dtype_out, (False, b.type.broadcastable[1]))])
     def perform(self, node, (a,b), (out,)):
@@ -797,6 +802,11 @@ class StructuredDot(gof.Op):
         #variable = a.dot(b) # deprecated
         variable = a * b
+        if isinstance(node.outputs[0].type, SparseType):
+            assert _is_sparse(variable)
+            out[0] = variable
+            return
         assert _is_dense(variable) # scipy 0.7 automatically converts to dense
         # dot of an NxM sparse matrix, with a Mx1 dense matrix, returns vector not matrix
...
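The net effect of the `StructuredDot` change is a type dispatch in `make_node`: when `b` is a sparse variable the output is declared sparse (scipy's `*` yields a sparse product for sparse × sparse), otherwise it stays dense. A minimal stand-in for that decision, without scipy (boolean flags replace the real type tests):

```python
def structured_dot_output_type(a_is_sparse, b_is_sparse):
    """Mirror of the make_node change in this hunk: the result is sparse
    exactly when b is sparse (a is sparse here, as in the hunk's code path)."""
    assert a_is_sparse, "this code path has a sparse first input"
    return "sparse" if b_is_sparse else "dense"

assert structured_dot_output_type(True, True) == "sparse"
assert structured_dot_output_type(True, False) == "dense"
```

`perform` then branches the same way: a sparse output is stored as-is, a dense one goes through the existing `_is_dense` path.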
@@ -344,6 +344,28 @@ class test_structureddot(unittest.TestCase):
         outvals = f(kernvals, imvals)
         print outvals
+    def test_dot_sparse_sparse(self):
+        # test dot for 2 sparse matrix inputs
+        sparse_dtype = 'float64'
+        for sparse_format in ['csc', 'csr']:
+            a = SparseType(sparse_format, dtype=sparse_dtype)()
+            b = SparseType(sparse_format, dtype=sparse_dtype)()
+            d = theano.dot(a, b)
+            f = theano.function([a, b], theano.Out(d, borrow=True))
+            topo = f.maker.env.toposort()
+            for M, N, K, nnz in [(4, 3, 2, 3),
+                                 (40, 30, 20, 3),
+                                 (40, 30, 20, 30),
+                                 (400, 3000, 200, 6000),
+                                 ]:
+                if sparse_format == 'csc':
+                    spmat = sp.csc_matrix(random_lil((M, N), sparse_dtype, nnz))
+                    spmat2 = sp.csc_matrix(random_lil((N, K), sparse_dtype, nnz))
+                elif sparse_format == 'csr':
+                    spmat = sp.csr_matrix(random_lil((M, N), sparse_dtype, nnz))
+                    spmat2 = sp.csr_matrix(random_lil((N, K), sparse_dtype, nnz))
+                f(spmat, spmat2)
     def test_csc_correct_output_faster_than_scipy(self):
         sparse_dtype = 'float64'
         dense_dtype = 'float64'
...
@@ -33,6 +33,9 @@ def _info(*msg):
 def _warn(*msg):
     _logger.warn(' '.join(msg))
+# This is needed as we will hide the builtin `complex` later
+python_complex = complex
 def check_equal_numpy(x, y):
     """
     Returns True iff x and y are equal (checks the dtype and
@@ -367,8 +370,41 @@ def get_constant_value(v):
         ret = [[None]]
         v.owner.op.perform(v.owner, [const], ret)
         return ret[0][0]
-    if isinstance(v.owner.op, Subtensor) and v.ndim==0 and isinstance(v.owner.inputs[0], TensorConstant):
+    if isinstance(v.owner.op, Subtensor) and v.ndim==0:
+        if isinstance(v.owner.inputs[0], TensorConstant):
             return v.owner.inputs[0].data[v.owner.op.idx_list[0]]
+        # Needed to make a better graph in this test:
+        # theano/tensor/tests/test_sharedvar.py:test_shared_options.test_specify_shape_partial
+        if (v.owner.inputs[0].owner and
+            isinstance(v.owner.inputs[0].owner.op, Join) and
+            # Ensure the Join is joining only scalar variables (so that
+            # the constant value can be found at the same index as the one
+            # used in the sub-tensor).
+            all(var.ndim==0 for var in v.owner.inputs[0].owner.inputs)):
+            # The index list 'idx_list' should have length one
+            # since joining scalar variables results in a 1D vector.
+            assert len(v.owner.op.idx_list) == 1
+            # Note the '+ 1' is because the first argument to Join is the
+            # axis.
+            ret = v.owner.inputs[0].owner.inputs[v.owner.op.idx_list[0]+1]
+            ret = get_constant_value(ret)
+            # Join can implicitly cast its inputs in some cases.
+            return theano._asarray(ret, dtype=v.type.dtype)
+        if (v.owner.inputs[0].owner and
+            isinstance(v.owner.inputs[0].owner.op,
+                       theano.tensor.opt.MakeVector) and
+            # MakeVector normally accepts only scalars as input.
+            # We put this check in case that changes in the future.
+            all(var.ndim==0 for var in v.owner.inputs[0].owner.inputs)):
+            # The index list 'idx_list' should have length one
+            # since joining scalar variables results in a 1D vector.
+            assert len(v.owner.op.idx_list) == 1
+            ret = v.owner.inputs[0].owner.inputs[v.owner.op.idx_list[0]]
+            ret = get_constant_value(ret)
+            # MakeVector can implicitly cast its inputs in some cases.
+            return theano._asarray(ret, dtype=v.type.dtype)
     raise TypeError(v)
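The new cases let constant extraction look through a join of scalar constants: `join(axis, s0, s1, ...)[i]` has constant value `s_i` whenever each `s_j` is constant. A pure-Python model of that recursion (`Const`, `JoinOfScalars`, and `Index` are invented stand-ins for Theano graph nodes, not real Theano classes):

```python
class Const(object):
    def __init__(self, value):
        self.value = value

class JoinOfScalars(object):
    # The first argument to the real Join is the axis; scalars follow.
    def __init__(self, axis, *scalars):
        self.scalars = list(scalars)

class Index(object):
    # Models Subtensor with idx_list == [i] applied to `base`.
    def __init__(self, base, i):
        self.base, self.i = base, i

def get_constant_value(v):
    if isinstance(v, Const):
        return v.value
    if isinstance(v, Index) and isinstance(v.base, JoinOfScalars):
        # Indexing a join of scalars just selects that scalar; recurse,
        # since the scalar may itself need further extraction.
        return get_constant_value(v.base.scalars[v.i])
    raise TypeError(v)

vec = JoinOfScalars(0, Const(7), Const(8))
assert get_constant_value(Index(vec, 1)) == 8
```

The `+ 1` offset in the real code corresponds to skipping the axis argument, which the stand-in's constructor drops explicitly.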
@@ -531,7 +567,11 @@ class TensorType(Type):
     @staticmethod
     def may_share_memory(a, b):
+        # when this is called with `a` an ndarray and `b`
+        # a sparse matrix, numpy.may_share_memory fails
+        if a.__class__ is b.__class__:
             return numpy.may_share_memory(a, b)
+        else: return False
     @staticmethod
     def values_eq(a, b):
@@ -1477,6 +1517,54 @@ shape = Shape()
 _shape = shape #was used in the past, now use shape directly.
 pprint.assign(_shape, printing.MemberPrinter('shape'))
+class SpecifyShape(Op):
+    """
+    L{Op} that puts into the graph the user-provided shape.
+    In the case where this op stays in the final graph, we assert the shape.
+    For this, the output of this op must be used in the graph. This is not
+    the case most of the time if we only take the shape of the output.
+    Maybe there are other optimizations that will mess with this.
+    @note: Maybe in the future we will never do the assert!
+    @note: We currently don't support specifying partial shape information.
+    """
+    view_map = {0: [0]}
+    def __hash__(self):
+        return hash(type(self))
+    def __eq__(self, other):
+        return type(self) == type(other)
+    def __str__(self):
+        return self.__class__.__name__
+    def make_node(self, x, shape):
+        if not isinstance(x, Variable):
+            x = as_tensor_variable(x)
+        shape = as_tensor_variable(shape)
+        return Apply(self, [x, shape], [x.type()])
+    def perform(self, node, (x, shape), (out,)):
+        assert numpy.all(x.shape == shape), ("got shape", x.shape,
+                                             "expected", shape)
+        out[0] = x
+    def infer_shape(self, node, (xshape, sshape)):
+        new_shape = []
+        for dim in range(node.inputs[0].ndim):
+            try:
+                s = get_constant_value(node.inputs[1][dim])
+                s = as_tensor_variable(s)
+                new_shape.append(s)
+            except TypeError, e:
+                new_shape.append(node.inputs[1][dim])
+        assert len(new_shape) == len(xshape)
+        return [new_shape]
+    def grad(self, (x,), (gz,)):
+        return [gz]
+specify_shape = SpecifyShape()
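Functionally, `SpecifyShape` is an identity op with a runtime shape assertion (its `perform` above); the gradient passes straight through. The runtime behavior as a plain function, a sketch with the shape passed explicitly instead of read from an ndarray:

```python
def specify_shape_perform(x, x_shape, expected_shape):
    # SpecifyShape.perform: assert the shape matches, then pass x through
    # unchanged (view_map = {0: [0]} says the output is a view of the input).
    assert tuple(x_shape) == tuple(expected_shape), (
        "got shape", x_shape, "expected", expected_shape)
    return x

data = [[1, 2, 3], [4, 5, 6]]
# Matching shape: identity, same object out.
assert specify_shape_perform(data, (2, 3), (2, 3)) is data
# Mismatched shape: the assertion fires.
try:
    specify_shape_perform(data, (2, 3), (3, 2))
    raised = False
except AssertionError:
    raised = True
assert raised
```

As the docstring warns, the assert only runs if the op's *output* is actually used downstream; an optimization that reads only the shape can bypass it.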
 class MaxAndArgmax(Op):
     """Calculate the max and argmax over a given axis.
@@ -1620,10 +1708,10 @@ def min(x, axis='DEFAULT'):
         axis = 0
     elif axis=='DEFAULT':
         axis = x.type.ndim - 1
-        warnings.warn("The default axis of min will change! Now we return the min over the last dimensions. It will change to be the same as numpy: the min over all dimensions. To hide this warning and be compatible with the future behavior, set axis to -1 to have the current behavior. To have the futur behavior set axis to range(nb dim), but this don't support the grad. To have the grad, you must flatten the tensor before calling min().")
+        warnings.warn("The default axis of min will change! Now we return the min over the last dimensions. It will change to be the same as numpy: the min over all dimensions. To hide this warning and be compatible with the future behavior, set axis to -1 to have the current behavior. To have the future behavior, set axis to range(x.ndim), but this does not support the grad. To be able to get the grad, you must flatten the tensor before calling min().")
     elif axis is None:
         axis = x.type.ndim - 1
-        warnings.warn("The behavior of min when axis==None will change! Now we return the min over the last dimensions. It will change to the min over all dimensions as numpy. To hide this warning and be compatible with the future behavior, set axis to -1 to have the current behavior. To have the futur behavior set axis to range(nb dim), but this don't support the grad. To have the grad, you must flatten the tensor before calling min().")
+        warnings.warn("The behavior of min when axis is None will change! Now we return the min over the last dimensions. It will change to the min over all dimensions as numpy. To hide this warning and be compatible with the future behavior, set axis to -1 to have the current behavior. To have the future behavior, set axis to range(x.ndim), but this does not support the grad. To be able to get the grad, you must flatten the tensor before calling min().")
     str_x_type = str(x.dtype)
     if str_x_type.startswith('float') or str_x_type.startswith('int'):
         return -max(-x, axis=axis)
@@ -2159,9 +2247,10 @@ def mean(input, axis = None, op = False):
 @constructor
 def var(input, axis = None):
-    """Compute the variance along the given axis of a tensor `input`
-    :param axis: compute the variance along this axis of the tensor. None means trailing axis.
+    """Compute the variance along the given axis of a tensor `input`.
+    :param axis: Compute the variance along this axis of the tensor.
+        None means all axes (like numpy).
     :type axis: None or int or (list of int) (see `Sum`)
     """
@@ -2195,6 +2284,16 @@ def var(input, axis = None):
     #return the mean sqr
     return mean(centered_input**2, axis)
+@constructor
+def std(input, axis=None):
+    """Compute the standard deviation along the given axis of a tensor `input`.
+    :param axis: Compute the standard deviation along this axis of the tensor.
+        None means all axes (like numpy).
+    :type axis: None or int or (list of int) (see `Sum`)
+    """
+    return sqrt(var(input=input, axis=axis))
 if 0:
     ## COMMENTED OUT FEB 17 2010
     ## TODO (DOCUMENT AND WRITE TESTS) OR DELETE
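The new `std` is literally `sqrt(var(...))`, with `var` the mean of squared deviations from the mean (the `mean(centered_input**2, axis)` line above, i.e. a population variance with no degrees-of-freedom correction). With plain Python numbers:

```python
import math

def mean(xs):
    return sum(xs) / float(len(xs))

def var(xs):
    m = mean(xs)
    # mean of the squared, centered input -- same formula as var() above
    return mean([(x - m) ** 2 for x in xs])

def std(xs):
    return math.sqrt(var(xs))

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
assert mean(xs) == 5.0
assert var(xs) == 4.0   # squared deviations: 9+1+1+1+0+0+4+16 = 32; 32/8 = 4
assert std(xs) == 2.0
```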
@@ -3187,11 +3286,18 @@ def stack(*tensors):
         raise Exception('theano.tensor.stack(*tensors) must have at least one parameter')
     # If all tensors are scalars of the same type, call make_vector.
     # It makes the graph simpler, by not adding DimShuffles and Rebroadcasts
-    if numpy.all([isinstance(t, Variable) and\
-                  isinstance(t.type, TensorType) and\
-                  t.ndim==0 and t.type==tensors[0].type\
+    if isinstance(tensors[0], (numpy.number, float, int, python_complex)):
+        tensors = list(tensors)
+        tensors[0] = as_tensor_variable(tensors[0])
+    if numpy.all([isinstance(t, (numpy.number, float, int, python_complex))  # in case there is a plain Python number
+                  or (isinstance(t, Variable) and
+                      isinstance(t.type, TensorType) and
+                      t.ndim==0 and
+                      t.type.__class__==tensors[0].type.__class__)
                   for t in tensors]):
-        return theano.tensor.opt.MakeVector(scal.upcast(*[i.dtype for i in tensors]))(*tensors)
+        tensors = map(as_tensor_variable, tensors)  # in case there is a plain Python number
+        dtype = scal.upcast(*[i.dtype for i in tensors])
+        return theano.tensor.opt.MakeVector(dtype)(*tensors)
     return join(0, *[shape_padleft(t, 1) for t in tensors])
 @constructor
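The rewritten condition means `stack(1, 2, 3)` with plain Python numbers now takes the `MakeVector` fast path instead of failing the `isinstance(t, Variable)` test. The decision logic, modeled without Theano (`ScalarVar` is an invented stand-in for a 0-d tensor variable):

```python
import numbers

class ScalarVar(object):
    ndim = 0

def all_scalars(tensors):
    # Mirrors the updated check: plain Python/numpy numbers count as scalars
    # too, since they get wrapped by as_tensor_variable before MakeVector.
    return all(isinstance(t, numbers.Number) or
               (isinstance(t, ScalarVar) and t.ndim == 0)
               for t in tensors)

assert all_scalars([1, 2.5, 3])         # fast path: MakeVector
assert all_scalars([ScalarVar(), 4])    # mixed variables and numbers
assert not all_scalars([[1, 2], 3])     # a non-scalar falls back to join()
```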
...@@ -3334,6 +3440,7 @@ class Reshape(Op): ...@@ -3334,6 +3440,7 @@ class Reshape(Op):
return '%s{%s}' %(self.__class__.__name__, self.ndim) return '%s{%s}' %(self.__class__.__name__, self.ndim)
def make_node(self, x, shp): def make_node(self, x, shp):
x = as_tensor_variable(x) x = as_tensor_variable(x)
shp_orig = shp
shp = as_tensor_variable(shp, ndim=1) shp = as_tensor_variable(shp, ndim=1)
if not shp.dtype.startswith('int'): if not shp.dtype.startswith('int'):
raise TypeError("Shape must be integers") raise TypeError("Shape must be integers")
...@@ -3342,7 +3449,16 @@ class Reshape(Op): ...@@ -3342,7 +3449,16 @@ class Reshape(Op):
bcast = [s==1 for s in shp.data] bcast = [s==1 for s in shp.data]
return gof.Apply(self, [x, shp], [tensor(x.type.dtype, bcast)]) return gof.Apply(self, [x, shp], [tensor(x.type.dtype, bcast)])
else: else:
return gof.Apply(self, [x, shp], [tensor(x.type.dtype, [False]*self.ndim)]) bcasts = [False] * self.ndim
for index in xrange(self.ndim):
y = shp_orig[index]
# Try to see if we can infer that y has a constant value of 1.
# If so, that dimension should be broadcastable.
try:
bcasts[index] = (hasattr(y, 'get_constant_value') and y.get_constant_value() == 1)
except TypeError:
pass
return gof.Apply(self, [x, shp], [tensor(x.type.dtype, bcasts)])
def perform(self, node, (x, shp), (out,)): def perform(self, node, (x, shp), (out,)):
if (len(shp) != self.ndim): if (len(shp) != self.ndim):
raise ValueError('shape argument to Reshape.perform has incorrect length %i' raise ValueError('shape argument to Reshape.perform has incorrect length %i'
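The heart of the `Reshape` change above is inferring broadcastable flags from the requested shape: a dimension is marked broadcastable only when its size can be proven to be the constant 1. A minimal stand-alone sketch of that inference (not the Theano implementation itself):

```python
# Sketch: mark a reshaped dimension broadcastable only when its requested
# size is provably the constant 1; anything unknown at graph-build time
# (e.g. a symbolic size, here modeled by a string) stays non-broadcastable.
def infer_broadcastable(shape_spec):
    bcasts = []
    for s in shape_spec:
        try:
            bcasts.append(int(s) == 1)   # provably constant?
        except (TypeError, ValueError):
            bcasts.append(False)         # unknown at graph-build time
    return bcasts

print(infer_broadcastable([1, 'n', 3]))  # -> [True, False, False]
```

The `try/except` mirrors the patch's handling of `get_constant_value` raising `TypeError` for non-constant entries.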
@@ -1207,4 +1207,3 @@ from opt import register_specialize, register_canonicalize
 def local_print_as_we_go_along(node):
     if node.op in (T.sub, T.add):
         debugprint(node)
@@ -841,9 +841,9 @@ class CAReduce(Op):
     Examples:
       CAReduce(add) -> sum
       CAReduce(mul) -> product
-      CAReduce(maximum) -> sum
-      CAReduce(_or) -> any # not lazy
-      CAReduce(_and) -> all # not lazy
+      CAReduce(maximum) -> max
+      CAReduce(or_) -> any # not lazy
+      CAReduce(and_) -> all # not lazy

     In order to (eventually) optimize memory usage patterns,
     L{CAReduce} makes zero guarantees on the order in which it
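The corrected docstring examples can be checked against a plain-Python fold; `careduce` below is a sketch, not Theano's implementation, and it works precisely because each op is commutative and associative, so the fold order (which `CAReduce` leaves unspecified) cannot change the result:

```python
from functools import reduce
import operator

# Sketch: CAReduce(op) folds a commutative, associative binary op over all
# elements; commutativity/associativity is what makes the unspecified
# reduction order safe.
def careduce(binary_op, values):
    return reduce(binary_op, values)

assert careduce(operator.add, [1, 2, 3, 4]) == 10    # CAReduce(add) -> sum
assert careduce(operator.mul, [1, 2, 3, 4]) == 24    # CAReduce(mul) -> product
assert careduce(max, [1, 4, 2]) == 4                 # CAReduce(maximum) -> max
assert careduce(operator.or_, [False, True]) is True   # CAReduce(or_) -> any
assert careduce(operator.and_, [True, True]) is True   # CAReduce(and_) -> all
```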
@@ -317,8 +317,10 @@ class MakeVector(T.Op):
         inputs = map(T.as_tensor_variable, inputs)
         if not all(a.type == inputs[0].type for a in inputs) or (len(inputs)>0 and inputs[0].dtype != self.dtype):
             dtype=theano.scalar.upcast(self.dtype,*[i.dtype for i in inputs])
-            #upcast the input to the determined dtype, but don't upcast downcast anything
-            assert dtype==self.dtype, "Upcast the input of MakeVector to dtype gived in init without precissino loss only."
+            #upcast the input to the determined dtype, but don't downcast anything
+            assert dtype==self.dtype, (
+                "The upcast of the inputs to MakeVector should match the "
+                "dtype given in __init__.")
             if not all(self.dtype == T.cast(i,dtype=dtype).dtype for i in inputs):
                 raise TypeError("MakeVector.make_node expected inputs upcastable to %s. got %s"%(
                     self.dtype,
@@ -348,6 +350,9 @@ class MakeVector(T.Op):
         # assume that out has correct dtype. there is no cheap way to check
         out[0][...] = inputs

+    def grad(self, inputs, output_gradients):
+        return [output_gradients[0][i] for i in xrange(len(inputs))]
+
 make_vector = MakeVector()

 class MakeVectorPrinter:
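The new `MakeVector.grad` is simply indexing: if `v = make_vector(a, b, c)` concatenates scalars into a vector, then the gradient of a cost with respect to input `i` is entry `i` of the gradient with respect to `v`. A hypothetical NumPy stand-in makes this concrete:

```python
import numpy

# Numeric sketch of MakeVector.grad: the gradient w.r.t. each scalar input
# is the matching entry of the output gradient (MakeVector just stacks its
# inputs, so the Jacobian is an identity selection per input).
def make_vector_grad(output_gradient, n_inputs):
    return [output_gradient[i] for i in range(n_inputs)]

g_out = numpy.array([0.1, 0.2, 0.3])
g_in = make_vector_grad(g_out, 3)
print([float(g) for g in g_in])  # -> [0.1, 0.2, 0.3]
```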
@@ -1552,6 +1552,36 @@ class T_Join_and_Split(unittest.TestCase):
         assert len([n for n in e if isinstance(n, Join)]) == 0
         assert f.maker.env.outputs[0].dtype == config.floatX

+    def test_stack_scalar_make_vector_dtype(self):
+        '''Test that calling stack() on scalars instantiates MakeVector,
+        even when the scalars don't have the same dtype.'''
+        a = tensor.iscalar('a')
+        b = tensor.lscalar('b')
+        s = stack(a, b, a, b)
+        f = function([a, b], s)
+        val = f(1, 2)
+        self.failUnless(numpy.all(val == [1, 2, 1, 2]))
+        e = f.maker.env.toposort()
+        assert len([n for n in e if isinstance(n.op, opt.MakeVector)]) > 0
+        assert len([n for n in e if isinstance(n, Join)]) == 0
+        assert f.maker.env.outputs[0].dtype == 'int64'
+
+    def test_stack_scalar_make_vector_constant(self):
+        '''Test that calling stack() on scalars instantiates MakeVector,
+        even when some of the scalars are plain Python ints.'''
+        a = tensor.iscalar('a')
+        b = tensor.lscalar('b')
+        # Test when the constant is the first element.
+        # The first element is used in a special way.
+        s = stack(10, a, b, numpy.int8(3))
+        f = function([a, b], s)
+        val = f(1, 2)
+        self.failUnless(numpy.all(val == [10, 1, 2, 3]))
+        e = f.maker.env.toposort()
+        assert len([n for n in e if isinstance(n.op, opt.MakeVector)]) > 0
+        assert len([n for n in e if isinstance(n, Join)]) == 0
+        assert f.maker.env.outputs[0].dtype == 'int64'
+
     def test_join_vector(self):
         a = as_tensor_variable(numpy.array([1, 2, 3]))
         b = as_tensor_variable(numpy.array([7, 8, 9]))
@@ -3440,6 +3470,28 @@ def test_dimshuffle_duplicate():
     assert success

+class T_get_constant_value(unittest.TestCase):
+    def test_get_constant_value(self):
+        a = tensor.stack(1, 2, 3)
+        assert get_constant_value(a[0]) == 1
+        assert get_constant_value(a[1]) == 2
+        assert get_constant_value(a[2]) == 3
+
+        b = tensor.iscalar()
+        a = tensor.stack(b, 2, 3)
+        self.assertRaises(TypeError, get_constant_value, a[0])
+        assert get_constant_value(a[1]) == 2
+        assert get_constant_value(a[2]) == 3
+
+        # For now, get_constant_value goes through only MakeVector and
+        # Join of scalars.
+        v = tensor.ivector()
+        a = tensor.stack(v, 2, 3)
+        self.assertRaises(TypeError, get_constant_value, a[0])
+        self.assertRaises(TypeError, get_constant_value, a[1])
+        self.assertRaises(TypeError, get_constant_value, a[2])
+
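The contract these tests exercise can be stated in miniature. The classes and function below are a hypothetical toy model, not Theano code: `get_constant_value` returns the Python constant behind a graph element, and raises `TypeError` whenever the value cannot be inferred at graph-construction time.

```python
# Toy model (all names hypothetical) of the get_constant_value contract:
# constants yield their value; anything symbolic raises TypeError.
class Constant(object):
    def __init__(self, value):
        self.value = value

class Symbolic(object):
    pass

def get_constant_value(x):
    if isinstance(x, Constant):
        return x.value
    raise TypeError("not constant: %r" % (x,))

assert get_constant_value(Constant(2)) == 2
```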
 if __name__ == '__main__':
     if 1:
         unittest.main()
@@ -3449,5 +3501,3 @@ if __name__ == '__main__':
         suite = unittest.TestLoader()
         suite = suite.loadTestsFromTestCase(testcase)
         unittest.TextTestRunner(verbosity=2).run(suite)
@@ -80,6 +80,49 @@ def makeSharedTester(shared_constructor_,
         else:
             assert numpy.allclose(x_ref, total_func())

+    def test_shape(self):
+        dtype = self.dtype
+        if dtype is None:
+            dtype = theano.config.floatX
+
+        rng = numpy.random.RandomState([3, 5, 17])
+        x = numpy.asarray(rng.uniform(0, 1, [2, 4]), dtype=dtype)
+        x = self.cast_value(x)
+
+        x_ref = self.ref_fct(x)
+        x_shared = self.shared_constructor(x, borrow=False)
+        total = self.theano_fct(x_shared)
+
+        f = theano.function([], x_shared.shape)
+        topo = f.maker.env.toposort()
+
+        assert numpy.all(f() == (2, 4))
+        if theano.config.mode != 'FAST_COMPILE':
+            assert len(topo) == 3
+            assert isinstance(topo[0].op, tensor.opt.Shape_i)
+            assert isinstance(topo[1].op, tensor.opt.Shape_i)
+            assert isinstance(topo[2].op, tensor.opt.MakeVector)
+
+    def test_shape_i(self):
+        dtype = self.dtype
+        if dtype is None:
+            dtype = theano.config.floatX
+
+        rng = numpy.random.RandomState([3, 5, 17])
+        x = numpy.asarray(rng.uniform(0, 1, [2, 4]), dtype=dtype)
+        x = self.cast_value(x)
+
+        x_ref = self.ref_fct(x)
+        x_shared = self.shared_constructor(x, borrow=False)
+        total = self.theano_fct(x_shared)
+
+        f = theano.function([], x_shared.shape[1])
+        topo = f.maker.env.toposort()
+
+        assert numpy.all(f() == (4))
+        if theano.config.mode != 'FAST_COMPILE':
+            assert len(topo) == 1
+            assert isinstance(topo[0].op, tensor.opt.Shape_i)
+
     def test_return_internal_type(self):
         dtype = self.dtype
@@ -191,6 +234,174 @@ def makeSharedTester(shared_constructor_,
         else:
             assert numpy.allclose(x_ref, total_func())

+    def test_specify_shape(self):
+        dtype = self.dtype
+        if dtype is None:
+            dtype = theano.config.floatX
+
+        rng = numpy.random.RandomState([2, 4, 16])
+        x1_1 = numpy.asarray(rng.uniform(1, 2, [4, 2]), dtype=dtype)
+        x1_1 = self.cast_value(x1_1)
+        x1_2 = numpy.asarray(rng.uniform(1, 2, [4, 2]), dtype=dtype)
+        x1_2 = self.cast_value(x1_2)
+        x2 = numpy.asarray(rng.uniform(1, 2, [4, 3]), dtype=dtype)
+        x2 = self.cast_value(x2)
+
+        # Test that we can replace with values of the same shape
+        x1_shared = self.shared_constructor(x1_1)
+        x1_specify_shape = tensor.specify_shape(x1_shared, x1_1.shape)
+        x1_shared.set_value(x1_2)
+        assert numpy.allclose(self.ref_fct(x1_shared.value),
+                              self.ref_fct(x1_2))
+        shape_op_fct = theano.function([], x1_shared.shape)
+        topo = shape_op_fct.maker.env.toposort()
+        if theano.config.mode != 'FAST_COMPILE':
+            assert len(topo) == 3
+            assert isinstance(topo[0].op, tensor.opt.Shape_i)
+            assert isinstance(topo[1].op, tensor.opt.Shape_i)
+            assert isinstance(topo[2].op, tensor.opt.MakeVector)
+
+        # Test that we forward the input
+        specify_shape_fct = theano.function([], x1_specify_shape)
+        assert numpy.all(self.ref_fct(specify_shape_fct()) ==
+                         self.ref_fct(x1_2))
+        topo_specify = specify_shape_fct.maker.env.toposort()
+        assert len(topo_specify) == 2
+
+        # Test that we put the shape info into the graph
+        shape_constant_fct = theano.function([], x1_specify_shape.shape)
+        assert numpy.all(shape_constant_fct() == shape_op_fct())
+        topo_cst = shape_constant_fct.maker.env.toposort()
+        if theano.config.mode != 'FAST_COMPILE':
+            assert len(topo_cst) == 0
+
+        # Test that we can replace with values of a different shape; this
+        # raises an error in some cases, but not all.
+        x1_shared.set_value(x2)
+        self.assertRaises(AssertionError, specify_shape_fct)
+
+        # No assertion will be raised, as the Op is removed from the graph
+        # when there is optimization.
+        if theano.config.mode not in ['FAST_COMPILE', 'DebugMode', 'DEBUG_MODE']:
+            shape_constant_fct()
+        else:
+            self.assertRaises(AssertionError, shape_constant_fct)
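At runtime, `SpecifyShape` forwards its input unchanged but asserts the declared shape, which is exactly what the `assertRaises(AssertionError, ...)` checks above rely on. A NumPy stand-in (a sketch, not the Theano Op) of that contract:

```python
import numpy

# Sketch of SpecifyShape's runtime contract: return the value untouched,
# but fail with AssertionError when its actual shape disagrees with the
# declared one.
def specify_shape(value, shape):
    assert value.shape == tuple(shape), (value.shape, tuple(shape))
    return value

x = numpy.zeros((4, 2))
assert specify_shape(x, (4, 2)) is x  # value is forwarded untouched
```

The optimizer may drop this check entirely once the shape is known constant, which is why the un-optimized modes above are the only ones expected to raise.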
+    def test_specify_shape_partial(self):
+        dtype = self.dtype
+        if dtype is None:
+            dtype = theano.config.floatX
+
+        rng = numpy.random.RandomState([2, 4, 16])
+        x1_1 = numpy.asarray(rng.uniform(1, 2, [4, 2]), dtype=dtype)
+        x1_1 = self.cast_value(x1_1)
+        x1_2 = numpy.asarray(rng.uniform(1, 2, [4, 2]), dtype=dtype)
+        x1_2 = self.cast_value(x1_2)
+        x2 = numpy.asarray(rng.uniform(1, 2, [5, 2]), dtype=dtype)
+        x2 = self.cast_value(x2)
+
+        # Test that we can replace with values of the same shape
+        x1_shared = self.shared_constructor(x1_1)
+        x1_specify_shape = tensor.specify_shape(
+            x1_shared,
+            (tensor.as_tensor_variable(x1_1.shape[0]),
+             x1_shared.shape[1]))
+        x1_shared.set_value(x1_2)
+        assert numpy.allclose(self.ref_fct(x1_shared.value),
+                              self.ref_fct(x1_2))
+        shape_op_fct = theano.function([], x1_shared.shape)
+        topo = shape_op_fct.maker.env.toposort()
+        if theano.config.mode != 'FAST_COMPILE':
+            assert len(topo) == 3
+            assert isinstance(topo[0].op, tensor.opt.Shape_i)
+            assert isinstance(topo[1].op, tensor.opt.Shape_i)
+            assert isinstance(topo[2].op, tensor.opt.MakeVector)
+
+        # Test that we forward the input
+        specify_shape_fct = theano.function([], x1_specify_shape)
+        #theano.printing.debugprint(specify_shape_fct)
+        assert numpy.all(self.ref_fct(specify_shape_fct())
+                         == self.ref_fct(x1_2))
+        topo_specify = specify_shape_fct.maker.env.toposort()
+        if theano.config.mode != 'FAST_COMPILE':
+            assert len(topo_specify) == 4
+
+        # Test that we put the shape info into the graph
+        shape_constant_fct = theano.function([], x1_specify_shape.shape)
+        #theano.printing.debugprint(shape_constant_fct)
+        assert numpy.all(shape_constant_fct() == shape_op_fct())
+        topo_cst = shape_constant_fct.maker.env.toposort()
+        if theano.config.mode != 'FAST_COMPILE':
+            assert len(topo_cst) == 2
+
+        # Test that we can replace with values of a different shape; this
+        # raises an error in some cases, but not all.
+        x1_shared.set_value(x2)
+        self.assertRaises(AssertionError, specify_shape_fct)
+
+        # No assertion will be raised, as the Op is removed from the graph.
+        if theano.config.mode not in ['FAST_COMPILE', 'DebugMode', 'DEBUG_MODE']:
+            shape_constant_fct()
+        else:
+            self.assertRaises(AssertionError, shape_constant_fct)
+    def test_specify_shape_inplace(self):
+        # Test that specify_shape doesn't break the insertion of inplace ops.
+        dtype = self.dtype
+        if dtype is None:
+            dtype = theano.config.floatX
+
+        rng = numpy.random.RandomState([2, 4, 16])
+        a = numpy.asarray(rng.uniform(1, 2, [40, 40]), dtype=dtype)
+        a = self.cast_value(a)
+        a_shared = self.shared_constructor(a)
+        b = numpy.asarray(rng.uniform(1, 2, [40, 40]), dtype=dtype)
+        b = self.cast_value(b)
+        b_shared = self.shared_constructor(b)
+        s = numpy.zeros((40, 40), dtype=dtype)
+        s = self.cast_value(s)
+        s_shared = self.shared_constructor(s)
+        f = theano.function([],
+                            updates={s_shared: theano.dot(a_shared, b_shared)
+                                     + s_shared})
+        topo = f.maker.env.toposort()
+        f()
+        # [Gemm{inplace}(<TensorType(float64, matrix)>, 0.01, <TensorType(float64, matrix)>, <TensorType(float64, matrix)>, 2e-06)]
+        if theano.config.mode != 'FAST_COMPILE':
+            assert sum([node.op.__class__.__name__ in ["Gemm", "GpuGemm", "StructuredDot"] for node in topo]) == 1
+            assert all(node.op == tensor.blas.gemm_inplace for node in topo if isinstance(node.op, tensor.blas.Gemm))
+            assert all(node.op.inplace for node in topo if node.op.__class__.__name__ == "GpuGemm")
+        # There is no inplace gemm for sparse
+        #assert all(node.op.inplace for node in topo if node.op.__class__.__name__ == "StructuredDot")
+        s_shared_specify = tensor.specify_shape(s_shared, s_shared.value.shape)
+
+        # Now test with the specify_shape op in the output
+        f = theano.function([], s_shared.shape,
+                            updates={s_shared: theano.dot(a_shared, b_shared)
+                                     + s_shared_specify})
+        topo = f.maker.env.toposort()
+        shp = f()
+        assert numpy.all(shp == (40, 40))
+        if theano.config.mode != 'FAST_COMPILE':
+            assert sum([node.op.__class__.__name__ in ["Gemm", "GpuGemm", "StructuredDot"] for node in topo]) == 1
+            assert all(node.op == tensor.blas.gemm_inplace for node in topo if isinstance(node.op, tensor.blas.Gemm))
+            assert all(node.op.inplace for node in topo if node.op.__class__.__name__ == "GpuGemm")
+
+        # Now test with the specify_shape op in the inputs and outputs
+        a_shared = tensor.specify_shape(a_shared, a_shared.value.shape)
+        b_shared = tensor.specify_shape(b_shared, b_shared.value.shape)
+        f = theano.function([], s_shared.shape,
+                            updates={s_shared: theano.dot(a_shared, b_shared)
+                                     + s_shared_specify})
+        topo = f.maker.env.toposort()
+        shp = f()
+        assert numpy.all(shp == (40, 40))
+        if theano.config.mode != 'FAST_COMPILE':
+            assert sum([node.op.__class__.__name__ in ["Gemm", "GpuGemm", "StructuredDot"] for node in topo]) == 1
+            assert all(node.op == tensor.blas.gemm_inplace for node in topo if isinstance(node.op, tensor.blas.Gemm))
+            assert all(node.op.inplace for node in topo if node.op.__class__.__name__ == "GpuGemm")
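The update these inplace-gemm assertions check is `s <- dot(a, b) + s`, accumulated into `s`'s existing storage rather than a fresh buffer. In NumPy terms (a sketch of the memory behaviour, not of gemm itself):

```python
import numpy

# NumPy analogue of the inplace update the tests expect from gemm_inplace:
# accumulate dot(a, b) into s without allocating a new output array.
a = numpy.ones((2, 2))
b = numpy.ones((2, 2))
s = numpy.zeros((2, 2))
buf = s
s += numpy.dot(a, b)   # in-place accumulate into s's buffer
assert s is buf        # the same storage was reused
assert numpy.all(s == 2.0)
```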
     return SharedTester

 test_shared_options=makeSharedTester(tensor.shared, 'float64',
@@ -199,4 +410,3 @@ test_shared_options=makeSharedTester(tensor.shared, 'float64',
     lambda a: isinstance(a, numpy.ndarray),
     theano.tensor.sum,
     numpy.sum)