Commit aeb68ef2 authored by nouiz

Merge pull request #1355 from delallea/minor

Minor fixes
......@@ -21,9 +21,10 @@ In this section we will define a couple optimizations on doubles.
.. note::
There is the optimization tag `cxx_only` that tell this
optimization will insert Op that only have c code. So we should not
run them when we don't have a c++ compiler.
The optimization tag `cxx_only` is used for optimizations that insert
Ops which have no Python implementation (so they only have C code).
Optimizations with this tag are skipped when there is no C++ compiler
available.
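As a sketch of how tag-based filtering like this can work (illustrative data structures, not Theano's actual optimizer registry, which uses Query objects with ``exclude=['cxx_only']``):

```python
# Illustrative sketch: skip optimizations tagged 'cxx_only' when no
# C++ compiler is available. The dicts below are hypothetical stand-ins
# for registered optimizations.
def select_optimizations(optimizations, have_cxx):
    return [opt for opt in optimizations
            if have_cxx or 'cxx_only' not in opt['tags']]

opts = [
    {'name': 'pure_python_opt', 'tags': ['fast_run']},
    {'name': 'c_code_opt', 'tags': ['fast_run', 'cxx_only']},
]
# Without a C++ compiler, only the pure-Python optimization survives.
assert [o['name'] for o in select_optimizations(opts, False)] == ['pure_python_opt']
```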
Global and local optimizations
==============================
......
......@@ -442,9 +442,9 @@ correctly (for example, for MKL this might be ``-lmkl -lguide -lpthread`` or
If you have problems linking with MKL, `Intel Line Advisor
<http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor>`_
and `MKL User Guide
and the `MKL User Guide
<http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/index.htm>`_
can help you find the correct flag to use.
can help you find the correct flags to use.
.. _gpu_linux:
......
......@@ -410,17 +410,18 @@ import theano and print the config variable, as in:
.. attribute:: config.cxx
Default: 'g++' if g++ is present. '' Otherwise.
Default: 'g++' if g++ is present. Empty string otherwise.
Tell the c++ compiler to use. If empty, don't compile c++ code.
We automatically detect if g++ is present and disable it if not
present.
Indicates which C++ compiler to use. If empty, no C++ code is compiled.
Theano automatically detects whether g++ is present and disables
C++ compilation when it is not.
We print a warning if we detect that g++ is not present. It is
recommended to run with c++ compilation as Theano will be much
recommended to run with C++ compilation as Theano will be much
slower otherwise.
Currently only g++ is supported, but supporting others is easy.
Currently only g++ is supported, but supporting other compilers should
not be too difficult.
.. attribute:: optimizer_excluding
......@@ -636,12 +637,12 @@ import theano and print the config variable, as in:
Bool value, default: False
If True, will remove -O* parameter passed to g++.
This is useful to debug in gdb module compiled by Theano.
If True, will remove the -O* parameter passed to g++.
This is useful for debugging modules compiled by Theano in gdb.
The parameter -g is passed by default to g++.
.. attribute:: cmodule.compilation_warning
Bool value, default: False
If True, will print compilation warning.
If True, will print compilation warnings.
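These attributes can also be set for a single run through the ``THEANO_FLAGS`` environment variable, which takes comma-separated ``section.option=value`` pairs (the script name here is a placeholder):

```shell
# Enable both cmodule flags for one run of a hypothetical script.
THEANO_FLAGS='cmodule.remove_gxx_opt=True,cmodule.compilation_warning=True' python my_script.py
```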
......@@ -295,25 +295,26 @@ the following:
.. code-block:: python
W = theano.shared ( W_values ) # we assume that ``W_values`` contains the
# initial values of your weight matrix
W = theano.shared(W_values) # we assume that ``W_values`` contains the
# initial values of your weight matrix
bvis = theano.shared( bvis_values)
bhid = theano.shared( bhid_values)
bvis = theano.shared(bvis_values)
bhid = theano.shared(bhid_values)
trng = T.shared_randomstreams.RandomStreams(1234)
def OneStep( vsample) :
hmean = T.nnet.sigmoid( theano.dot( vsample, W) + bhid)
hsample = trng.binomial( size = hmean.shape, n = 1, prob = hmean)
vmean = T.nnet.sigmoid( theano.dot( hsample. W.T) + bvis)
return trng.binomial( size = vsample.shape, n = 1, prob = vsample)
def OneStep(vsample) :
hmean = T.nnet.sigmoid(theano.dot(vsample, W) + bhid)
hsample = trng.binomial(size=hmean.shape, n=1, p=hmean)
vmean = T.nnet.sigmoid(theano.dot(hsample, W.T) + bvis)
return trng.binomial(size=vsample.shape, n=1, p=vmean,
dtype=theano.config.floatX)
sample = theano.tensor.vector()
values, updates = theano.scan( OneStep, outputs_info = sample, n_steps = 10 )
values, updates = theano.scan(OneStep, outputs_info=sample, n_steps=10)
gibbs10 = theano.function([sample], values[-1], updates = updates)
gibbs10 = theano.function([sample], values[-1], updates=updates)
Note that if we use shared variables (``W``, ``bvis``, ``bhid``) but
......@@ -335,7 +336,7 @@ afterwards. Look at this example :
.. code-block:: python
a = theano.shared(1)
values,updates = theano.scan( lambda : {a:a+1}, n_steps = 10 )
values, updates = theano.scan(lambda: {a: a+1}, n_steps=10)
In this case the lambda expression does not require any input parameters
and returns an update dictionary which tells how ``a`` should be updated
......@@ -343,9 +344,9 @@ after each step of scan. If we write :
.. code-block:: python
b = a+1
b = a + 1
c = updates[a] + 1
f = theano.function([], [b,c], updates = updates)
f = theano.function([], [b, c], updates=updates)
print b
print c
......
......@@ -646,30 +646,30 @@ dimensions, see :meth:`_tensor_py_operators.dimshuffle`.
>>> x = T.concatenate([x0, x1[0], T.shape_padright(x2)], axis=1)
>>> # x.ndim == 2
.. function:: stacklist(tensor_list)
.. function:: stacklists(tensor_list)
:type tensor_list: an iterable that contain tensors or iterable
with at the end tensors.
:param tensor_list: tensors to be
stackend together.
:type tensor_list: an iterable that contains either tensors or other
iterables of the same type as `tensor_list` (in other words, this
is a tree whose leaves are tensors).
:param tensor_list: tensors to be stacked together.
Recursivly stack lists of tensors to maintain similar structure.
Recursively stack lists of tensors to maintain similar structure.
This function can create a tensor from a shaped list of scalars
This function can create a tensor from a shaped list of scalars:
>>> from theano.tensor import stacklists, scalars, matrices
>>> from theano import function
>>> a,b,c,d = scalars('abcd')
>>> a, b, c, d = scalars('abcd')
>>> X = stacklists([[a, b], [c, d]])
>>> f = function([a, b, c, d], X)
>>> f(1, 2, 3, 4)
>>> # array([[ 1., 2.], [ 3., 4.]], dtype=float32)
We can also stack arbitrarily shaped tensors. Here we stack matrices into
a 2 by 2 grid.
We can also stack arbitrarily shaped tensors. Here we stack matrices into
a 2 by 2 grid:
>>> from numpy import ones
>>> a,b,c,d, = matrices('abcd')
>>> a, b, c, d = matrices('abcd')
>>> X = stacklists([[a, b], [c, d]])
>>> f = function([a, b, c, d], X)
>>> x = ones((4, 4), 'float32')
......
......@@ -20,8 +20,8 @@ If you would like to add an additional optimization, refer to
This list is partial.
The print_summary method allow several OpDBs and optimizers to list the optimization executed.
This allow to have an up-to-date list.
The print_summary method allows several OpDBs and optimizers to list the executed optimizations.
This makes it possible to have an up-to-date list.
python -c 'import theano; theano.compile.FAST_RUN.optimizer.print_summary()'
......@@ -255,6 +255,6 @@ Optimization FAST_RUN FAST_COMPILE
local_log_softmax
This is a stabilization optimization.
It can happen due to rounding problem that the softmax probability of one value get to 0.
Taking the log of 0, would generate -inf that will probably generate NaN later.
It can happen due to rounding errors that the softmax probability of one value gets to 0.
Taking the log of 0 would generate -inf, which will probably produce NaN later.
We return a closer answer.
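The effect can be sketched numerically in plain Python (an illustration of the idea, not Theano's implementation): computing softmax directly underflows for very negative inputs, while the fused form ``x - logsumexp(x)`` stays finite.

```python
import math

def log_softmax(xs):
    # Numerically stable: log(softmax(x)) == x - logsumexp(x),
    # with the max subtracted inside the logsumexp for safety.
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

xs = [0.0, -1000.0]
# Naive softmax underflows: exp(-1000) is 0.0 in float64,
# so log of that probability would be -inf.
naive = [math.exp(x) / sum(map(math.exp, xs)) for x in xs]
assert naive[1] == 0.0
# The fused form stays finite.
stable = log_softmax(xs)
assert all(math.isfinite(v) for v in stable)
```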
......@@ -397,8 +397,8 @@ have to be jointly optimized explicitly in the code.)
SciPy
-----
We can wrap SciPy function in Theano. But Scipy is an optional dependency.
Here is some code that allow to make the op Optional:
We can wrap SciPy functions in Theano. But SciPy is an optional dependency.
Here is some code that allows the Op to be optional:
.. code-block:: python
......@@ -413,17 +413,19 @@ Here is some code that allow to make the op Optional:
...
def make_node(self, x):
assert imported_scipy, (
"Scipy not available. Scipy is needed for the SomeOp op.")
"SciPy not available. SciPy is needed for the SomeOp op.")
...
from nose.plugins.skip import SkipTest
class test_Solve(utt.InferShapeTester):
class test_SomeOp(utt.InferShapeTester):
...
def test_infer_shape(self):
if not imported_scipy:
raise SkipTest("Scipy needed for the Cholesky op.")
raise SkipTest("SciPy needed for the SomeOp op.")
...
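The optional-import pattern shown above can be sketched in isolation (``imported_scipy`` and ``check_scipy_available`` are illustrative names):

```python
# Try to import SciPy once at module load; record whether it worked.
try:
    import scipy.linalg
    imported_scipy = True
except ImportError:
    imported_scipy = False

def check_scipy_available():
    # An Op that needs SciPy should fail with a clear message at
    # graph-construction time, not with an obscure error later.
    if not imported_scipy:
        raise ImportError(
            "SciPy not available. SciPy is needed for this Op.")
```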
Random number in tests
----------------------
Random numbers in tests
-----------------------
Making test errors more reproducible is a good practice. To make your
tests more reproducible, you need a way to get the same random
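A common way to achieve this (a sketch using the standard library; a numpy ``RandomState`` works the same way) is to build each test's generator from a fixed seed:

```python
import random

def make_test_rng(seed=42):
    # Each test gets its own generator from a fixed seed, so a failing
    # test reruns with exactly the same "random" data.
    return random.Random(seed)

rng1 = make_test_rng()
rng2 = make_test_rng()
# Two generators with the same seed produce identical sequences.
assert [rng1.random() for _ in range(3)] == [rng2.random() for _ in range(3)]
```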
......@@ -449,7 +451,7 @@ tutorial :ref:`Extending Theano<extending>`
See :ref:`metadocumentation` for some information on how to generate
the documentation.
Here is an example how to add docstring to an class.
Here is an example of how to add a docstring to a class.
.. code-block:: python
......@@ -460,7 +462,7 @@ Here is an example how to add docstring to an class.
:param x: input tensor.
:return: a tensor of the shape shape and dtype as the input with all
:return: a tensor of the same shape and dtype as the input with all
values doubled.
:note:
......@@ -473,7 +475,8 @@ Here is an example how to add docstring to an class.
.. versionadded:: 0.6
"""
This is how it will show up for file that we auto list in the library documentation:
This is how it will show up for files that we auto-list in the library
documentation:
.. automodule:: theano.misc.doubleop
......
......@@ -24,7 +24,8 @@ internals cannot be modified.
Faster gcc optimization
-----------------------
You can enable faster gcc optimization with the ``cxxflags``. This list of flags was suggested on the mailing list::
You can enable faster gcc optimization with the ``cxxflags`` option.
This list of flags was suggested on the mailing list::
-O3 -ffast-math -ftree-loop-distribution -funroll-loops -ftracer
......
......@@ -886,8 +886,8 @@ def _lessbroken_deepcopy(a):
"""
:param a: any object
Returns a copy of `a` that shares no internal storage with the original.
A deep copy.
Returns a copy of `a` that shares no internal storage with the original
(a deep copy).
This function handles numpy arrays specially, because copy.deepcopy()
called on a 0-d array will return a numpy scalar, not an array.
"""
......@@ -2199,22 +2199,29 @@ class _Maker(FunctionMaker): # inheritance buys a few helper functions
raise StochasticOrder(infolog.getvalue())
else:
if self.verbose:
print >> sys.stderr, "OPTCHECK: optimization", i, "of", len(li), "events was stable."
print >> sys.stderr, "OPTCHECK: optimization", i, \
"of", len(li), "events was stable."
else:
fgraph0 = fgraph
del fgraph0
self.fgraph = fgraph
#equivalence_tracker.printstuff()
linker = _Linker(self)
# the 'no_borrow' outputs are the ones for which we can't return
# the internal storage pointer.
#the 'no_borrow' outputs are the ones for which that we can't return the internal storage pointer.
no_borrow = [output for output, spec in zip(fgraph.outputs, outputs+additional_outputs) if not spec.borrow]
no_borrow = [
output
for output, spec in izip(fgraph.outputs,
outputs + additional_outputs)
if not spec.borrow]
if no_borrow:
self.linker = linker.accept(fgraph, no_recycling = infer_reuse_pattern(fgraph, no_borrow))
self.linker = linker.accept(
fgraph,
no_recycling=infer_reuse_pattern(fgraph, no_borrow))
else:
self.linker = linker.accept(fgraph)
......
......@@ -86,7 +86,7 @@ def register_linker(name, linker):
# If a string is passed as the optimizer argument in the constructor
# for Mode, it will be used as the key to retrieve the real optimizer
# in this dictionary
exclude=[]
exclude = []
if not theano.config.cxx:
exclude = ['cxx_only']
OPT_FAST_RUN = gof.Query(include=['fast_run'], exclude=exclude)
......@@ -120,7 +120,7 @@ def register_optimizer(name, opt):
class AddDestroyHandler(gof.Optimizer):
"""This optimizer performs two important functions:
1) it has a 'requirement' of the destroyhandler. This means that the fgraph
1) It has a 'requirement' of the destroyhandler. This means that the fgraph
will include it as a feature for this optimization, and keep this feature
enabled for subsequent optimizations. All optimizations that work inplace
on any of their inputs must run *after* this optimization to ensure that
......
......@@ -131,9 +131,10 @@ else:
enum = EnumStr("")
AddConfigVar('cxx',
"The c++ compiler to use. Currently only g++ is"
" supported. But supporting more is easy if someone want this."
"If it is empty, we don't compile c++ code.",
"The C++ compiler to use. Currently only g++ is"
" supported, but supporting additional compilers should not be "
"too difficult. "
"If it is empty, no C++ code is compiled.",
enum,
in_c_key=False)
del enum
......
......@@ -45,13 +45,13 @@ AddConfigVar('cmodule.warn_no_version',
in_c_key=False)
AddConfigVar('cmodule.remove_gxx_opt',
"If True, will remove -O* parameter passed to g++."
"This is useful to debug in gdb module compiled by Theano."
"If True, will remove the -O* parameter passed to g++."
"This is useful for debugging modules compiled by Theano in gdb."
"The parameter -g is passed by default to g++",
BoolParam(False))
AddConfigVar('cmodule.compilation_warning',
"If True, will print compilation warning.",
"If True, will print compilation warnings.",
BoolParam(False))
......@@ -162,13 +162,15 @@ static struct PyModuleDef moduledef = {{
MyMethods,
}};
""".format(name=self.hash_placeholder)
print >> stream, "PyMODINIT_FUNC PyInit_%s(void) {" % self.hash_placeholder
print >> stream, ("PyMODINIT_FUNC PyInit_%s(void) {" %
self.hash_placeholder)
for block in self.init_blocks:
print >> stream, ' ', block
print >> stream, " PyObject *m = PyModule_Create(&moduledef);"
print >> stream, " return m;"
else:
print >> stream, "PyMODINIT_FUNC init%s(void){" % self.hash_placeholder
print >> stream, ("PyMODINIT_FUNC init%s(void){" %
self.hash_placeholder)
for block in self.init_blocks:
print >> stream, ' ', block
print >> stream, ' ', ('(void) Py_InitModule("%s", MyMethods);'
......
......@@ -869,17 +869,19 @@ def _populate_grad_dict(var_to_app_to_idx,
for o, og in zip(node.outputs, output_grads):
o_dt = getattr(o.type, 'dtype', None)
og_dt = getattr(og.type, 'dtype', None)
if o_dt not in theano.tensor.discrete_dtypes and og_dt and o_dt != og_dt:
if (o_dt not in theano.tensor.discrete_dtypes and
og_dt and o_dt != og_dt):
new_output_grads.append(og.astype(o_dt))
else:
new_output_grads.append(og)
# Make sure that, if new_output_grads[i] has a floating point dtype,
# it is the same dtype as outputs[i]
# Make sure that, if new_output_grads[i] has a floating point
# dtype, it is the same dtype as outputs[i]
for o, ng in zip(node.outputs, new_output_grads):
o_dt = getattr(o.type, 'dtype', None)
ng_dt = getattr(ng.type, 'dtype', None)
if ng_dt is not None and o_dt not in theano.tensor.discrete_dtypes:
if (ng_dt is not None and
o_dt not in theano.tensor.discrete_dtypes):
assert ng_dt == o_dt
# Someone who had obviously not read the Op contract tried
......@@ -890,7 +892,8 @@ def _populate_grad_dict(var_to_app_to_idx,
# 2) Talk to Ian Goodfellow
# (Both of these sources will tell you not to do it)
for ng in new_output_grads:
assert getattr(ng.type, 'dtype', None) not in theano.tensor.discrete_dtypes
assert (getattr(ng.type, 'dtype', None)
not in theano.tensor.discrete_dtypes)
input_grads = node.op.grad(inputs, new_output_grads)
......@@ -908,7 +911,6 @@ def _populate_grad_dict(var_to_app_to_idx,
# Do type checking on the result
# List of bools indicating if each input only has integer outputs
only_connected_to_int = [(True not in
[in_to_out and out_to_cost and not out_int
......@@ -916,7 +918,6 @@ def _populate_grad_dict(var_to_app_to_idx,
zip(in_to_outs, outputs_connected, output_is_int)])
for in_to_outs in connection_pattern]
for i, term in enumerate(input_grads):
# Disallow Nones
......@@ -933,7 +934,6 @@ def _populate_grad_dict(var_to_app_to_idx,
'the grad_undefined or grad_unimplemented helper '
'functions.') % node.op)
if not isinstance(term.type,
(NullType, DisconnectedType)):
if term.type.dtype not in theano.tensor.float_dtypes:
......@@ -973,8 +973,8 @@ def _populate_grad_dict(var_to_app_to_idx,
msg += "evaluate to zeros, but it evaluates to"
msg += "%s."
msg % (str(node.op), str(term), str(type(term)),
i, str(theano.get_scalar_constant_value(term)))
msg % (node.op, term, type(term), i,
theano.get_scalar_constant_value(term))
raise ValueError(msg)
......@@ -1010,8 +1010,6 @@ def _populate_grad_dict(var_to_app_to_idx,
#cache the result
term_dict[node] = input_grads
return term_dict[node]
# populate grad_dict[var] and return it
......@@ -1040,7 +1038,7 @@ def _populate_grad_dict(var_to_app_to_idx,
if isinstance(term.type, DisconnectedType):
continue
if hasattr(var,'ndim') and term.ndim != var.ndim:
if hasattr(var, 'ndim') and term.ndim != var.ndim:
raise ValueError(("%s.grad returned a term with"
" %d dimensions, but %d are required.") % (
str(node.op), term.ndim, var.ndim))
......@@ -1058,8 +1056,8 @@ def _populate_grad_dict(var_to_app_to_idx,
if cost_name is not None and var.name is not None:
grad_dict[var].name = '(d%s/d%s)' % (cost_name, var.name)
else:
# this variable isn't connected to the cost in the computational
# graph
# this variable isn't connected to the cost in the
# computational graph
grad_dict[var] = DisconnectedType()()
# end if cache miss
return grad_dict[var]
......@@ -1068,6 +1066,7 @@ def _populate_grad_dict(var_to_app_to_idx,
return rval
def _float_zeros_like(x):
""" Like zeros_like, but forces the object to have a
floating point dtype """
......@@ -1317,9 +1316,9 @@ def verify_grad(fun, pt, n_tests=2, rng=None, eps=None,
:param eps: stepsize used in the Finite Difference Method (Default
None is type-dependent)
Raising the value of eps can raise or lower the absolute and
relative error of the verification depending of the
Op. Raising the eps do not lower the verification quality. It
is better to raise eps then raising abs_tol or rel_tol.
relative errors of the verification depending on the
Op. Raising eps does not lower the verification quality. It
is better to raise eps than to raise abs_tol or rel_tol.
:param out_type: dtype of output, if complex (i.e. 'complex32' or
'complex64')
:param abs_tol: absolute tolerance used as threshold for gradient
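The kind of check ``verify_grad`` performs can be sketched for a scalar function (an illustrative central finite-difference check, not Theano's implementation):

```python
def numeric_grad(f, x, eps=1e-6):
    # Central finite-difference approximation of df/dx.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def check_grad(f, grad_f, x, eps=1e-6, rel_tol=1e-4):
    # Compare the numeric estimate against the symbolic gradient,
    # using a relative error guarded against division by zero.
    approx = numeric_grad(f, x, eps)
    exact = grad_f(x)
    denom = max(abs(approx), abs(exact), 1e-12)
    return abs(approx - exact) / denom < rel_tol

# d/dx of x**2 at x=3 is 6; the numeric estimate agrees within rel_tol.
assert check_grad(lambda x: x * x, lambda x: 2 * x, 3.0)
```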
......@@ -1599,6 +1598,7 @@ def hessian(cost, wrt, consider_constant=None,
hessians.append(hess)
return format_as(using_list, using_tuple, hessians)
def _is_zero(x):
"""
Returns 'yes', 'no', or 'maybe' indicating whether x
......
......@@ -8,7 +8,7 @@ class DoubleOp(theano.Op):
:param x: input tensor.
:return: a tensor of the shape shape and dtype as the input with all
:return: a tensor of the same shape and dtype as the input with all
values doubled.
:note:
......@@ -46,8 +46,8 @@ class DoubleOp(theano.Op):
def R_op(self, inputs, eval_points):
# R_op can receive None as eval_points.
# That mean there is no diferientiable path through that input
# If this imply that you cannot compute some outputs,
# That means there is no differentiable path through that input.
# If this implies that you cannot compute some outputs,
# return None for those.
if eval_points[0] is None:
return eval_points
......
......@@ -244,7 +244,8 @@ class GpuOp(theano.gof.Op):
return super(GpuOp, self).make_thunk(node, storage_map,
compute_map, no_recycling)
theano.compile.debugmode.default_make_thunk.append(get_unbound_function(GpuOp.make_thunk))
theano.compile.debugmode.default_make_thunk.append(
get_unbound_function(GpuOp.make_thunk))
# We must do these imports to be able to create the full doc when
# nvcc is not available
......@@ -271,15 +272,16 @@ if cuda_available:
shared_constructor = float32_shared_constructor
import basic_ops
from basic_ops import (GpuFromHost, HostFromGpu, GpuElemwise,
GpuDimShuffle, GpuCAReduce, GpuReshape, GpuContiguous,
GpuSubtensor, GpuIncSubtensor,
GpuAdvancedSubtensor1, GpuAdvancedIncSubtensor1,
GpuFlatten, GpuShape, GpuAlloc,
GpuJoin, fscalar, fvector, fmatrix, frow, fcol,
ftensor3, ftensor4,
scalar, vector, matrix, row, col,
tensor3, tensor4)
from basic_ops import (
GpuFromHost, HostFromGpu, GpuElemwise,
GpuDimShuffle, GpuCAReduce, GpuReshape, GpuContiguous,
GpuSubtensor, GpuIncSubtensor,
GpuAdvancedSubtensor1, GpuAdvancedIncSubtensor1,
GpuFlatten, GpuShape, GpuAlloc,
GpuJoin, fscalar, fvector, fmatrix, frow, fcol,
ftensor3, ftensor4,
scalar, vector, matrix, row, col,
tensor3, tensor4)
from basic_ops import host_from_gpu, gpu_from_host, as_cuda_array
import opt
import cuda_ndarray
......@@ -388,16 +390,17 @@ def use(device,
cuda_enabled = True
if config.print_active_device:
print >> sys.stderr, "Using gpu device %d: %s" %(
print >> sys.stderr, "Using gpu device %d: %s" % (
active_device_number(), active_device_name())
if device_properties(use.device_number)['regsPerBlock'] < 16384:
# We will try to use too much register per bloc at many places
# when there is only 8k register per multi-processor.
_logger.warning("You are probably using an old GPU."
" We didn't optimize nor we support those GPU."
" This mean GPU code will be slow AND will"
" crash when we try to use feature/properties"
" that your GPU don't support.")
_logger.warning(
"You are probably using an old GPU, that Theano"
" does not support."
" This means GPU code will most likely be slow AND may"
" crash when we try to use features"
" that your GPU does not support.")
except (EnvironmentError, ValueError, RuntimeError), e:
_logger.error(("ERROR: Not using GPU."
......
......@@ -228,7 +228,7 @@ class Scan(PureOp):
)
err_msg2 = ('When compiling the inner function of scan the '
'following error has been encountered: The '
'initial state (outputs_info in scan nomenclature)'
'initial state (outputs_info in scan nomenclature) '
'of variable %s (argument number %d)'
' has dtype %s and %d dimension(s), while the result '
'of the inner function for this output has dtype %s '
......@@ -1387,6 +1387,7 @@ class Scan(PureOp):
self.inner_nitsot_outs(self_outputs))
scan_node = outs[0].owner
connection_pattern = self.connection_pattern(scan_node)
def get_inp_idx(iidx):
if iidx < self.n_seqs:
return 1 + iidx
......@@ -1426,12 +1427,12 @@ class Scan(PureOp):
"has type " + str(g_y.type))
odx = get_out_idx(self_outputs.index(y))
wrt = [x for x in theano.gof.graph.inputs([y])
if (x in diff_inputs) and
(connection_pattern[get_inp_idx(self_inputs.index(x))][odx])]
grads = gradient.grad(
cost = None,
known_grads = {y : g_y },
wrt = [x for x in theano.gof.graph.inputs([y])
if (x in diff_inputs) and
connection_pattern[get_inp_idx(self_inputs.index(x))][odx]]
grads = gradient.grad(
cost=None,
known_grads={y: g_y},
wrt=wrt, consider_constant=wrt,
disconnected_inputs='ignore',
return_disconnected='None')
......
......@@ -8236,24 +8236,25 @@ def diag(v, k=0):
def stacklists(arg):
""" Recursivly stack lists of tensors to maintain similar structure
"""
Recursively stack lists of tensors to maintain similar structure.
This function can create a tensor from a shaped list of scalars
This function can create a tensor from a shaped list of scalars:
>>> from theano.tensor import stacklists, scalars, matrices
>>> from theano import function
>>> a,b,c,d = scalars('abcd')
>>> a, b, c, d = scalars('abcd')
>>> X = stacklists([[a, b], [c, d]])
>>> f = function([a, b, c, d], X)
>>> f(1, 2, 3, 4)
array([[ 1., 2.],
[ 3., 4.]], dtype=float32)
We can also stack arbitrarily shaped tensors. Here we stack matrices into
a 2 by 2 grid.
We can also stack arbitrarily shaped tensors. Here we stack matrices into
a 2 by 2 grid:
>>> from numpy import ones
>>> a,b,c,d, = matrices('abcd')
>>> a, b, c, d = matrices('abcd')
>>> X = stacklists([[a, b], [c, d]])
>>> f = function([a, b, c, d], X)
>>> x = ones((4, 4), 'float32')
......
......@@ -560,7 +560,7 @@ conv3D = Conv3D()
:param b: bias, shape == (W.shape[0],)
:param d: strides when moving the filter over the input (dx, dy, dt)
:note: The order of dimensions do not correspond with the one in `conv2d`.
:note: The order of dimensions does not correspond to the one in `conv2d`.
This is for optimization.
"""
......
......@@ -103,7 +103,7 @@ def main(stdout=None, stderr=None, argv=None, theano_nose=None,
theano_nose = path
break
if theano_nose is None:
raise Exception("Not able to find theano-nose")
raise Exception("Unable to find theano-nose")
if batch_size is None:
batch_size = 100
stdout_backup = sys.stdout
......