Commit ae8cce77 authored by lamblin

Merge pull request #1280 from delallea/minor

Minor fixes
@@ -51,8 +51,8 @@ Bug fix:
New Features:
* More Theano determinism (Ian G., Olivier D., Pascal L.)
* Add and use a new class OrderedSet.
* theano.grad is now determinist.
* Warn when the user use a dictionary and this cause non-determinism in Theano.
* theano.grad is now deterministic.
* Warn when the user uses a (non ordered) dictionary and this causes non-determinism in Theano.
* The Updates class was non-deterministic; replaced it with the OrderedUpdates class.
* tensor.tensordot now supports Rop/Lop (Jeremiah Lowin)
This removes the classes TensorDot and TensorDotGrad; the Dot/Elemwise ops are used instead.
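The determinism problem behind OrderedUpdates can be shown with plain Python, no Theano required (a minimal sketch; the names `d1`, `d2`, `u1`, `u2` are illustrative only): two dicts with the same contents can iterate in different orders, so any code walking a dict of updates processes them in a construction-dependent order.

```python
from collections import OrderedDict

# Two dicts with identical contents, built in different orders.
d1 = {}
d1['a'] = 1
d1['b'] = 2

d2 = {}
d2['b'] = 2
d2['a'] = 1

assert d1 == d2              # same key/value pairs...
assert list(d1) != list(d2)  # ...but different iteration order

# An explicitly ordered container (the idea behind OrderedUpdates /
# OrderedSet) fixes the traversal order regardless of how it was built.
u1 = OrderedDict(sorted(d1.items()))
u2 = OrderedDict(sorted(d2.items()))
assert list(u1) == list(u2) == ['a', 'b']
```

This is why passing a plain dictionary of updates triggers the warning above: the compiled graph can differ from run to run.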
@@ -68,7 +68,7 @@ New Features:
Interface Deprecation (a warning is printed):
* theano.misc.strutil.renderString -> render_string (Ian G.)
* Print a warning when using dictionary and this make Theano non-deterministic.
* Print a warning when using dictionary and this makes Theano non-deterministic.
Interface Change:
* Raise an error when theano.shared is called with a theano variable. (Frederic B.)
@@ -79,8 +79,8 @@ Interface Change:
* In the grad method, if it was asked to raise an error when there is no path between the variables, we did not always raise one. (Ian G.)
We returned the mathematically correct answer 0 in those cases.
* get_constant_value() renamed to get_scalar_constant_value(); it raises a new exception, tensor.basic.NotScalarConstantError. (Ian G.)
* theano.function raise an error when triing to replace inputs with the given paramter. (Olivier D.)
This was doing nothing, the error message explain what the user probably want to do.
* theano.function raises an error when trying to replace inputs with the 'given' parameter. (Olivier D.)
This was doing nothing, the error message explains what the user probably wants to do.
New Interface (reuse existing functionality):
* tensor_var.sort() as a shortcut for theano.tensor.sort. (Jeremiah Lowin)
@@ -93,7 +93,7 @@ New debug feature:
* Better profiling of test time with `theano-nose --time-profile`. (Frederic B.)
* Detection of infinite loop with global optimizer. (Pascal L.)
* DebugMode.check_preallocated_output now also works on Theano function output. (Pascal L.)
* DebugMode will now complains when the strides of CudaNdarray of dimensions of 1 aren't 0. (Frederic B.)
* DebugMode will now complain when the strides of CudaNdarray of dimensions of 1 are not 0. (Frederic B.)
Speed-ups:
* c_code for SpecifyShape op. (Frederic B.)
@@ -101,7 +101,7 @@ Speed-ups:
* The Scan optimizations ScanSaveMem and PushOutDot1 applied more frequently. (Razvan P., reported by Abalkin)
A skipped optimization warning was printed.
* dot(vector, vector) now faster with some BLAS implementations. (Eric Hunsberger)
OpenBLAS and possibly others didn't called {s,d}dot internally when we called {s,d}gemv.
OpenBLAS and possibly others didn't call {s,d}dot internally when we called {s,d}gemv.
MKL was doing this.
* Compilation speed-up: take the compiledir lock only for ops that generate c_code. (Frederic B)
* More scan optimization (Razvan P.)
@@ -131,11 +131,11 @@ Crash Fixes:
Sometimes we were not able to know this before run time, and this resulted in a crash. (Frederic B.)
* Fix compilation problems on GPU on Windows. (Frederic B.)
* Fix copy on the GPU with big shape for 4d tensor (Pascal L.)
* GpuSubtensor didn't set the stride to 0 for dimensions of 1. This could lead to check failing later that cause a crash. (Frederic B., reported by vmichals)
* GpuSubtensor didn't set the stride to 0 for dimensions of 1. This could lead to a check failing later, causing a crash. (Frederic B., reported by vmichals)
Theoretical bugfix (a bug that won't happen with current Theano code, but that could have affected you if you messed with the internals):
* GpuContiguous, GpuAlloc, GpuDownSampleGrad, Conv2d now check the strides of preallocated outputs before using them. (Pascal L.)
* GpuDownSample, GpuDownSampleGrad didn't worked correctly with negative strides in their output due to problem with nvcc (Pascal L, reported by abalkin?)
* GpuDownSample, GpuDownSampleGrad didn't work correctly with negative strides in their output, due to a problem with nvcc (Pascal L, reported by abalkin?)
Others:
* Fix race condition when determining if g++ is available. (Abalkin)
@@ -463,15 +463,15 @@ Any one of them is enough.
.. note::
On Dedian, you can ask the software package manager to install it
for you. We have a user report that this work for Debian Wheezy
On Debian, you can ask the software package manager to install it
for you. We have a user report that this works for Debian Wheezy
(7.0). When you install it this way, you won't always have the
latest version, but we where said that they it get updated
rapidly. One big advantage is that it will be updated
latest version, but we were told that it gets updated
regularly. One big advantage is that it will be updated
automatically. You can try the ``sudo apt-get install
nvidia-cuda-toolkit`` command to install it.
:ref:`Ubuntu instruction <install_ubuntu_gpu>`.
:ref:`Ubuntu instructions <install_ubuntu_gpu>`.
@@ -241,9 +241,9 @@ the output shape was computed correctly, or if some shapes with the
same value have been mixed up. For instance, if the infer_shape uses
the width of a matrix instead of its height, then testing with only
square matrices will not detect the problem. This is why the
``self._compile_and_check`` method print a warning in such a case. If
your op work only in such case, you can diable the warning with the
warn=True parameter.
``self._compile_and_check`` method prints a warning in such a case. If
your op works only with such matrices, you can disable the warning with the
``warn=False`` parameter.
.. code-block:: python
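The square-matrix pitfall described above can be illustrated without Theano (a hypothetical `infer_shape` with the height/width bug; the function names are illustrative, not Theano's API):

```python
def buggy_infer_shape(input_shape):
    # Hypothetical infer_shape for a shape-preserving op,
    # with the bug described above: height and width swapped.
    rows, cols = input_shape
    return (cols, rows)

def real_output_shape(input_shape):
    # A shape-preserving (e.g. elementwise) op keeps the input shape.
    return input_shape

# A square test matrix cannot expose the bug...
assert buggy_infer_shape((5, 5)) == real_output_shape((5, 5))
# ...but a rectangular one does.
assert buggy_infer_shape((3, 7)) != real_output_shape((3, 7))
```

This is exactly why ``self._compile_and_check`` warns when all test shapes contain repeated dimension values.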
@@ -30,11 +30,11 @@ You can enable faster gcc optimization with the ``cxxflags``. This list of flags
Use it at your own risk. Some people warned that the ``-ftree-loop-distribution`` optimization resulted in wrong results in the past.
In the past we told that if the ``compiledir`` wasn't shared by multiple
computers, you could add the ``-march=native`` flags. Now we recommande
to remove this flags as Theano does that automatically and safelly
In the past we said that if the ``compiledir`` was not shared by multiple
computers, you could add the ``-march=native`` flag. Now we recommend
to remove this flag as Theano does it automatically and safely,
even if the ``compiledir`` is shared by multiple computers with different
CPU. In fact, we ask g++ what are the equivalent flags it use and use
CPUs. In fact, Theano asks g++ what are the equivalent flags it uses, and re-uses
them directly.
@@ -781,7 +781,6 @@ class SanityCheckFunction(Function):
return variables
###
### FunctionMaker
###
@@ -813,12 +812,12 @@ def insert_deepcopy(fgraph, wrapped_inputs, wrapped_outputs):
view_tree_set(alias_root(fgraph.outputs[i]), views_of_output_i)
copied = False
# do not allow outputs to be aliased
for j in xrange(i+1, len(fgraph.outputs)):
for j in xrange(i + 1, len(fgraph.outputs)):
# We could avoid the deep copy if both outputs have borrow==True
# and not(wrapped_outputs[i].borrow and wrapped_outputs[j].borrow):
if fgraph.outputs[j] in views_of_output_i:
if wrapped_outputs[i].borrow and wrapped_outputs[j].borrow:
fgraph.change_input('output',i, view_op(fgraph.outputs[i]),
fgraph.change_input('output', i, view_op(fgraph.outputs[i]),
reason=reason)
else:
fgraph.change_input('output', i, deep_copy_op(fgraph.outputs[i]),
@@ -831,7 +830,8 @@ def insert_deepcopy(fgraph, wrapped_inputs, wrapped_outputs):
# do not allow outputs to be aliased to an inputs (j), unless
# a) that j'th input has been 'destroyed' by e.g. in-place computations
# b) that j'th input is a shared variable that is also being updated
if hasattr(fgraph,'get_destroyers_of') and fgraph.get_destroyers_of(input_j):
if (hasattr(fgraph, 'get_destroyers_of') and
fgraph.get_destroyers_of(input_j)):
continue
if input_j in updated_fgraph_inputs:
continue
@@ -840,7 +840,7 @@ def insert_deepcopy(fgraph, wrapped_inputs, wrapped_outputs):
if input_j in fgraph.inputs:
j = fgraph.inputs.index(input_j)
if wrapped_outputs[i].borrow and wrapped_inputs[j].borrow:
fgraph.change_input('output',i, view_op(fgraph.outputs[i]),
fgraph.change_input('output', i, view_op(fgraph.outputs[i]),
reason="insert_deepcopy")
break
else:
@@ -848,7 +848,7 @@ def insert_deepcopy(fgraph, wrapped_inputs, wrapped_outputs):
reason="insert_deepcopy")
break
elif wrapped_outputs[i].borrow:
fgraph.change_input('output',i, view_op(fgraph.outputs[i]),
fgraph.change_input('output', i, view_op(fgraph.outputs[i]),
reason="insert_deepcopy")
break
else:
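The aliasing problem that `insert_deepcopy` guards against can be sketched with plain Python lists (a simplified model, not Theano's storage machinery; `make_outputs` and `borrow` here are illustrative):

```python
import copy

def make_outputs(borrow=False):
    buf = [1, 2, 3]          # storage shared by two graph outputs
    out_a = buf
    if borrow:
        out_b = buf          # outputs stay aliased: caller opted in
    else:
        out_b = copy.deepcopy(buf)   # roughly what insert_deepcopy adds
    return out_a, out_b

# With borrow on both outputs, mutation through one is visible in the other.
a, b = make_outputs(borrow=True)
a[0] = 99
assert b[0] == 99

# By default, the inserted deep copy keeps the outputs independent.
a, b = make_outputs(borrow=False)
a[0] = 99
assert b[0] == 1
```

This mirrors the code above: a `view_op` is kept only when both wrapped outputs have `borrow=True`; otherwise a `deep_copy_op` is inserted.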
@@ -857,6 +857,8 @@ def insert_deepcopy(fgraph, wrapped_inputs, wrapped_outputs):
break
NODEFAULT = ['NODEFAULT']
class FunctionMaker(object):
"""`FunctionMaker` is the class to `create` `Function` instances.
@@ -876,7 +878,7 @@ class FunctionMaker(object):
elif isinstance(input, (list, tuple)):
# (r, u) -> SymbolicInput(variable=r, update=u)
if len(input) == 2:
return SymbolicInput(input[0], update = input[1])
return SymbolicInput(input[0], update=input[1])
else:
raise TypeError("Expected two elements in the list or tuple.", input)
else:
@@ -899,7 +901,7 @@ class FunctionMaker(object):
stacklevel=2)
return self.fgraph
def env_setter(self,value):
def env_setter(self, value):
warnings.warn("FunctionMaker.env is deprecated, it has been renamed 'fgraph'",
stacklevel=2)
self.fgraph = value
@@ -911,7 +913,6 @@ class FunctionMaker(object):
env = property(env_getter, env_setter, env_deleter)
@staticmethod
def wrap_out(output):
if isinstance(output, SymbolicOutput):
@@ -922,7 +923,7 @@ class FunctionMaker(object):
raise TypeError("Unknown output type: %s (%s)", type(output), output)
def __init__(self, inputs, outputs,
mode = None, accept_inplace = False, function_builder = Function,
mode=None, accept_inplace=False, function_builder=Function,
profile=None, on_unused_input=None):
"""
:type inputs: a list of SymbolicInput instances
@@ -1286,7 +1287,7 @@ def orig_function(inputs, outputs, mode=None, accept_inplace=False,
defaults = [getattr(input, 'value', None) for input in inputs]
if isinstance(mode, (list, tuple)): # "mode comparison" semantics
raise Exception("We do not support the passing of multiple mode")
raise Exception("We do not support the passing of multiple modes")
else:
Maker = getattr(mode, 'function_maker', FunctionMaker)
fn = Maker(inputs,
@@ -1453,10 +1453,9 @@ def gcc_version():
def gcc_llvm():
""" Detect if the g++ version used is the llvm one or not.
It don't support all g++ parameters even if it support many of them.
It does not support all g++ parameters even if it supports many of them.
"""
if gcc_llvm.is_llvm is None:
pass
p = None
try:
p = call_subprocess_Popen(['g++', '--version'],
@@ -1469,7 +1468,7 @@ def gcc_llvm():
# So it is not an llvm compiler.
# Normally this should not happen as we should not try to
# compile when g++ is not available. If this happen, it
# compile when g++ is not available. If this happens, it
# will crash later so supposing it is not llvm is "safe".
output = ''
del p
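The detection in `gcc_llvm` boils down to string-matching the `g++ --version` banner. A self-contained sketch of that idea, using canned banner strings instead of actually spawning g++ (the helper name `is_llvm_gcc` and the sample banners are illustrative):

```python
def is_llvm_gcc(version_output):
    # Apple's llvm-gcc / clang front end mentions LLVM in its banner.
    if version_output is None:
        # g++ unavailable: assume non-llvm, as the text above explains;
        # compilation will fail later anyway, so this default is safe.
        return False
    return 'llvm' in version_output.lower()

apple_banner = ("i686-apple-darwin11-llvm-g++-4.2 (GCC) 4.2.1 "
                "(Based on Apple Inc. build 5658) (LLVM build 2336.1.00)")
gnu_banner = "g++ (Ubuntu 4.6.3-1ubuntu5) 4.6.3"

assert is_llvm_gcc(apple_banner) is True
assert is_llvm_gcc(gnu_banner) is False
assert is_llvm_gcc(None) is False
```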
@@ -1547,10 +1546,10 @@ class GCC_compiler(object):
if len(native_lines) != 1:
_logger.warn(
"OPTIMIZATION WARNING: Theano was not able to find the"
" g++ parameter that tune the compilation to your specific"
" CPU. This can slow down the execution of Theano"
" function. Can you submit the following lines to"
" Theano's mailing list such that we fix this"
" g++ parameters that tune the compilation to your "
" specific CPU. This can slow down the execution of Theano"
" functions. Please submit the following lines to"
" Theano's mailing list so that we can fix this"
" problem:\n %s", native_lines)
else:
default_lines = get_lines("g++ -E -v -")
@@ -1558,11 +1557,11 @@ class GCC_compiler(object):
if len(default_lines) < 1:
_logger.warn(
"OPTIMIZATION WARNING: Theano was not able to find the"
" default g++ parameter. This is needed to tune"
" default g++ parameters. This is needed to tune"
" the compilation to your specific"
" CPU. This can slow down the execution of Theano"
" function. Can you submit the following lines to"
" Theano's mailing list such that we fix this"
" functions. Please submit the following lines to"
" Theano's mailing list so that we can fix this"
" problem:\n %s",
get_lines("g++ -E -v -", parse=False))
else:
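What these warnings are about: Theano compares the cc1plus command line produced by ``g++ -march=native -E -v -`` against the default one from ``g++ -E -v -`` to recover the CPU-specific flags. A sketch of that comparison step, using canned cc1plus lines rather than invoking g++ (the helper name `native_flags` and the sample option strings are illustrative; real output varies by toolchain):

```python
def native_flags(native_line, default_line):
    # Keep only the options present with -march=native but absent
    # from the default invocation: those are the CPU-tuning flags.
    default_opts = set(default_line.split())
    return [opt for opt in native_line.split() if opt not in default_opts]

native = "cc1plus -E -quiet -v - -march=corei7 -mcx16 -msahf -mtune=corei7"
default = "cc1plus -E -quiet -v - -mtune=generic"

assert native_flags(native, default) == [
    '-march=corei7', '-mcx16', '-msahf', '-mtune=corei7']
```

When either command yields no usable line, the warnings above are printed and the tuning flags are simply skipped.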