Commit e31d3ac6 authored by Bryn Keller

Fix typos and other doc/comment issues, add .idea to .gitignore, and delete apparently unused function theano.compile.mode.check_equal

Parent 589b5926
@@ -37,4 +37,5 @@ Theano.suo
 .ipynb_checkpoints
 .pydevproject
 .ropeproject
-core
\ No newline at end of file
+core
+.idea
@@ -872,9 +872,9 @@ To understand this profile here is some explanation of how optimizations work:
         0.131s for callback
         time - (name, class, index) - validate time
 Then it will print, with some additional indentation, each sub-optimizer's profile
 information. These sub-profiles are ordered by the time they took to execute,
 not by their execution order.
 * ``OPT_FAST_RUN`` is the name of the optimizer
 * 1.152s is the total time spent in that optimizer
...
@@ -28,7 +28,7 @@ def function_dump(filename, inputs, outputs=None, mode=None, updates=None,
                   on_unused_input=None,
                   extra_tag_to_remove=None):
     """
-    This is helpful to make a reproducable case for problem during Theano
+    This is helpful to make a reproducible case for problems during Theano
     compilation.
     Ex:
@@ -36,13 +36,13 @@ def function_dump(filename, inputs, outputs=None, mode=None, updates=None,
     replace `theano.function(...)` by
     `theano.function_dump('filename.pkl', ...)`.
-    If you see this, you where probably asked to use this function to
+    If you see this, you were probably asked to use this function to
     help debug a particular case during the compilation of a Theano
-    function. `function_dump` allows to easily reproduce your
-    compilation without asking any code. It pickle all the objects and
+    function. `function_dump` allows you to easily reproduce your
+    compilation without generating any code. It pickles all the objects and
     parameters needed to reproduce a call to `theano.function()`. This
-    include shared variables and there values. If you do not want
-    that, you can set to replace shared variables values by zeros by
+    includes shared variables and their values. If you do not want
+    that, you can choose to replace shared variables values with zeros by
     calling set_value(...) on them before calling `function_dump`.
     To load such a dump and do the compilation:
@@ -53,9 +53,9 @@ def function_dump(filename, inputs, outputs=None, mode=None, updates=None,
     >>> f = theano.function(**d)  # doctest: +SKIP
     Note:
-    The parameter extra_tag_to_remove, is passed to the StripPickler used.
+    The parameter `extra_tag_to_remove` is passed to the StripPickler used.
     To pickle graph made by Blocks, it must be:
-    ['annotations', 'replacement_of', 'aggregation_scheme', 'roles']
+    `['annotations', 'replacement_of', 'aggregation_scheme', 'roles']`
     """
     assert isinstance(filename, string_types)
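The dump/load round trip described in this docstring can be sketched without Theano itself: `function_dump` pickles the keyword arguments of a call so the call can be replayed later. A minimal stand-in of the pattern using a plain function (`build` and its arguments are hypothetical, not Theano API):

```python
import os
import pickle
import tempfile

def build(a, b=1):
    # Hypothetical stand-in for theano.function: any callable whose
    # keyword arguments we want to capture and replay later.
    return a + b

# Dump the call's arguments, as theano.function_dump does for
# theano.function's arguments.
path = os.path.join(tempfile.mkdtemp(), 'filename.pkl')
with open(path, 'wb') as fh:
    pickle.dump({'a': 2, 'b': 3}, fh)

# Later (possibly on another machine), load the dict and replay the
# call, mirroring `d = pickle.load(...)` then `f = theano.function(**d)`.
with open(path, 'rb') as fh:
    d = pickle.load(fh)
result = build(**d)
```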
@@ -100,6 +100,9 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
         If True, do not perform any automatic update on Variables. If False
         (default), perform them all. Else, perform automatic updates on all
         Variables that are neither in "updates" nor in "no_default_updates".
+    accept_inplace : bool
+        True iff the graph can contain inplace operations prior to the
+        optimization phase (default is False).
     name : str
         An optional name for this function. The profile mode will print the time
         spent in this function.
@@ -115,10 +118,10 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
         Ops in the graph, an Exception will be raised.
     allow_input_downcast: bool or None
         True means that the values passed as inputs when calling the function
-        can be silently downcasted to fit the dtype of the corresponding
+        can be silently down-casted to fit the dtype of the corresponding
         Variable, which may lose precision. False means that it will only be
         cast to a more general, or precise, type. None (default) is almost like
-        False, but allows downcasting of Python float scalars to floatX.
+        False, but allows down-casting of Python float scalars to floatX.
     profile: None, True, or ProfileStats instance
         Accumulate profiling information into a given ProfileStats instance.
         If argument is `True` then a new ProfileStats instance will be used.
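The precision loss that `allow_input_downcast` warns about can be demonstrated with the standard library alone: round-tripping a Python float (double precision) through a 32-bit float is what a silent down-cast to a float32 `floatX` amounts to. A small sketch, not Theano code:

```python
import struct

def downcast_to_float32(x):
    # Pack the double into the IEEE 754 32-bit format and unpack it
    # again, simulating the silent down-cast described above.
    return struct.unpack('f', struct.pack('f', x))[0]

lossy = downcast_to_float32(0.1)
# lossy is close to 0.1 but no longer exactly equal to it:
# the low-order bits were discarded in the down-cast.
```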
@@ -209,9 +212,9 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
     4. Linker
         The linker uses a Python loop to execute the code associated
         with all the Apply nodes in the graph in the correct order.
-        The CVM is a linker that replaces this Python loop with a C
-        loop to avoid continuously changing between Python and C.
-        The CVM is faster for 2 reasons:
+        The C Virtual Machine (CVM) is a linker that replaces this
+        Python loop with a C loop to avoid continuously changing
+        between Python and C. The CVM is faster for 2 reasons:
         1) Its internal logic is in C, so no Python interpreter
            overhead.
         2) It makes native calls from the VM logic into thunks that
@@ -219,7 +222,6 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
         The VM is a linker that was developed to prototype the CVM. it
         was easier to develop the VM in Python then translate it to C instead
         of just writing it in C from scratch.
-        CVM stands for C Virtual Machine.
     """
     if isinstance(outputs, dict):
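The Python loop this docstring describes can be pictured in a few lines: each Apply node contributes a pre-built "thunk" (a no-argument callable), and the VM calls them in dependency order. A toy illustration of the idea, not Theano's actual VM code:

```python
# Toy model of the VM linker's execution loop: one thunk per node,
# each reading and writing shared storage, called in topological order.
storage = {'x': 3, 'y': 4, 'mul': None, 'out': None}

def mul_thunk():
    storage['mul'] = storage['x'] * storage['y']

def add_thunk():
    storage['out'] = storage['mul'] + storage['x']

thunks = [mul_thunk, add_thunk]

for thunk in thunks:  # this interpreter loop is what the CVM moves into C
    thunk()
```

Each iteration crosses the Python/C boundary once per thunk; the CVM keeps the whole loop in C, which is exactly the overhead the docstring says it avoids.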
@@ -252,7 +254,7 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
         func_frame = stack[idx - 1]
         while "theano/gof" in func_frame[0] and idx > 0:
             idx -= 1
-            # This can hapen if we call var.eval()
+            # This can happen if we call var.eval()
             func_frame = stack[idx - 1]
         name = func_frame[0] + ':' + str(func_frame[1])
...
@@ -1726,8 +1726,8 @@ def orig_function(inputs, outputs, mode=None, accept_inplace=False,
         Default of None means to use `config.mode` (see below for descriptive
         string list).
     name : str
-        An optional name for this fct. If used, the profile mode will print the
-        time spent in this fct.
+        An optional name for this function. If used, the profile mode will print the
+        time spent in this function.
     accept_inplace : bool
         True iff the graph can contain inplace operations prior to the
         optimization phase (default is False).
...
@@ -18,35 +18,6 @@ from six import string_types
 _logger = logging.getLogger('theano.compile.mode')
-def check_equal(x, y):
-    """
-    Returns True iff x[0] and y[0] are equal (checks the dtype and shape if x
-    and y are numpy.ndarray instances). Used internally.
-    """
-    # I put the import here to allow using theano without scipy.
-    import scipy.sparse as sp
-    x, y = x[0], y[0]
-    # TODO: bug in current scipy, two sparse matrices are never equal,
-    # remove when moving to 0.7
-    if sp.issparse(x):
-        x = x.todense()
-    if sp.issparse(y):
-        y = y.todense()
-    if isinstance(x, numpy.ndarray) and isinstance(y, numpy.ndarray):
-        if (x.dtype != y.dtype or
-                x.shape != y.shape or
-                numpy.any(abs(x - y) > 1e-10)):
-            raise Exception("Output mismatch.",
-                            {'performlinker': x, 'clinker': y})
-    else:
-        if x != y:
-            raise Exception("Output mismatch.",
-                            {'performlinker': x, 'clinker': y})
 # If a string is passed as the linker argument in the constructor for
 # Mode, it will be used as the key to retrieve the real linker in this
 # dictionary
@@ -384,7 +355,7 @@ predefined_modes = {'FAST_COMPILE': FAST_COMPILE,
                     'FAST_RUN': FAST_RUN,
                     }
-instanciated_default_mode = None
+instantiated_default_mode = None
 def get_mode(orig_string):
@@ -395,17 +366,17 @@ def get_mode(orig_string):
     if not isinstance(string, string_types):
         return string  # it is hopefully already a mode...
-    global instanciated_default_mode
+    global instantiated_default_mode
     # The default mode is cached. However, config.mode can change
-    # If instanciated_default_mode has the right class, use it.
-    if orig_string is None and instanciated_default_mode:
+    # If instantiated_default_mode has the right class, use it.
+    if orig_string is None and instantiated_default_mode:
         if string in predefined_modes:
             default_mode_class = predefined_modes[string].__class__.__name__
         else:
             default_mode_class = string
-        if (instanciated_default_mode.__class__.__name__ ==
+        if (instantiated_default_mode.__class__.__name__ ==
                 default_mode_class):
-            return instanciated_default_mode
+            return instantiated_default_mode
     if string in ['Mode', 'ProfileMode', 'DebugMode', 'NanGuardMode']:
         if string == 'DebugMode':
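The caching logic in this hunk reuses the previously built default mode only when its class still matches whatever the requested name currently resolves to. Reduced to a self-contained sketch (the classes and helper names here are invented for illustration, and the real function does more work on a cache miss):

```python
class FastMode:
    pass

class DebugMode:
    pass

_predefined = {'FAST_RUN': FastMode()}
_cached_default = None  # plays the role of instantiated_default_mode

def _resolve_class_name(name):
    # Predefined names map to an instance whose class we compare against;
    # other names are taken to be class names directly.
    if name in _predefined:
        return _predefined[name].__class__.__name__
    return name

def get_default_mode(name):
    global _cached_default
    if (_cached_default is not None and
            _cached_default.__class__.__name__ == _resolve_class_name(name)):
        return _cached_default  # cache still valid for this name
    _cached_default = _predefined.get(name) or DebugMode()
    return _cached_default

a = get_default_mode('FAST_RUN')
b = get_default_mode('FAST_RUN')  # second call hits the cache
```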
@@ -422,6 +393,7 @@ def get_mode(orig_string):
         # This might be required if the string is 'ProfileMode'
         from .profilemode import ProfileMode  # noqa
         from .profilemode import prof_mode_instance_to_print
+        # TODO: Can't we look up the name and invoke it rather than using eval here?
         ret = eval(string +
                    '(linker=config.linker, optimizer=config.optimizer)')
     elif string in predefined_modes:
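The TODO added in this hunk suggests resolving the class by name instead of `eval`-ing a constructed string. One way to do that, sketched with placeholder classes rather than the real Theano modes, is a simple name-to-class mapping:

```python
class Mode:
    def __init__(self, linker=None, optimizer=None):
        self.linker = linker
        self.optimizer = optimizer

class NanGuardMode(Mode):
    pass

# Map the accepted mode names to their classes, then call the class
# directly; an unknown name raises a clear KeyError instead of a
# NameError escaping from eval().
_mode_classes = {'Mode': Mode, 'NanGuardMode': NanGuardMode}

def make_mode(name, linker, optimizer):
    cls = _mode_classes[name]
    return cls(linker=linker, optimizer=optimizer)

m = make_mode('NanGuardMode', linker='c|py', optimizer='fast_run')
```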
@@ -437,7 +409,7 @@ def get_mode(orig_string):
         ret = ret.including(*theano.config.optimizer_including.split(':'))
     if theano.config.optimizer_requiring:
         ret = ret.requiring(*theano.config.optimizer_requiring.split(':'))
-    instanciated_default_mode = ret
+    instantiated_default_mode = ret
     # must tell python to print the summary at the end.
     if string == 'ProfileMode':
...
@@ -306,6 +306,9 @@ def pfunc(params, outputs=None, mode=None, updates=None, givens=None,
         If False (default), perform them all. Else, perform automatic updates
         on all Variables that are neither in "updates" nor in
         "no_default_updates".
+    accept_inplace : bool
+        True iff the graph can contain inplace operations prior to the
+        optimization phase (default is False).
     name : None or string
         Attaches a name to the profiling result of this function.
     allow_input_downcast : bool
...
@@ -717,7 +717,7 @@ class CLinker(link.Linker):
                       [get_c_declare, get_c_extract_out,
                        (get_c_sync, get_c_cleanup)]]
         else:
-            raise Exception("what the fuck")
+            raise Exception("this shouldn't be possible, please report this exception")
         builder, block = struct_variable_codeblocks(variable, policy,
                                                     id, symbol, sub)
...