Commit 33e7ed2a authored by Frédéric Bastien, committed by GitHub

Merge pull request #4814 from xoltar/comment-fixes

Fix typos and other doc/comment issues, add .idea to .gitignore, and …
@@ -37,4 +37,5 @@ Theano.suo
.ipynb_checkpoints
.pydevproject
.ropeproject
-core
\ No newline at end of file
+core
+.idea
@@ -872,9 +872,9 @@ To understand this profile here is some explanation of how optimizations work:
0.131s for callback
time - (name, class, index) - validate time
-Then it will print, with some additional indentation, each sub-optimizer's profile
-information. These sub-profiles are ordered by the time they took to execute,
-not by their execution order.
+Then it will print, with some additional indentation, each sub-optimizer's profile
+information. These sub-profiles are ordered by the time they took to execute,
+not by their execution order.
* ``OPT_FAST_RUN`` is the name of the optimizer
* 1.152s is the total time spent in that optimizer
......
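The hunk above documents that sub-optimizer profiles are printed sorted by the time they took, not by execution order. A minimal sketch of that reporting convention (the optimizer names and timings here are made up for illustration):

```python
# Hypothetical sub-optimizer timings as (name, seconds), listed in
# the order they executed.
profiles = [("merge", 0.120), ("canonicalize", 0.480), ("inplace_elemwise", 0.310)]

# Report sorted by time spent (descending), not by execution order,
# mirroring how the optimizer profile section is printed.
report = sorted(profiles, key=lambda p: p[1], reverse=True)
for name, seconds in report:
    print("%.3fs - %s" % (seconds, name))
```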
@@ -28,7 +28,7 @@ def function_dump(filename, inputs, outputs=None, mode=None, updates=None,
on_unused_input=None,
extra_tag_to_remove=None):
"""
-This is helpful to make a reproducable case for problem during Theano
+This is helpful to make a reproducible case for problems during Theano
compilation.
Ex:
@@ -36,13 +36,13 @@ def function_dump(filename, inputs, outputs=None, mode=None, updates=None,
replace `theano.function(...)` by
`theano.function_dump('filename.pkl', ...)`.
-If you see this, you where probably asked to use this function to
+If you see this, you were probably asked to use this function to
help debug a particular case during the compilation of a Theano
-function. `function_dump` allows to easily reproduce your
-compilation without asking any code. It pickle all the objects and
+function. `function_dump` allows you to easily reproduce your
+compilation without generating any code. It pickles all the objects and
parameters needed to reproduce a call to `theano.function()`. This
-include shared variables and there values. If you do not want
-that, you can set to replace shared variables values by zeros by
+includes shared variables and their values. If you do not want
+that, you can choose to replace shared variables values with zeros by
calling set_value(...) on them before calling `function_dump`.
To load such a dump and do the compilation:
@@ -53,9 +53,9 @@ def function_dump(filename, inputs, outputs=None, mode=None, updates=None,
>>> f = theano.function(**d) # doctest: +SKIP
Note:
-The parameter extra_tag_to_remove, is passed to the StripPickler used.
+The parameter `extra_tag_to_remove` is passed to the StripPickler used.
To pickle graph made by Blocks, it must be:
-['annotations', 'replacement_of', 'aggregation_scheme', 'roles']
+`['annotations', 'replacement_of', 'aggregation_scheme', 'roles']`
"""
assert isinstance(filename, string_types)
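The docstring above describes `function_dump`'s round trip: pickle every parameter needed to reproduce a `theano.function()` call, then reload and call `theano.function(**d)`. A minimal sketch of that same dump/reload pattern, with a plain Python function standing in for `theano.function` (the `make_adder` helper and file name are illustrative, not part of Theano):

```python
import os
import pickle
import tempfile

def make_adder(a, b):
    # Stand-in for theano.function(inputs, outputs): any callable whose
    # keyword arguments we want to capture and replay later.
    return a + b

# Dump: pickle the full set of call parameters, as function_dump does
# for the arguments of theano.function().
params = {"a": 2, "b": 3}
path = os.path.join(tempfile.mkdtemp(), "dump.pkl")
with open(path, "wb") as fh:
    pickle.dump(params, fh)

# Reload and reproduce the call, mirroring `f = theano.function(**d)`.
with open(path, "rb") as fh:
    d = pickle.load(fh)
result = make_adder(**d)
```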
@@ -100,6 +100,10 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
If True, do not perform any automatic update on Variables. If False
(default), perform them all. Else, perform automatic updates on all
Variables that are neither in "updates" nor in "no_default_updates".
+accept_inplace : bool
+True iff the graph can contain inplace operations prior to the
+optimization phase (default is False). *Note* this parameter is unsupported,
+and its use is not recommended.
name : str
An optional name for this function. The profile mode will print the time
spent in this function.
@@ -115,10 +119,10 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
Ops in the graph, an Exception will be raised.
allow_input_downcast: bool or None
True means that the values passed as inputs when calling the function
-can be silently downcasted to fit the dtype of the corresponding
+can be silently down-casted to fit the dtype of the corresponding
Variable, which may lose precision. False means that it will only be
cast to a more general, or precise, type. None (default) is almost like
-False, but allows downcasting of Python float scalars to floatX.
+False, but allows down-casting of Python float scalars to floatX.
profile: None, True, or ProfileStats instance
Accumulate profiling information into a given ProfileStats instance.
If argument is `True` then a new ProfileStats instance will be used.
@@ -209,9 +213,9 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
4. Linker
The linker uses a Python loop to execute the code associated
with all the Apply nodes in the graph in the correct order.
-The CVM is a linker that replaces this Python loop with a C
-loop to avoid continuously changing between Python and C.
-The CVM is faster for 2 reasons:
+The C Virtual Machine (CVM) is a linker that replaces this
+Python loop with a C loop to avoid continuously changing
+between Python and C. The CVM is faster for 2 reasons:
1) Its internal logic is in C, so no Python interpreter
overhead.
2) It makes native calls from the VM logic into thunks that
@@ -219,7 +223,6 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
The VM is a linker that was developed to prototype the CVM. it
was easier to develop the VM in Python then translate it to C instead
of just writing it in C from scratch.
-CVM stands for C Virtual Machine.
"""
if isinstance(outputs, dict):
@@ -252,7 +255,7 @@ def function(inputs, outputs=None, mode=None, updates=None, givens=None,
func_frame = stack[idx - 1]
while "theano/gof" in func_frame[0] and idx > 0:
idx -= 1
-# This can hapen if we call var.eval()
+# This can happen if we call var.eval()
func_frame = stack[idx - 1]
name = func_frame[0] + ':' + str(func_frame[1])
......
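The docstring hunk above describes the VM linker as a Python loop that executes the thunk of each Apply node in the correct order, a loop the CVM moves into C. A toy sketch of that dispatch loop, with made-up thunks reading and writing a shared storage map (the graph `out = x * y + 1` is purely illustrative):

```python
# Toy "VM": run precompiled thunks in topological order over a shared
# storage map -- the structure the docstring attributes to the Python
# VM linker (the CVM replaces the loop below with a C loop).
storage = {"x": 3, "y": 4, "mul": None, "out": None}

def mul_thunk():
    # Thunk for the first Apply node: mul = x * y
    storage["mul"] = storage["x"] * storage["y"]

def add1_thunk():
    # Thunk for the second Apply node: out = mul + 1
    storage["out"] = storage["mul"] + 1

thunks = [mul_thunk, add1_thunk]  # already in correct (topological) order

for thunk in thunks:  # the Python loop the CVM replaces
    thunk()

print(storage["out"])  # 13
```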
@@ -1727,8 +1727,8 @@ def orig_function(inputs, outputs, mode=None, accept_inplace=False,
Default of None means to use `config.mode` (see below for descriptive
string list).
name : str
-An optional name for this fct. If used, the profile mode will print the
-time spent in this fct.
+An optional name for this function. If used, the profile mode will print the
+time spent in this function.
accept_inplace : bool
True iff the graph can contain inplace operations prior to the
optimization phase (default is False).
......
@@ -5,8 +5,6 @@ WRITEME
from __future__ import absolute_import, print_function, division
import logging
import numpy
import theano
from theano import gof
import theano.gof.vm
@@ -18,35 +16,6 @@ from six import string_types
_logger = logging.getLogger('theano.compile.mode')
-def check_equal(x, y):
-"""
-Returns True iff x[0] and y[0] are equal (checks the dtype and shape if x
-and y are numpy.ndarray instances). Used internally.
-"""
-# I put the import here to allow using theano without scipy.
-import scipy.sparse as sp
-x, y = x[0], y[0]
-# TODO: bug in current scipy, two sparse matrices are never equal,
-# remove when moving to 0.7
-if sp.issparse(x):
-x = x.todense()
-if sp.issparse(y):
-y = y.todense()
-if isinstance(x, numpy.ndarray) and isinstance(y, numpy.ndarray):
-if (x.dtype != y.dtype or
-x.shape != y.shape or
-numpy.any(abs(x - y) > 1e-10)):
-raise Exception("Output mismatch.",
-{'performlinker': x, 'clinker': y})
-else:
-if x != y:
-raise Exception("Output mismatch.",
-{'performlinker': x, 'clinker': y})
# If a string is passed as the linker argument in the constructor for
# Mode, it will be used as the key to retrieve the real linker in this
# dictionary
@@ -384,7 +353,7 @@ predefined_modes = {'FAST_COMPILE': FAST_COMPILE,
'FAST_RUN': FAST_RUN,
}
-instanciated_default_mode = None
+instantiated_default_mode = None
def get_mode(orig_string):
@@ -395,17 +364,17 @@ def get_mode(orig_string):
if not isinstance(string, string_types):
return string # it is hopefully already a mode...
-global instanciated_default_mode
+global instantiated_default_mode
# The default mode is cached. However, config.mode can change
-# If instanciated_default_mode has the right class, use it.
-if orig_string is None and instanciated_default_mode:
+# If instantiated_default_mode has the right class, use it.
+if orig_string is None and instantiated_default_mode:
if string in predefined_modes:
default_mode_class = predefined_modes[string].__class__.__name__
else:
default_mode_class = string
-if (instanciated_default_mode.__class__.__name__ ==
+if (instantiated_default_mode.__class__.__name__ ==
default_mode_class):
-return instanciated_default_mode
+return instantiated_default_mode
if string in ['Mode', 'ProfileMode', 'DebugMode', 'NanGuardMode']:
if string == 'DebugMode':
@@ -422,6 +391,7 @@ def get_mode(orig_string):
# This might be required if the string is 'ProfileMode'
from .profilemode import ProfileMode # noqa
from .profilemode import prof_mode_instance_to_print
+# TODO: Can't we look up the name and invoke it rather than using eval here?
ret = eval(string +
'(linker=config.linker, optimizer=config.optimizer)')
elif string in predefined_modes:
@@ -437,7 +407,7 @@ def get_mode(orig_string):
ret = ret.including(*theano.config.optimizer_including.split(':'))
if theano.config.optimizer_requiring:
ret = ret.requiring(*theano.config.optimizer_requiring.split(':'))
-instanciated_default_mode = ret
+instantiated_default_mode = ret
# must tell python to print the summary at the end.
if string == 'ProfileMode':
......
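The TODO added in the hunk above asks whether the mode class could be looked up by name and called directly instead of being instantiated via `eval`. One common sketch of that lookup pattern (the `Mode`/`DebugMode` stubs, `MODE_CLASSES` registry, and `make_mode` helper here are illustrative, not Theano's actual code):

```python
class Mode(object):
    # Minimal stand-in for theano.compile.mode.Mode.
    def __init__(self, linker=None, optimizer=None):
        self.linker = linker
        self.optimizer = optimizer

class DebugMode(Mode):
    pass

# Name -> class registry: replaces
#     ret = eval(string + '(linker=config.linker, optimizer=config.optimizer)')
# with an explicit lookup followed by a normal call.
MODE_CLASSES = {cls.__name__: cls for cls in (Mode, DebugMode)}

def make_mode(name, linker, optimizer):
    try:
        cls = MODE_CLASSES[name]
    except KeyError:
        raise ValueError("unknown mode: %r" % name)
    return cls(linker=linker, optimizer=optimizer)

m = make_mode("DebugMode", linker="c|py", optimizer="fast_run")
```

Unlike `eval`, a failed lookup raises a clear error instead of executing an arbitrary string.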
@@ -306,6 +306,10 @@ def pfunc(params, outputs=None, mode=None, updates=None, givens=None,
If False (default), perform them all. Else, perform automatic updates
on all Variables that are neither in "updates" nor in
"no_default_updates".
+accept_inplace : bool
+True iff the graph can contain inplace operations prior to the
+optimization phase (default is False). *Note* this parameter is unsupported,
+and its use is not recommended.
name : None or string
Attaches a name to the profiling result of this function.
allow_input_downcast : bool
......
@@ -718,7 +718,7 @@ class CLinker(link.Linker):
[get_c_declare, get_c_extract_out,
(get_c_sync, get_c_cleanup)]]
else:
-raise Exception("what the fuck")
+raise Exception("this shouldn't be possible, please report this exception")
builder, block = struct_variable_codeblocks(variable, policy,
id, symbol, sub)
......