Commit fa896acb authored by nouiz

Merge pull request #200 from delallea/typos

Typos
@@ -79,7 +79,7 @@ and related methods allow the op to generate c code that will be
compiled and linked by Theano. On the other hand, the ``make_thunk``
method will be called only once during compilation and should generate
a ``thunk``: a standalone function that when called will do the wanted computations.
-This is usefull if you want to generate code and compile it yourself. For
+This is useful if you want to generate code and compile it yourself. For
example, this allows you to use PyCUDA to compile gpu code.
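The thunk idea above can be sketched in plain Python. This is a hedged illustration, not Theano's actual `make_thunk` signature: the one-element-list storage cells follow Theano's storage convention, but the `compute_fn` parameter and the other names are invented for the example.

```python
# Illustrative sketch of a "thunk": a standalone function built once at
# compile time that, when called, reads its inputs from storage cells and
# writes the result into an output cell.  Cells are one-element lists.
def make_thunk(compute_fn, input_storage, output_storage):
    def thunk():
        inputs = [cell[0] for cell in input_storage]
        output_storage[0][0] = compute_fn(*inputs)
    return thunk

in_storage = [[2.0], [3.0]]
out_storage = [[None]]
thunk = make_thunk(lambda a, b: a * b, in_storage, out_storage)
thunk()
print(out_storage[0][0])  # 6.0
```

Because the thunk closes over the storage cells, updating a cell and calling it again recomputes the output in place, which is what lets the runtime call it repeatedly without recompiling.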
Also there are 2 methods that are highly recommended to be implemented. They are
@@ -143,12 +143,12 @@ following methods:
This function is needed for shape optimization. ``shapes`` is a
list with one tuple for each input of the Apply node (which corresponds
-to the inputs of the op). Each tuple contains 1 element for
-each dimension of the corresponding input. The value is the
-shape (number of elements) along the corresponding dimension of that
+to the inputs of the op). Each tuple contains as many elements as the
+number of dimensions of the corresponding input. The value of each element
+is the shape (number of items) along the corresponding dimension of that
specific input.
-While this might sound complicated, it is nothing more then the shape
+While this might sound complicated, it is nothing more than the shape
of each input as symbolic variables (one per dimension).
The function should return a list with one tuple for each output.
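As a sketch of what this contract looks like for an elementwise-style op, where every output has the same shape as the first input, here is a plain-Python illustration; the function and argument names are invented for the example and are not Theano's API.

```python
# Sketch of the infer_shape contract: `input_shapes` holds one tuple per
# input, with one (symbolic) length per dimension; we must return one
# tuple per output.  For an elementwise-style op every output simply has
# the shape of the first input.
def infer_shape_elemwise(num_outputs, input_shapes):
    first_input_shape = input_shapes[0]
    return [first_input_shape for _ in range(num_outputs)]

print(infer_shape_elemwise(1, [(2, 3), (2, 3)]))  # [(2, 3)]
```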
@@ -333,7 +333,7 @@ In the following code, we use our new Op:
Note that there is an implicit call to
``double.filter()`` on each argument, so if we give integers as inputs
-they are magically casted to the right type. Now, what if we try this?
+they are magically cast to the right type. Now, what if we try this?
>>> x = double('x')
>>> z = mul(x, 2)
@@ -27,7 +27,7 @@ default values.
.. method:: filter(value, strict=False, allow_downcast=None)
This casts a value to match the Type and returns the
-casted value. If ``value`` is incompatible with the Type,
+cast value. If ``value`` is incompatible with the Type,
the method must raise an exception. If ``strict`` is True, ``filter`` must return a
reference to ``value`` (i.e. casting prohibited).
If ``strict`` is False, then casting may happen, but downcasting should
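The strict/allow_downcast contract can be sketched in plain Python. Everything below is an illustrative stand-in (the width table, the string dtypes, and `filter_policy` are not Theano code); it only mirrors the policy stated above: strict forbids any casting and returns the value itself, non-strict allows upcasting freely but refuses downcasts unless explicitly permitted.

```python
# Toy model of the filter() casting policy.
WIDTH = {'int8': 8, 'int16': 16, 'int32': 32, 'int64': 64,
         'float32': 32, 'float64': 64}

def filter_policy(value, value_dtype, target_dtype,
                  strict=False, allow_downcast=None):
    if strict:
        # Casting prohibited: the value must already match, and the very
        # same reference is returned.
        if value_dtype != target_dtype:
            raise TypeError('strict: %s != %s' % (value_dtype, target_dtype))
        return value
    # Non-strict: casting may happen, but downcasting (loss of precision)
    # only when allow_downcast is true.
    if WIDTH[value_dtype] > WIDTH[target_dtype] and not allow_downcast:
        raise TypeError('refusing to downcast %s to %s'
                        % (value_dtype, target_dtype))
    return (value, target_dtype)  # the (possibly) cast value
```

In the real API, `allow_downcast=None` behaves almost like False but still lets Python float scalars downcast to floatX; that special case is left out of this toy version.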
@@ -84,16 +84,16 @@ then go to your fork's github page on the github website, select your feature
branch and hit the "Pull Request" button in the top right corner.
If you don't get any feedback, bug us on the theano-dev mailing list.
-When the your pull request have been merged, you can delete the branch
-from the github list of branch. That is usefull to don't have too many
-that stay there!
+When your pull request has been merged, you can delete the branch
+from the github list of branches. This is useful to avoid having too many
+branches staying there. Deleting this remote branch is achieved with:
.. code-block:: bash
git push origin :my_shiny_feature
-You can keep you local repo up to date with central/master with those commands:
+You can keep your local repository up to date with central/master with those
+commands:
.. code-block:: bash
@@ -101,14 +101,14 @@ You can keep you local repo up to date with central/master with those commands:
git fetch central
git merge central/master
-If you want to fix a commit done in a pull request(i.e. fix small
-typo) to keep the history clean, you can do it like this:
+If you want to fix a commit already submitted within a pull request (e.g. to
+fix a small typo), you can do it like this to keep history clean:
.. code-block:: bash
git checkout branch
git checkout my_shiny_feature
git commit --amend
-git push -u origin my_shiny_feature:my_shiny_feature
+git push -u origin my_shiny_feature:my_shiny_feature
Coding Style Auto Check
@@ -64,7 +64,7 @@ The proposal is for two new ways of creating a *shared* variable:
:param value: A value to associate with this variable (a new container will be created).
-:param strict: True -> assignments to .value will not be casted or copied, so they must
+:param strict: True -> assignments to .value will not be cast or copied, so they must
have the correct type.
:param container: The container to use for this variable. Illegal to pass this as well
@@ -185,7 +185,7 @@ Corner cases and exotic examples can be found in the tests.
:param mutable: True -> function is allowed to modify this argument.
-:param strict: False -> function arguments may be copied or casted to match the
+:param strict: False -> function arguments may be copied or cast to match the
type required by the parameter `variable`. True -> function arguments must exactly match the type
required by `variable`.
@@ -59,7 +59,7 @@ def function(inputs, outputs=None, mode=None, updates=[], givens=[],
:param allow_input_downcast: True means that the values passed as
inputs when calling the function can be silently downcasted to fit
the dtype of the corresponding Variable, which may lose precision.
-False means that it will only be casted to a more general, or
+False means that it will only be cast to a more general, or
precise, type. None (default) is almost like False, but allows
downcasting of Python float scalars to floatX.
@@ -29,13 +29,13 @@ class SymbolicInput(object):
strict: Bool (default: False)
True: means that the value you pass for this input must have exactly the right type
-False: the value you pass for this input may be casted automatically to the proper type
+False: the value you pass for this input may be cast automatically to the proper type
allow_downcast: Bool or None (default: None)
Only applies when `strict` is False.
True: the value you pass for this input can be silently
downcasted to fit the right type, which may lose precision.
-False: the value will only be casted to a more general, or precise, type.
+False: the value will only be cast to a more general, or precise, type.
None: Almost like False, but allows downcast of Python floats to floatX.
autoname: Bool (default: True)
@@ -173,7 +173,7 @@ class In(SymbolicInput):
Only applies when `strict` is False.
True: the value you pass for this input can be silently
downcasted to fit the right type, which may lose precision.
-False: the value will only be casted to a more general, or precise, type.
+False: the value will only be cast to a more general, or precise, type.
None: Almost like False, but allows downcast of Python floats to floatX.
autoname: Bool (default: True)
@@ -274,12 +274,12 @@ class Param(object):
False: do not permit any output to be aliased to the input
-:param strict: False -> function arguments may be copied or casted to match the
+:param strict: False -> function arguments may be copied or cast to match the
type required by the parameter `variable`. True -> function arguments must exactly match the type
required by `variable`.
:param allow_downcast: Only applies if `strict` is False.
-True -> allow assigned value to lose precision when casted during assignment.
+True -> allow assigned value to lose precision when cast during assignment.
False -> never allow precision loss.
None -> only allow downcasting of a Python float to a scalar floatX.
@@ -346,7 +346,7 @@ def pfunc(params, outputs=None, mode=None, updates=[], givens=[],
:param allow_input_downcast: True means that the values passed as
inputs when calling the function can be silently downcasted to fit
the dtype of the corresponding Variable, which may lose precision.
-False means that it will only be casted to a more general, or
+False means that it will only be cast to a more general, or
precise, type. None (default) is almost like False, but allows
downcasting of Python float scalars to floatX.
@@ -53,11 +53,11 @@ class SharedVariable(Variable):
:param value: A value to associate with this variable (a new container will be created).
-:param strict: True -> assignments to .value will not be casted or copied, so they must
+:param strict: True -> assignments to .value will not be cast or copied, so they must
have the correct type.
:param allow_downcast: Only applies if `strict` is False.
-True -> allow assigned value to lose precision when casted during assignment.
+True -> allow assigned value to lose precision when cast during assignment.
False -> never allow precision loss.
None -> only allow downcasting of a Python float to a scalar floatX.
@@ -95,7 +95,7 @@ class Test_SharedVariable(unittest.TestCase):
value=numpy.asarray([1., 2.]),
strict=False)
-# check that assignments to value are casted properly
+# check that assignments to value are cast properly
u.set_value([3,4])
assert type(u.get_value()) is numpy.ndarray
assert str(u.get_value(borrow=True).dtype) == 'float64'
@@ -263,14 +263,14 @@ class TypedParam(ConfigParam):
def __init__(self, default, mytype, is_valid=None, allow_override=True):
self.mytype = mytype
def filter(val):
-casted_val = mytype(val)
+cast_val = mytype(val)
if callable(is_valid):
-if is_valid(casted_val):
-return casted_val
+if is_valid(cast_val):
+return cast_val
else:
raise ValueError('Invalid value (%s) for configuration variable "%s".'
% (val, self.fullname), val)
-return casted_val
+return cast_val
super(TypedParam, self).__init__(default, filter, allow_override=allow_override)
def __str__(self):
return '%s (%s) ' % (self.fullname, self.mytype)
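The cast-then-validate pattern in `filter` above is useful beyond ConfigParam; here is a stand-alone sketch of it (the `make_filter` helper and the positive-int example are hypothetical, not part of Theano):

```python
# Stand-alone version of the pattern used by TypedParam.filter: cast the
# raw value first, then run the optional validator on the *cast* value,
# raising ValueError on rejection.
def make_filter(mytype, is_valid=None):
    def filter(val):
        cast_val = mytype(val)
        if callable(is_valid) and not is_valid(cast_val):
            raise ValueError('Invalid value (%s)' % (val,))
        return cast_val
    return filter

positive_int = make_filter(int, is_valid=lambda v: v > 0)
print(positive_int('3'))  # 3
```

Validating the cast value rather than the raw one matters: `'3' > 0` is not even a meaningful comparison, while `int('3') > 0` is.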
@@ -2098,7 +2098,7 @@ def profile_printer(fct_name, compile_time, fct_call_time, fct_call,
if any([x[1].op.__class__.__name__.lower().startswith("gpu") for x in apply_time.keys()]):
local_time = sum(apply_time.values())
print
-print 'Some info usefull for gpu:'
+print 'Some info useful for gpu:'
cpu=0
gpu=0
@@ -163,8 +163,8 @@ def local_gpu_elemwise_0(node):
elif numpy.all([i.type.dtype in upcastable for i in node.inputs]):
# second - establish that a new node with upcasted inputs has the same outputs
# types as the original node
-casted = node.op.make_node(*[tensor.cast(i, 'float32') for i in node.inputs])
-if [o.type for o in casted.outputs] == [o.type for o in node.outputs]:
+upcasted = node.op.make_node(*[tensor.cast(i, 'float32') for i in node.inputs])
+if [o.type for o in upcasted.outputs] == [o.type for o in node.outputs]:
new_inputs = [gpu_from_host(tensor.cast(i, 'float32')) for i in node.inputs]
gpu_elemwise = new_op(*new_inputs)
@@ -901,7 +901,7 @@ test_shared_options = theano.tensor.tests.test_sharedvar.makeSharedTester(
shared_borrow_true_alias_ = True,#True when the original value is already a CudaNdarray!
set_value_borrow_true_alias_ = True,
set_value_inplace_ = True,
-set_casted_value_inplace_ = False,
+set_cast_value_inplace_ = False,
shared_constructor_accept_ndarray_ = True,
internal_type_ = cuda_ndarray.CudaNdarray,
test_internal_type_ = lambda a: isinstance(a,cuda_ndarray.CudaNdarray),
@@ -919,7 +919,7 @@ test_shared_options2 = theano.tensor.tests.test_sharedvar.makeSharedTester(
shared_borrow_true_alias_ = False,
set_value_borrow_true_alias_ = False,
set_value_inplace_ = True,
-set_casted_value_inplace_ = True,
+set_cast_value_inplace_ = True,
shared_constructor_accept_ndarray_ = True,
internal_type_ = cuda_ndarray.CudaNdarray,
test_internal_type_ = lambda a: isinstance(a,cuda_ndarray.CudaNdarray),
@@ -65,7 +65,7 @@ class CudaNdarrayType(Type):
return cuda.filter(data, self.broadcastable, strict, old_data)
else: # (not strict) and (not allow_downcast)
-# Check if data.dtype can be accurately casted to self.dtype
+# Check if data.dtype can be accurately cast to self.dtype
if isinstance(data, numpy.ndarray):
up_dtype = scal.upcast(self.dtype, data.dtype)
if up_dtype == self.dtype:
@@ -923,7 +923,7 @@ test_shared_options = theano.tensor.tests.test_sharedvar.makeSharedTester(
shared_borrow_true_alias_=True,
set_value_borrow_true_alias_=True,
set_value_inplace_=False,
-set_casted_value_inplace_=False,
+set_cast_value_inplace_=False,
shared_constructor_accept_ndarray_=False,
internal_type_=scipy.sparse.csc_matrix,
test_internal_type_=scipy.sparse.issparse,
@@ -244,7 +244,7 @@ class NumpyAutocaster(object):
x_ = theano._asarray(x, dtype=dtype)
if numpy.all(x == x_):
break
-# returns either an exact x_==x, or the last casted x_
+# returns either an exact x_==x, or the last cast x_
return x_
autocast_int = NumpyAutocaster(('int8', 'int16', 'int32', 'int64'))
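The loop above tries candidate dtypes from narrowest to widest and keeps the first cast that reproduces the value exactly, otherwise falling back to the last (widest) cast. A stdlib-only sketch of that selection, with integer ranges standing in for numpy dtypes (the names here are illustrative, not Theano's):

```python
# Pick the narrowest candidate dtype that represents `x` exactly,
# mirroring the `numpy.all(x == x_)` check in NumpyAutocaster.
INT_RANGE = {'int8': (-2**7, 2**7 - 1), 'int16': (-2**15, 2**15 - 1),
             'int32': (-2**31, 2**31 - 1), 'int64': (-2**63, 2**63 - 1)}

def autocast_int_dtype(x, dtypes=('int8', 'int16', 'int32', 'int64')):
    for dtype in dtypes:
        lo, hi = INT_RANGE[dtype]
        if lo <= x <= hi:      # this cast would be exact: x_ == x
            return dtype
    return dtypes[-1]          # otherwise the last (widest) cast wins

print(autocast_int_dtype(5))       # int8
print(autocast_int_dtype(40000))   # int32
```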
@@ -1126,7 +1126,7 @@ def _gemm_from_factored_list(lst):
return False
lst2 = []
-# Remove the tuple that can't be casted correctly.
+# Remove the tuple that can't be cast correctly.
# This can happen when we try to cast a complex to a real
for sM in lst:
if is_pair(sM):
@@ -92,7 +92,7 @@ def scalarconsts_rest(inputs):
def broadcast_like(value, template, env, dtype=None):
"""Return a Variable with the same shape and dtype as the template,
-filled by broadcasting value through it. `value` will be casted as
+filled by broadcasting value through it. `value` will be cast as
necessary.
"""
@@ -561,7 +561,7 @@ _good_broadcast_div_mod_normal_float_no_complex = dict(
dtype_mixup_1=(rand(2, 3), randint_nonzero(2, 3)),
dtype_mixup_2=(randint_nonzero(2, 3), rand(2, 3)),
# Fix problem with integers and uintegers and add them.
-# Them remove their specific addition to CeilIntDivTester tests.
+# Then remove their specific addition to CeilIntDivTester tests.
# integer=(randint(2, 3), randint_nonzero(2, 3)),
# uinteger=(randint(2, 3).astype("uint8"),
# randint_nonzero(2, 3).astype("uint8")),
@@ -1040,7 +1040,7 @@ class BaseGemv(object):
# The only op in the graph is a dot.
# In the gemm case, we create a dot22 for that case
# There is no dot21.
-# Creating one is not usefull as this is not faster(in fact it would be slower!
+# Creating one is not useful as this is not faster(in fact it would be slower!
# as more code would be in python, numpy.dot will call gemv itself)
# See ticket 594
"""
@@ -17,7 +17,7 @@ def makeSharedTester(shared_constructor_,
shared_borrow_true_alias_,
set_value_borrow_true_alias_,
set_value_inplace_,
-set_casted_value_inplace_,
+set_cast_value_inplace_,
shared_constructor_accept_ndarray_,
internal_type_,
test_internal_type_,
@@ -38,7 +38,7 @@ def makeSharedTester(shared_constructor_,
:param set_value_borrow_true_alias_: Should set_value(val,borrow=True) reuse the val memory space
:param set_value_inplace_: Should this shared variable overwrite the current
memory when the new value is an ndarray
-:param set_casted_value_inplace_: Should this shared variable overwrite the
+:param set_cast_value_inplace_: Should this shared variable overwrite the
current memory when the new value is of the same
type as the internal type.
:param shared_constructor_accept_ndarray_: Do the shared_constructor accept an ndarray as input?
@@ -71,7 +71,7 @@ def makeSharedTester(shared_constructor_,
ref_fct = staticmethod(ref_fct_)
set_value_borrow_true_alias = set_value_borrow_true_alias_
set_value_inplace = set_value_inplace_
-set_casted_value_inplace = set_casted_value_inplace_
+set_cast_value_inplace = set_cast_value_inplace_
shared_constructor_accept_ndarray = shared_constructor_accept_ndarray_
cast_value = staticmethod(cast_value_)
op_by_matrix = op_by_matrix_
@@ -379,14 +379,14 @@ def makeSharedTester(shared_constructor_,
self.ref_fct(self.cast_value(nd)))
assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_value_inplace
-# Test by set_value with borrow=False when new data casted.
+# Test by set_value with borrow=False when new data cast.
# specificaly useful for gpu data
nd += 1
old_data = x_shared.container.storage[0]
x_shared.set_value(self.cast_value(nd), borrow=False)
assert numpy.allclose(self.ref_fct(x_shared.get_value(borrow=True)),
self.ref_fct(self.cast_value(nd)))
-assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_casted_value_inplace
+assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_cast_value_inplace
# Test by set_value with borrow=True
nd += 1
@@ -396,12 +396,12 @@ def makeSharedTester(shared_constructor_,
self.ref_fct(self.cast_value(nd)))
assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_value_inplace
-# Test by set_value with borrow=True when new data casted.
+# Test by set_value with borrow=True when new data cast.
nd += 1
old_data = x_shared.container.storage[0]
x_shared.set_value(self.cast_value(nd.copy()), borrow=True)
assert numpy.allclose(self.ref_fct(x_shared.get_value(borrow=True)), self.ref_fct(self.cast_value(nd)))
-assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_casted_value_inplace
+assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_cast_value_inplace
def test_specify_shape(self):
dtype = self.dtype
@@ -628,7 +628,7 @@ test_shared_options=makeSharedTester(
shared_borrow_true_alias_ = True,
set_value_borrow_true_alias_ = True,
set_value_inplace_ = False,
-set_casted_value_inplace_ = False,
+set_cast_value_inplace_ = False,
shared_constructor_accept_ndarray_ = True,
internal_type_ = numpy.ndarray,
test_internal_type_ = lambda a: isinstance(a,numpy.ndarray),