Commit d07818e2 authored by Olivier Delalleau

Typo fix: casted -> cast

Up/downcasted is correct though. English is weird.
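The distinction the edited docstrings describe — a silent, possibly lossy downcast versus an always-safe cast to a more general type — can be sketched with plain NumPy casting rules. This is an illustrative sketch only; `filter_value` is a hypothetical helper, not Theano's actual implementation:

```python
import numpy as np

def filter_value(value, target_dtype, allow_downcast=False):
    # Casts to a more general/precise type are always accepted; lossy
    # downcasts are accepted only when allow_downcast is True.
    value = np.asarray(value)
    if np.can_cast(value.dtype, target_dtype, casting='safe'):
        return value.astype(target_dtype)   # safe upcast, no precision loss
    if allow_downcast:
        return value.astype(target_dtype)   # silent downcast, may lose precision
    raise TypeError('refusing lossy cast from %s to %s'
                    % (value.dtype, target_dtype))

filter_value(np.asarray([1, 2], dtype='int32'), 'float64')   # upcast: accepted
filter_value(np.asarray([1.5], dtype='float64'), 'float32',
             allow_downcast=True)                            # lossy but allowed
```

The `None` setting mentioned in the docstrings adds one more carve-out on top of this: Python float scalars may still be downcast to floatX.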
Parent aaadb142
@@ -59,7 +59,7 @@ def function(inputs, outputs=None, mode=None, updates=[], givens=[],
     :param allow_input_downcast: True means that the values passed as
         inputs when calling the function can be silently downcasted to fit
         the dtype of the corresponding Variable, which may lose precision.
-        False means that it will only be casted to a more general, or
+        False means that it will only be cast to a more general, or
         precise, type. None (default) is almost like False, but allows
         downcasting of Python float scalars to floatX.
...
@@ -29,13 +29,13 @@ class SymbolicInput(object):
     strict: Bool (default: False)
         True: means that the value you pass for this input must have exactly the right type
-        False: the value you pass for this input may be casted automatically to the proper type
+        False: the value you pass for this input may be cast automatically to the proper type
     allow_downcast: Bool or None (default: None)
         Only applies when `strict` is False.
         True: the value you pass for this input can be silently
             downcasted to fit the right type, which may lose precision.
-        False: the value will only be casted to a more general, or precise, type.
+        False: the value will only be cast to a more general, or precise, type.
         None: Almost like False, but allows downcast of Python floats to floatX.
     autoname: Bool (default: True)
@@ -173,7 +173,7 @@ class In(SymbolicInput):
         Only applies when `strict` is False.
         True: the value you pass for this input can be silently
             downcasted to fit the right type, which may lose precision.
-        False: the value will only be casted to a more general, or precise, type.
+        False: the value will only be cast to a more general, or precise, type.
         None: Almost like False, but allows downcast of Python floats to floatX.
     autoname: Bool (default: True)
...
@@ -274,12 +274,12 @@ class Param(object):
         False: do not permit any output to be aliased to the input
-    :param strict: False -> function arguments may be copied or casted to match the
+    :param strict: False -> function arguments may be copied or cast to match the
         type required by the parameter `variable`. True -> function arguments must exactly match the type
         required by `variable`.
     :param allow_downcast: Only applies if `strict` is False.
-        True -> allow assigned value to lose precision when casted during assignment.
+        True -> allow assigned value to lose precision when cast during assignment.
         False -> never allow precision loss.
         None -> only allow downcasting of a Python float to a scalar floatX.
@@ -346,7 +346,7 @@ def pfunc(params, outputs=None, mode=None, updates=[], givens=[],
     :param allow_input_downcast: True means that the values passed as
         inputs when calling the function can be silently downcasted to fit
         the dtype of the corresponding Variable, which may lose precision.
-        False means that it will only be casted to a more general, or
+        False means that it will only be cast to a more general, or
         precise, type. None (default) is almost like False, but allows
         downcasting of Python float scalars to floatX.
...
@@ -53,11 +53,11 @@ class SharedVariable(Variable):
     :param value: A value to associate with this variable (a new container will be created).
-    :param strict: True -> assignments to .value will not be casted or copied, so they must
+    :param strict: True -> assignments to .value will not be cast or copied, so they must
         have the correct type.
     :param allow_downcast: Only applies if `strict` is False.
-        True -> allow assigned value to lose precision when casted during assignment.
+        True -> allow assigned value to lose precision when cast during assignment.
         False -> never allow precision loss.
         None -> only allow downcasting of a Python float to a scalar floatX.
...
@@ -95,7 +95,7 @@ class Test_SharedVariable(unittest.TestCase):
                 value=numpy.asarray([1., 2.]),
                 strict=False)
-        # check that assignments to value are casted properly
+        # check that assignments to value are cast properly
         u.set_value([3,4])
         assert type(u.get_value()) is numpy.ndarray
         assert str(u.get_value(borrow=True).dtype) == 'float64'
...
@@ -263,14 +263,14 @@ class TypedParam(ConfigParam):
     def __init__(self, default, mytype, is_valid=None, allow_override=True):
         self.mytype = mytype
         def filter(val):
-            casted_val = mytype(val)
+            cast_val = mytype(val)
             if callable(is_valid):
-                if is_valid(casted_val):
-                    return casted_val
+                if is_valid(cast_val):
+                    return cast_val
                 else:
                     raise ValueError('Invalid value (%s) for configuration variable "%s".'
                             % (val, self.fullname), val)
-            return casted_val
+            return cast_val
         super(TypedParam, self).__init__(default, filter, allow_override=allow_override)
     def __str__(self):
         return '%s (%s) ' % (self.fullname, self.mytype)
...
@@ -163,8 +163,8 @@ def local_gpu_elemwise_0(node):
     elif numpy.all([i.type.dtype in upcastable for i in node.inputs]):
         # second - establish that a new node with upcasted inputs has the same outputs
         # types as the original node
-        casted = node.op.make_node(*[tensor.cast(i, 'float32') for i in node.inputs])
-        if [o.type for o in casted.outputs] == [o.type for o in node.outputs]:
+        upcasted = node.op.make_node(*[tensor.cast(i, 'float32') for i in node.inputs])
+        if [o.type for o in upcasted.outputs] == [o.type for o in node.outputs]:
             new_inputs = [gpu_from_host(tensor.cast(i, 'float32')) for i in node.inputs]
             gpu_elemwise = new_op(*new_inputs)
...
@@ -901,7 +901,7 @@ test_shared_options = theano.tensor.tests.test_sharedvar.makeSharedTester(
     shared_borrow_true_alias_ = True,#True when the original value is already a CudaNdarray!
     set_value_borrow_true_alias_ = True,
     set_value_inplace_ = True,
-    set_casted_value_inplace_ = False,
+    set_cast_value_inplace_ = False,
     shared_constructor_accept_ndarray_ = True,
     internal_type_ = cuda_ndarray.CudaNdarray,
     test_internal_type_ = lambda a: isinstance(a,cuda_ndarray.CudaNdarray),
@@ -919,7 +919,7 @@ test_shared_options2 = theano.tensor.tests.test_sharedvar.makeSharedTester(
     shared_borrow_true_alias_ = False,
     set_value_borrow_true_alias_ = False,
     set_value_inplace_ = True,
-    set_casted_value_inplace_ = True,
+    set_cast_value_inplace_ = True,
     shared_constructor_accept_ndarray_ = True,
     internal_type_ = cuda_ndarray.CudaNdarray,
     test_internal_type_ = lambda a: isinstance(a,cuda_ndarray.CudaNdarray),
...
@@ -65,7 +65,7 @@ class CudaNdarrayType(Type):
             return cuda.filter(data, self.broadcastable, strict, old_data)
         else: # (not strict) and (not allow_downcast)
-            # Check if data.dtype can be accurately casted to self.dtype
+            # Check if data.dtype can be accurately cast to self.dtype
             if isinstance(data, numpy.ndarray):
                 up_dtype = scal.upcast(self.dtype, data.dtype)
                 if up_dtype == self.dtype:
...
@@ -923,7 +923,7 @@ test_shared_options = theano.tensor.tests.test_sharedvar.makeSharedTester(
     shared_borrow_true_alias_=True,
     set_value_borrow_true_alias_=True,
     set_value_inplace_=False,
-    set_casted_value_inplace_=False,
+    set_cast_value_inplace_=False,
     shared_constructor_accept_ndarray_=False,
     internal_type_=scipy.sparse.csc_matrix,
     test_internal_type_=scipy.sparse.issparse,
...
@@ -244,7 +244,7 @@ class NumpyAutocaster(object):
             x_ = theano._asarray(x, dtype=dtype)
             if numpy.all(x == x_):
                 break
-        # returns either an exact x_==x, or the last casted x_
+        # returns either an exact x_==x, or the last cast x_
         return x_
 autocast_int = NumpyAutocaster(('int8', 'int16', 'int32', 'int64'))
...
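The NumpyAutocaster loop in the hunk above tries each candidate dtype in order and stops at the first one that represents the value exactly; if none does, the last (widest) cast wins. A standalone sketch of that loop, simplified to float dtypes and plain NumPy (not the class's actual code):

```python
import numpy as np

def autocast(x, dtypes=('float32', 'float64')):
    # Try progressively wider dtypes; keep the first exact representation.
    for dtype in dtypes:
        x_ = np.asarray(x, dtype=dtype)
        if np.all(x == x_):
            break
    # Either an exact x_ == x, or the last cast x_.
    return x_

autocast(np.float64(0.5))   # 0.5 is exactly representable in float32
autocast(np.float64(0.1))   # not exact in float32, falls through to float64
```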
@@ -1126,7 +1126,7 @@ def _gemm_from_factored_list(lst):
         return False
     lst2 = []
-    # Remove the tuple that can't be casted correctly.
+    # Remove the tuple that can't be cast correctly.
     # This can happen when we try to cast a complex to a real
     for sM in lst:
         if is_pair(sM):
...
@@ -92,7 +92,7 @@ def scalarconsts_rest(inputs):
 def broadcast_like(value, template, env, dtype=None):
     """Return a Variable with the same shape and dtype as the template,
-    filled by broadcasting value through it. `value` will be casted as
+    filled by broadcasting value through it. `value` will be cast as
     necessary.
     """
...
@@ -17,7 +17,7 @@ def makeSharedTester(shared_constructor_,
                      shared_borrow_true_alias_,
                      set_value_borrow_true_alias_,
                      set_value_inplace_,
-                     set_casted_value_inplace_,
+                     set_cast_value_inplace_,
                      shared_constructor_accept_ndarray_,
                      internal_type_,
                      test_internal_type_,
@@ -38,7 +38,7 @@ def makeSharedTester(shared_constructor_,
     :param set_value_borrow_true_alias_: Should set_value(val,borrow=True) reuse the val memory space
     :param set_value_inplace_: Should this shared variable overwrite the current
                                memory when the new value is an ndarray
-    :param set_casted_value_inplace_: Should this shared variable overwrite the
+    :param set_cast_value_inplace_: Should this shared variable overwrite the
                                       current memory when the new value is of the same
                                       type as the internal type.
     :param shared_constructor_accept_ndarray_: Do the shared_constructor accept an ndarray as input?
@@ -71,7 +71,7 @@ def makeSharedTester(shared_constructor_,
         ref_fct = staticmethod(ref_fct_)
         set_value_borrow_true_alias = set_value_borrow_true_alias_
         set_value_inplace = set_value_inplace_
-        set_casted_value_inplace = set_casted_value_inplace_
+        set_cast_value_inplace = set_cast_value_inplace_
         shared_constructor_accept_ndarray = shared_constructor_accept_ndarray_
         cast_value = staticmethod(cast_value_)
         op_by_matrix = op_by_matrix_
@@ -379,14 +379,14 @@ def makeSharedTester(shared_constructor_,
                 self.ref_fct(self.cast_value(nd)))
             assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_value_inplace
-            # Test by set_value with borrow=False when new data casted.
+            # Test by set_value with borrow=False when new data cast.
             # specificaly useful for gpu data
             nd += 1
             old_data = x_shared.container.storage[0]
             x_shared.set_value(self.cast_value(nd), borrow=False)
             assert numpy.allclose(self.ref_fct(x_shared.get_value(borrow=True)),
                 self.ref_fct(self.cast_value(nd)))
-            assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_casted_value_inplace
+            assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_cast_value_inplace
             # Test by set_value with borrow=True
             nd += 1
@@ -396,12 +396,12 @@ def makeSharedTester(shared_constructor_,
                 self.ref_fct(self.cast_value(nd)))
             assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_value_inplace
-            # Test by set_value with borrow=True when new data casted.
+            # Test by set_value with borrow=True when new data cast.
             nd += 1
             old_data = x_shared.container.storage[0]
             x_shared.set_value(self.cast_value(nd.copy()), borrow=True)
             assert numpy.allclose(self.ref_fct(x_shared.get_value(borrow=True)), self.ref_fct(self.cast_value(nd)))
-            assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_casted_value_inplace
+            assert may_share_memory(old_data, x_shared.container.storage[0]) == self.set_cast_value_inplace
         def test_specify_shape(self):
             dtype = self.dtype
@@ -628,7 +628,7 @@ test_shared_options=makeSharedTester(
     shared_borrow_true_alias_ = True,
     set_value_borrow_true_alias_ = True,
     set_value_inplace_ = False,
-    set_casted_value_inplace_ = False,
+    set_cast_value_inplace_ = False,
     shared_constructor_accept_ndarray_ = True,
     internal_type_ = numpy.ndarray,
     test_internal_type_ = lambda a: isinstance(a,numpy.ndarray),
...
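Several hunks above hinge on the same question: can data.dtype be accurately cast to the target dtype? In the CudaNdarrayType.filter hunk this is answered by checking that upcasting the two dtypes lands back on the target. The check can be sketched with numpy.promote_types standing in for Theano's scal.upcast (an assumed analogue for illustration; the two functions are not identical):

```python
import numpy as np

def accurately_castable(data_dtype, target_dtype):
    # The cast data_dtype -> target_dtype is exact precisely when promoting
    # the pair yields target_dtype itself, i.e. the target is at least as
    # general as the data's dtype.
    return np.promote_types(target_dtype, data_dtype) == np.dtype(target_dtype)

accurately_castable('float32', 'float64')   # True: float64 can hold any float32
accurately_castable('float64', 'float32')   # False: the cast could lose precision
```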