testgroup / pytensor · Commits

Commit 1765e4e7, authored Aug 06, 2015 by Iban Harlouchet

numpydoc for theano/tensor/subtensor.py

Parent: 306ee2c8
Showing 1 changed file, with 170 additions and 77 deletions:

theano/tensor/subtensor.py (+170, -77)
@@ -39,6 +39,7 @@ sparse_module_ref = None

 class AdvancedIndexingError(TypeError):
     """
     Raised when Subtensor is asked to perform advanced indexing.
     """

     def __init__(self, *args):
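As background for the hunk above: Subtensor handles only basic (integer/slice) indexing, and lists or arrays of indices raise this error so that the Advanced* ops take over. A rough, hypothetical pure-Python sketch of the distinction (an illustration, not code from this diff):

```python
def is_advanced_index(idx):
    """Heuristic sketch: True if idx would trigger advanced indexing.

    Basic indexing uses only integers and slices; sequences, boolean
    masks, and array-like objects (anything with an `ndim` attribute)
    trigger advanced indexing instead.
    """
    if not isinstance(idx, tuple):
        idx = (idx,)
    return any(isinstance(i, (list, bool)) or hasattr(i, "ndim")
               for i in idx)
```

This is only a heuristic; the real dispatch in theano inspects the symbolic index types.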
@@ -52,6 +53,7 @@ class AdvancedIndexingError(TypeError):

 def make_constant(args):
     """
     Convert python litterals to theano constants in subtensor arguments.
     """
     def conv(a):
         if a is None:
@@ -68,13 +70,14 @@ def make_constant(args):

 def get_idx_list(inputs, idx_list, get_count=False):
-    '''
+    """
     Given a list of inputs to the subtensor and its idx_list reorders
     the inputs according to the idx list to get the right values.
     If get_counts=True, instead returns the number of inputs consumed
     during this process.
-    '''
+    """
     # The number of indices
     n = len(inputs) - 1
@@ -102,14 +105,15 @@ def get_idx_list(inputs, idx_list, get_count=False):

 def get_canonical_form_slice(theslice, length):
-    '''
+    """
     Given a slice [start:stop:step] transform it into a canonical form
     that respects the conventions imposed by python and numpy.
     In a canonical form a slice is represented by a canonical form slice,
     in which 0 <= start <= stop <= length and step > 0, and a flag which says
     if the resulting set of numbers needs to be reversed or not.
-    '''
+    """
     from theano.tensor import switch, lt, ge, sgn
     if isinstance(theslice, slice):
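The canonical form this docstring describes can be sketched for concrete lengths in pure Python using `slice.indices`. This is an illustration of the documented contract only, not Theano's implementation (which must build the same result symbolically with `switch`, `lt`, `ge`, `sgn`):

```python
def canonical_slice(sl, length):
    """Return (canonical_slice, reverse_flag) for sl applied to length.

    Canonical form: 0 <= start <= stop <= length and step > 0, plus a
    flag saying whether the selected elements must be reversed.
    """
    start, stop, step = sl.indices(length)
    if step > 0:
        return slice(start, max(start, stop), step), False
    # Negative step: select the same elements walking forwards,
    # then signal that the result must be reversed.
    n = len(range(start, stop, step))   # number of selected elements
    first = start - (n - 1) * (-step)   # smallest selected index
    return slice(first, start + 1, -step), True
```

For example, `slice(7, 2, -2)` over length 10 selects `[7, 5, 3]`; its canonical form is `slice(3, 8, 2)` with the reverse flag set.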
@@ -252,7 +256,8 @@ def get_canonical_form_slice(theslice, length):

 class Subtensor(Op):
-    """Return a subtensor view
+    """
+    Return a subtensor view.

     The inputs array is the tensor x, followed by scalar integer types.
     TODO: WRITEME: how are the scalar integer variables formatted?
@@ -297,12 +302,15 @@ class Subtensor(Op):

     @staticmethod
     def collapse(idxs, cond):
         """
-        idxs: a list of indices or slices.
-        cond: a callable that returns a bool
+        Parameters
+        ----------
+        idxs : a list of indices or slices.
+        cond : a callable that returns a bool

-        returns: idxs, with the slices flattened out into a list.
-        if cond is true for an entry, does not flatten it.
+        Returns
+        -------
+        idxs, with the slices flattened out into a list.
+        If cond is true for an entry, does not flatten it.
         """
         ret = []
@@ -323,12 +331,14 @@ class Subtensor(Op):

     @staticmethod
     def convert(entry, slice_ok=True):
         """
+        Change references to Variables into references to Types.
+
         The "idx_list" field is unique to each Subtensor instance.
         It is not unique to each Apply node, so it should not refer to
-        specific Variables. This method changes references to Variables
-        into references to Types.
+        specific Variables.

         TODO: WRITEME: This method also accepts "entry" already being a Type;
         when would that happen?
         """
         invalid_scal_types = [scal.float64, scal.float32, scal.float16]
         scal_types = [scal.int64, scal.int32, scal.int16, scal.int8]
@@ -389,18 +399,25 @@ class Subtensor(Op):
                          only_process_constants=False):
         """
         Return the idx_list with constant inputs replaced by their
-        python scalar equivalent.  May raise
-        `theano.tensor.NotScalarConstantError` if the idx contains
-        non-constant entries.
+        python scalar equivalent.
+        May raise `theano.tensor.NotScalarConstantError` if the idx contains
+        non-constant entries.

-        If allow_partial is True, then entries that are not constant
-        will stay as their input variable rather than raising an
-        exception.
+        If allow_partial is True, then entries that are not constant will
+        stay as their input variable rather than raising an exception.

         None entries are always left as-is.

-        Example usage (where v, a are appropriately typed theano variables):
+        Parameters
+        ----------
+        only_process_constants
+            If True, we only attempt to obtain the value of an index/slice if
+            it's directly constant and don't try to dig through dimshuffles,
+            fills, allocs, and other to figure out its value.
+
+        Examples
+        --------
+        Example usage where v, a are appropriately typed theano variables :
         >>> b = a[v, 1:3]
         >>> b.owner.op.idx_list
         (Scalar(int64), slice(Scalar(int64), Scalar(int64), None))

@@ -409,10 +426,6 @@ class Subtensor(Op):
         >>> b.owner.op.get_constant_idx(b.owner.inputs)
         NotScalarConstantError: v

-        :param only_process_constants: If True, we only attempt to obtain
-            the value of an index/slice if it's directly constant and don't
-            try to dig through dimshuffles, fills, allocs, and other to figure
-            out its value.
         """
         real_idx = get_idx_list(inputs, self.idx_list)
@@ -451,8 +464,13 @@ class Subtensor(Op):

     def make_node(self, x, *inputs):
         """
-        x: the tensor to take a subtensor of
-        inputs: a list of theano Scalars
+        Parameters
+        ----------
+        x
+            The tensor to take a subtensor of.
+        inputs
+            A list of theano Scalars.
         """
         x = theano.tensor.as_tensor_variable(x)
         inputs = tuple(self.my_as_scalar(a) for a in inputs)
@@ -607,8 +625,8 @@ class Subtensor(Op):

     @staticmethod
     def default_helper_c_code_args():
         """
         Returns a dictionary of default arguments to
-        helper_c_code.
+        helper_c_code
         """
         return {"c_prefix": "PyArray",

@@ -622,7 +640,8 @@ class Subtensor(Op):
         The parameters c_prefix are there to allow reusing this
         function on PyArray and CudaNdarray object.
-        This fct take as input the x,
+        This fct take as input the x.
         """
         default_args = Subtensor.default_helper_c_code_args()
@@ -986,16 +1005,25 @@ pprint.assign(lambda pstate, r: r.owner and isinstance(r.owner.op, Subtensor),

 def set_subtensor(x, y, inplace=False,
                   tolerate_inplace_aliasing=False):
-    """Return x with the given subtensor overwritten by y.
+    """
+    Return x with the given subtensor overwritten by y.

-    Example: To replicate the numpy expression "r[10:] = 5", type
+    Parameters
+    ----------
+    x
+        Symbolic variable for the lvalue of = operation.
+    y
+        Symbolic variable for the rvalue of = operation.
+    tolerate_inplace_aliasing
+        See inc_subtensor for documentation.
+
+    Examples
+    --------
+    To replicate the numpy expression "r[10:] = 5", type

     >>> r = ivector()
     >>> new_r = set_subtensor(r[10:], 5)

-    :param x: symbolic variable for the lvalue of = operation
-    :param y: symbolic variable for the rvalue of = operation
-    :param tolerate_inplace_aliasing: see inc_subtensor for documentation.
     """
     return inc_subtensor(x, y, inplace, set_instead_of_inc=True,
                          tolerate_inplace_aliasing=tolerate_inplace_aliasing)
@@ -1003,22 +1031,32 @@ def set_subtensor(x, y, inplace=False,

 def inc_subtensor(x, y, inplace=False, set_instead_of_inc=False,
                   tolerate_inplace_aliasing=False):
-    """Return x with the given subtensor incremented by y.
+    """
+    Return x with the given subtensor incremented by y.

-    :param x: the symbolic result of a Subtensor operation.
-    :param y: the amount by which to increment ths subtensor in question
-    :param inplace: Don't use. Theano will do it when possible.
-    :param set_instead_of_inc: If True, do a set_subtensor instead.
-    :param tolerate_inplace_aliasing: allow x and y to be views of a single
-        underlying array even while working inplace. For correct results,
-        x and y must not be overlapping views; if they overlap, the result
-        of this Op will generally be incorrect. This value has no effect if
-        inplace=False.
+    Parameters
+    ----------
+    x
+        The symbolic result of a Subtensor operation.
+    y
+        The amount by which to increment the subtensor in question.
+    inplace
+        Don't use. Theano will do it when possible.
+    set_instead_of_inc
+        If True, do a set_subtensor instead.
+    tolerate_inplace_aliasing:
+        Allow x and y to be views of a single underlying array even while
+        working inplace. For correct results, x and y must not be overlapping
+        views; if they overlap, the result of this Op will generally be
+        incorrect. This value has no effect if inplace=False.

-    Example: To replicate the numpy expression "r[10:] += 5", type
+    Examples
+    --------
+    To replicate the numpy expression "r[10:] += 5", type

     >>> r = ivector()
     >>> new_r = inc_subtensor(r[10:], 5)
     """
     # First of all, y cannot have a higher dimension than x,
     # nor have non-broadcastable dimensions where x is broadcastable.
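The out-of-place semantics documented for set_subtensor and inc_subtensor above can be mimicked on plain Python lists. A minimal sketch with hypothetical helper names (not code from this diff):

```python
def set_slice(x, sl, value):
    # set_subtensor-style: return a copy of x with the slice overwritten.
    out = list(x)
    for i in range(*sl.indices(len(out))):
        out[i] = value
    return out

def inc_slice(x, sl, delta):
    # inc_subtensor-style: return a copy of x with the slice incremented.
    out = list(x)
    for i in range(*sl.indices(len(out))):
        out[i] += delta
    return out
```

The key point of both ops is that `x` itself is untouched: the update is expressed functionally, and Theano may later rewrite it to an in-place operation, which is why the `inplace` flag is documented as "Don't use".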
@@ -1159,7 +1197,8 @@ def inc_subtensor(x, y, inplace=False, set_instead_of_inc=False,

 class IncSubtensor(Op):
-    """Increment a subtensor.
+    """
+    Increment a subtensor.

     This is like numpy's

@@ -1167,8 +1206,12 @@ class IncSubtensor(Op):
     It is used internally to implement the gradient on SubTensor.

-    :param set_instead_of_inc: if True set the subtensor to the value instead
-        of incrementing it by that value.
+    Parameters
+    ----------
+    set_instead_of_inc
+        If True set the subtensor to the value instead of incrementing it by
+        that value.
     """
     check_input = False
@@ -1225,9 +1268,14 @@ class IncSubtensor(Op):

     def make_node(self, x, y, *inputs):
         """
-        x: the tensor to increment
-        y: the value to increment by
+        Parameters
+        ----------
+        x
+            The tensor to increment.
+        y
+            The value to increment by.
         inputs: TODO WRITEME
         """
         x, y = map(theano.tensor.as_tensor_variable, [x, y])
         if y.ndim > x.ndim:
@@ -1411,8 +1459,10 @@ class IncSubtensor(Op):
             )

     def do_type_checking(self, node):
-        """ Should raise NotImplementedError if c_code does not support
+        """
+        Should raise NotImplementedError if c_code does not support
         the types involved in this node.
         """
         if not isinstance(node.inputs[0].type, theano.tensor.TensorType):
@@ -1427,13 +1477,18 @@ class IncSubtensor(Op):

     def copy_of_x(self, x):
         """
-        :param x: a string giving the name of a C variable
-            pointing to an array
+        Parameters
+        ----------
+        x
+            A string giving the name of a C variable pointing to an array.

-        :return: C code expression to make a copy of x
+        Returns
+        -------
+        C code expression to make a copy of x.

         Base class uses PyArrayObject *, subclasses may override for
         different types of arrays.
         """
         # Parameters of PyArrary_FromAny are:
         # array
@@ -1448,12 +1503,16 @@ class IncSubtensor(Op):

     def make_view_array(self, x, view_ndim):
         """
-        :param x: a string identifying an array to be viewed
-        :param view_ndim: a string specifying the number of dimensions
-            to have in the view
+        Parameters
+        ----------
+        x
+            A string identifying an array to be viewed.
+        view_ndim
+            A string specifying the number of dimensions to have in the view.

-        This doesn't need to actually set up the view with the
-        right indexing; we'll do that manually later.
+        This doesn't need to actually set up the view with the right indexing;
+        we'll do that manually later.
         """
         return """Py_INCREF(PyArray_DESCR(%(x)s));
...
@@ -1471,22 +1530,35 @@ class IncSubtensor(Op):
...
@@ -1471,22 +1530,35 @@ class IncSubtensor(Op):
"""
%
locals
()
"""
%
locals
()
def
get_helper_c_code_args
(
self
):
def
get_helper_c_code_args
(
self
):
""" Return a dictionary of arguments to pass to helper_c_code."""
"""
Return a dictionary of arguments to pass to helper_c_code.
"""
return
Subtensor
.
default_helper_c_code_args
()
return
Subtensor
.
default_helper_c_code_args
()
def
copy_into
(
self
,
view
,
source
):
def
copy_into
(
self
,
view
,
source
):
"""
"""
view: string, C code expression for an array
Parameters
source: string, C code expression for an array
----------
view : string
C code expression for an array.
source : string
C code expression for an array.
Returns
-------
Returns a C code expression to copy source into view, and
return 0 on success.
returns a C code expression to copy source into view, and
return 0 on success
"""
"""
return
"""PyArray_CopyInto(
%(view)
s,
%(source)
s)"""
%
locals
()
return
"""PyArray_CopyInto(
%(view)
s,
%(source)
s)"""
%
locals
()
def
add_to_zview
(
self
,
name
,
x
,
fail
):
def
add_to_zview
(
self
,
name
,
x
,
fail
):
""" Return C code to add x to zview. Should DECREF zview if the
"""
add fails."""
Return C code to add x to zview. Should DECREF zview if the
add fails.
"""
return
"""
return
"""
PyArrayObject * add_rval = (PyArrayObject*)PyNumber_InPlaceAdd(
PyArrayObject * add_rval = (PyArrayObject*)PyNumber_InPlaceAdd(
@@ -1551,11 +1623,13 @@ class IncSubtensor(Op):

 def _sum_grad_over_bcasted_dims(x, gx):
-    """Sum of gx over dimensions to reproduce x.broadcastable.
+    """
+    Sum of gx over dimensions to reproduce x.broadcastable.

     This is useful to sum gradients over certain dimensions when
     x has been broadcasted, and we need to sum the gradient contributions
     over all duplications.
     """
     if gx.broadcastable != x.broadcastable:
         x_dim_added = gx.ndim - x.ndim
@@ -1592,7 +1666,10 @@ def _sum_grad_over_bcasted_dims(x, gx):
...
@@ -1592,7 +1666,10 @@ def _sum_grad_over_bcasted_dims(x, gx):
class
AdvancedSubtensor1
(
Op
):
class
AdvancedSubtensor1
(
Op
):
"""Implement x[ilist] where ilist is a vector of integers."""
"""
Implement x[ilist] where ilist is a vector of integers.
"""
# sparse_grad doesn't go in here since it only affects the output
# sparse_grad doesn't go in here since it only affects the output
# of the grad() method.
# of the grad() method.
__props__
=
()
__props__
=
()
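The `x[ilist]` semantics named in this docstring, as a minimal pure-Python sketch (a list-based stand-in for the tensor case, not Theano code):

```python
def take_list(x, ilist):
    # Gather rows of x at the given integer positions;
    # duplicates and arbitrary order are allowed, and the
    # result is a copy, not a view.
    return [x[i] for i in ilist]
```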
@@ -1777,7 +1854,11 @@ advanced_subtensor1 = AdvancedSubtensor1()

 class AdvancedIncSubtensor1(Op):
-    """Increments a subtensor using advanced slicing (list of index)"""
+    """
+    Increments a subtensor using advanced slicing (list of index).
+    """
     __props__ = ('inplace', 'set_instead_of_inc')

     def __init__(self, inplace=False, set_instead_of_inc=False):
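One behaviour worth spelling out, as a hypothetical list-based sketch (not code from this diff): with repeated indices, incrementing must accumulate every contribution, unlike naive fancy assignment, which keeps only the last write per index:

```python
def inc_subtensor1(x, ilist, y, set_instead_of_inc=False):
    # Out-of-place update of x at integer positions ilist.
    # When incrementing, repeated indices accumulate.
    out = list(x)
    for i, yi in zip(ilist, y):
        out[i] = yi if set_instead_of_inc else out[i] + yi
    return out
```

This accumulation requirement is why an in-place version needs something like numpy's inplace_increment rather than plain `x[ilist] += y`.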
@@ -1828,13 +1909,18 @@ class AdvancedIncSubtensor1(Op):

     def copy_of_x(self, x):
         """
-        :param x: a string giving the name of a C variable
-            pointing to an array
+        Parameters
+        ----------
+        x: string
+            Gives the name of a C variable pointing to an array.

-        :return: C code expression to make a copy of x
+        Returns
+        -------
+        C code expression to make a copy of x.

         Base class uses PyArrayObject *, subclasses may override for
         different types of arrays.
         """
         # Parameters of PyArrary_FromAny are:
         # array
@@ -1994,6 +2080,7 @@ def adv_index_broadcastable_pattern(a, idx):
     For this, we make a fake ndarray and a fake idx and call use ask numpy
     the output. From this, we find the output broadcast pattern.
     """
     def replace_slice(v):
@@ -2021,8 +2108,11 @@ def adv_index_broadcastable_pattern(a, idx):
...
@@ -2021,8 +2108,11 @@ def adv_index_broadcastable_pattern(a, idx):
class
AdvancedSubtensor
(
Op
):
class
AdvancedSubtensor
(
Op
):
"""Return a subtensor copy, using advanced indexing.
"""
"""
Return a subtensor copy, using advanced indexing.
"""
# Should be used by __getitem__ and __getslice__, as follow:
# Should be used by __getitem__ and __getslice__, as follow:
# AdvancedSubtensor()(self, *args),
# AdvancedSubtensor()(self, *args),
# if args contains and advanced indexing pattern
# if args contains and advanced indexing pattern
...
@@ -2094,13 +2184,16 @@ advanced_subtensor = AdvancedSubtensor()
...
@@ -2094,13 +2184,16 @@ advanced_subtensor = AdvancedSubtensor()
class
AdvancedIncSubtensor
(
Op
):
class
AdvancedIncSubtensor
(
Op
):
"""Increments a subtensor using advanced indexing.
"""
Increments a subtensor using advanced indexing.
:note: We need the numpy.inplace_increment() function currently
Notes
numpy's PR 326 to be able to make an inplace version of this
-----
op.
We need the numpy.inplace_increment() function currently
numpy's PR 326 to be able to make an inplace version of this op.
"""
"""
__props__
=
(
"inplace"
,
"set_instead_of_inc"
)
__props__
=
(
"inplace"
,
"set_instead_of_inc"
)
def
__init__
(
self
,
inplace
=
False
,
set_instead_of_inc
=
False
):
def
__init__
(
self
,
inplace
=
False
,
set_instead_of_inc
=
False
):
...
...