Open sidebar
testgroup / pytensor · Commits · 9db387c4

Commit 9db387c4 authored Jun 22, 2012 by Nicolas Bouchard, committed by Frederic on Jul 06, 2012

Add docstrings

Parent: 659c98b6

Showing 1 changed file with 225 additions and 62 deletions.

theano/sparse/sandbox/sp2.py  +225 −62  (view file @ 9db387c4)
...
@@ -22,9 +22,10 @@ eliminate_zeros = remove0

class Cast(gof.op.Op):
    """Cast sparse variable to the desired dtype.

    This wraps the method astype from scipy.

    :param x: Sparse matrix.

    :return: Same as `x` but having `out_type` as dtype.
    """
    # It returns a new matrix, not a view.
    def __init__(self, out_type):
        self.out_type = out_type
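For reference, the scipy behaviour that `Cast` wraps can be seen directly with scipy.sparse (an illustrative sketch, not the theano op itself):

```python
import numpy as np
import scipy.sparse as sp

# A float64 csc matrix; astype returns a new matrix, not a view,
# which is the behaviour the comment in Cast points out.
x = sp.csc_matrix(np.array([[1.5, 0.0], [0.0, 2.5]]))
y = x.astype('int32')

print(y.dtype)  # int32
print(x.dtype)  # float64, the original is untouched
```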
...
@@ -71,16 +72,13 @@ zcast = Cast('complex128')

class HStack(gof.op.Op):
    """Stack sparse matrices horizontally (column wise).

    This wraps the method hstack from scipy.

    :param blocks: Sequence of sparse arrays of compatible shape.
    :param format: String representing the output format.
    :param dtype: Output dtype.

    :return: The concatenation of the sparse arrays column wise.

    :note:
    The number of rows of the sparse matrices must agree.
    """
    def __init__(self, format=None, dtype=None):
...
@@ -148,32 +146,30 @@ def hstack(blocks, format=None, dtype=None):

    This wraps the method hstack from scipy.

    :param blocks: List of sparse arrays of compatible shape.
    :param format: String representing the output format.
    :param dtype: Output dtype.

    :return: The concatenation of the sparse arrays column wise.

    :note:
    The number of rows of the sparse matrices must agree.
    """
    return HStack(format=format, dtype=dtype)(*blocks)
class VStack(HStack):
    """Stack sparse matrices vertically (row wise).

    This wraps the method vstack from scipy.

    :param blocks: Sequence of sparse arrays of compatible shape.
    :param format: String representing the output format.
    :param dtype: Output dtype.

    :return: The concatenation of the sparse arrays row wise.

    :note:
    The number of columns of the sparse matrices must agree.
    """
    def perform(self, node, block, (out,)):
        for b in block:
            assert _is_sparse(b)
...
@@ -210,14 +206,13 @@ def hstack(blocks, format=None, dtype=None):

    This wraps the method vstack from scipy.

    :param blocks: List of sparse arrays of compatible shape.
    :param format: String representing the output format.
    :param dtype: Output dtype.

    :return: The concatenation of the sparse arrays row wise.

    :note:
    The number of columns of the sparse matrices must agree.
    """
    return VStack(format=format, dtype=dtype)(*blocks)
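The scipy functions these two ops wrap behave exactly as the docstrings describe; a minimal sketch using scipy.sparse directly (not the theano ops):

```python
import numpy as np
import scipy.sparse as sp

a = sp.csr_matrix(np.array([[1, 0], [0, 2]]))
b = sp.csr_matrix(np.array([[0, 3], [4, 0]]))

# Column-wise concatenation: the number of rows must agree.
h = sp.hstack([a, b], format='csr')
# Row-wise concatenation: the number of columns must agree.
v = sp.vstack([a, b], format='csr')

print(h.shape)  # (2, 4)
print(v.shape)  # (4, 2)
```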
...
@@ -226,16 +221,16 @@ class AddSSData(gof.op.Op):

    """Add two sparse matrices assuming they have the same sparsity
    pattern.

    :param x: Sparse matrix.
    :param y: Sparse matrix.

    :return: The sum of the two sparse matrices element wise.

    :note:
    - `x` and `y` are assumed to have the same sparsity pattern.
    - The grad implemented is structured.
    """
    def __eq__(self, other):
        return (type(self) == type(other))
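What the shared-sparsity-pattern assumption buys can be sketched in plain scipy: when two matrices share a pattern, adding their `.data` arrays is the element-wise sum. This illustrates the idea behind AddSSData, not the op itself:

```python
import numpy as np
import scipy.sparse as sp

# Two matrices built on the same sparsity pattern.
pattern = sp.csr_matrix(np.array([[1.0, 0.0], [0.0, 1.0]]))
x = pattern.copy()
x.data = np.array([1.0, 2.0])
y = pattern.copy()
y.data = np.array([10.0, 20.0])

z = x.copy()
z.data = x.data + y.data  # only the stored (non-zero) entries are touched

print(z.toarray())  # [[11.  0.] [ 0. 22.]]
```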
...
@@ -315,6 +310,24 @@ register_specialize(local_mul_s_d)

class MulSDCSC(gof.Op):
    """Multiplication of a sparse matrix by a broadcasted dense vector
    element wise.

    :param a_data: Sparse matrix data.
    :param a_indices: Sparse matrix indices.
    :param a_indptr: Sparse matrix indptr.
    :param b: Tensor type matrix.

    :return: The multiplication of the two matrices element wise.

    :note:
    - `a_data`, `a_indices` and `a_indptr` must be the properties
      of a sparse matrix in csc format.
    - The dtype of `a_data`, i.e. the dtype of the sparse matrix,
      cannot be a complex type.
    - This op is used as an optimization of mul_s_d.
    """
    def __eq__(self, other):
        return (type(self) == type(other))
...
@@ -331,6 +344,7 @@ class MulSDCSC(gof.Op):

    #def perform(self, node, (a_data, a_indices, a_indptr, b), (out,)):
    #    return NotImplementedError()

    def c_code(self, node, name, (_data, _indices, _indptr, _b,),
               (_zout,), sub):
...
@@ -404,10 +418,31 @@ class MulSDCSC(gof.Op):

        }
        """ % dict(locals(), **sub)

    def __str__(self):
        return self.__class__.__name__

mul_s_d_csc = MulSDCSC()
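The csc walk that the op's C code performs over `a_data`, `a_indices` and `a_indptr` can be sketched in pure numpy/scipy (an illustration of the traversal, not the op's actual implementation):

```python
import numpy as np
import scipy.sparse as sp

a = sp.csc_matrix(np.array([[1.0, 0.0], [3.0, 4.0]]))
b = np.array([[10.0, 20.0], [30.0, 40.0]])

# For each column j of the csc matrix, scale every stored value by the
# matching entry of the dense matrix b.
out = a.data.copy()
for j in range(a.shape[1]):
    for k in range(a.indptr[j], a.indptr[j + 1]):
        i = a.indices[k]
        out[k] = a.data[k] * b[i, j]

result = sp.csc_matrix((out, a.indices, a.indptr), shape=a.shape)
print(result.toarray())  # same as a.toarray() * b on the non-zeros
```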
class MulSDCSR(gof.Op):
    """Multiplication of a sparse matrix by a broadcasted dense vector
    element wise.

    :param a_data: Sparse matrix data.
    :param a_indices: Sparse matrix indices.
    :param a_indptr: Sparse matrix indptr.
    :param b: Tensor type matrix.

    :return: The multiplication of the two matrices element wise.

    :note:
    - `a_data`, `a_indices` and `a_indptr` must be the properties
      of a sparse matrix in csr format.
    - The dtype of `a_data`, i.e. the dtype of the sparse matrix,
      cannot be a complex type.
    - This op is used as an optimization of mul_s_d.
    """
    def __eq__(self, other):
        return (type(self) == type(other))
...
@@ -424,6 +459,7 @@ class MulSDCSR(gof.Op):

    #def perform(self, node, (a_data, a_indices, a_indptr, b), (out,)):
    #    return NotImplemented()

    def c_code(self, node, name, (_data, _indices, _indptr, _b,),
               (_zout,), sub):
...
@@ -497,14 +533,22 @@ class MulSDCSR(gof.Op):

        }
        """ % dict(locals(), **sub)

    def __str__(self):
        return self.__class__.__name__

mul_s_d_csr = MulSDCSR()
class Poisson(gof.op.Op):
    """Return a sparse matrix having random values from a Poisson density
    with mean from the input.

    :param x: Sparse matrix.

    :return: A sparse matrix of random integers of a Poisson density
             with mean of `x` element wise.
    """
    def __eq__(self, other):
        return (type(self) == type(other))
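The sampling idea can be sketched with numpy on a scipy sparse matrix (illustrative only; the op's exact RNG plumbing differs):

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

# Means stored in a sparse matrix; only the non-zero entries are
# sampled, so the result keeps the sparsity pattern of the input.
x = sp.csr_matrix(np.array([[4.0, 0.0], [0.0, 9.0]]))
out = x.copy()
out.data = rng.poisson(x.data).astype('float64')
```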
...
@@ -538,11 +582,12 @@ class Multinomial(gof.op.Op):

    density having number of experiments `n` and probability of success
    `p`.

    :param n: Number of experiments.
    :param p: Sparse matrix of probability for each of the different
              outcomes.

    :return: A sparse matrix of random integers of a multinomial density.
    """
    def __eq__(self, other):
        return (type(self) == type(other))
...
@@ -578,6 +623,22 @@ multinomial = Multinomial()

class Binomial(gof.op.Op):
    # TODO This op is not an equivalent of numpy.random.binomial. In
    # fact, it does not follow a binomial distribution at all.
    # To see it, just try with p = 1.

    # """Return a sparse matrix having random values from a binomial
    # density having number of experiments `n` and probability of success
    # `p`.

    # :param n: Tensor scalar representing the number of experiments.
    # :param p: Tensor scalar representing the probability of success.
    # :param shape: Tensor vector for the output shape.

    # :return: A sparse matrix of integers representing the number
    # of successes.
    # """
    def __init__(self, format, dtype):
        self.format = format
        self.dtype = dtype
...
@@ -612,7 +673,7 @@ class Binomial(gof.op.Op):

        return None, None, None

    def infer_shape(self, node, ins_shapes):
        return [ins_shapes[2]]

    def __str__(self):
        return self.__class__.__name__
...
@@ -623,8 +684,7 @@ csc_dbinomial = Binomial('csc', 'float64')

def structured_monoid(tensor_op):
    """Generic operation to perform many kinds of monoid element-wise
    operations on the non-zeros of a sparse matrix.

    The first parameter must always be a sparse matrix. The other parameters
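The "monoid on the non-zeros" idea can be sketched in scipy directly (illustrative, using `numpy.sqrt` as a stand-in for the tensor op):

```python
import numpy as np
import scipy.sparse as sp

# A "structured" element-wise op touches only the stored values of the
# sparse matrix, leaving the sparsity pattern alone.
x = sp.csr_matrix(np.array([[0.0, 1.0], [4.0, 0.0]]))
y = x.copy()
y.data = np.sqrt(x.data)  # e.g. a structured sqrt

print(y.toarray())  # zeros untouched, non-zeros transformed
```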
...
@@ -699,7 +759,15 @@ def structured_add(x):

class MulSV(gof.op.Op):
    """Multiplication of a sparse matrix by a broadcasted dense vector
    element wise.

    :param x: Sparse matrix to multiply.
    :param y: Tensor broadcastable vector.

    :return: The product x * y element wise.
    """
    def __eq__(self, other):
        return (type(self) == type(other))
...
@@ -728,10 +796,34 @@ class MulSV(gof.op.Op):

        assert _is_sparse_variable(x) and _is_dense_variable(y)
        assert _is_sparse_variable(gz)
        return mul_s_v(gz, y), sp_sum(x * gz, axis=0, sparse_grad=True)

    def infer_shape(self, node, ins_shapes):
        return [ins_shapes[0]]

    def __str__(self):
        return self.__class__.__name__

mul_s_v = MulSV()
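The element-wise product with a broadcasted vector can be sketched with scipy's own broadcasting `.multiply` (an illustration of the semantics, not MulSV itself):

```python
import numpy as np
import scipy.sparse as sp

x = sp.csr_matrix(np.array([[1.0, 2.0], [0.0, 3.0]]))
y = np.array([10.0, 100.0])  # one entry per column of x

# scipy's .multiply broadcasts the vector like numpy would; wrap in
# csr_matrix since the return type varies across scipy versions.
z = sp.csr_matrix(x.multiply(y))

print(z.toarray())
```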
class MulSVCSR(gof.Op):
    """Multiplication of a sparse matrix by a broadcasted dense vector
    element wise.

    :param a_data: Sparse matrix data.
    :param a_indices: Sparse matrix indices.
    :param a_indptr: Sparse matrix indptr.
    :param b: Tensor type matrix.

    :return: The multiplication of the two matrices element wise.

    :note:
    - `a_data`, `a_indices` and `a_indptr` must be the properties
      of a sparse matrix in csr format.
    - The dtype of `a_data`, i.e. the dtype of the sparse matrix,
      cannot be a complex type.
    - This op is used as an optimization of MulSV.
    """
    def __eq__(self, other):
        return (type(self) == type(other))
...
@@ -817,6 +909,9 @@ class MulSVCSR(gof.Op):

        }
        """ % dict(locals(), **sub)

    def __str__(self):
        return self.__class__.__name__

mul_s_v_csr = MulSVCSR()
...
@@ -853,10 +948,20 @@ register_specialize(local_mul_s_v)

class StructuredAddSV(gof.op.Op):
    """Structured addition of a sparse matrix and a dense vector.

    The elements of the vector are only added to the corresponding
    non-zero elements. Therefore, this operation outputs another sparse
    matrix.

    :param x: Sparse matrix.
    :param y: Tensor type vector.

    :return: A sparse matrix containing the addition of the vector to
             the data of the sparse matrix.

    :note: The grad implemented is structured since the op is structured.
    """
    def __eq__(self, other):
        return (type(self) == type(other))
...
@@ -885,10 +990,34 @@ class StructuredAddSV(gof.op.Op):

        assert _is_sparse_variable(x) and not _is_sparse_variable(y)
        assert _is_sparse_variable(gz)
        return gz, sp_sum(gz, axis=0, sparse_grad=True)

    def infer_shape(self, node, ins_shapes):
        return [ins_shapes[0]]

    def __str__(self):
        return self.__class__.__name__

structured_add_s_v = StructuredAddSV()
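The structured addition can be sketched as a walk over the csr properties (illustrative; it mirrors what the csr-specialized C code iterates over, not the actual implementation):

```python
import numpy as np
import scipy.sparse as sp

x = sp.csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))
y = np.array([10.0, 20.0])

# The vector is added only where x already has non-zeros, so the
# output keeps x's sparsity pattern.
out = x.copy()
for i in range(x.shape[0]):
    for k in range(x.indptr[i], x.indptr[i + 1]):
        out.data[k] = x.data[k] + y[x.indices[k]]

print(out.toarray())  # [[11.  0.] [ 0. 22.]]
```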
class StrucutedAddSVCSR(gof.Op):
    """Structured addition of a sparse matrix and a dense vector.

    The elements of the vector are only added to the corresponding
    non-zero elements. Therefore, this operation outputs another sparse
    matrix.

    :param a_data: Sparse matrix data.
    :param a_indices: Sparse matrix indices.
    :param a_indptr: Sparse matrix indptr.
    :param b: Tensor type vector.

    :return: A sparse matrix containing the addition of the vector to
             the data of the sparse matrix.

    :note: The a_* are the properties of a sparse matrix in csr
           format. This op is used as an optimization for
           StructuredAddSV.
    """
    def __eq__(self, other):
        return (type(self) == type(other))
...
@@ -986,6 +1115,9 @@ class StrucutedAddSVCSR(gof.Op):

        }
        """ % dict(locals(), **sub)

    def __str__(self):
        return self.__class__.__name__

structured_add_s_v_csr = StrucutedAddSVCSR()
...
@@ -1023,31 +1155,34 @@ register_specialize(local_structured_add_s_v)

class SamplingDot(gof.op.Op):
    """
    Operand for calculating the dot product DOT(X, Y) = Z when you
    only want to calculate a subset of Z.

    It is equivalent to P o (X . Y) where o is the element-wise product,
    X and Y operands of the dot product and P is a matrix that contains
    1 when the corresponding element of Z should be calculated and 0
    when it shouldn't. Note that SamplingDot has a different interface
    than DOT because SamplingDot requires X to be a MxK matrix while Y
    is a NxK matrix instead of the usual KxN matrix.

    It will work if the pattern is not binary, but if the pattern
    doesn't have a high sparsity proportion it will be slower than a
    more optimized dot followed by a normal elemwise multiplication.

    :param x: Sparse matrix.
    :param y: Sparse matrix.
    :param p: Sparse matrix.

    :return: A sparse matrix containing the dot product of `x` by `y`.
    """
    def __eq__(self, other):
        return type(self) == type(other)

    def __hash__(self):
        return hash(type(self))

    def make_node(self, x, y, p):
        x = tensor.as_tensor_variable(x)
        y = tensor.as_tensor_variable(y)
...
@@ -1082,17 +1217,45 @@ class SamplingDot(gof.op.Op):

        ]
        return rval

    def infer_shape(self, node, ins_shapes):
        return [ins_shapes[0]]

    def __str__(self):
        return self.__class__.__name__

sampling_dot = SamplingDot()
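The P o (X . Y) definition, with Y given as an NxK matrix, can be sketched densely (an illustration of the semantics; the docstring itself notes that a full dense dot followed by masking is the slow fallback the op tries to beat):

```python
import numpy as np
import scipy.sparse as sp

X = np.array([[1.0, 2.0], [3.0, 4.0]])  # M x K
Y = np.array([[5.0, 6.0], [7.0, 8.0]])  # N x K, per SamplingDot's interface
P = sp.csr_matrix(np.array([[1.0, 0.0], [0.0, 1.0]]))  # sampling pattern

# Dense dot, then keep only the entries selected by the pattern P.
# Wrap in csr_matrix since .multiply's return type varies by version.
Z = sp.csr_matrix(P.multiply(X.dot(Y.T)))

print(Z.toarray())  # [[17.  0.] [ 0. 53.]]
```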
class SamplingDotCsr(gof.Op):
    """Operand optimized for calculating the dot product DOT(X, Y) = Z
    when you only want to calculate a subset of Z and the pattern P
    is a csr matrix.

    It is equivalent to P o (X . Y) where o is the element-wise product,
    X and Y operands of the dot product and P is a matrix that contains
    1 when the corresponding element of Z should be calculated and 0
    when it shouldn't. Note that SamplingDot has a different interface
    than DOT because SamplingDot requires X to be a MxK matrix while Y
    is a NxK matrix instead of the usual KxN matrix.

    .. note::
        It will work if the pattern is not binary, but if the pattern
        doesn't have a high sparsity proportion it will be slower than
        a more optimized dot followed by a normal elemwise
        multiplication.

    :param x: Sparse matrix.
    :param y: Sparse matrix.
    :param p: Sparse matrix.

    :return: A sparse matrix containing the dot product of `x` by `y`.

    :note: If we have inputs of mixed dtype, we insert cast elemwise
           in the graph to be able to call blas functions, as they
           don't allow mixed dtype.
    """
    def __eq__(self, other):
        return type(self) == type(other)
...