testgroup / pytensor · commit 48a64a38

Merge pull request #888 from nouiz/view_map

View map

Authored August 29, 2012 by lamblin
Parents: ec80a34b e155a490
Showing 13 changed files with 113 additions and 34 deletions.
doc/library/gof/fgraph.txt                        +0   −0
doc/library/tensor/nnet/conv.txt                  +7   −1
doc/tutorial/extending_theano.txt                 +50  −0
theano/compile/debugmode.py                       +6   −11
theano/sandbox/cuda/elemwise.py                   +9   −3
theano/sandbox/cuda/opt.py                        +1   −1
theano/sandbox/cuda/tests/test_cuda_ndarray.py    +2   −0
theano/sparse/basic.py                            +17  −6
theano/sparse/tests/test_basic.py                 +2   −0
theano/tensor/elemwise.py                         +1   −2
theano/tensor/nnet/Conv3D.py                      +14  −0
theano/tensor/nnet/conv.py                        +2   −2
theano/tensor/tests/test_basic.py                 +2   −8
doc/library/gof/fg.txt → doc/library/gof/fgraph.txt
File moved.
doc/library/tensor/nnet/conv.txt
@@ -21,5 +21,11 @@

     TODO: Give examples for how to use these things! They are pretty complicated.

 -.. autofunction:: theano.tensor.nnet.conv.conv2d
 +Conv implemented:
 +
 +- :func:`signal.conv2d <theano.tensor.signal.conv.conv2d>`.
 +- :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`.
 +- :func:`conv3D <theano.tensor.nnet.Conv3D.conv3D>`.
 +
 +.. autofunction:: theano.tensor.signal.conv.conv2d
 +.. autofunction:: theano.tensor.nnet.conv.conv2d
 +.. autofunction:: theano.tensor.nnet.Conv3D.conv3D
doc/tutorial/extending_theano.txt
@@ -319,6 +319,56 @@ Exercises 8

     - Our current element-wise fusion generates computation with only 1 output.

     SciPy
     -----

     We can wrap SciPy functions in Theano, but SciPy is an optional
     dependency. Here is some code that allows making the op optional:

     .. code-block:: python

         try:
             import scipy.linalg
             imported_scipy = True
         except ImportError:
             # some ops (e.g. Cholesky, Solve, A_Xinv_b) won't work
             imported_scipy = False

         class SomeOp(Op):
             ...
             def make_node(self, x):
                 assert imported_scipy, (
                     "SciPy is not available. SciPy is needed for the SomeOp op.")

         from nose.plugins.skip import SkipTest

         class test_Solve(utt.InferShapeTester):
             ...
             def test_infer_shape(self):
                 if not imported_scipy:
                     raise SkipTest("SciPy needed for the Cholesky op.")

     Random numbers in tests
     -----------------------

     Making test errors reproducible is good practice. To make your tests
     more reproducible, you need a way to get the same random numbers each
     run. You can do this by seeding NumPy's random number generator. The
     Theano flag unittest.rseed specifies the seed that should be used to
     initialize random number generators. There are two ways to do this in
     NumPy; here is one:

     .. code-block:: python

         # You can set NumPy's internal random number generator state with
         numpy.random.seed(utt.fetch_seed())
         # All following calls to numpy.random.*() functions will be affected.

         # Or you can create a new RandomState separate from the others
         rng = numpy.random.RandomState(utt.fetch_seed())
         # You can call all of NumPy's random number generator functions on rng
         rng.rand(5, 5)

     GPU Op
     ------
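The seeding advice above can be demonstrated concretely. In this sketch the literal seed 42 stands in for the value `utt.fetch_seed()` would return:

```python
import numpy

# Two RandomState objects created with the same seed produce identical
# draws, which is what makes seeded tests reproducible.
rng1 = numpy.random.RandomState(42)   # 42 is a stand-in for utt.fetch_seed()
rng2 = numpy.random.RandomState(42)
assert (rng1.rand(5, 5) == rng2.rand(5, 5)).all()
```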
theano/compile/debugmode.py
@@ -780,13 +780,6 @@ def _check_viewmap(node, storage_map):
         outstorage = storage_map[onode][0]
         instorage_id = [id(storage_map[i][0]) for i in node.inputs]
-        # TODO: investigate ways in which other Types may be aliased
-        # TODO: consider adding a function to Type to detect aliasing
-        danger_flag = id(outstorage) in instorage_id or \
-            (type(outstorage) == numpy.ndarray and
-             outstorage.flags['OWNDATA'] == False)
-        if danger_flag:
         # first find out which input it aliases
         view_map = getattr(node.op, 'view_map', {})
         destroy_map = getattr(node.op, 'destroy_map', {})
@@ -803,8 +796,8 @@ def _check_viewmap(node, storage_map):
                     bad_alias[nodeid] = ii

             # check that the aliasing was declared in [view|destroy]_map
-            if ([ii] == view_map.get(oi, None) or \
-                [ii] == destroy_map.get(oi, None)):
+            if ([ii] == view_map.get(oi, None) or
+                [ii] == destroy_map.get(oi, None)):
                 good_alias[nodeid] = bad_alias.pop(nodeid)
@@ -819,7 +812,8 @@ def _check_viewmap(node, storage_map):
         #if its not aliased to input, check output->output aliasing
         if not good_alias and _is_used_in_graph(onode):
             for other_oi, other_onode in enumerate(node.outputs):
                 if other_oi == oi:
                     continue
                 other_storage = storage_map[other_onode][0]
                 # check to see if we share memory with this other output
@@ -1547,7 +1541,8 @@ class _VariableEquivalenceTracker(object):
 #List of default version of make thunk.
 #This is needed to know if the user overrided it.
 #The GpuOp will be added here when theano.sandbox.cuda is imported.
-default_make_thunk = [theano.gof.Op.make_thunk.im_func]
+default_make_thunk = [theano.gof.Op.make_thunk.im_func,
+                      theano.gof.OpenMPOp.make_thunk.im_func]

 class _Linker(gof.link.LocalLinker):
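The hunks above belong to DebugMode's aliasing check. A plain-Python sketch of the idea, where `check_viewmap`, `ViewOp`, and `BadOp` are hypothetical illustrations rather than Theano's actual API: an output buffer that aliases an input, detected via `id()` identity or a NumPy array that does not own its data, must be declared in the op's `view_map` or `destroy_map`.

```python
import numpy as np

def check_viewmap(op, inputs, output, output_index=0):
    # An output may alias an input either by being the same object or by
    # being a NumPy view (OWNDATA is False).
    aliases_input = (
        id(output) in [id(buf) for buf in inputs]
        or (isinstance(output, np.ndarray) and not output.flags['OWNDATA'])
    )
    if not aliases_input:
        return True
    # Aliasing is only legal if the op declared it.
    view_map = getattr(op, 'view_map', {})
    destroy_map = getattr(op, 'destroy_map', {})
    return output_index in view_map or output_index in destroy_map

class ViewOp:                  # declares: output 0 is a view of input 0
    view_map = {0: [0]}

class BadOp:                   # returns a view but declares nothing
    pass

x = np.arange(6.0)
y = x[::2]                     # a strided view of x, OWNDATA is False
assert check_viewmap(ViewOp(), [x], y)
assert not check_viewmap(BadOp(), [x], y)
```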
theano/sandbox/cuda/elemwise.py
@@ -905,16 +905,22 @@ nd_collapse_[i]=0;
         //std::cerr << "C_CODE %(opname)s checking input %(iname)s\\n";
         if (%(nd)s != %(iname)s->nd)
         {
-            PyErr_Format(PyExc_TypeError, "need %(nd)s dims, not %%i", %(iname)s->nd);
+            PyErr_Format(PyExc_TypeError,
+                         "need %(nd)s dims, not %%i",
+                         %(iname)s->nd);
             %(fail)s;
         }
         for (int i = 0; i< %(nd)s; ++i)
         {
             dims[i] = (dims[i] == 1) ? CudaNdarray_HOST_DIMS(%(iname)s)[i] : dims[i];
-            if ((!(broadcasts_%(iname)s[i] && CudaNdarray_HOST_DIMS(%(iname)s)[i] == 1))&& (dims[i] != CudaNdarray_HOST_DIMS(%(iname)s)[i]))
+            if ((!(broadcasts_%(iname)s[i] &&
+                   CudaNdarray_HOST_DIMS(%(iname)s)[i] == 1)) &&
+                (dims[i] != CudaNdarray_HOST_DIMS(%(iname)s)[i]))
             {
                 //std::cerr << "C_CODE %(opname)s checking input %(iname)s failed\\n";
-                PyErr_Format(PyExc_ValueError, "GpuElemwise. Input dimension mis-match. One of your inputs has shape[%%i] == %%i, but the output's size on that axis is %%i.",
+                PyErr_Format(PyExc_ValueError,
+                             "GpuElemwise. Input dimension mis-match. Input"
+                             " %(id)d (index start at 0) has shape[%%i] == %%i"
+                             ", but the output's size on that axis is %%i.",
                              i,
                              CudaNdarray_HOST_DIMS(%(iname)s)[i],
                              dims[i]
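The generated C code above validates each input's shape against the accumulated output dims, treating broadcastable size-1 dimensions as wildcards. A hedged Python sketch of that check (`merge_dims` is a made-up name; the real logic lives in the generated CUDA wrapper):

```python
def merge_dims(dims, shape, broadcasts):
    # dims: output dims accumulated so far (1 means "not yet determined")
    # shape: this input's dimensions
    # broadcasts: whether each input dimension is declared broadcastable
    out = list(dims)
    for i, (s, b) in enumerate(zip(shape, broadcasts)):
        if out[i] == 1:
            out[i] = s
        if not (b and s == 1) and out[i] != s:
            raise ValueError(
                "GpuElemwise. Input dimension mis-match: shape[%i] == %i, "
                "but the output's size on that axis is %i." % (i, s, out[i]))
    return out

# A size-1 output dim is filled in from the input's dim.
assert merge_dims([1, 5], (3, 5), (True, False)) == [3, 5]
```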
theano/sandbox/cuda/opt.py
@@ -62,7 +62,7 @@ optdb.register('gpu_after_fusion',
 def register_opt(*tags, **kwargs):
     def f(local_opt):
         name = (kwargs and kwargs.pop('name')) or local_opt.__name__
-        gpu_optimizer.register(name, local_opt, 'fast_run', 'inplace', *tags)
+        gpu_optimizer.register(name, local_opt, 'fast_run', *tags)
         return local_opt
     return f
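The `register_opt` decorator pattern in this hunk can be sketched in isolation. The `Optimizer` class and `local_example_opt` below are simplified stand-ins for Theano's `gpu_optimizer` machinery: the decorator registers the optimization under its tags and returns it unchanged, so the function remains directly callable.

```python
class Optimizer:
    def __init__(self):
        self.registry = {}
    def register(self, name, opt, *tags):
        self.registry[name] = (opt, tags)

gpu_optimizer = Optimizer()

def register_opt(*tags, **kwargs):
    def f(local_opt):
        # Use an explicit name if given, else the function's own name.
        name = (kwargs and kwargs.pop('name')) or local_opt.__name__
        gpu_optimizer.register(name, local_opt, 'fast_run', *tags)
        return local_opt
    return f

@register_opt('gpu')
def local_example_opt(node):   # hypothetical local optimization
    return None

assert 'local_example_opt' in gpu_optimizer.registry
```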
theano/sandbox/cuda/tests/test_cuda_ndarray.py
@@ -380,6 +380,7 @@ def test_reshape():
         #print n_bb
         assert numpy.all(aa == n_bb)
+        assert aa.shape == n_bb.shape

         # Test the not contiguous case
         shape_1_2x = (shape_1[0] * 2,) + shape_1[1:]
@@ -396,6 +397,7 @@ def test_reshape():
         #print n_bb
         assert numpy.all(aa == n_bb)
+        assert aa.shape == n_bb.shape

 def bad_subtest(shape_1, shape_2, rng):
     a = theano._asarray(rng.randn(*shape_1), dtype='float32')
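The added shape assertions guard against a subtle pitfall: `numpy.all(a == b)` can succeed through broadcasting even when the shapes differ, so value equality alone does not prove a reshape produced the right shape.

```python
import numpy

# Two arrays with different shapes whose elementwise comparison
# broadcasts to an all-True (4, 4) array.
aa = numpy.ones((1, 4))
n_bb = numpy.ones((4, 1))
assert numpy.all(aa == n_bb)      # passes despite the shape mismatch
assert aa.shape != n_bb.shape     # the extra assert catches it
```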
theano/sparse/basic.py
@@ -679,10 +679,6 @@ class CSM(gof.Op):
     a regular grad.
     """
-    # should view the other inputs too, but viewing multiple inputs is not
-    # currently supported by the destroyhandler
-    view_map = {0: [0]}
     kmap = None
     """Indexing to specify what part of the data parameter
     should be used to construct the sparse matrix."""
@@ -701,6 +697,11 @@ class CSM(gof.Op):
         self.kmap = kmap
+        if not isinstance(self.kmap, numpy.ndarray):
+            # should view the other inputs too, but viewing multiple
+            # inputs is not currently supported by the destroyhandler
+            self.view_map = {0: [0]}
         self._hashval = (hash(type(self)) ^ hash(self.format) ^
                          _kmap_hash(self.kmap))
@@ -711,6 +712,11 @@ class CSM(gof.Op):
     def __hash__(self):
         return self._hashval

+    def __str__(self):
+        if self.kmap is not None:
+            return "%s{%s}" % (self.__class__.__name__, str(self.kmap))
+        return self.__class__.__name__

     def make_node(self, data, indices, indptr, shape):
         data = tensor.as_tensor_variable(data)
@@ -802,8 +808,10 @@ class CSMGrad(gof.op.Op):
     def __init__(self, kmap=None):
         self.kmap = kmap
-        if self.kmap is None:
-            self.view_map = {0: [1]}
+        #This class always allocates a new output.
+        #I keep this here to help GD understand what this kmap thing is.
+        #if self.kmap is None:
+        #    self.view_map = {0: [1]}

     def __eq__(self, other):
         return type(self) == type(other) and _kmap_eq(self.kmap, other.kmap)
@@ -1224,6 +1232,7 @@ class Transpose(gof.op.Op):
     matrix.
     :note: The grad is regular, i.e. not structured.
     """
+    view_map = {0: [0]}

     format_map = {'csr': 'csc',
                   'csc': 'csr'}
@@ -1355,6 +1364,8 @@ class RowScaleCSC(gof.op.Op):
     # :note: The grad implemented is structured.
+    view_map = {0: [0]}

     def __eq__(self, other):
         return type(self) == type(other)
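A plain-NumPy illustration of why `CSM` now declares `view_map` only when `kmap` is not an ndarray (the variables below are made up for illustration): constructing output from the whole data buffer can alias it, while kmap-style fancy indexing necessarily produces a copy, so there is no aliasing to declare.

```python
import numpy as np

data = np.arange(5.0)

# Analogue of kmap is None: the output can reuse the data buffer (a view),
# so the op must declare view_map = {0: [0]}.
full = data.view()
assert not full.flags['OWNDATA']

# Analogue of an ndarray kmap: fancy indexing copies the selected
# elements, so no view_map declaration is needed.
kmap = np.array([0, 2, 4])
subset = data[kmap]
assert subset.flags['OWNDATA']
```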
theano/sparse/tests/test_basic.py
@@ -2070,6 +2070,7 @@ def _hv_switch(op, expected_function):
         def expected_f(self, a, format=None, dtype=None):
             return expected_function(a, format, dtype)
+    XStackTester.__name__ = op.__name__ + "Tester"
     return XStackTester

 HStackTester = _hv_switch(HStack, sp.hstack)
@@ -2385,6 +2386,7 @@ def elemwise_checker(op, expected_f, gap=None, test_dtypes=None,
             verify_grad_sparse(self.op,
                                data,
                                structured=True)
+    Tester.__name__ = op.__name__ + "Tester"
     return Tester
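Setting `__name__` on the generated tester classes, as both hunks do, makes test failure reports name the op under test instead of a generic factory class. A minimal sketch of the pattern (`make_tester` is a hypothetical stand-in for `_hv_switch` / `elemwise_checker`):

```python
def make_tester(op_name):
    class Tester:
        pass
    # Rename the generated class so test output says e.g. "HStackTester"
    # rather than "Tester" for every op.
    Tester.__name__ = op_name + "Tester"
    return Tester

HStackTester = make_tester("HStack")
assert HStackTester.__name__ == "HStackTester"
```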
theano/tensor/elemwise.py
@@ -817,8 +817,7 @@ class Elemwise(Op):
                         min_informative_str(output) + '\n'
                     errormsg += 'original exception was: ' + '\n'.join(
                         traceback.format_exception_only(*sys.exc_info()[0:2]))
                     raise Exception(errormsg)
                 else:
                     e.args = e.args + (errormsg,)
                     raise
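The error handling above follows a common pattern: append context to the exception's `args` and re-raise with a bare `raise` so the original exception type and traceback survive. A self-contained sketch, where `risky` and `call_with_context` are made-up names:

```python
def risky():
    raise ValueError("bad input")

def call_with_context():
    try:
        risky()
    except Exception as e:
        # Attach extra diagnostic context without losing the original error.
        errormsg = "original exception was raised while processing node X"
        e.args = e.args + (errormsg,)
        raise                      # bare raise preserves the traceback

try:
    call_with_context()
except ValueError as e:            # still the original exception type
    caught_args = e.args

assert caught_args[0] == "bad input"
assert len(caught_args) == 2
```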
theano/tensor/nnet/Conv3D.py
@@ -549,6 +549,20 @@ class Conv3D(theano.Op):
 global conv3D
 conv3D = Conv3D()
+"""
+3D "convolution" of multiple filters on a minibatch
+(does not flip the kernel, moves kernel with a user specified stride)
+
+:param V: Visible unit, input.
+          dimensions: (batch, row, column, time, in channel)
+:param W: Weights, filter.
+          dimensions: (out channel, row, column, time, in channel)
+:param b: bias, shape == (W.shape[0],)
+:param d: strides when moving the filter over the input (dx, dy, dt)
+
+:note: The order of dimensions does not correspond with the one in `conv2d`.
+       This is for optimization.
+"""

 def computeH(V, W, b, d):
     assert len(W.shape) == 5
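Given the documented (batch, row, column, time, channel) layout and strides d = (dx, dy, dt), the output shape follows the usual valid-convolution arithmetic: (in − kernel) // stride + 1 per spatial axis. A hypothetical helper sketches this (`conv3d_out_shape` is not part of the module):

```python
def conv3d_out_shape(V_shape, W_shape, d):
    # V: (batch, row, column, time, in channel)
    # W: (out channel, row, column, time, in channel)
    batch, in_r, in_c, in_t, in_ch = V_shape
    out_ch, k_r, k_c, k_t, k_in_ch = W_shape
    assert in_ch == k_in_ch, "input and filter channel counts must match"
    dx, dy, dt = d
    return (batch,
            (in_r - k_r) // dx + 1,
            (in_c - k_c) // dy + 1,
            (in_t - k_t) // dt + 1,
            out_ch)

# 8x8x8 input, 3x3x3 kernels, unit strides -> 6x6x6 output per filter.
assert conv3d_out_shape((2, 8, 8, 8, 3), (5, 3, 3, 3, 3), (1, 1, 1)) == (2, 6, 6, 6, 5)
```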
theano/tensor/nnet/conv.py
@@ -690,8 +690,8 @@ class ConvOp(OpenMPOp):
             fulloutshp = tuple(ConvOp.getOutputShape(imshp_logical[1:],
                                kshp_logical, (1, 1), self.out_mode))
-            if z[0] is None or z[0].shape != (bsize,) + (nkern,) + fulloutshp:
-                z[0] = numpy.zeros((bsize,) + (nkern,) + fulloutshp,
+            if z[0] is None or z[0].shape != (bsize, nkern,) + fulloutshp:
+                z[0] = numpy.zeros((bsize, nkern,) + fulloutshp,
                                    dtype=img2d.dtype)
             zz = z[0]
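The two tuple spellings in this hunk build identical tuples, so the change is purely cosmetic. A quick check with made-up values for bsize, nkern, and fulloutshp:

```python
bsize, nkern = 4, 8
fulloutshp = (30, 30)

# Concatenating singleton tuples is the same as one two-element tuple.
old_shape = (bsize,) + (nkern,) + fulloutshp
new_shape = (bsize, nkern,) + fulloutshp
assert old_shape == new_shape == (4, 8, 30, 30)
```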
theano/tensor/tests/test_basic.py
@@ -1397,12 +1397,8 @@ _good_broadcast_unary_gammaln = dict(
     normal=(rand_ranged(-1 + 1e-2, 10, (2, 3)),),
     empty=(numpy.asarray([]),),)
 _grad_broadcast_unary_gammaln = dict(
-    normal=(rand_ranged(1e-8, 10, (2, 3)),),)
-if theano.config.floatX == 'float32':
-    gamma_eps = 3e-4
-else:
-    gamma_eps = 2e-10
+    # smaller range as our grad method doesn't estimate it well enough.
+    normal=(rand_ranged(1e-8, 8, (2, 3)),),)

 GammaTester = makeBroadcastTester(
     op=tensor.gamma,
@@ -1410,7 +1406,6 @@ GammaTester = makeBroadcastTester(
     good=_good_broadcast_unary_gammaln,
     grad=_grad_broadcast_unary_gammaln,
     mode=mode_no_scipy,
-    eps=gamma_eps,
     skip=skip_scipy)
 GammaInplaceTester = makeBroadcastTester(
     op=inplace.gamma_inplace,
@@ -1418,7 +1413,6 @@ GammaInplaceTester = makeBroadcastTester(
     good=_good_broadcast_unary_gammaln,
     grad=_grad_broadcast_unary_gammaln,
     mode=mode_no_scipy,
-    eps=gamma_eps,
     inplace=True,
     skip=skip_scipy)
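For context on the dtype-dependent `gamma_eps` removed above: in finite-difference gradient checks, a step size suitable for float64 can vanish entirely in float32 rounding, making the estimated derivative degenerate. A small demonstration (`fd_grad` is a made-up helper, not from the test suite):

```python
import numpy

def fd_grad(f, x, eps):
    # One-sided finite-difference derivative estimate.
    return (f(x + eps) - f(x)) / eps

f = lambda x: x * x
x32 = numpy.float32(100.0)

# A step fine for float64 but far below float32 spacing at x = 100:
eps_tiny = numpy.float32(2e-10)
assert x32 + eps_tiny == x32              # the step is rounded away
assert fd_grad(f, x32, eps_tiny) == 0.0   # derivative estimate collapses

# A float32-sized step still resolves the slope (true derivative is 200).
assert fd_grad(f, x32, numpy.float32(3e-4)) != 0.0
```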