Commit 23b7a597 authored by lamblin

Merge pull request #1373 from nouiz/make_vector

Add MakeVector.c_code
......@@ -683,7 +683,7 @@ Reductions
.. function:: max(x, axis=None, keepdims=False)
:Parameter: *x* - symbolic Tensor (or compatible)
-:Parameter: *axis* - axis or axes along which to compute the sum
+:Parameter: *axis* - axis or axes along which to compute the maximum
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
......@@ -697,7 +697,7 @@ Reductions
.. function:: argmax(x, axis=None, keepdims=False)
:Parameter: *x* - symbolic Tensor (or compatible)
-:Parameter: *axis* - axis along which to compute the maximum
+:Parameter: *axis* - axis along which to compute the index of the maximum
:Parameter: *keepdims* - (boolean) If this is set to True, the axis which is reduced is
left in the result as a dimension with size one. With this option, the result
will broadcast correctly against the original tensor.
......@@ -709,7 +709,7 @@ Reductions
.. function:: max_and_argmax(x, axis=None, keepdims=False)
:Parameter: *x* - symbolic Tensor (or compatible)
-:Parameter: *axis* - axis along which to compute the maximum
+:Parameter: *axis* - axis along which to compute the maximum and its index
:Parameter: *keepdims* - (boolean) If this is set to True, the axis which is reduced is
left in the result as a dimension with size one. With this option, the result
will broadcast correctly against the original tensor.
......@@ -721,7 +721,7 @@ Reductions
.. function:: min(x, axis=None, keepdims=False)
:Parameter: *x* - symbolic Tensor (or compatible)
-:Parameter: *axis* - axis or axes along which to compute the sum
+:Parameter: *axis* - axis or axes along which to compute the minimum
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
......@@ -735,7 +735,7 @@ Reductions
.. function:: argmin(x, axis=None, keepdims=False)
:Parameter: *x* - symbolic Tensor (or compatible)
-:Parameter: *axis* - axis along which to compute the minimum
+:Parameter: *axis* - axis along which to compute the index of the minimum
:Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
left in the result as dimensions with size one. With this option, the result
will broadcast correctly against the original tensor.
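The ``keepdims`` behavior documented above mirrors NumPy's; a minimal sketch of why a ``keepdims=True`` result broadcasts correctly against the original tensor (using numpy rather than ``theano.tensor``, purely for illustration):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

# Reduce along axis 1, keeping the reduced axis as a size-1 dimension.
m = np.max(x, axis=1, keepdims=True)   # shape (2, 1), not (2,)

# Because the reduced axis is kept with size one, the result
# broadcasts against the original (2, 3) tensor without reshaping.
centered = x - m                        # shape (2, 3)
```

Without ``keepdims=True``, ``np.max(x, axis=1)`` has shape ``(2,)`` and ``x - m`` would broadcast along the wrong axis.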
......
"""We don't have real test for the cache, but it would be great to make them!
But this one test a current behavior that isn't good: the c_code isn't
deterministic based on the input type and the op.
"""
import numpy
import theano
class MyOp(theano.compile.ops.DeepCopyOp):
nb_called = 0
def c_code_cache_version(self):
return ()
def c_code(self, node, name, inames, onames, sub):
MyOp.nb_called += 1
iname, = inames
oname, = onames
fail = sub['fail']
itype = node.inputs[0].type.__class__
if itype in self.c_code_and_version:
code, version = self.c_code_and_version[itype]
rand = numpy.random.rand()
return ("""printf("%(rand)s\\n");""" + code) % locals()
# Else, no C code
# Else, fall back to Op.c_code (no C code). Passing DeepCopyOp to
# super() skips DeepCopyOp's own c_code implementation.
return super(theano.compile.ops.DeepCopyOp, self).c_code(
    node, name, inames, onames, sub)
def test_inter_process_cache():
"""When an op with c_code, but no version. If we have 2 apply node
in the graph with different inputs variable(so they don't get
merged) but the inputs variable have the same type, do we reuse
the same module? Even if they would generate different c_code?
Currently this test show that we generate the c_code only once.
This is to know if the c_code can add information specific to the
node.inputs[*].owner like the name of the variable.
"""
x, y = theano.tensor.vectors('xy')
f = theano.function([x, y], [MyOp()(x), MyOp()(y)])
f(numpy.arange(60), numpy.arange(60))
assert MyOp.nb_called == 1
# What if we compile a new function with new variables?
x, y = theano.tensor.vectors('xy')
f = theano.function([x, y], [MyOp()(x), MyOp()(y)])
f(numpy.arange(60), numpy.arange(60))
assert MyOp.nb_called == 1
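The reuse that the test demonstrates can be sketched abstractly: a hypothetical cache keyed only by the op and the input types, not by the apply node, so two nodes with identical types compile once. The names here (``get_module``, ``op_key``) are illustrative, not Theano's actual cache API:

```python
# Hypothetical sketch of why both apply nodes hit the same module:
# the cache key is built from the op and the input types, not from
# the specific apply node, so two nodes with identical types map to
# one compiled module even if their c_code would differ.
cache = {}

def get_module(op_key, input_types):
    key = (op_key, tuple(input_types))
    if key not in cache:
        # Compilation happens only on the first lookup for this key.
        cache[key] = "compiled module for %s" % (key,)
    return cache[key]

m1 = get_module("MyOp", ["vector(float64)"])
m2 = get_module("MyOp", ["vector(float64)"])
```

Here ``m1 is m2``: the second node reuses the first compilation, which is exactly what ``MyOp.nb_called == 1`` asserts above.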
......@@ -544,6 +544,32 @@ class MakeVector(T.Op):
# assume that out has correct dtype. there is no cheap way to check
out[0][...] = inputs
def c_code_cache_version(self):
return (1,)
def c_code(self, node, name, inp, out_, sub):
out, = out_
# Shouldn't use PyArray_TYPE(inp[0]) for the dtype
# when len(inp) == 0 (we need to support this case).
# So there will be (1 * nb_dtype) + (nb distinct len(inp) - 1)
# different c codes, generated with the following algo:
out_shape = len(inp)
out_dtype = numpy.dtype(node.outputs[0].dtype).num
if len(inp) > 0:
assert self.dtype == node.inputs[0].dtype
out_dtype = 'PyArray_TYPE(%s)' % inp[0]
ret = """
npy_intp dims[1];
dims[0] = %(out_shape)s;
%(out)s = (PyArrayObject*)PyArray_EMPTY(1, dims, %(out_dtype)s, 0);
""" % locals()
for idx, i in enumerate(inp):
ret += """
*((dtype_%(out)s *)PyArray_GETPTR1(%(out)s, %(idx)s)) = *((dtype_%(out)s *) PyArray_DATA(%(i)s));
""" % locals()
return ret
def infer_shape(self, node, ishapes):
return [(len(ishapes),)]
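As a rough pure-Python sketch (not part of the diff): the C code above allocates a 1-D array of length ``len(inp)`` and copies each scalar input into it. A hypothetical ``make_vector`` helper mimicking that behavior:

```python
import numpy as np

def make_vector(*scalars, dtype=None):
    # Hypothetical pure-Python equivalent of the C code above:
    # allocate a length-len(scalars) vector (cf. PyArray_EMPTY),
    # then copy each scalar input into its slot
    # (cf. PyArray_GETPTR1 in the loop over inp).
    out = np.empty(len(scalars), dtype=dtype)
    for idx, s in enumerate(scalars):
        out[idx] = s
    return out
```

Note that, like the C code, this supports the zero-input case: with no scalars it returns an empty length-0 vector, which is why the dtype cannot be taken from the first input unconditionally.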
......