Commit f1fd5c79 authored by Brandon T. Willard, committed by Thomas Wiecki

Rename COp to ExternalCOp

Parent 117f23f5
......@@ -720,8 +720,8 @@ simple but it still involves defining many methods as well as mixing, in the
same file, both Python and C code, which tends to make the result less
readable.
To help with this, Theano defines a class, ``COp``, from which new C ops
can inherit. The class ``COp`` aims to simplify the process of implementing
To help with this, Theano defines a class, ``ExternalCOp``, from which new C ops
can inherit. The class ``ExternalCOp`` aims to simplify the process of implementing
C ops by doing the following:
* It allows you to define the C implementation of your op in a distinct
......@@ -732,12 +732,12 @@ C ops by doing the following :
in addition to :meth:`Op.c_code_cache_version()` based on the
provided external C implementation.
To illustrate how much simpler the class ``COp`` makes the process of defining
To illustrate how much simpler the class ``ExternalCOp`` makes the process of defining
a new op with a C implementation, let's revisit the second example of this
tutorial, the ``VectorTimesVector`` op. In that example, we implemented an op
to perform the task of element-wise vector-vector multiplication. The two
following blocks of code illustrate what the op would look like if it was
implemented using the ``COp`` class.
implemented using the ``ExternalCOp`` class.
The new op is defined inside a Python file with the following code:
......@@ -746,7 +746,7 @@ The new op is defined inside a Python file with the following code :
import theano
from theano import gof
class VectorTimesVector(gof.COp):
class VectorTimesVector(gof.ExternalCOp):
__props__ = ()
func_file = "./vectorTimesVector.c"
......@@ -850,25 +850,25 @@ As you can see from this example, the Python and C implementations are nicely
decoupled, which makes them much more readable than when they were intertwined
in the same file and the C code contained string formatting markers.
Now that we have motivated the COp class, we can have a more precise look at
Now that we have motivated the `ExternalCOp` class, we can have a more precise look at
what it does for us. For this, we go through the various elements that make up
this new version of the VectorTimesVector op:
* Parent class: instead of inheriting from the class :class:`Op`,
VectorTimesVector inherits from the class ``COp``.
VectorTimesVector inherits from the class ``ExternalCOp``.
* Constructor : in our new op, the ``__init__()`` method has an
important use; to inform the constructor of the ``COp`` class
* Constructor: in our new `Op`, the ``__init__()`` method has an
important use; to inform the constructor of the ``ExternalCOp`` class
of the location, on the filesystem, of the C implementation of
this op. To do this, it gives a list of file paths containing
this `Op`. To do this, it gives a list of file paths containing
the C code for this op. To auto-generate the ``c_code`` method
with a function call, you can specify the function name as the
second parameter. The paths should be given as a relative
path from the folder where the descendant of the ``COp`` class
path from the folder where the descendant of the ``ExternalCOp`` class
is defined.
* ``make_node()``: the ``make_node()`` method is absolutely
identical to the one in our old example. Using the ``COp``
identical to the one in our old example. Using the ``ExternalCOp``
class doesn't change anything here.
* External C code: the external C code implements the various
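The constructor's path rule above can be sketched in plain Python. This is an illustrative stand-in, not Theano's actual implementation: ``resolve_c_file`` and the example paths are hypothetical, but they mimic how a relative ``func_file`` is interpreted against the directory of the file defining the ``ExternalCOp`` subclass.

```python
import os


def resolve_c_file(defining_file, rel_path):
    """Resolve `rel_path` against the directory containing `defining_file`.

    Hypothetical helper mimicking the ExternalCOp rule: paths passed to
    __init__ are relative to the file where the subclass is defined.
    """
    class_dir = os.path.dirname(os.path.abspath(defining_file))
    return os.path.normpath(os.path.join(class_dir, rel_path))


# A subclass defined in /project/ops/my_op.py with
# func_file = "./vectorTimesVector.c" would thus load
# /project/ops/vectorTimesVector.c.
resolved = resolve_c_file("/project/ops/my_op.py", "./vectorTimesVector.c")
```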
......@@ -880,7 +880,7 @@ Main function
-------------
If you pass a function name to the ``__init__()`` method of the
``COp`` class, it must respect the following constraints:
``ExternalCOp`` class, it must respect the following constraints:
* It must return an int. The value of that int indicates whether
the op could perform its task or not. A value of 0 indicates
......@@ -983,7 +983,7 @@ Certain sections are limited in what you can place in them due to
semantic and syntactic restrictions of the C++ language. Most of
these restrictions apply to the tags that end in ``_struct``.
When we defined the VectorTimesVector op without using the ``COp``
When we defined the VectorTimesVector op without using the ``ExternalCOp``
class, we had to make a distinction between two types of support_code:
the support code that was apply-specific and the support code that
wasn't. The apply-specific code was defined in the
......@@ -999,7 +999,7 @@ used when defining the functions ``vector_elemwise_mult()`` and
``vector_times_vector()``, as well as when calling the function
``vector_elemwise_mult()`` from inside ``vector_times_vector()``.
When using the ``COp`` class, we still have to make the distinction
When using the ``ExternalCOp`` class, we still have to make the distinction
between the C code for each of the methods of a C class. These sections of
code are separated by ``#section <tag>`` markers. The tag determines
the name of the method this C code applies to with the rule that
......@@ -1020,9 +1020,9 @@ arguments).
In the above example, the function ``vector_same_shape()`` is
apply-agnostic because it uses none of the macros defined by the class
``COp`` and it doesn't rely on any apply-specific code. The function
``ExternalCOp`` and it doesn't rely on any apply-specific code. The function
``vector_elemwise_mult()`` is apply-specific because it uses the
macros defined by ``COp``. Finally, the function
macros defined by ``ExternalCOp``. Finally, the function
``vector_times_vector()`` is apply-specific because it uses those same
macros and also because it calls ``vector_elemwise_mult()`` which is
an apply-specific function.
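The ``#section <tag>`` splitting described above can be illustrated with a small stand-alone sketch. This is a hypothetical helper, not the actual ExternalCOp parser; the C snippets in ``example`` are only placeholders.

```python
import re

# Match "#section <tag>" markers at the start of a line, as used in the
# external C files parsed by ExternalCOp.
SECTION_RE = re.compile(r"^#section ([a-zA-Z0-9_]+)", re.MULTILINE)


def split_sections(src):
    """Split a C source string into {tag: code} on '#section' markers."""
    sections = {}
    matches = list(SECTION_RE.finditer(src))
    for i, m in enumerate(matches):
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(src)
        sections[m.group(1)] = src[start:end].strip()
    return sections


example = """\
#section support_code
int vector_same_shape(PyArrayObject* a, PyArrayObject* b);
#section support_code_apply
/* apply-specific helpers go here */
"""
parts = split_sections(example)
```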
......
......@@ -96,7 +96,7 @@ that you will need to write a new kernel. The best way to do this is
to leverage :class:`GpuKernelBase
<theano.gpuarray.basic_ops.GpuKernelBase>` (or :class:`CGpuKernelBase
<theano.gpuarray.basic_ops.CGpuKernelBase>` if you want to use the
:class:`COp <theano.gof.op.COp>` functionality).
:class:`ExternalCOp <theano.gof.op.ExternalCOp>` functionality).
For plain :class:`GpuKernelBase
<theano.gpuarray.basic_ops.GpuKernelBase>`, you have to define a
......@@ -118,7 +118,7 @@ this::
params=[gpuarray.GpuArray, gpuarray.SIZE, gpuarray.SIZE],
flags=Kernel.get_flags('float64'))]
If you want to use ``COp``, then you should use ``CGpuKernelBase``
If you want to use ``ExternalCOp``, then you should use ``CGpuKernelBase``
instead. It adds a new section to the parsed files whose tag is
``kernels``. Inside that section you can define some kernels with
``#kernel name:params:flags``.
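As a rough illustration of that header format, the following stand-alone sketch parses ``#kernel name:params:flags`` lines with the same regex that appears later in this commit; the real parsing lives in ``theano.gpuarray.basic_ops``, and the sample kernel source is hypothetical.

```python
import re

# Same pattern as CGpuKernelBase.kernel_re in this commit.
kernel_re = re.compile(r"^#kernel ([a-zA-Z_].*?)$", re.MULTILINE)


def parse_kernel_headers(section):
    """Extract (name, params, flags) tuples from a 'kernels' section."""
    headers = []
    for m in kernel_re.finditer(section):
        name, params, flags = m.group(1).split(":")
        headers.append((name, params.split(","), flags))
    return headers


src = "#kernel k_mul:*,size,size:GA_USE_DOUBLE\n__kernel void k_mul() {}\n"
hdrs = parse_kernel_headers(src)
```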
......
......@@ -4,7 +4,7 @@ import pytest
import theano
from tests import unittest_tools as utt
from theano import Generic, tensor
from theano.gof import Apply, COp, EnumList, Op, Params, ParamsType
from theano.gof import Apply, EnumList, ExternalCOp, Op, Params, ParamsType
from theano.scalar import Scalar
from theano.tensor import TensorType
......@@ -96,8 +96,9 @@ class QuadraticOpFunc(Op):
)
# Same op as above, but implemented as a COp (with C code in an external file).
class QuadraticCOpFunc(COp):
# Same op as above, but implemented as an ExternalCOp (with C code in an
# external file).
class QuadraticCOpFunc(ExternalCOp):
__props__ = ("a", "b", "c")
params_type = ParamsType(a=tensor_type_0d, b=scalar_type, c=generic_type)
......
......@@ -4,7 +4,13 @@ import theano
from theano.gof.destroyhandler import DestroyHandler
from theano.gof.fg import FunctionGraph, InconsistencyError, MissingInputError
from theano.gof.graph import Apply, Constant, Variable, view_roots
from theano.gof.op import COp, Op, OpenMPOp, get_test_value, ops_with_inner_function
from theano.gof.op import (
ExternalCOp,
Op,
OpenMPOp,
get_test_value,
ops_with_inner_function,
)
from theano.gof.opt import (
CheckStackTraceOptimization,
EquilibriumOptimizer,
......
......@@ -811,7 +811,7 @@ def apply_meth(tag):
return f
class COp(Op):
class ExternalCOp(Op):
"""
Class to allow an op to have an external C implementation.
......
......@@ -7,7 +7,7 @@ import numpy as np
import theano
from theano import Apply, Op, Type, Variable, config, tensor
from theano.gof import COp, ParamsType
from theano.gof import ExternalCOp, ParamsType
from theano.gof.opt import copy_stack_trace
from theano.gof.utils import MethodNotDefined
from theano.gradient import grad_undefined
......@@ -493,7 +493,7 @@ def forward_string_meth(name):
def f(*args):
res = getattr(GpuKernelBase, name)(*args)
try:
res = res + "\n" + getattr(COp, name)(*args)
res = res + "\n" + getattr(ExternalCOp, name)(*args)
except MethodNotDefined:
pass
return res
......@@ -513,15 +513,15 @@ def get_dtype(s):
return np.dtype(s)
class CGpuKernelBase(COp, GpuKernelBase):
class CGpuKernelBase(ExternalCOp, GpuKernelBase):
"""
Class to combine GpuKernelBase and COp.
Class to combine GpuKernelBase and ExternalCOp.
It adds a new section type 'kernels' where you can define kernels
with the '#kernel' tag
"""
SECTIONS = copy.copy(COp.SECTIONS)
SECTIONS = copy.copy(ExternalCOp.SECTIONS)
SECTIONS.add("kernels")
kernel_re = re.compile(r"^#kernel ([a-zA-Z_].*?)$", re.MULTILINE)
......
......@@ -3,7 +3,7 @@ import logging
import numpy as np
from theano import Apply, tensor
from theano.gof import COp, ParamsType
from theano.gof import ExternalCOp, ParamsType
from theano.gradient import grad_undefined
from theano.scalar import bool as bool_t
from theano.tensor import as_tensor_variable, discrete_dtypes
......@@ -15,7 +15,7 @@ from .type import gpu_context_type
_logger = logging.getLogger("theano.gpuarray.blocksparse")
class GpuSparseBlockGemv(COp):
class GpuSparseBlockGemv(ExternalCOp):
"""
GPU version of SparseBlockGemv. Check SparseBlockGemv's docstring for more
information.
......@@ -30,7 +30,7 @@ class GpuSparseBlockGemv(COp):
# NB: DTYPE_INPUT_* is used in C code, so I think we should not set check_input to False.
def __init__(self, inplace=False):
COp.__init__(self, "c_code/blockgemv.c", "APPLY_SPECIFIC(blockgemv)")
ExternalCOp.__init__(self, "c_code/blockgemv.c", "APPLY_SPECIFIC(blockgemv)")
self.inplace = inplace
if self.inplace:
self.destroy_map = {0: [0]}
......@@ -90,7 +90,7 @@ gpu_sparse_block_gemv = GpuSparseBlockGemv(False)
gpu_sparse_block_gemv_inplace = GpuSparseBlockGemv(True)
class GpuSparseBlockOuter(COp):
class GpuSparseBlockOuter(ExternalCOp):
"""
GPU version of SparseBlockOuter. See SparseBlockOuter's docstring for more
information.
......@@ -104,7 +104,7 @@ class GpuSparseBlockOuter(COp):
params_type = ParamsType(inplace=bool_t, context=gpu_context_type)
def __init__(self, inplace=False):
COp.__init__(self, ["c_code/blockger.c"], "APPLY_SPECIFIC(blockger)")
ExternalCOp.__init__(self, ["c_code/blockger.c"], "APPLY_SPECIFIC(blockger)")
self.inplace = inplace
if self.inplace:
self.destroy_map = {0: [0]}
......
......@@ -20,7 +20,7 @@ from theano.tensor.nnet.ctc import ctc_available
from theano.tensor.opt import register_canonicalize
class GpuConnectionistTemporalClassification(gof.COp):
class GpuConnectionistTemporalClassification(gof.ExternalCOp):
"""
GPU wrapper for Baidu CTC loss function.
......@@ -52,7 +52,7 @@ class GpuConnectionistTemporalClassification(gof.COp):
# Return only the cost. Gradient will be returned by grad()
self.default_output = 0
gof.COp.__init__(self, self.func_file, self.func_name)
gof.ExternalCOp.__init__(self, self.func_file, self.func_name)
def c_lib_dirs(self):
lib_dirs = []
......
......@@ -11,7 +11,7 @@ import theano.pathparse
from theano import Apply, Op, Variable, config, tensor
from theano.compile.ops import shape_i, shape_i_op
from theano.configdefaults import SUPPORTED_DNN_CONV_ALGO_RUNTIME
from theano.gof import COp, EnumList, ParamsType
from theano.gof import EnumList, ExternalCOp, ParamsType
from theano.gof.type import CDataType, Generic
from theano.gpuarray import cudnn_defs, pygpu
from theano.gpuarray.basic_ops import (
......@@ -384,7 +384,7 @@ def get_precision(precision, inputs, for_grad=False):
return precision, common_dtype
class DnnBase(COp):
class DnnBase(ExternalCOp):
"""
Creates a handle for cudnn and pulls in the cudnn libraries and headers.
......@@ -420,7 +420,7 @@ class DnnBase(COp):
def __init__(self, files=None, c_func=None):
if files is None:
files = []
COp.__init__(self, ["c_code/dnn_base.c"] + files, c_func)
ExternalCOp.__init__(self, ["c_code/dnn_base.c"] + files, c_func)
def c_headers(self):
return [
......@@ -459,7 +459,7 @@ class DnnBase(COp):
return (super().c_code_cache_version(), version(), 4)
class GpuDnnConvDesc(COp):
class GpuDnnConvDesc(ExternalCOp):
"""
This Op builds a convolution descriptor for use in the other convolution
......@@ -531,7 +531,7 @@ class GpuDnnConvDesc(COp):
precision="float32",
num_groups=1,
):
COp.__init__(self, ["c_code/conv_desc.c"], "APPLY_SPECIFIC(conv_desc)")
ExternalCOp.__init__(self, ["c_code/conv_desc.c"], "APPLY_SPECIFIC(conv_desc)")
if version() < 6000 and any([d != 1 for d in dilation]):
raise RuntimeError("Dilation > 1 not supported for cuDNN version < 6.")
......@@ -3077,7 +3077,7 @@ class GpuDnnRNNGradInputs(DnnBase):
return Apply(self, inputs, outputs)
# We have special requirements so this is hooking into COp
# We have special requirements so this is hooking into ExternalCOp
def format_c_function_args(self, inp, out):
rinp = inp[:7]
others = inp[7:]
......@@ -3094,7 +3094,7 @@ class GpuDnnRNNGradInputs(DnnBase):
else:
rinp.append("NULL")
assert len(others) == 0
return COp.format_c_function_args(self, rinp, out)
return ExternalCOp.format_c_function_args(self, rinp, out)
class GpuDnnRNNGradWeights(DnnBase):
......
......@@ -6,7 +6,7 @@ from numpy.linalg.linalg import LinAlgError
import theano
from theano import Op, config, tensor
from theano.gof import COp, ParamsType
from theano.gof import ExternalCOp, ParamsType
from theano.gpuarray.basic_ops import (
CGpuKernelBase,
as_gpuarray_variable,
......@@ -694,7 +694,7 @@ def gpu_cholesky(A, lower=True):
# TODO: add support for float64
class GpuMagmaBase(COp):
class GpuMagmaBase(ExternalCOp):
"""Base class for magma related operations. Add the necessary headers,
libraries and optionally the location of headers and library.
"""
......@@ -756,7 +756,7 @@ class GpuMagmaSVD(GpuMagmaBase):
def __init__(self, full_matrices=True, compute_uv=True):
self.full_matrices = full_matrices
self.compute_uv = compute_uv
COp.__init__(self, ["c_code/magma_svd.c"], "APPLY_SPECIFIC(magma_svd)")
ExternalCOp.__init__(self, ["c_code/magma_svd.c"], "APPLY_SPECIFIC(magma_svd)")
def make_node(self, A):
ctx_name = infer_context_name(A)
......@@ -849,7 +849,7 @@ class GpuMagmaMatrixInverse(GpuMagmaBase):
params_type = ParamsType(inplace=bool_t, context=gpu_context_type)
def __init__(self, inplace=False):
COp.__init__(self, ["c_code/magma_inv.c"], "APPLY_SPECIFIC(magma_inv)")
ExternalCOp.__init__(self, ["c_code/magma_inv.c"], "APPLY_SPECIFIC(magma_inv)")
self.inplace = inplace
if self.inplace:
self.destroy_map = {0: [0]}
......@@ -898,7 +898,7 @@ class GpuMagmaCholesky(GpuMagmaBase, CGpuKernelBase):
def __init__(self, lower=True, inplace=False):
self.lower = lower
COp.__init__(
ExternalCOp.__init__(
self, ["c_code/magma_cholesky.c"], "APPLY_SPECIFIC(magma_cholesky)"
)
self.inplace = inplace
......@@ -949,7 +949,7 @@ class GpuMagmaQR(GpuMagmaBase, CGpuKernelBase):
def __init__(self, complete=True):
self.complete = complete
COp.__init__(self, ["c_code/magma_qr.c"], "APPLY_SPECIFIC(magma_qr)")
ExternalCOp.__init__(self, ["c_code/magma_qr.c"], "APPLY_SPECIFIC(magma_qr)")
def make_node(self, A):
ctx_name = infer_context_name(A)
......@@ -1021,7 +1021,9 @@ class GpuMagmaEigh(GpuMagmaBase):
assert UPLO in ["L", "U"]
self.lower = UPLO == "L"
self.compute_v = compute_v
COp.__init__(self, ["c_code/magma_eigh.c"], "APPLY_SPECIFIC(magma_eigh)")
ExternalCOp.__init__(
self, ["c_code/magma_eigh.c"], "APPLY_SPECIFIC(magma_eigh)"
)
def make_node(self, A):
ctx_name = infer_context_name(A)
......
......@@ -4,7 +4,7 @@ import numpy as np
import theano
from theano import config, gof, scalar
from theano.gof import Apply, COp, Op, OpenMPOp, ParamsType
from theano.gof import Apply, ExternalCOp, Op, OpenMPOp, ParamsType
from theano.gof.null_type import NullType
from theano.gradient import DisconnectedType
from theano.misc.frozendict import frozendict
......@@ -51,7 +51,7 @@ def TensorConstant(*inputs, **kwargs):
##################
class DimShuffle(COp):
class DimShuffle(ExternalCOp):
"""
Allows to reorder the dimensions of a tensor or insert or remove
broadcastable dimensions.
......@@ -155,7 +155,7 @@ class DimShuffle(COp):
return self.shuffle + self.drop
def __init__(self, input_broadcastable, new_order, inplace=True):
COp.__init__(self, [self.c_func_file], self.c_func_name)
ExternalCOp.__init__(self, [self.c_func_file], self.c_func_name)
self.input_broadcastable = tuple(input_broadcastable)
self.new_order = tuple(new_order)
if inplace is True:
......@@ -217,8 +217,8 @@ class DimShuffle(COp):
self.__dict__.update(state)
if not hasattr(self, "func_files"):
# Perhaps we are loading an old `Op` version of DimShuffle.
# Let's just build the COp.
COp.__init__(self, [self.c_func_file], self.c_func_name)
# Let's just build the ExternalCOp.
ExternalCOp.__init__(self, [self.c_func_file], self.c_func_name)
def make_node(self, _input):
input = as_tensor_variable(_input)
......
......@@ -88,7 +88,7 @@ ctc_available.msg = None
ctc_available.path = None
class ConnectionistTemporalClassification(gof.COp, gof.OpenMPOp):
class ConnectionistTemporalClassification(gof.ExternalCOp, gof.OpenMPOp):
"""
CTC loss function wrapper.
......@@ -120,7 +120,7 @@ class ConnectionistTemporalClassification(gof.COp, gof.OpenMPOp):
"can not be constructed."
)
gof.COp.__init__(self, self.func_file, self.func_name)
gof.ExternalCOp.__init__(self, self.func_file, self.func_name)
gof.OpenMPOp.__init__(self, openmp=openmp)
self.compute_grad = compute_grad
......