Commit bf02577b, authored by Olivier Delalleau

Merged

2010-11-23 Theano 0.3
---------------------
This is the first major release of Theano since 0.1. Version 0.2 development
started internally but it was never advertised as a release.
There have been so many changes since 0.1 that we have lost track of many of
them. Below is a *partial* list of changes since 0.1.
* GPU code using NVIDIA's CUDA framework is now generated for many Ops.
* Some interface changes since 0.1:
    * A new "shared variable" system that allows memory to be reused between
      Theano functions.
    * A new memory contract has been formally written for Theano, for people
      who want to minimize memory copies.
    * The old module system has been deprecated.
    * By default, inputs to a Theano function are no longer silently downcast
      (e.g. from float64 to float32).
    * An error is now raised when the result of a logical operation on a
      Theano variable is used in an 'if' statement (i.e. an implicit call to
      __nonzero__).
    * An error is now raised when a non-aligned ndarray is passed as input to
      a function (this is not supported).
    * An error is raised when the list of dimensions passed to dimshuffle()
      contains duplicates or is otherwise not sensible.
* Theano now calls NumPy's BLAS bindings for gemv operations, in addition to
  the already supported gemm.
* If gcc is unavailable at import time, Theano falls back to a Python-based
  emulation mode after raising a warning.
* An error is now raised when tensor.grad is called on a non-scalar Theano
  variable (previously, the tensor was implicitly summed to make it a
  scalar).
* Added support for the "erf" and "erfc" functions.
* The current default value of the axis parameter of
  theano.{max,min,argmax,argmin,max_and_argmax} is deprecated; we now follow
  NumPy's default behavior of operating on the entire tensor.
* Theano is now available from PyPI and installable through "easy_install"
  or "pip".
.. _NEWS:
=============
Release Notes
=============
Theano 0.1
==========
*Release date: 2009-04-02*
What works
----------
- building symbolic expressions.
- arranging symbolic expressions into Modules so that multiple functions
can work on the same data.
- symbolic gradient descent.
- graph optimization.
- compilation to C for many kinds of expressions.
- a debugging mode that checks that your expression results are correct,
using a variety of sanity checks.
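The first two items -- building symbolic expressions and taking symbolic gradients -- can be sketched with a toy expression type. This is illustrative only: the class names are hypothetical and Theano's graph machinery is far more general.

```python
# Toy symbolic expressions with gradients (hypothetical classes,
# not Theano's implementation).
class Expr:
    def __add__(self, other): return Add(self, other)
    def __mul__(self, other): return Mul(self, other)

class Var(Expr):
    def eval(self, env): return env[self]
    def grad(self, wrt): return Const(1.0 if self is wrt else 0.0)

class Const(Expr):
    def __init__(self, value): self.value = value
    def eval(self, env): return self.value
    def grad(self, wrt): return Const(0.0)

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) + self.b.eval(env)
    def grad(self, wrt): return Add(self.a.grad(wrt), self.b.grad(wrt))

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) * self.b.eval(env)
    def grad(self, wrt):  # product rule
        return Add(Mul(self.a.grad(wrt), self.b),
                   Mul(self.a, self.b.grad(wrt)))

x = Var()
y = x * x + Const(3.0)   # symbolic: y = x**2 + 3
dy = y.grad(x)           # symbolic derivative: 2*x
print(y.eval({x: 2.0}))  # 7.0
print(dy.eval({x: 2.0})) # 4.0
```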
What's missing?
---------------
- An algorithm library. We're missing a library of examples and standard
component implementations. Some examples will find their way into
the Theano repo, but standard algorithms will go into the 'pylearn'
project (toolbox style). Now that we have a stable foundation, we
can reach a consensus on style for algorithms.
doc/NEWS.txt
\ No newline at end of file
@@ -36,7 +36,8 @@ MAINTAINER = "LISA laboratory, University of Montreal"
 MAINTAINER_EMAIL = "theano-dev@googlegroups.com"
 DESCRIPTION = ('Optimizing compiler for evaluating mathematical ' +
                'expressions on CPUs and GPUs.')
-LONG_DESCRIPTION = open("DESCRIPTION.txt").read()
+LONG_DESCRIPTION = (open("DESCRIPTION.txt").read() + "\n\n" +
+                    open("NEWS.txt").read())
 URL = "http://deeplearning.net/software/theano/"
 DOWNLOAD_URL = ""
 LICENSE = 'BSD'
...
@@ -40,6 +40,18 @@ def test_shape_i():
     assert len(topo)==1
     assert isinstance(topo[0].op,T.opt.Shape_i)

+def test_shape():
+    x = cuda.ftensor3()
+    v = cuda.CudaNdarray(numpy.zeros((3,4,5),dtype='float32'))
+    f = theano.function([x],x.shape)
+    topo = f.maker.env.toposort()
+    assert numpy.all(f(v)==(3,4,5))
+    if theano.config.mode!='FAST_COMPILE':
+        assert len(topo)==4
+        assert isinstance(topo[0].op,T.opt.Shape_i)
+        assert isinstance(topo[1].op,T.opt.Shape_i)
+        assert isinstance(topo[2].op,T.opt.Shape_i)
+        assert isinstance(topo[3].op,T.opt.MakeVector)
+
 def test_softmax_optimizations():
     from theano.tensor.nnet.nnet import softmax, crossentropy_categorical_1hot, crossentropy_softmax_argmax_1hot_with_bias
...
@@ -248,6 +248,12 @@ class _sparse_py_operators:
     def __dot__(left, right): return structured_dot(left, right)
     def __rdot__(right, left): return structured_dot(left, right)

+    #def _as_TensorVariable(self):
+    #    return dense_from_sparse(self)
+
+    shape = property(lambda self: tensor.shape(self))
+    ndim = property(lambda self: self.type.ndim)
+    dtype = property(lambda self: self.type.dtype)

 class SparseVariable(gof.Variable, _sparse_py_operators):
     dtype = property(lambda self: self.type.dtype)
...
@@ -1380,4 +1386,3 @@ class StructuredDotGradCSR(gof.Op):
     """% dict(locals(), **sub)

 sdg_csr = StructuredDotGradCSR()
@@ -2094,8 +2094,8 @@ class Mean(elemwise.CAReduce):
         ret = elemwise.CAReduce.c_code(self, node, name, inames, onames, sub)
         #TODO: c_code perform support only axis==None
         return ret + """
         *((double *)PyArray_DATA(%s)) /= PyArray_SIZE(%s);
         """%(onames[0],inames[0])

     #TODO: implement the grad. When done and tested, you can make this the default version.
     # def grad(self, (x,), (gout,)):
...
@@ -3098,5 +3098,3 @@ if config.tensor.local_elemwise_fusion:
 else:
     _logger.debug("not enabling optimization fusion elemwise in fast_run")
 compile.optdb.register('elemwise_fusion', FusionOptimizer(local_elemwise_fusion), 71.00, 'fusion', 'local_elemwise_fusion')