There have been so many changes since 0.1 that we have lost track of many of them. Below is a *partial* list of changes since 0.1.

* GPU code using NVIDIA's CUDA framework is now generated for many Ops.
* Some interface changes since 0.1:
  * A new "shared variable" system to allow reusing memory space between Theano functions.
  * A new memory contract has been formally written for Theano, for people who want to minimize memory copies.
  * The old module system has been deprecated.
  * By default, inputs to a Theano function will not be silently downcasted (e.g. from float64 to float32).
  * An error is now raised when using the result of a logical operation on a Theano variable in an 'if' statement (i.e. an implicit call to __nonzeros__).
  * An error is now raised when we receive a non-aligned ndarray as input to a function (this is not supported).
  * An error is raised when the list of dimensions passed to dimshuffle() contains duplicates or is otherwise not sensible.
* Call NumPy BLAS bindings for gemv operations, in addition to the already supported gemm.
* If gcc is unavailable at import time, Theano now falls back to a Python-based emulation mode after raising a warning.
* An error is now raised when tensor.grad is called on a non-scalar Theano variable (in the past we would implicitly do a sum on the tensor to make it a scalar).
* Added support for the "erf" and "erfc" functions.
* The current default value of the parameter axis of theano.{max,min,argmax,argmin,max_and_argmax} is deprecated. We now use the default NumPy behavior of operating on the entire tensor.
* Theano is now available from PyPI and installable through "easy_install" or "pip".

Change in output memory storage for Ops:
If you implemented custom Ops, with either a C or a Python implementation, this will concern you. The contract for memory storage of Ops has been changed. In particular, it is no longer guaranteed that output memory buffers are either empty or allocated by a previous execution of the same Op.

Right now, here is the situation:
* For the Python implementation (perform), what is inside output_storage may have been allocated from outside the perform() function, for instance by another node (e.g., Scan) or by the Mode. If that was the case, the memory can be assumed to be C-contiguous (for the moment).
* For C implementations (c_code), nothing has changed yet.

In a future version, the content of the output storage, both for the Python and C versions, will either be NULL or come with the following guarantees:
* It will be a Python object of the appropriate Type (for instance, a numpy.ndarray for a Tensor variable, or a CudaNdarray for a GPU variable).
* It will have the correct number of dimensions and the correct dtype.
However, its shape and memory layout (strides) will not be guaranteed.
When that change is made, the config flag DebugMode.check_preallocated_output will help you find implementations that are not up to date.
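To make the new contract concrete, here is a minimal plain-Python sketch (using only NumPy, not Theano itself) of a perform()-style function for a hypothetical elementwise-add Op. The function name and the node_inputs parameter are illustrative, not Theano API; the point is the defensive pattern: since output_storage may contain memory allocated elsewhere, with no guaranteed shape, reuse the buffer only after checking it.

```python
import numpy as np

def perform(node_inputs, output_storage):
    """Sketch of a perform() under the new contract: the cell
    output_storage[0] may hold a buffer preallocated outside this
    function (e.g. by Scan or the Mode), so validate before reusing."""
    x, y = node_inputs
    out = output_storage[0]
    # Reuse the preallocated buffer only if its shape and dtype match;
    # otherwise allocate fresh (shape and strides are not guaranteed,
    # only that any preallocated memory is C-contiguous for now).
    if out[0] is None or out[0].shape != x.shape or out[0].dtype != x.dtype:
        out[0] = np.empty(x.shape, dtype=x.dtype)
    np.add(x, y, out=out[0])

# output_storage is modeled as a list of one-element lists.
storage = [[None]]
perform((np.ones(3), np.ones(3)), storage)
```

When a matching buffer is already present, the same call writes the result in place instead of allocating, which is exactly the copy-minimizing behavior the new contract enables.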
Deprecation:
* tag.shape attribute deprecated (#633)
* CudaNdarray_new_null is deprecated in favour of CudaNdarray_New
* Dividing integers with / is deprecated: use // for integer division, or
cast one of the integers to a float type if you want a float result (you may
also change this behavior with config.int_division).
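The two explicit alternatives mentioned above can be illustrated in plain Python (this sketch uses ordinary Python integers, not Theano variables, but the distinction between the operators is the same):

```python
# The two explicit alternatives: floor division, or an explicit cast.
a, b = 7, 2
int_quotient = a // b          # integer (floor) division -> 3
float_quotient = float(a) / b  # true division after casting -> 3.5
```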