Commit 547e54c8 authored by lamblin

Merge pull request #979 from nouiz/0.6rc1

0.6rc1
......@@ -13,7 +13,7 @@ Highlight:
* Theano vision: http://deeplearning.net/software/theano/introduction.html#theano-vision (Many people)
* Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
* Faster dot() call: New/Better direct call to cpu and gpu ger, gemv, gemm
-  and dot(vector, vector). (James, Frédéric, Pascal)
+  and dot(vector, vector). (James, Frederic, Pascal)
* C implementation of Alloc. (James, Pascal)
* theano.grad() now also works with sparse variables. (Arnaud)
* Macro to implement the Jacobian/Hessian with theano.tensor.{jacobian,hessian} (Razvan)
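The jacobian/hessian macros build the derivative graph symbolically inside Theano. As a rough, non-Theano illustration of what the Jacobian of an elementwise function contains, here is a plain NumPy sketch using a hypothetical finite-difference helper (names are illustrative, not part of the Theano API):

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian of a vector-valued f at point x."""
    y = f(x)
    J = np.empty((y.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps                    # perturb one input coordinate
        J[:, j] = (f(xp) - y) / eps     # column j = df/dx_j
    return J

x = np.array([1.0, 2.0, 3.0])
J = jacobian_fd(lambda v: v ** 2, x)
# for elementwise x**2 the Jacobian is diag(2x)
assert np.allclose(J, np.diag(2 * x), atol=1e-4)
```

theano.tensor.jacobian computes the same object symbolically, so it can itself be differentiated or compiled.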
......@@ -24,7 +24,7 @@ Interface Behavior Changes:
* The current default value of the parameter axis of
theano.{max,min,argmax,argmin,max_and_argmax} is now the same as
numpy: None. i.e. operate on all dimensions of the tensor.
-  (Frédéric Bastien, Olivier Delalleau) (was deprecated and generated
+  (Frederic Bastien, Olivier Delalleau) (was deprecated and generated
a warning since Theano 0.3 released Nov. 23rd, 2010)
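The new default mirrors NumPy's behavior, which can be sketched with NumPy itself:

```python
import numpy as np

a = np.array([[1, 5],
              [3, 2]])

# axis=None (the default) reduces over all dimensions, returning a scalar
assert np.max(a) == 5
assert int(np.argmax(a)) == 1          # index into the flattened array

# an explicit axis reduces along that dimension only
assert np.max(a, axis=0).tolist() == [3, 5]
```

The Theano reductions now accept the same `axis=None` convention instead of defaulting to the last axis.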
* The current output dtype of sum with input dtype [u]int* is now always [u]int64.
You can specify the output dtype with a new dtype parameter to sum.
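As an illustration of why the accumulator dtype matters for integer sums (shown with NumPy, whose `sum` takes a `dtype` parameter with the same meaning):

```python
import numpy as np

a = np.array([120, 120], dtype=np.int8)

# accumulating in a wide integer type gives the exact result
assert int(a.sum(dtype=np.int64)) == 240

# forcing the narrow input dtype overflows and wraps around
assert int(a.sum(dtype=np.int8)) == -16
assert a.sum(dtype=np.int8).dtype == np.int8
```

Defaulting to [u]int64 avoids this silent overflow; pass `dtype=` explicitly when a narrower accumulator is intended.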
......@@ -209,7 +209,7 @@ Crashes fixed:
* When the subtensor inputs had 0 dimensions and the outputs 0 dimensions. (Frederic)
* Crash when the step to subtensor was not 1 in conjunction with some optimization. (Frederic, reported by Olivier Chapelle)
* Runtime crash related to an optimization with subtensor of alloc (reported by Razvan, fixed by Frederic)
-* Fix dot22scalar cast of integer scalars (Justin Bayer, Frédéric, Olivier)
+* Fix dot22scalar cast of integer scalars (Justin Bayer, Frederic, Olivier)
* Fix runtime crash in gemm, dot22. (Frederic Bastien)
* Fix on 32-bit computers: make sure all shapes are int64. (Olivier)
* Fix to deque on python 2.4 (Olivier)
......
......@@ -51,9 +51,9 @@ copyright = '2008--2012, LISA lab'
# other places throughout the built documents.
#
# The short X.Y version.
-version = '0.5'
+version = '0.6'
# The full version, including alpha/beta/rc tags.
-release = '0.5'
+release = '0.6rc1'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
......
......@@ -165,11 +165,11 @@ Note: There is no short term plan to support multi-node computation.
Theano Vision State
===================
-Here is the state of that vision as of 24 October 2011 (after Theano release
-0.4.1):
+Here is the state of that vision as of 1 October 2012 (after Theano release
+0.6rc1):
* We support tensors using the `numpy.ndarray` object and we support many operations on them.
-* We support sparse types by using the `scipy.{csc,csr}_matrix` object and support some operations on them (more are coming).
+* We support sparse types by using the `scipy.{csc,csr}_matrix` object and support some operations on them.
* We have started implementing/wrapping more advanced linear algebra operations.
* We have many graph transformations that cover the 4 categories listed above.
* We can improve the graph transformation with better storage optimization
......@@ -196,16 +196,15 @@ Here is the state of that vision as of 24 October 2011 (after Theano release
* The profiler used by cvm is less complete than `ProfileMode`.
* SIMD parallelism on the CPU comes from the compiler.
-* Multi-core parallelism is only supported for gemv and gemm, and only
-  if the external BLAS implementation supports it.
+* Multi-core parallelism is only supported for Conv2d. If the external BLAS
+  implementation supports it, gemm, gemv and ger are also parallelized.
* No multi-node support.
* Many, but not all NumPy functions/aliases are implemented.
* http://www.assembla.com/spaces/theano/tickets/781
-* Wrapping an existing Python function is easy, but better documentation of
-  it would make it even easier.
-* We need to find a way to separate the shared variable memory
+* Wrapping an existing Python function is easy and documented.
+* We know how to separate the shared variable memory
  storage location from its object type (tensor, sparse, dtype, broadcast
-  flags).
+  flags), but we need to do it.
Contact us
......
......@@ -310,7 +310,7 @@ import theano and print the config variable, as in:
.. attribute:: config.warn.ignore_bug_before
-String value: 'None', 'all', '0.3', '0.4', '0.4.1', '0.5'
+String value: 'None', 'all', '0.3', '0.4', '0.4.1', '0.5', '0.6'
Default: 'None'
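To silence deprecation warnings for behavior changes introduced before a given release, the flag can be set in `.theanorc` (a sketch assuming the standard Theano config file syntax, where `config.warn.ignore_bug_before` maps to an `ignore_bug_before` option in a `[warn]` section):

```ini
[warn]
ignore_bug_before = 0.6
```

The same setting can presumably be passed per-run via the `THEANO_FLAGS` environment variable, e.g. `THEANO_FLAGS='warn.ignore_bug_before=0.6'`.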
......
......@@ -44,9 +44,9 @@ AUTHOR = "LISA laboratory, University of Montreal"
AUTHOR_EMAIL = "theano-dev@googlegroups.com"
PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"]
MAJOR = 0
-MINOR = 5
+MINOR = 6
MICRO = 0
-SUFFIX = "" # Should be blank except for rc's, betas, etc.
+SUFFIX = "rc1" # Should be blank except for rc's, betas, etc.
ISRELEASED = False
VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
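With the new values, the `%`-format above assembles the release-candidate version string like so (a minimal re-run of the same expression):

```python
MAJOR = 0
MINOR = 6
MICRO = 0
SUFFIX = "rc1"  # blank except for rc's, betas, etc.

# same format string as in setup.py
VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
assert VERSION == '0.6.0rc1'
```

A blank SUFFIX would instead yield the plain release string '0.6.0'.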
......