Commit 547e54c8 authored by lamblin

Merge pull request #979 from nouiz/0.6rc1

0.6rc1
@@ -13,7 +13,7 @@ Highlight:
 * Theano vision: http://deeplearning.net/software/theano/introduction.html#theano-vision (Many people)
 * Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
 * Faster dot() call: New/Better direct call to cpu and gpu ger, gemv, gemm
-  and dot(vector, vector). (James, Frédéric, Pascal)
+  and dot(vector, vector). (James, Frederic, Pascal)
 * C implementation of Alloc. (James, Pascal)
 * theano.grad() now also work with sparse variable. (Arnaud)
 * Macro to implement the Jacobian/Hessian with theano.tensor.{jacobian,hessian} (Razvan)
@@ -24,7 +24,7 @@ Interface Behavior Changes:
 * The current default value of the parameter axis of
   theano.{max,min,argmax,argmin,max_and_argmax} is now the same as
   numpy: None. i.e. operate on all dimensions of the tensor.
-  (Frédéric Bastien, Olivier Delalleau) (was deprecated and generated
+  (Frederic Bastien, Olivier Delalleau) (was deprecated and generated
   a warning since Theano 0.3 released Nov. 23rd, 2010)
 * The current output dtype of sum with input dtype [u]int* is now always [u]int64.
   You can specify the output dtype with a new dtype parameter to sum.
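Both behavior changes above mirror NumPy's own semantics, so they can be sketched with NumPy directly (a minimal illustration of the semantics the release notes describe, not Theano code; the default upcast target assumes a typical 64-bit Linux build of NumPy):

```python
import numpy as np

x = np.array([[1, 5], [3, 2]], dtype=np.int8)

# axis=None (the default) reduces over all dimensions of the tensor.
assert np.max(x) == 5
assert np.argmax(x) == 1  # index into the flattened array [1, 5, 3, 2]

# sum() of a small [u]int* input upcasts the result to [u]int64...
assert x.sum().dtype == np.int64

# ...unless an explicit output dtype is requested via the dtype parameter.
assert x.sum(dtype=np.int16).dtype == np.int16
```

Theano 0.6 adopts the same two conventions: `axis=None` reduces everything, and the `dtype` parameter overrides the `[u]int64` accumulator.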
@@ -209,7 +209,7 @@ Crashes fixed:
 * When the subtensor inputs had 0 dimensions and the outputs 0 dimensions. (Frederic)
 * Crash when the step to subtensor was not 1 in conjunction with some optimization. (Frederic, reported by Olivier Chapelle)
 * Runtime crash related to an optimization with subtensor of alloc (reported by Razvan, fixed by Frederic)
-* Fix dot22scalar cast of integer scalars (Justin Bayer, Frédéric, Olivier)
+* Fix dot22scalar cast of integer scalars (Justin Bayer, Frederic, Olivier)
 * Fix runtime crash in gemm, dot22. FB
 * Fix on 32bits computer: make sure all shape are int64.(Olivier)
 * Fix to deque on python 2.4 (Olivier)
...
@@ -51,9 +51,9 @@ copyright = '2008--2012, LISA lab'
 # other places throughout the built documents.
 #
 # The short X.Y version.
-version = '0.5'
+version = '0.6'
 # The full version, including alpha/beta/rc tags.
-release = '0.5'
+release = '0.6rc1'
 # There are two options for replacing |today|: either, you set today to some
 # non-false value, then it is used:
...
@@ -165,11 +165,11 @@ Note: There is no short term plan to support multi-node computation.
 Theano Vision State
 ===================
-Here is the state of that vision as of 24 October 2011 (after Theano release
-0.4.1):
+Here is the state of that vision as of 1 October 2012 (after Theano release
+0.6rc1):
 * We support tensors using the `numpy.ndarray` object and we support many operations on them.
-* We support sparse types by using the `scipy.{csc,csr}_matrix` object and support some operations on them (more are coming).
+* We support sparse types by using the `scipy.{csc,csr}_matrix` object and support some operations on them.
 * We have started implementing/wrapping more advanced linear algebra operations.
 * We have many graph transformations that cover the 4 categories listed above.
 * We can improve the graph transformation with better storage optimization
@@ -196,16 +196,15 @@ Here is the state of that vision as of 24 October 2011 (after Theano release
 * The profiler used by cvm is less complete than `ProfileMode`.
 * SIMD parallelism on the CPU comes from the compiler.
-* Multi-core parallelism is only supported for gemv and gemm, and only
-  if the external BLAS implementation supports it.
+* Multi-core parallelism is only supported for Conv2d. If the external BLAS
+  implementation supports it, gemm, gemv and ger are parallelized as well.
 * No multi-node support.
 * Many, but not all NumPy functions/aliases are implemented.
   * http://www.assembla.com/spaces/theano/tickets/781
-* Wrapping an existing Python function is easy, but better documentation of
-  it would make it even easier.
-* We need to find a way to separate the shared variable memory
-  storage location from its object type (tensor, sparse, dtype, broadcast
-  flags).
+* Wrapping an existing Python function is easy and documented.
+* We know how to separate the shared variable memory storage location from
+  its object type (tensor, sparse, dtype, broadcast flags), but we still
+  need to do it.
Contact us
...
@@ -310,7 +310,7 @@ import theano and print the config variable, as in:
 .. attribute:: config.warn.ignore_bug_before
-    String value: 'None', 'all', '0.3', '0.4', '0.4.1', '0.5'
+    String value: 'None', 'all', '0.3', '0.4', '0.4.1', '0.5', '0.6'
     Default: 'None'
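This flag is normally set in a user's `.theanorc` file, where the dotted config name `warn.ignore_bug_before` maps to a section and an option. A minimal sketch, assuming the standard `.theanorc` INI layout (the value `0.5` here is just an example choice):

```ini
; ~/.theanorc -- suppress warnings about bugs fixed before release 0.5
[warn]
ignore_bug_before = 0.5
```

The same value can also be passed per-run through the `THEANO_FLAGS` environment variable, e.g. `THEANO_FLAGS=warn.ignore_bug_before=0.5`.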
...
@@ -44,9 +44,9 @@ AUTHOR = "LISA laboratory, University of Montreal"
 AUTHOR_EMAIL = "theano-dev@googlegroups.com"
 PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"]
 MAJOR = 0
-MINOR = 5
+MINOR = 6
 MICRO = 0
-SUFFIX = "" # Should be blank except for rc's, betas, etc.
+SUFFIX = "rc1" # Should be blank except for rc's, betas, etc.
 ISRELEASED = False
 VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
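With the values on the new side of this hunk, the version string formats as follows (a standalone sketch that mirrors the snippet above, not the project's full setup.py):

```python
# Version components as set for this release candidate.
MAJOR = 0
MINOR = 6
MICRO = 0
SUFFIX = "rc1"  # Should be blank except for rc's, betas, etc.

# %d formats each integer part; %s appends the suffix directly.
VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
assert VERSION == '0.6.0rc1'
```

Note that the rc suffix is glued onto the micro version with no separator, which is why the tag reads `0.6.0rc1` rather than `0.6.0-rc1`.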
...