Commit 6c8b5b83 authored by Pascal Lamblin

Modifications for 0.5rc2 release

Parent fdc95437
===========================
Announcing Theano 0.5rc2
===========================

This is a release candidate for a major version, with lots of new
features, bug fixes, and some interface changes (deprecated or
potentially misleading features were removed).

The upgrade is recommended for developers who want to help test and
report bugs, or who want to use the new features now. If you have updated
to 0.5rc1, you are highly encouraged to update to 0.5rc2. There are
more bug fixes and speed optimizations! But there is also a small new
interface change about the sum of [u]int* dtypes. Otherwise, users should
wait for the 0.5 release.

For those using the bleeding-edge version in the
git repository, we encourage you to update to the `0.5rc2` tag.
What's New
...@@ -87,11 +79,17 @@ Acknowledgments
---------------

I would like to thank all contributors of Theano. For this particular
release, many people have helped, notably (in alphabetical order):
Frédéric Bastien, Justin Bayer, Arnaud Bergeron, James Bergstra,
Valentin Bisson, Josh Bleecher Snyder, Yann Dauphin, Olivier Delalleau,
Guillaume Desjardins, Sander Dieleman, Xavier Glorot, Ian Goodfellow,
Philippe Hamel, Pascal Lamblin, Eric Laufer, Razvan Pascanu, Matthew
Rocklin, Graham Taylor, Sebastian Urban, David Warde-Farley, and Yao Li.

I would also like to thank users who submitted bug reports, notably
(this list is incomplete, please let us know if someone should be
added): Nicolas Boulanger-Lewandowski, Olivier Chapelle, Michael
Forbes, and Timothy Lillicrap.

Also, thank you to all NumPy and SciPy developers, as Theano builds on
their strengths.
......
...@@ -6,7 +6,8 @@ Old Release Notes
=================

Theano 0.4.1 (12 August 2011)
=============================

New features:
......
.. _NEWS:

=============
Release Notes
=============

If you have updated to 0.5rc1, you are highly encouraged to update to
0.5rc2. There are more bug fixes and speed optimizations! But there is
...@@ -6,7 +10,7 @@ also a small new interface change about the sum of [u]int* dtypes.

Modifications in the trunk since the 0.4.1 release (August 12th, 2011)
======================================================================

Upgrading to Theano 0.5rc2 is recommended for everyone, but you should first make
sure that your code does not raise deprecation warnings with Theano 0.4.1.
......
...@@ -3,7 +3,7 @@
LICENSE
=======

Copyright (c) 2008--2012, Theano Development Team
All rights reserved.

Redistribution and use in source and binary forms, with or without
......
.. _NEWS:

=============
Release Notes
=============

If you have updated to 0.5rc1, you are highly encouraged to update to
0.5rc2. There are more bug fixes and speed optimizations! But there is
also a small new interface change about the sum of [u]int* dtypes.

Modifications in the trunk since the 0.4.1 release (August 12th, 2011)
======================================================================

Upgrading to Theano 0.5rc2 is recommended for everyone, but you should first make
sure that your code does not raise deprecation warnings with Theano 0.4.1.
Otherwise, in one case the results can change. In other cases, the warnings are
turned into errors (see below for details).
Highlights:
 * Moved to github: http://github.com/Theano/Theano/
 * Old trac tickets moved to assembla tickets: http://www.assembla.com/spaces/theano/tickets
 * Theano vision: http://deeplearning.net/software/theano/introduction.html#theano-vision (Many people)
 * Theano with GPU works in some cases on Windows now. Still experimental. (Sebastian Urban)
 * Faster dot() call: new/better direct calls to CPU and GPU ger, gemv, gemm, and dot(vector, vector). (James, Frédéric, Pascal)
 * C implementation of Alloc. (James, Pascal)
 * theano.grad() now also works with sparse variables. (Arnaud)
 * Macros to implement the Jacobian/Hessian with theano.tensor.{jacobian,hessian}. (Razvan)
 * See the Interface changes.

Interface Behavior Changes (deprecated, and generating a warning, since Theano 0.3, released Nov. 23rd, 2010):
 * The default value of the axis parameter of
   theano.{max,min,argmax,argmin,max_and_argmax} is now the same as in
   numpy: None, i.e. operate on all dimensions of the tensor. (Frédéric Bastien, Olivier Delalleau)
 * The output dtype of sum with an input dtype of [u]int* is now always [u]int64.
   You can specify the output dtype with the new dtype parameter of sum;
   the output dtype is the one used for the summation.
   There was no warning about this in previous Theano versions.
   The consequence is that the sum is done in a dtype with more precision than before,
   so the sum could be slower, but will be more resistant to overflow.
   This new behavior is the same as numpy's. (Olivier, Pascal)
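As a minimal sketch of why the wider accumulator matters, here is the equivalent numpy behavior that Theano now matches (the array values are illustrative, not from the release notes):

```python
import numpy as np

# Summing int8 data while accumulating in int8 wraps around on overflow;
# accumulating in int64 (the new default for [u]int* inputs) does not.
a = np.array([100, 100], dtype=np.int8)
low_precision = a.sum(dtype=np.int8)    # 200 does not fit in int8: wraps to -56
high_precision = a.sum(dtype=np.int64)  # accumulates safely: 200
```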
Interface Features Removed (most were deprecated):
...@@ -32,10 +54,10 @@ Interface Features Removed (most were deprecated):
 * The Theano config option "home" is not used anymore, as it was redundant with "base_compiledir".
   If you use it, Theano will now raise an error. (Olivier D.)
 * scan interface changes: (Razvan Pascanu)
    * The use of `return_steps` for specifying how many entries of the output
      to return has been removed. Instead, apply a subtensor to the output
      returned by scan to select a certain slice.
    * The inner function (that scan receives) should return its outputs and
      updates following this order:
      [outputs], [updates], [condition].
      One can skip any of the three if not used, but the order has to stay unchanged.
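The `return_steps` replacement above amounts to plain slicing. A hypothetical numpy sketch (the array is a stand-in for scan's full output, which is not shown here):

```python
import numpy as np

# scan's removed `return_steps` argument is replaced by a subtensor/slice
# on the full output: keeping the last 3 of 10 steps is now just [-3:].
all_steps = np.arange(10)    # stand-in for scan's output, one value per step
last_three = all_steps[-3:]  # equivalent of the old return_steps=3
```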
...@@ -46,8 +68,30 @@ Interface bug fixes:

New deprecation (will be removed in Theano 0.6; a warning is generated if you use it):
 * tensor.shared() renamed to tensor._shared(). You probably want to call theano.shared() instead! (Olivier D.)
Scan fixes:
 * Computing the grad of a function of the grad of scan. (Reported by ?, fixed by Razvan)
    * Before: crashed most of the time, but could return wrong values with a bad number of dimensions (so a visible bug).
    * Now: does the right thing.
 * Gradient with respect to outputs using multiple taps. (Reported by Timothy, fixed by Razvan)
    * Before: it used to return wrong values.
    * Now: does the right thing.
    * Note: the reported case of this bug happened in conjunction with the
      save-memory optimization of scan, which gave run-time errors. So if you
      did not manually disable that memory optimization, you are fine unless
      you manually requested multiple taps.
 * R-op of the gradient of scan. (Reported by Timothy and Justin Bayer, fixed by Razvan)
    * Before: compilation error when computing the R-op.
    * Now: does the right thing.
 * Save-memory optimization of scan. (Reported by Timothy and Nicolas BL, fixed by Razvan)
    * Before: for certain corner cases, it used to result in a runtime shape error.
    * Now: does the right thing.
 * Scan grad when the input of scan has sequences of different lengths. (Razvan, reported by Michael Forbes)
 * Scan.infer_shape now works correctly when working with a condition for the number of loops.
   In the past, it returned n_steps as the length, which is not always true. (Razvan)
 * Scan.infer_shape crash fix. (Reported by ?, fixed by Razvan)
New features:
 * AdvancedIncSubtensor grad defined and tested. (Justin Bayer)
 * Added 1D advanced indexing support to inc_subtensor and set_subtensor. (James Bergstra)
 * tensor.{zeros,ones}_like now support the dtype param as numpy does. (Frederic)
 * Added configuration flag "exception_verbosity" to control the verbosity of exceptions. (Ian)
...@@ -68,12 +112,31 @@ New features:
 * Note: theano.dot and theano.sparse.structured_dot() always had a gradient with the same sparsity pattern as the inputs.
   The new theano.sparse.dot() has a dense gradient for all inputs.
 * GpuAdvancedSubtensor1 supports broadcasted dimensions. (Frederic)
 * TensorVariable.zeros_like() and SparseVariable.zeros_like()
 * theano.sandbox.cuda.cuda_ndarray.cuda_ndarray.device_properties() (Frederic)
 * theano.sandbox.cuda.cuda_ndarray.cuda_ndarray.mem_info() returns free and total GPU memory. (Frederic)
 * Theano flag compiledir_format. Keeps the same default as before: compiledir_%(platform)s-%(processor)s-%(python_version)s. (Josh Bleecher Snyder)
    * We also support the "theano_version" substitution.
 * IntDiv C code (faster, and allows this elemwise to be fused with other elemwises). (Pascal)
 * Internal filter_variable mechanism in Type. (Pascal, Ian)
 * Ifelse works on sparse.
 * Made the use of GPU shared variables more transparent with the theano.function updates and givens parameters.
 * Added a_tensor.transpose(axes); axes is optional. (James)
    * theano.tensor.transpose(a_tensor, kwargs): we were ignoring kwargs; now they are used as the axes.
 * a_CudaNdarray_object[*] = int now works. (Frederic)
 * tensor_variable.size (as in numpy) computes the product of the shape elements. (Olivier)
 * sparse_variable.size (as in scipy) computes the number of stored values. (Olivier)
 * sparse_variable[N, N] now works. (Li Yao, Frederic)
 * sparse_variable[M:N, O:P] now works. (Li Yao, Frederic)
    * Warning: M, N, O, and P should be Python ints or scalar tensor variables;
      in particular, None is not well-supported.
 * tensor.tensordot can now be moved to GPU. (Sander Dieleman,
   Pascal, based on code from Tijmen Tieleman's gnumpy,
   http://www.cs.toronto.edu/~tijmen/gnumpy.html)
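The transpose(axes) semantics that a_tensor.transpose(axes) now mirrors are numpy's; a small numpy sketch (the shapes here are illustrative):

```python
import numpy as np

# transpose(axes) permutes dimensions: axes gives, for each output
# dimension, which input dimension it comes from.
x = np.zeros((2, 3, 4))
y = x.transpose((2, 0, 1))  # move the last axis to the front

# size (as in the new tensor_variable.size) is the product of the shape.
n = x.size  # 2 * 3 * 4 = 24
```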
New optimizations:
 * AdvancedSubtensor1 reuses preallocated memory if available (scan, c|py_nogc linker). (Frederic)
 * dot22, dot22scalar work with complex. (Frederic)
 * Generate Gemv/Gemm more often. (James)
 * Remove scan when all computations can be moved outside the loop. (Razvan)
...@@ -88,9 +151,6 @@ New optimizations:

Bug fixes (the result changed):
 * On CPU, if the convolution had received explicit shape information, it was not checked at runtime.
   This caused wrong results if the input shape was not the one expected. (Frederic, reported by Sander Dieleman)
 * Theoretical bug: in some cases, GPUSum could return a bad value.
   We were not able to reproduce this problem.
    * Patterns affected ({0,1}*nb dim, 0 = no reduction on this dim, 1 = reduction on this dim):
...@@ -101,6 +161,15 @@ Bug fixes (the result changed):
 * An expression of the form "1 / (exp(x) +- constant)" was systematically matched to "1 / (exp(x) + 1)"
   and turned into a sigmoid regardless of the value of the constant. A warning will be issued if your
   code was affected by this bug. (Olivier, reported by Sander Dieleman)
 * When indexing into a subtensor of negative stride (for instance, x[a:b:-1][c]),
   an optimization replacing it with a direct indexing (x[d]) used an incorrect formula,
   leading to incorrect results. (Pascal, reported by Razvan)
 * The tile() function is now stricter in what it accepts, to allow for better
   error-checking and to avoid nonsensical situations. The gradient has been
   disabled for the time being, as it only implemented (incorrectly) one special
   case. The `reps` argument must be a constant (not a tensor variable) and
   must have the same length as the number of dimensions of the `x` argument;
   this is now checked. (David)
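The negative-stride identity that the broken optimization was meant to apply can be checked with plain numpy (a minimal sketch; the concrete values are illustrative):

```python
import numpy as np

# For a step of -1, x[a:b:-1][c] equals x[a - c] whenever the slice is
# non-empty and c is within its range. The fixed optimization replaces
# the indirect indexing with this direct one.
x = np.arange(10)
a, b, c = 8, 2, 2
indirect = x[a:b:-1][c]  # index into the reversed slice: [8, 7, 6, ...][2]
direct = x[a - c]        # equivalent direct indexing: x[6]
```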
Crashes fixed:
...@@ -116,19 +185,22 @@ Crashes fixed:
 * Support for OSX Enthought Python Distribution 7.x. (Graham Taylor, Olivier)
 * When the subtensor inputs had 0 dimensions and the outputs 0 dimensions. (Frederic)
 * Crash when the step to subtensor was not 1, in conjunction with some optimizations. (Frederic, reported by Olivier Chapelle)
 * Runtime crash related to an optimization with subtensor of alloc. (Reported by Razvan, fixed by Frederic)
 * Fix dot22scalar cast of integer scalars. (Justin Bayer, Frédéric, Olivier)
 * Fix runtime crash in gemm, dot22. (Frédéric B.)
 * Fix on 32-bit computers: make sure all shapes are int64. (Olivier)
 * Fix to deque on Python 2.4. (Olivier)
 * Fix crash with numpy 1.6.* when not using C code (or when using DebugMode, which is not used by default).
   Numpy has a bug in the reduction code (ufunc.reduce) that makes it crash. (Pascal)
Known bugs:
 * CAReduce with nan in inputs doesn't return the correct output (`Ticket <https://www.assembla.com/spaces/theano/tickets/763>`_).
    * This is used in tensor.{max,mean,prod,sum} and in the grad of PermuteRowElements.
 * If you take the grad of a grad of scan, we now raise an error during the construction of the graph.
   In the past, you could get wrong results in some cases, or an error at run time.
 * Scan can raise an IncSubtensor error at run time (no wrong result possible). The current workaround is
   to disable an optimization with this Theano flag: "optimizer_excluding=scanOp_save_mem".
    * If you have multiple optimizations to disable, you must separate them with ":".
Sandbox:
 * cvm interface more consistent with the current linker. (James)
    * Now all tests pass with the linker=cvm flag.
 * vm linker has a callback parameter. (James)
 * review/finish/doc: diag/extract_diag. (Arnaud Bergeron, Frederic, Olivier)
 * review/finish/doc: AllocDiag/diag. (Arnaud, Frederic, Guillaume)
...@@ -139,39 +211,51 @@ Sandbox:
 * review/finish/doc: ensure_sorted_indices. (Li Yao)
 * review/finish/doc: spectral_radius_bound. (Xavier Glorot)
 * review/finish/doc: sparse sum. (Valentin Bisson)
 * review/finish/doc: Remove0. (Valentin)
 * review/finish/doc: SquareDiagonal. (Eric)

Sandbox New features (not enabled by default):
 * CURAND_RandomStreams for uniform and normal (not picklable, GPU only). (James)
 * New sandbox.linalg.ops.pinv (pseudo-inverse) op. (Razvan)
Documentation:
 * Many updates. (Many people)
 * Updates to the install doc on MacOS. (Olivier)
 * Updates to the install doc on Windows. (David, Olivier)
 * Doc on the Rop function. (Ian)
 * Added how to use scan to loop with a condition as the number of iterations. (Razvan)
 * Added how to wrap an existing Python function (in numpy, scipy, ...) in Theano. (Frederic)
 * Refactored the GPU installation doc of Theano. (Olivier)
Others:
 * Better error messages in many places. (Many people)
 * PEP8 fixes. (Many people)
 * Added a warning about a numpy bug with subtensors with more than 2**32 elements. (TODO: make this more explicit.)
 * Added Scalar.ndim=0 and ScalarSharedVariable.ndim=0 (simplifies code). (Razvan)
 * New min_informative_str() function to print graphs. (Ian)
 * Fix catching of exceptions. (Sometimes we used to catch interrupts.) (Frederic, David, Ian, Olivier)
 * Better support for utf-8 strings. (David)
 * Fix pydotprint with a function compiled with a ProfileMode. (Frederic)
    * Was broken by a change to the profiler.
 * Warning when people have old cache entries. (Olivier)
 * More tests for join on the GPU and CPU. (Frederic)
 * Don't request to load the GPU module by default in the scan module. (Razvan)
 * Fixed some import problems. (Frederic and others)
 * Filtering update. (James)
 * On Windows, the default compiledir changed to be local to the computer/user and not transferred with the roaming profile. (Sebastian Urban)
 * New Theano flag "on_shape_error". Defaults to "warn" (same as previous behavior):
   it prints a warning when an error occurs while inferring the shape of some apply node.
   The other accepted value is "raise", to raise an error when this happens. (Frederic)
 * The buildbot now raises optimization/shape errors instead of just printing a warning. (Frederic)
 * Better pycuda tests. (Frederic)
 * check_blas.py now accepts the shape and the number of iterations as parameters. (Frederic)
 * Fix opt warning when the opt ShapeOpt is disabled (it is enabled by default). (Frederic)
 * More internal verification of what each op.infer_shape returns. (Frederic, James)
 * Argmax dtype changed to int64. (Olivier)
 * Improved docstring and basic tests for the Tile op. (David)
Reviewers (alphabetical order):
 * David, Frederic, Ian, James, Olivier, Razvan
...@@ -45,7 +45,7 @@ master_doc = 'index'

# General substitutions.
project = 'Theano'
copyright = '2008--2012, LISA lab'

# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
...@@ -53,7 +53,7 @@ copyright = '2008--2012, LISA lab'
#
# The short X.Y version.
version = '0.5'
# The full version, including alpha/beta/rc tags.
release = '0.5rc2'

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
......
...@@ -48,7 +48,7 @@ PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"]
MAJOR = 0
MINOR = 5
MICRO = 0
SUFFIX = "rc2"  # Should be blank except for rc's, betas, etc.
ISRELEASED = False
VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
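For reference, the version fields above assemble into the release string like this (a self-contained sketch of the same formatting expression):

```python
# Reproduces the VERSION string from the setup.py fields for this release.
MAJOR, MINOR, MICRO, SUFFIX = 0, 5, 0, "rc2"
VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)  # '0.5.0rc2'
```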
......