testgroup / pytensor · Commits

Commit aeb68ef2, authored Apr 29, 2013 by nouiz

Merge pull request #1355 from delallea/minor

Minor fixes

Parents: a2168c05, fcc7442a
Showing 19 changed files with 131 additions and 109 deletions
doc/extending/optimization.txt       +4   -3
doc/install.txt                      +2   -2
doc/library/config.txt               +10  -9
doc/library/scan.txt                 +14  -13
doc/library/tensor/basic.txt         +10  -10
doc/optimizations.txt                +4   -4
doc/tutorial/extending_theano.txt    +13  -10
doc/tutorial/faq.txt                 +2   -1
theano/compile/debugmode.py          +14  -7
theano/compile/mode.py               +2   -2
theano/configdefaults.py             +4   -3
theano/gof/cmodule.py                +7   -5
theano/gradient.py                   +18  -18
theano/misc/doubleop.py              +3   -3
theano/sandbox/cuda/__init__.py      +11  -8
theano/scan_module/scan_op.py        +5   -4
theano/tensor/basic.py               +6   -5
theano/tensor/nnet/Conv3D.py         +1   -1
theano/tests/run_tests_in_batch.py   +1   -1
doc/extending/optimization.txt
@@ -21,9 +21,10 @@ In this section we will define a couple optimizations on doubles.

 .. note::

-    There is the optimization tag `cxx_only` that tell this
-    optimization will insert Op that only have c code. So we should not
-    run them when we don't have a c++ compiler.
+    The optimization tag `cxx_only` is used for optimizations that insert
+    Ops which have no Python implementation (so they only have C code).
+    Optimizations with this tag are skipped when there is no C++ compiler
+    available.

 Global and local optimizations
 ==============================
...
doc/install.txt
@@ -442,9 +442,9 @@ correctly (for example, for MKL this might be ``-lmkl -lguide -lpthread`` or

 If you have problems linking with MKL, `Intel Line Advisor
 <http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor>`_
-and `MKL User Guide
+and the `MKL User Guide
 <http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mkl_userguide_lnx/index.htm>`_
-can help you find the correct flag to use.
+can help you find the correct flags to use.

 .. _gpu_linux:
...
doc/library/config.txt
@@ -410,17 +410,18 @@ import theano and print the config variable, as in:

 .. attribute:: config.cxx

-    Default: 'g++' if g++ is present. '' Otherwise.
+    Default: 'g++' if g++ is present. Empty string otherwise.

-    Tell the c++ compiler to use. If empty, don't compile c++ code.
-    We automatically detect if g++ is present and disable it if not present.
+    Indicates which C++ compiler to use. If empty, no C++ code is compiled.
+    Theano automatically detects whether g++ is present and disables
+    C++ compilation when it is not.
     We print a warning if we detect that g++ is not present. It is
-    recommended to run with c++ compilation as Theano will be much
+    recommended to run with C++ compilation as Theano will be much
     slower otherwise.

-    Currently only g++ is supported, but supporting others is easy.
+    Currently only g++ is supported, but supporting other compilers should
+    not be too difficult.

 .. attribute:: optimizer_excluding
...
@@ -636,12 +637,12 @@ import theano and print the config variable, as in:

     Bool value, default: False

-    If True, will remove -O* parameter passed to g++.
-    This is useful to debug in gdb module compiled by Theano.
+    If True, will remove the -O* parameter passed to g++.
+    This is useful to debug in gdb modules compiled by Theano.
     The parameter -g is passed by default to g++.

 .. attribute:: cmodule.compilation_warning

     Bool value, default: False

-    If True, will print compilation warning.
+    If True, will print compilation warnings.
doc/library/scan.txt
@@ -295,25 +295,26 @@ the following:

 .. code-block:: python

-    W = theano.shared( W_values )    # we assume that ``W_values`` contains the
-                                     # initial values of your weight matrix
+    W = theano.shared(W_values)      # we assume that ``W_values`` contains the
+                                     # initial values of your weight matrix

-    bvis = theano.shared( bvis_values)
-    bhid = theano.shared( bhid_values)
+    bvis = theano.shared(bvis_values)
+    bhid = theano.shared(bhid_values)

     trng = T.shared_randomstreams.RandomStreams(1234)

-    def OneStep( vsample) :
-        hmean = T.nnet.sigmoid( theano.dot( vsample, W) + bhid)
-        hsample = trng.binomial( size = hmean.shape, n = 1, prob = hmean)
-        vmean = T.nnet.sigmoid( theano.dot( hsample. W.T) + bvis)
-        return trng.binomial( size = vsample.shape, n = 1, prob = vsample)
+    def OneStep(vsample):
+        hmean = T.nnet.sigmoid(theano.dot(vsample, W) + bhid)
+        hsample = trng.binomial(size=hmean.shape, n=1, p=hmean)
+        vmean = T.nnet.sigmoid(theano.dot(hsample, W.T) + bvis)
+        return trng.binomial(size=vsample.shape, n=1, p=vmean,
+                             dtype=theano.config.floatX)

     sample = theano.tensor.vector()

-    values, updates = theano.scan( OneStep, outputs_info = sample, n_steps = 10)
+    values, updates = theano.scan(OneStep, outputs_info=sample, n_steps=10)

     gibbs10 = theano.function([sample], values[-1], updates=updates)

 Note that if we use shared variables ( ``W``, ``bvis``, ``bhid``) but
...
@@ -335,7 +336,7 @@ afterwards. Look at this example :

 .. code-block:: python

     a = theano.shared(1)
-    values, updates = theano.scan( lambda : {a:a+1}, n_steps = 10)
+    values, updates = theano.scan(lambda: {a: a+1}, n_steps=10)

 In this case the lambda expression does not require any input parameters
 and returns an update dictionary which tells how ``a`` should be updated
...
@@ -343,9 +344,9 @@ after each step of scan. If we write :

 .. code-block:: python

     b = a + 1
     c = updates[a] + 1
-    f = theano.function([], [b, c], updates = updates)
+    f = theano.function([], [b, c], updates=updates)

     print b
     print c
...
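The scan.txt hunk above fixes the Gibbs-sampling example, which hinges on `theano.scan` applying `OneStep` repeatedly and collecting the intermediate states. As a rough, Theano-free illustration of that looping contract (the `scan` helper below is a hypothetical pure-Python sketch, not Theano's API):

```python
def scan(step, outputs_info, n_steps):
    # Apply `step` repeatedly, starting from the initial state
    # `outputs_info`, and collect the state after every step --
    # the pure-Python analogue of theano.scan's main loop.
    values = []
    state = outputs_info
    for _ in range(n_steps):
        state = step(state)
        values.append(state)
    return values

# Like `values[-1]` in the gibbs10 example: only the final state is kept.
values = scan(lambda v: v * 2, outputs_info=1, n_steps=10)
```

In the real example, `values[-1]` selects the sample after the tenth Gibbs step, just as `values[-1]` here selects the state after the tenth doubling.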
doc/library/tensor/basic.txt
@@ -646,30 +646,30 @@ dimensions, see :meth:`_tensor_py_operators.dimshuffle`.

     >>> x = T.concatenate([x0, x1[0], T.shape_padright(x2)], axis=1)
     >>> # x.ndim == 2

-.. function:: stacklist(tensor_list)
+.. function:: stacklists(tensor_list)

-    :type tensor_list: an iterable that contain tensors or iterable
-        with at the end tensors.
-    :param tensor_list: tensors to be stackend together.
+    :type tensor_list: an iterable that contains either tensors or other
+        iterables of the same type as `tensor_list` (in other words, this
+        is a tree whose leaves are tensors).
+    :param tensor_list: tensors to be stacked together.

-    Recursivly stack lists of tensors to maintain similar structure.
+    Recursively stack lists of tensors to maintain similar structure.

     This function can create a tensor from a shaped list of scalars:

     >>> from theano.tensor import stacklists, scalars, matrices
     >>> from theano import function
-    >>> a, b,c,d = scalars('abcd')
+    >>> a, b, c, d = scalars('abcd')
     >>> X = stacklists([[a, b], [c, d]])
     >>> f = function([a, b, c, d], X)
     >>> f(1, 2, 3, 4)
     >>> # array([[ 1.,  2.], [ 3.,  4.]], dtype=float32)

     We can also stack arbitrarily shaped tensors. Here we stack matrices into
-    a 2 by 2 grid.
+    a 2 by 2 grid:

     >>> from numpy import ones
-    >>> a, b,c,d, = matrices('abcd')
+    >>> a, b, c, d = matrices('abcd')
     >>> X = stacklists([[a, b], [c, d]])
     >>> f = function([a, b, c, d], X)
     >>> x = ones((4, 4), 'float32')
...
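The rewritten `stacklists` docstring describes its input as "a tree whose leaves are tensors". A hypothetical pure-Python analogue (not the Theano implementation) makes that recursion concrete:

```python
def stacklists(tree):
    # Recursively walk a tree of nested lists/tuples whose leaves are
    # numbers, producing a nested-list "tensor" of the same shape --
    # a NumPy-free sketch of what T.stacklists does symbolically.
    if isinstance(tree, (list, tuple)):
        return [stacklists(branch) for branch in tree]
    return float(tree)

# Mirrors the doctest: four scalars stacked into a 2-by-2 grid.
X = stacklists([[1, 2], [3, 4]])
```

The real function produces a symbolic tensor with one more dimension per level of nesting; the sketch only shows how the tree shape carries over to the result.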
doc/optimizations.txt
@@ -20,8 +20,8 @@ If you would like to add an additional optimization, refer to

 This list is partial.

-The print_summary method allow several OpDBs and optimizers to list the optimization executed.
-This allow to have an up-to-date list.
+The print_summary method allows several OpDBs and optimizers to list the executed optimizations.
+This makes it possible to have an up-to-date list.

 python -c 'import theano; theano.compile.FAST_RUN.optimizer.print_summary()'
...
@@ -255,6 +255,6 @@ Optimization                         FAST_RUN   FAST_COMPILE

 local_log_softmax
     This is a stabilization optimization.
-    It can happen due to rounding problem that the softmax probability of one value get to 0.
-    Taking the log of 0, would generate -inf that will probably generate NaN later.
+    It can happen due to rounding errors that the softmax probability of one value gets to 0.
+    Taking the log of 0 would generate -inf that will probably generate NaN later.
     We return a closer answer.
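The local_log_softmax note above describes the underflow the optimization guards against: computing `log(softmax(x))` naively can hit `log(0)`. A minimal plain-Python sketch of the stabilized form (not Theano's implementation):

```python
import math

def log_softmax(xs):
    # Stabilized identity: log(softmax(x)_i)
    #   = x_i - max(x) - log(sum(exp(x - max(x))))
    # Subtracting the max keeps every exp() argument <= 0, so nothing
    # overflows and no probability underflows to exactly 0 before the log.
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

# Naively, exp(1000.0) overflows and log(softmax) of the small entry
# would be log(0) = -inf; the stabilized form stays finite.
stable = log_softmax([1000.0, 0.0])
```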
doc/tutorial/extending_theano.txt
@@ -397,8 +397,8 @@ have to be jointly optimized explicitly in the code.)

 SciPy
 -----

-We can wrap SciPy function in Theano. But Scipy is an optional dependency.
-Here is some code that allow to make the op Optional:
+We can wrap SciPy functions in Theano. But SciPy is an optional dependency.
+Here is some code that allows the Op to be optional:

 .. code-block:: python
...
@@ -413,17 +413,19 @@ Here is some code that allow to make the op Optional:

     ...
     def make_node(self, x):
         assert imported_scipy, (
-            "Scipy not available. Scipy is needed for the SomeOp op.")
+            "SciPy not available. SciPy is needed for the SomeOp op.")
     ...
     from nose.plugins.skip import SkipTest
-    class test_Solve(utt.InferShapeTester):
+    class test_SomeOp(utt.InferShapeTester):
     ...
         def test_infer_shape(self):
             if not imported_scipy:
-                raise SkipTest("Scipy needed for the Cholesky op.")
+                raise SkipTest("SciPy needed for the SomeOp op.")
     ...

-Random number in tests
+Random numbers in tests
 ----------------------

 Making tests errors more reproducible is a good practice. To make your
 tests more reproducible, you need a way to get the same random
...
@@ -449,7 +451,7 @@ tutorial :ref:`Extending Theano<extending>`

 See :ref:`metadocumentation`, for some information on how to generate
 the documentation.

-Here is an example how to add docstring to an class.
+Here is an example how to add docstring to a class.

 .. code-block:: python
...
@@ -460,7 +462,7 @@ Here is an example how to add docstring to a class.

     :param x: input tensor.
-    :return: a tensor of the shape shape and dtype as the input with all
+    :return: a tensor of the same shape and dtype as the input with all
         values doubled.
     :note:
...
@@ -473,7 +475,8 @@ Here is an example how to add docstring to a class.

     .. versionadded:: 0.6
     """

-This is how it will show up for file that we auto list in the library documentation:
+This is how it will show up for files that we auto-list in the library
+documentation:

 .. automodule:: theano.misc.doubleop
...
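The pattern this hunk documents, guarding an Op behind an optional SciPy import and skipping the related tests, can be sketched outside Theano like this (the `make_node_check` helper is illustrative, standing in for the Op's `make_node`):

```python
# Optional-dependency guard: try the import once at module load time
# and remember whether it succeeded.
try:
    import scipy
    imported_scipy = True
except ImportError:
    imported_scipy = False

def make_node_check():
    # Mirrors the doc's make_node: fail loudly at graph-construction
    # time when SciPy is missing, instead of deep inside compilation.
    assert imported_scipy, (
        "SciPy not available. SciPy is needed for the SomeOp op.")
```

In the test suite the same flag drives a skip (`raise SkipTest(...)` under nose, or `unittest.SkipTest` today) so that machines without SciPy report the tests as skipped rather than failed.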
doc/tutorial/faq.txt
@@ -24,7 +24,8 @@ internals cannot be modified.

 Faster gcc optimization
 -----------------------

-You can enable faster gcc optimization with the ``cxxflags``. This list of flags was suggested on the mailing list::
+You can enable faster gcc optimization with the ``cxxflags`` option.
+This list of flags was suggested on the mailing list::

     -O3 -ffast-math -ftree-loop-distribution -funroll-loops -ftracer
...
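The flags above would typically be set through Theano's configuration file. A possible `.theanorc` fragment (assuming the `[gcc]` section name for the `cxxflags` option; check your Theano version's config documentation):

```ini
# ~/.theanorc -- hypothetical fragment; section/option names may vary
# by Theano version.
[gcc]
cxxflags = -O3 -ffast-math -ftree-loop-distribution -funroll-loops -ftracer
```

Note that `-ffast-math` relaxes IEEE semantics, so results can differ slightly from the default build.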
theano/compile/debugmode.py
@@ -886,8 +886,8 @@ def _lessbroken_deepcopy(a):
     """
     :param a: any object

-    Returns a copy of `a` that shares no internal storage with the original.
-    A deep copy.
+    Returns a copy of `a` that shares no internal storage with the original
+    (a deep copy).

     This function handles numpy arrays specially, because copy.deepcopy()
     called on a 0-d array will return a numpy scalar, not an array.
     """
...
@@ -2199,22 +2199,29 @@ class _Maker(FunctionMaker):  # inheritance buys a few helper functions

                     raise StochasticOrder(infolog.getvalue())
                 else:
                     if self.verbose:
-                        print >> sys.stderr, "OPTCHECK: optimization", i, "of", len(li), "events was stable."
+                        print >> sys.stderr, "OPTCHECK: optimization", i, \
+                            "of", len(li), "events was stable."
         else:
             fgraph0 = fgraph
             del fgraph0
         self.fgraph = fgraph
         #equivalence_tracker.printstuff()

         linker = _Linker(self)

-        #the 'no_borrow' outputs are the ones for which that we can't return the internal storage pointer.
-        no_borrow = [output for output, spec in zip(fgraph.outputs, outputs + additional_outputs) if not spec.borrow]
+        # the 'no_borrow' outputs are the ones for which that we can't return
+        # the internal storage pointer.
+        no_borrow = [output
+                     for output, spec in izip(fgraph.outputs,
+                                              outputs + additional_outputs)
+                     if not spec.borrow]
         if no_borrow:
-            self.linker = linker.accept(fgraph, no_recycling=infer_reuse_pattern(fgraph, no_borrow))
+            self.linker = linker.accept(
+                fgraph, no_recycling=infer_reuse_pattern(fgraph, no_borrow))
         else:
             self.linker = linker.accept(fgraph)
...
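The clarified `_lessbroken_deepcopy` docstring states the ordinary deep-copy contract: the copy shares no internal storage with the original (numpy 0-d arrays then need special-casing, as the docstring notes). A plain-Python illustration of that contract:

```python
import copy

# A container with nested mutable state.
a = {'weights': [1, 2, 3]}

# A deep copy shares no internal storage with the original...
b = copy.deepcopy(a)
b['weights'][0] = 99

# ...so mutating the copy leaves `a` untouched. A shallow copy
# (copy.copy) would have shared the inner list and changed both.
```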
theano/compile/mode.py
@@ -86,7 +86,7 @@ def register_linker(name, linker):

 # If a string is passed as the optimizer argument in the constructor
 # for Mode, it will be used as the key to retrieve the real optimizer
 # in this dictionary
 exclude = []
 if not theano.config.cxx:
     exclude = ['cxx_only']
 OPT_FAST_RUN = gof.Query(include=['fast_run'], exclude=exclude)
...
@@ -120,7 +120,7 @@ def register_optimizer(name, opt):

 class AddDestroyHandler(gof.Optimizer):
     """This optimizer performs two important functions:

-    1) it has a 'requirement' of the destroyhandler. This means that the fgraph
+    1) It has a 'requirement' of the destroyhandler. This means that the fgraph
     will include it as a feature for this optimization, and keep this feature
     enabled for subsequent optimizations. All optimizations that work inplace
     on any of their inputs must run *after* this optimization to ensure that
...
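The `exclude = ['cxx_only']` logic above is tag-based filtering: when no C++ compiler is configured, every optimization tagged `cxx_only` is dropped from the query. A minimal sketch of how such include/exclude selection could work (the `query` helper below is hypothetical, not `gof.Query`):

```python
def query(optimizations, include, exclude):
    # Keep optimizations that carry every tag in `include`
    # and none of the tags in `exclude`.
    return [name for name, tags in optimizations.items()
            if include <= tags and not (exclude & tags)]

# Hypothetical registry: optimization name -> set of tags.
opts = {'gpu_fusion': {'fast_run', 'cxx_only'},
        'merge':      {'fast_run'}}

# With no C++ compiler, cxx_only optimizations are excluded.
selected = query(opts, include={'fast_run'}, exclude={'cxx_only'})
```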
theano/configdefaults.py
@@ -131,9 +131,10 @@ else:
     enum = EnumStr("")

 AddConfigVar('cxx',
-             "The c++ compiler to use. Currently only g++ is"
-             " supported. But supporting more is easy if someone want this."
-             "If it is empty, we don't compile c++ code.",
+             "The C++ compiler to use. Currently only g++ is"
+             " supported, but supporting additional compilers should not be "
+             "too difficult. "
+             "If it is empty, no C++ code is compiled.",
              enum,
              in_c_key=False)
 del enum
...
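The help string above says the default is g++ when present, and the empty string (meaning "don't compile C++ code") otherwise. That detection can be sketched with a PATH lookup; this is an assumption for illustration, Theano's actual detection logic differs in detail:

```python
import shutil

# Look for g++ on PATH; fall back to the empty string, which in
# config.cxx terms means "no C++ code is compiled".
cxx = 'g++' if shutil.which('g++') else ''
```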
theano/gof/cmodule.py
@@ -45,13 +45,13 @@ AddConfigVar('cmodule.warn_no_version',
              in_c_key=False)

 AddConfigVar('cmodule.remove_gxx_opt',
-             "If True, will remove -O* parameter passed to g++."
-             "This is useful to debug in gdb module compiled by Theano."
+             "If True, will remove the -O* parameter passed to g++."
+             "This is useful to debug in gdb modules compiled by Theano."
              "The parameter -g is passed by default to g++",
              BoolParam(False))

 AddConfigVar('cmodule.compilation_warning',
-             "If True, will print compilation warning.",
+             "If True, will print compilation warnings.",
              BoolParam(False))
...
@@ -162,13 +162,15 @@ static struct PyModuleDef moduledef = {{
             MyMethods,
         }};
         """.format(name=self.hash_placeholder)
-            print >> stream, "PyMODINIT_FUNC PyInit_%s(void) {" % self.hash_placeholder
+            print >> stream, ("PyMODINIT_FUNC PyInit_%s(void) {" %
+                              self.hash_placeholder)
             for block in self.init_blocks:
                 print >> stream, '  ', block
             print >> stream, "  PyObject *m = PyModule_Create(&moduledef);"
             print >> stream, "  return m;"
         else:
-            print >> stream, "PyMODINIT_FUNC init%s(void){" % self.hash_placeholder
+            print >> stream, ("PyMODINIT_FUNC init%s(void){" %
+                              self.hash_placeholder)
             for block in self.init_blocks:
                 print >> stream, '  ', block
             print >> stream, '  ', ('(void) Py_InitModule("%s", MyMethods);'
...
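The hunk above only re-wraps the lines that emit the C init-function boilerplate, but it shows the naming convention cmodule.py generates: `PyInit_<name>` for Python 3 extension modules versus `init<name>` for Python 2. A small sketch of that convention (hypothetical helper, not cmodule's code):

```python
def module_init_decl(name, python3):
    # Emit the C declaration line for a generated extension module's
    # init function: Python 3 uses PyInit_<name>, Python 2 init<name>.
    if python3:
        return "PyMODINIT_FUNC PyInit_%s(void) {" % name
    return "PyMODINIT_FUNC init%s(void){" % name

decl = module_init_decl('lazylinker_ext', python3=True)
```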
theano/gradient.py
@@ -869,17 +869,19 @@ def _populate_grad_dict(var_to_app_to_idx,

             for o, og in zip(node.outputs, output_grads):
                 o_dt = getattr(o.type, 'dtype', None)
                 og_dt = getattr(og.type, 'dtype', None)
-                if o_dt not in theano.tensor.discrete_dtypes and og_dt and o_dt != og_dt:
+                if (o_dt not in theano.tensor.discrete_dtypes and
+                        og_dt and o_dt != og_dt):
                     new_output_grads.append(og.astype(o_dt))
                 else:
                     new_output_grads.append(og)

-            # Make sure that, if new_output_grads[i] has a floating point dtype,
-            # it is the same dtype as outputs[i]
+            # Make sure that, if new_output_grads[i] has a floating point
+            # dtype, it is the same dtype as outputs[i]
             for o, ng in zip(node.outputs, new_output_grads):
                 o_dt = getattr(o.type, 'dtype', None)
                 ng_dt = getattr(ng.type, 'dtype', None)
-                if ng_dt is not None and o_dt not in theano.tensor.discrete_dtypes:
+                if (ng_dt is not None and
+                        o_dt not in theano.tensor.discrete_dtypes):
                     assert ng_dt == o_dt

             # Someone who had obviously not read the Op contract tried
...
@@ -890,7 +892,8 @@ def _populate_grad_dict(var_to_app_to_idx,

             # 2) Talk to Ian Goodfellow
             # (Both of these sources will tell you not to do it)
             for ng in new_output_grads:
-                assert getattr(ng.type, 'dtype', None) not in theano.tensor.discrete_dtypes
+                assert (getattr(ng.type, 'dtype', None)
+                        not in theano.tensor.discrete_dtypes)

             input_grads = node.op.grad(inputs, new_output_grads)
...
@@ -908,7 +911,6 @@ def _populate_grad_dict(var_to_app_to_idx,

             # Do type checking on the result
             # List of bools indicating if each input only has integer outputs
             only_connected_to_int = [(True not in
                 [in_to_out and out_to_cost and not out_int
...
@@ -916,7 +918,6 @@ def _populate_grad_dict(var_to_app_to_idx,

                  zip(in_to_outs, outputs_connected, output_is_int)])
                 for in_to_outs in connection_pattern]

             for i, term in enumerate(input_grads):
                 # Disallow Nones
...
@@ -933,7 +934,6 @@ def _populate_grad_dict(var_to_app_to_idx,

                         'the grad_undefined or grad_unimplemented helper '
                         'functions.') % node.op)

                 if not isinstance(term.type,
                                   (NullType, DisconnectedType)):
                     if term.type.dtype not in theano.tensor.float_dtypes:
...
@@ -973,8 +973,8 @@ def _populate_grad_dict(var_to_app_to_idx,

                         msg += "evaluate to zeros, but it evaluates to"
                         msg += " %s."
-                        msg % (str(node.op), str(term), str(type(term)), i,
-                               str(theano.get_scalar_constant_value(term)))
+                        msg % (node.op, term, type(term), i,
+                               theano.get_scalar_constant_value(term))
                         raise ValueError(msg)
...
@@ -1010,8 +1010,6 @@ def _populate_grad_dict(var_to_app_to_idx,

                 #cache the result
                 term_dict[node] = input_grads

             return term_dict[node]

         # populate grad_dict[var] and return it
...
@@ -1040,7 +1038,7 @@ def _populate_grad_dict(var_to_app_to_idx,

                     if isinstance(term.type, DisconnectedType):
                         continue
                     if hasattr(var, 'ndim') and term.ndim != var.ndim:
                         raise ValueError(("%s.grad returned a term with"
                                           " %d dimensions, but %d are required.") % (
                                               str(node.op), term.ndim, var.ndim))
...
@@ -1058,8 +1056,8 @@ def _populate_grad_dict(var_to_app_to_idx,

             if cost_name is not None and var.name is not None:
                 grad_dict[var].name = '(d%s/d%s)' % (cost_name, var.name)
         else:
-            # this variable isn't connected to the cost in the computational
-            # graph
+            # this variable isn't connected to the cost in the
+            # computational graph
             grad_dict[var] = DisconnectedType()()
         # end if cache miss
         return grad_dict[var]
...
@@ -1068,6 +1066,7 @@ def _populate_grad_dict(var_to_app_to_idx,

     return rval

 def _float_zeros_like(x):
     """ Like zeros_like, but forces the object to have a
     a floating point dtype """
...
@@ -1317,9 +1316,9 @@ def verify_grad(fun, pt, n_tests=2, rng=None, eps=None,

     :param eps: stepsize used in the Finite Difference Method (Default
                 None is type-dependent)
                 Raising the value of eps can raise or lower the absolute and
-                relative error of the verification depending of the
-                Op. Raising the eps do not lower the verification quality. It
-                is better to raise eps then raising abs_tol or rel_tol.
+                relative errors of the verification depending on the
+                Op. Raising eps does not lower the verification quality. It
+                is better to raise eps than raising abs_tol or rel_tol.
     :param out_type: dtype of output, if complex (i.e. 'complex32' or
                      'complex64')
     :param abs_tol: absolute tolerance used as threshold for gradient
...
@@ -1599,6 +1598,7 @@ def hessian(cost, wrt, consider_constant=None,

         hessians.append(hess)
     return format_as(using_list, using_tuple, hessians)

 def _is_zero(x):
     """
     Returns 'yes', 'no', or 'maybe' indicating whether x
...
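The corrected `verify_grad` docstring discusses the `eps` stepsize of the Finite Difference Method used to check analytic gradients. A one-variable sketch of that method (illustrative helper, not Theano's `verify_grad`):

```python
def finite_diff_grad(f, x, eps=1e-6):
    # Central finite difference: (f(x+eps) - f(x-eps)) / (2*eps).
    # A larger eps increases truncation error but reduces round-off
    # error, which is why raising eps can raise or lower the measured
    # error depending on the function, as the docstring notes.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# For f(x) = x**2 the analytic gradient at x=3 is 2*x = 6.
g = finite_diff_grad(lambda v: v * v, 3.0)
```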
theano/misc/doubleop.py
@@ -8,7 +8,7 @@ class DoubleOp(theano.Op):

     :param x: input tensor.
-    :return: a tensor of the shape shape and dtype as the input with all
+    :return: a tensor of the same shape and dtype as the input with all
         values doubled.
     :note:
...
@@ -46,8 +46,8 @@ class DoubleOp(theano.Op):

     def R_op(self, inputs, eval_points):
         # R_op can receive None as eval_points.
-        # That mean there is no diferientiable path through that input
-        # If this imply that you cannot compute some outputs,
+        # That means there is no differentiable path through that input.
+        # If this implies that you cannot compute some outputs,
         # return None for those.
         if eval_points[0] is None:
             return eval_points
...
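The corrected R_op comment can be illustrated with a list-based sketch. The `double_r_op` function below is hypothetical, standing in for `DoubleOp.R_op` on plain numbers instead of symbolic variables:

```python
def double_r_op(inputs, eval_points):
    # An eval point of None means there is no differentiable path
    # through that input, so propagate None for the outputs that
    # cannot be computed.
    if eval_points[0] is None:
        return eval_points
    # DoubleOp computes 2*x, which is linear, so its R-operator
    # (Jacobian-vector product) doubles the eval point as well.
    return [2 * ep for ep in eval_points]
```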
theano/sandbox/cuda/__init__.py (View file @ aeb68ef2)
...
@@ -244,7 +244,8 @@ class GpuOp(theano.gof.Op):
         return super(GpuOp, self).make_thunk(node, storage_map,
                                              compute_map, no_recycling)
 
-theano.compile.debugmode.default_make_thunk.append(get_unbound_function(GpuOp.make_thunk))
+theano.compile.debugmode.default_make_thunk.append(
+    get_unbound_function(GpuOp.make_thunk))
 
 # We must do those import to be able to create the full doc when
 # nvcc is not available
...
@@ -271,7 +272,8 @@ if cuda_available:
     shared_constructor = float32_shared_constructor
 
     import basic_ops
     from basic_ops import (GpuFromHost, HostFromGpu, GpuElemwise,
                            GpuDimShuffle, GpuCAReduce, GpuReshape, GpuContiguous,
                            GpuSubtensor, GpuIncSubtensor,
                            GpuAdvancedSubtensor1, GpuAdvancedIncSubtensor1,
...
@@ -388,16 +390,17 @@ def use(device,
             cuda_enabled = True
             if config.print_active_device:
                 print >> sys.stderr, "Using gpu device %d: %s" % (
                     active_device_number(), active_device_name())
             if device_properties(use.device_number)['regsPerBlock'] < 16384:
                 # We will try to use too much register per bloc at many places
                 # when there is only 8k register per multi-processor.
-                _logger.warning("You are probably using an old GPU."
-                                " We didn't optimize nor we support those GPU."
-                                " This mean GPU code will be slow AND will"
-                                " crash when we try to use feature/properties"
-                                " that your GPU don't support.")
+                _logger.warning("You are probably using an old GPU, that Theano"
+                                " does not support."
+                                " This means GPU code will most likely be slow AND may"
+                                " crash when we try to use features"
+                                " that your GPU does not support.")
     except (EnvironmentError, ValueError, RuntimeError), e:
         _logger.error(("ERROR: Not using GPU."
...
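The warning rewritten in this hunk is built from adjacent string literals, which Python concatenates at compile time; each continuation line must begin with a space or the words run together (the same class of bug fixed in `scan_op.py` below). A small self-contained illustration (the logger name is arbitrary):

```python
import logging

_logger = logging.getLogger("example.cuda")

# Adjacent string literals are joined at compile time; note the leading
# space on each continuation line, otherwise words would run together.
message = ("You are probably using an old GPU, that Theano"
           " does not support."
           " This means GPU code will most likely be slow AND may"
           " crash when we try to use features"
           " that your GPU does not support.")

_logger.warning(message)
print(message)
```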
theano/scan_module/scan_op.py (View file @ aeb68ef2)
...
@@ -228,7 +228,7 @@ class Scan(PureOp):
         )
         err_msg2 = ('When compiling the inner function of scan the '
                     'following error has been encountered: The '
-                    'initial state (outputs_info in scan nomenclature)'
+                    'initial state (outputs_info in scan nomenclature) '
                     'of variable %s (argument number %d)'
                     ' has dtype %s and %d dimension(s), while the result '
                     'of the inner function for this output has dtype %s '
...
@@ -1387,6 +1387,7 @@ class Scan(PureOp):
                      self.inner_nitsot_outs(self_outputs))
         scan_node = outs[0].owner
         connection_pattern = self.connection_pattern(scan_node)
+
         def get_inp_idx(iidx):
             if iidx < self.n_seqs:
                 return 1 + iidx
...
@@ -1428,10 +1429,10 @@ class Scan(PureOp):
             odx = get_out_idx(self_outputs.index(y))
             wrt = [x for x in theano.gof.graph.inputs([y])
                    if (x in diff_inputs) and
-                   (connection_pattern[get_inp_idx(self_inputs.index(x))][odx])
+                   connection_pattern[get_inp_idx(self_inputs.index(x))][odx]
                    ]
             grads = gradient.grad(
                 cost=None,
                 known_grads={y: g_y},
                 wrt=wrt, consider_constant=wrt,
                 disconnected_inputs='ignore',
                 return_disconnected='None')
...
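The list comprehension adjusted in the last hunk keeps only the inputs that `connection_pattern` marks as connected to output `odx`. The selection logic can be sketched with plain lists (names mirror the diff, but this is not Scan's actual code):

```python
# connection_pattern[i][o] is True when input i can affect output o.
connection_pattern = [
    [True,  False],   # input 0 feeds output 0 only
    [False, True],    # input 1 feeds output 1 only
    [True,  True],    # input 2 feeds both outputs
]

def connected_inputs(inputs, odx):
    """Return the inputs connected to output `odx` (cf. the `wrt` list)."""
    return [x for i, x in enumerate(inputs)
            if connection_pattern[i][odx]]

inputs = ["seq", "init_state", "non_seq"]
print(connected_inputs(inputs, 0))
print(connected_inputs(inputs, 1))
```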
theano/tensor/basic.py (View file @ aeb68ef2)
...
@@ -8236,13 +8236,14 @@ def diag(v, k=0):
 def stacklists(arg):
-    """ Recursivly stack lists of tensors to maintain similar structure
+    """
+    Recursively stack lists of tensors to maintain similar structure.
 
     This function can create a tensor from a shaped list of scalars:
 
     >>> from theano.tensor import stacklists, scalars, matrices
     >>> from theano import function
-    >>> a, b,c, d = scalars('abcd')
+    >>> a, b, c, d = scalars('abcd')
     >>> X = stacklists([[a, b], [c, d]])
     >>> f = function([a, b, c, d], X)
     >>> f(1, 2, 3, 4)
...
@@ -8250,10 +8251,10 @@ def stacklists(arg):
            [ 3.,  4.]], dtype=float32)
 
     We can also stack arbitrarily shaped tensors. Here we stack matrices into
-    a 2 by 2 grid.
+    a 2 by 2 grid:
 
     >>> from numpy import ones
-    >>> a, b,c,d, = matrices('abcd')
+    >>> a, b, c, d = matrices('abcd')
     >>> X = stacklists([[a, b], [c, d]])
     >>> f = function([a, b, c, d], X)
     >>> x = ones((4, 4), 'float32')
...
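`stacklists` recursively turns a nested list of tensors into one higher-rank tensor. Its effect on concrete values can be mimicked with NumPy; the sketch below shows the shapes from the docstring above, but it is not Theano's symbolic implementation:

```python
import numpy as np

def stacklists_np(arg):
    """Recursively stack nested lists of arrays/scalars, mimicking the
    behaviour documented for theano.tensor.stacklists (NumPy sketch)."""
    if isinstance(arg, (list, tuple)):
        return np.stack([stacklists_np(a) for a in arg])
    return np.asarray(arg)

# A shaped list of scalars becomes a 2x2 matrix:
X = stacklists_np([[1, 2], [3, 4]])
print(X.shape)

# Matrices stacked into a 2 by 2 grid become a 4-d tensor:
m = np.ones((4, 4), dtype="float32")
Y = stacklists_np([[m, m], [m, m]])
print(Y.shape)
```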
theano/tensor/nnet/Conv3D.py (View file @ aeb68ef2)
...
@@ -560,7 +560,7 @@ conv3D = Conv3D()
     :param b: bias, shape == (W.shape[0],)
     :param d: strides when moving the filter over the input(dx, dy, dt)
 
-    :note: The order of dimensions do not correspond with the one in `conv2d`.
+    :note: The order of dimensions does not correspond to the one in `conv2d`.
            This is for optimization.
     """
...
theano/tests/run_tests_in_batch.py (View file @ aeb68ef2)
...
@@ -103,7 +103,7 @@ def main(stdout=None, stderr=None, argv=None, theano_nose=None,
                 theano_nose = path
                 break
     if theano_nose is None:
-        raise Exception("Not able to find theano-nose")
+        raise Exception("Unable to find theano-nose")
     if batch_size is None:
         batch_size = 100
     stdout_backup = sys.stdout
...
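The code around this hunk searches a list of candidate paths for the `theano-nose` script and raises if none exists. The same pattern can be sketched in a few lines, with a `PATH` fallback via `shutil.which` (the `find_script` helper and the paths are illustrative, not the project's actual code):

```python
import os
import shutil
import tempfile

def find_script(name, candidate_paths):
    """Return the first existing candidate path, mirroring the search in
    run_tests_in_batch.py; raise if the script cannot be found."""
    for path in candidate_paths:
        if os.path.exists(path):
            return path
    # Fall back to searching PATH, like `shutil.which` does for commands.
    found = shutil.which(name)
    if found is None:
        raise Exception("Unable to find %s" % name)
    return found

# Demo with a temporary file standing in for the script:
tmpdir = tempfile.mkdtemp()
script = os.path.join(tmpdir, "theano-nose")
open(script, "w").close()
print(find_script("theano-nose", [script]))
```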