testgroup / pytensor

Commit 3e303fc9
Authored Aug 13, 2015 by Arnaud Bergeron

Fixup tutorial/* and remove matching tests.

Parent: a5e70754

Showing 13 changed files with 122 additions and 221 deletions (+122 -221)
doc/tutorial/adding.txt                 +1    -7
doc/tutorial/aliasing.txt               +6   -11
doc/tutorial/conditions.txt             +3    -3
doc/tutorial/debug_faq.txt             +31   -14
doc/tutorial/examples.txt              +32   -41
doc/tutorial/extending_theano.txt       +6    -6
doc/tutorial/extending_theano_c.txt     +2    -2
doc/tutorial/gpu_data_convert.txt       +2    -2
doc/tutorial/loading_and_saving.txt    +11    -1
doc/tutorial/modes.txt                  +6   -50
doc/tutorial/symbolic_graphs.txt        +2    -2
doc/tutorial/using_gpu.txt             +20   -82
theano/tests/test_tutorial.py           +0    -0
doc/tutorial/adding.txt
...
...
@@ -11,9 +11,6 @@ To get us started with Theano and get a feel of what we're working with,
let's make a simple function: add two numbers together. Here is how you do
it:
-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_adding.test_adding_1
>>> import theano.tensor as T
>>> from theano import function
>>> x = T.dscalar('x')
...
...
@@ -150,9 +147,6 @@ You might already have guessed how to do this. Indeed, the only change
from the previous example is that you need to instantiate *x* and
*y* using the matrix Types:
-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_adding.test_adding_2
>>> x = T.dmatrix('x')
>>> y = T.dmatrix('y')
>>> z = x + y
...
...
@@ -207,7 +201,7 @@ Exercise
a = theano.tensor.vector() # declare variable
out = a + a ** 10 # build symbolic expression
f = theano.function([a], out) # compile function
-    print f([0, 1, 2])
+    print(f([0, 1, 2]))
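The fix above only modernizes the ``print`` call; the computation in the exercise is elementwise ``a + a ** 10``. A Theano-free sketch of what the compiled function returns (plain Python, for illustration only):

```python
def f(a):
    # elementwise a + a ** 10, mirroring the symbolic expression in the exercise
    return [v + v ** 10 for v in a]

print(f([0, 1, 2]))  # [0, 2, 1026]
```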
.. testoutput::
...
...
doc/tutorial/aliasing.txt
...
...
@@ -55,10 +55,6 @@ Borrowing when Creating Shared Variables
A ``borrow`` argument can be provided to the shared-variable constructor.
-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_aliasing.test_aliasing_1
.. testcode:: borrow
import numpy, theano
...
...
@@ -124,9 +120,6 @@ A ``borrow`` argument can also be used to control how a ``shared`` variable's va
retrieved.
-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_aliasing.test_aliasing_2
.. testcode:: borrow
s = theano.shared(np_array)
...
...
@@ -185,6 +178,11 @@ that Theano *may* reuse the buffer you provide as the internal storage for the v
A standard pattern for manually updating the value of a ``shared`` variable is as
follows:
+.. testsetup:: borrow
+
+    def some_inplace_fn(v):
+        return v
.. testcode:: borrow
s.set_value(
...
...
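The borrow semantics this file documents can be sketched without Theano. Below is a toy container (a hypothetical ``SharedValue`` class, not Theano's actual implementation) showing copy-by-default versus aliasing when ``borrow=True``:

```python
class SharedValue:
    # toy container sketching the borrow semantics described above; not Theano's API
    def __init__(self, value, borrow=False):
        self.container = value if borrow else list(value)

    def set_value(self, value, borrow=False):
        self.container = value if borrow else list(value)

    def get_value(self, borrow=False):
        return self.container if borrow else list(self.container)

v = [0.0, 0.0]
s = SharedValue(v)                 # borrow=False (default): internal copy
v[0] = 42.0                        # mutating the caller's list...
print(s.get_value())               # [0.0, 0.0] ...does not reach internal storage

s2 = SharedValue(v, borrow=True)   # borrow=True: container aliases v
v[1] = 7.0
print(s2.get_value())              # [42.0, 7.0]
```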
@@ -231,9 +229,6 @@ Borrowing when Constructing Function Objects
A ``borrow`` argument can also be provided to the ``In`` and ``Out`` objects
that control how ``theano.function`` handles its argument[s] and return value[s].
-.. If you modify this code, also change :
-.. theano/tests/test_tutorial.py:T_aliasing.test_aliasing_3
.. testcode::
import theano, theano.tensor
...
...
@@ -268,7 +263,7 @@ graph.
For GPU graphs, this borrowing can have a major speed impact. See the following code:
-.. testcode::
+.. code-block:: python
from theano import function, config, shared, sandbox, tensor, Out
import numpy
...
...
doc/tutorial/conditions.txt
...
...
@@ -52,9 +52,6 @@ IfElse vs Switch
f_lazyifelse(val1, val2, big_mat1, big_mat2)
print 'time spent evaluating one value %f sec' % (time.clock() - tic)
-In this example, the ``IfElse`` op spends less time (about half as much) than ``Switch``
-since it computes only one variable out of the two.
.. testoutput::
:hide:
:options: +ELLIPSIS
...
...
@@ -62,6 +59,9 @@ since it computes only one variable out of the two.
time spent evaluating both values ... sec
time spent evaluating one value ... sec
+In this example, the ``IfElse`` op spends less time (about half as much) than ``Switch``
+since it computes only one variable out of the two.
.. code-block:: none
$ python ifelse_switch.py
...
...
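The lazy-versus-eager distinction behind ``IfElse`` and ``Switch`` can be illustrated without Theano; ``switch`` and ``ifelse`` below are plain-Python stand-ins for the idea, not Theano's ops:

```python
def switch(cond, a_fn, b_fn):
    # eager, like Theano's Switch: both branches are evaluated
    a, b = a_fn(), b_fn()
    return a if cond else b

def ifelse(cond, a_fn, b_fn):
    # lazy, like Theano's IfElse: only the taken branch is evaluated
    return a_fn() if cond else b_fn()

evaluated = []

def branch(name, value):
    # record which branches actually run
    return lambda: (evaluated.append(name), value)[1]

switch(True, branch('a', 1), branch('b', 2))   # evaluates both 'a' and 'b'
ifelse(True, branch('a', 1), branch('b', 2))   # evaluates only 'a'
print(evaluated)  # ['a', 'b', 'a']
```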
doc/tutorial/debug_faq.txt
...
...
@@ -39,14 +39,10 @@ messages. Consider the following faulty code.
Running the code above we see:
.. testoutput::
:options: +ELLIPSIS
Traceback (most recent call last):
File "test0.py", line 10, in <module>
f(np.ones((2,)), np.ones((3,)))
-  File "/PATH_TO_THEANO/theano/compile/function_module.py", line 605, in __call__
-    self.fn.thunks[self.fn.position_of_error])
-  File "/PATH_TO_THEANO/theano/compile/function_module.py", line 595, in __call__
-    outputs = self.fn()
...
ValueError: Input dimension mis-match. (input[0].shape[0] = 3, input[1].shape[0] = 2)
Apply node that caused the error: Elemwise{add,no_inplace}(<TensorType(float64, vector)>, <TensorType(float64, vector)>, <TensorType(float64, vector)>)
Inputs types: [TensorType(float64, vector), TensorType(float64, vector), TensorType(float64, vector)]
...
...
@@ -71,7 +67,7 @@ the faulty line, while ``exception_verbosity=high`` will display a
debugprint of the apply node. Using these hints, the end of the error
message becomes :
-.. testoutput::
+.. code-block:: none
Backtrace when the node is created:
File "test0.py", line 8, in <module>
...
...
@@ -101,7 +97,7 @@ following example. Here, we use ``exception_verbosity=high`` and
``optimizer=None`` would and it could therefore be used instead of test values.
-.. testcode:: testvalues
+.. testcode:: testvalue
import numpy
import theano
...
...
@@ -137,7 +133,7 @@ following example. Here, we use ``exception_verbosity=high`` and
Running the above code generates the following error message:
-.. testoutput:: testvalues
+.. testoutput:: testvalue
Traceback (most recent call last):
File "test1.py", line 31, in <module>
...
...
@@ -166,7 +162,7 @@ Running the above code generates the following error message:
If the above is not informative enough, by instrumenting the code ever
so slightly, we can get Theano to reveal the exact source of the error.
-.. testcode:: testvalues
+.. code-block:: python
# enable on-the-fly graph computations
theano.config.compute_test_value = 'warn'
...
...
@@ -185,7 +181,7 @@ of error can thus be identified with much more precision and much earlier in
the compilation pipeline. For example, running the above code yields the
following error message, which properly identifies *line 24* as the culprit.
-.. testoutput:: testvalues
+.. code-block:: node
Traceback (most recent call last):
File "test2.py", line 24, in <module>
...
...
@@ -393,6 +389,7 @@ can be achieved as follows:
f(0) # log(0) * 0 = -inf * 0 = NaN
.. testoutput:: compiled
+   :options: +NORMALIZE_WHITESPACE
*** NaN detected ***
Elemwise{Composite{(log(i0) * i0)}} [@A] ''
...
...
@@ -430,12 +427,11 @@ the execution of the node can garbage collect its inputs that aren't
needed anymore by the Theano function. This can be done with the Theano
flag:
-.. testcode:: compiled
+.. code-block:: python
allow_gc=False
.. TODO: documentation for link.WrapLinkerMany
...
...
@@ -468,11 +464,32 @@ Consider this example script ("ex.py"):
f(mat1, mat2)
+.. testoutput::
+   :hide:
+   :options: +ELLIPSIS
+
+   Traceback (most recent call last):
+   ...
+   ValueError: Input dimension mis-match. (input[0].shape[0] = 3, input[1].shape[0] = 5)
+   Apply node that caused the error: Elemwise{mul,no_inplace}(a, b)
+   Toposort index: 0
+   Inputs types: [TensorType(float64, matrix), TensorType(float64, matrix)]
+   Inputs shapes: [(3, 4), (5, 5)]
+   Inputs strides: [(32, 8), (40, 8)]
+   Inputs values: ['not shown', 'not shown']
+   Outputs clients: [['output']]
+   Backtrace when the node is created:
+     File "<doctest default[0]>", line 8, in <module>
+       f = theano.function([a, b], [a * b])
+   HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
This is actually so simple the debugging could be done easily, but it's for
illustrative purposes. As the matrices can't be multiplied element-wise
(unsuitable shapes), we get the following exception:
-.. testoutput::
+.. code-block:: none
File "ex.py", line 14, in <module>
f(mat1, mat2)
...
...
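The mis-match error shown above comes from elementwise multiplication of incompatible shapes. A pure-Python stand-in (a hypothetical ``elemwise_mul`` helper, not Theano's Elemwise) that raises an analogous ``ValueError``:

```python
def elemwise_mul(a, b):
    # minimal stand-in for Elemwise{mul}: lengths must match exactly
    if len(a) != len(b):
        raise ValueError(
            "Input dimension mis-match. (input[0].shape[0] = %d, input[1].shape[0] = %d)"
            % (len(a), len(b)))
    return [x * y for x, y in zip(a, b)]

try:
    elemwise_mul([1.0] * 3, [1.0] * 5)
except ValueError as e:
    print(e)  # Input dimension mis-match. (input[0].shape[0] = 3, input[1].shape[0] = 5)
```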
doc/tutorial/examples.txt
...
...
@@ -412,54 +412,46 @@ corresponding to the random number generation process (i.e. RandomFunction{unifo
An example of how "random states" can be transferred from one theano function
to another is shown below.
-.. testcode::
-
-    from __future__ import print_function
-    import theano
-    import numpy
-    import theano.tensor as T
-    from theano.sandbox.rng_mrg import MRG_RandomStreams
-    from theano.tensor.shared_randomstreams import RandomStreams
-
-    class Graph():
-        def __init__(self, seed=123):
-            self.rng = RandomStreams(seed)
-            self.y = self.rng.uniform(size=(1,))
-
-    g1 = Graph(seed=123)
-    f1 = theano.function([], g1.y)
-
-    g2 = Graph(seed=987)
-    f2 = theano.function([], g2.y)
-
-    print('By default, the two functions are out of sync.')
-    print("f1() returns ", end=" "); print(f1())
-    print("f2() returns ", end=" "); print(f2())
-
-    def copy_random_state(g1, g2):
-        if isinstance(g1.rng, MRG_RandomStreams):
-            g2.rng.rstate = g1.rng.rstate
-        for (su1, su2) in zip(g1.rng.state_updates, g2.rng.state_updates):
-            su2[0].set_value(su1[0].get_value())
-
-    print('We now copy the state of the theano random number generators.')
-    copy_random_state(g1, g2)
-    print("f1() returns ", end=" "); print(f1())
-    print("f2() returns ", end=" "); print(f2())
-
-This gives the following output:
-
-.. testoutput::
-
-    By default, the two functions are out of sync.
-    f1() returns [ 0.72803009]
-    f2() returns [ 0.55056769]
-    We now copy the state of the theano random number generators.
-    f1() returns [ 0.59044123]
-    f2() returns [ 0.59044123]
+>>> from __future__ import print_function
+>>> import theano
+>>> import numpy
+>>> import theano.tensor as T
+>>> from theano.sandbox.rng_mrg import MRG_RandomStreams
+>>> from theano.tensor.shared_randomstreams import RandomStreams
+
+>>> class Graph():
+...     def __init__(self, seed=123):
+...         self.rng = RandomStreams(seed)
+...         self.y = self.rng.uniform(size=(1,))
+
+>>> g1 = Graph(seed=123)
+>>> f1 = theano.function([], g1.y)
+
+>>> g2 = Graph(seed=987)
+>>> f2 = theano.function([], g2.y)
+
+>>> # By default, the two functions are out of sync.
+>>> f1()
+array([ 0.72803009])
+>>> f2()
+array([ 0.55056769])
+
+>>> def copy_random_state(g1, g2):
+...     if isinstance(g1.rng, MRG_RandomStreams):
+...         g2.rng.rstate = g1.rng.rstate
+...     for (su1, su2) in zip(g1.rng.state_updates, g2.rng.state_updates):
+...         su2[0].set_value(su1[0].get_value())
+
+>>> # We now copy the state of the theano random number generators.
+>>> copy_random_state(g1, g2)
+>>> f1()
+array([ 0.59044123])
+>>> f2()
+array([ 0.59044123])
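The state-copying idea behind ``copy_random_state`` is not Theano-specific. The same pattern with Python's standard ``random`` module, shown only as an analogy (``getstate``/``setstate`` play the role of copying the stream's internal state):

```python
import random

g1 = random.Random(123)
g2 = random.Random(987)

# different seeds: the two streams are out of sync
print(g1.random() == g2.random())  # False

# copy the complete generator state, the same idea as copy_random_state above
g2.setstate(g1.getstate())
print(g1.random() == g2.random())  # True: the streams stay in lockstep
```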
Other Random Distributions
--------------------------
There are :ref:`other distributions implemented <libdoc_tensor_raw_random>`.
...
...
@@ -545,10 +537,9 @@ It will be used repeatedly.
Initial model:
...
...
0.0
Final model:
...
...
target values for D:
...
prediction on D:
...
...
doc/tutorial/extending_theano.txt
...
...
@@ -73,9 +73,9 @@ possibilities you may encounter or need. For that refer to
# Other type of implementation
# C implementation: [see theano web site for other functions]
-    def c_code(...):
-        # ...
+    def c_code(self, node, inputs, outputs, sub):
+        pass
# Other implementations (pycuda, ...):
def make_thunk(self, node, storage_map, _, _2):
pass
...
...
@@ -83,7 +83,7 @@ possibilities you may encounter or need. For that refer to
# optional:
check_input = True
-    def __init__(self, ...):
+    def __init__(self, *args):
pass
def grad(self, inputs, g):
...
...
@@ -92,7 +92,7 @@ possibilities you may encounter or need. For that refer to
def R_op(self, inputs, eval_points):
pass
-    def infer_shape(node, (i0_shapes, ...)):
+    def infer_shape(node, input_shapes):
pass
.. ../extending/op.txt
...
...
@@ -684,8 +684,8 @@ You can try it as follows:
x = theano.tensor.fmatrix()
y = theano.tensor.fmatrix()
f = function([x, y], numpy_dot(x, y))
-    inp1 = numpy.random.rand(5, 4)
-    inp2 = numpy.random.rand(4, 7)
+    inp1 = numpy.random.rand(5, 4).astype('float32')
+    inp2 = numpy.random.rand(4, 7).astype('float32')
out = f(inp1, inp2)
...
...
doc/tutorial/extending_theano_c.txt
...
...
@@ -895,7 +895,7 @@ defined to False. In these descrptions 'i' refers to the position
corresponds to ``npy_float32`` and can directly be used to declare a
new variable of the same dtype as the data in the array :
-.. testcode::
+.. code-block:: c
DTYPE_INPUT_0 myVar = someValue;
...
...
@@ -914,7 +914,7 @@ In addition to these macros, the ``init_code_struct``, ``code``, and
* ``FAIL`` : Code to insert at error points. A python exception
should be set prior to this code. An invocation look like this:
-.. testcode::
+.. code-block:: c
if (error) {
// Set python exception
...
...
doc/tutorial/gpu_data_convert.txt
...
...
@@ -36,7 +36,7 @@ Compiling with PyCUDA
You can use PyCUDA to compile CUDA functions that work directly on
CudaNdarrays. Here is an example from the file ``theano/misc/tests/test_pycuda_theano_simple.py``:
-.. testcode::
+.. code-block:: python
import sys
import numpy
...
...
@@ -78,7 +78,7 @@ Theano Op using a PyCUDA function
You can use a GPU function compiled with PyCUDA in a Theano op:
-.. testcode::
+.. code-block:: python
import numpy, theano
import theano.misc.pycuda_init
...
...
doc/tutorial/loading_and_saving.txt
...
...
@@ -38,6 +38,10 @@ The two modules ``pickle`` and ``cPickle`` have the same functionalities, but
You can serialize (or *save*, or *pickle*) objects to a file with
``cPickle.dump``:
+.. testsetup::
+
+    my_obj = object()
>>> f = file('obj.save', 'wb')
>>> cPickle.dump(my_obj, f, protocol=cPickle.HIGHEST_PROTOCOL)
>>> f.close()
...
...
@@ -64,6 +68,12 @@ To de-serialize (or *load*, or *unpickle*) a pickled file, use
You can pickle several objects into the same file, and load them all (in the
same order):
+.. testsetup::
+
+    obj1 = object()
+    obj2 = object()
+    obj3 = object()
>>> f = file('objects.save', 'wb')
>>> for obj in [obj1, obj2, obj3]:
... cPickle.dump(obj, f, protocol=cPickle.HIGHEST_PROTOCOL)
...
...
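On Python 3 the ``cPickle`` module is gone. The same multi-object pattern, written with the standard ``pickle`` module and an in-memory buffer instead of a file (a sketch, not the tutorial's exact code):

```python
import io
import pickle

# serialize several objects into one stream, then load them back in the same order
buf = io.BytesIO()
for obj in [1, 'two', [3.0]]:
    pickle.dump(obj, buf, protocol=pickle.HIGHEST_PROTOCOL)

buf.seek(0)
loaded = [pickle.load(buf) for _ in range(3)]
print(loaded)  # [1, 'two', [3.0]]
```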
@@ -127,7 +137,7 @@ The main advantage of this approach is that you don't even need Theano installed
in order to look at the values of shared variables that you pickled. You can
just load the parameters manually with `numpy`.
-.. testcode::
+.. code-block:: python
import numpy
numpy.load('model.zip')
...
...
doc/tutorial/modes.txt
...
...
@@ -63,8 +63,6 @@ Consider the logistic regression:
b = theano.shared(numpy.asarray(0., dtype=theano.config.floatX), name="b")
x.tag.test_value = D[0]
y.tag.test_value = D[1]
-    #print "Initial model:"
-    #print w.get_value(), b.get_value()
# Construct Theano expression graph
p_1 = 1 / (1 + T.exp(-T.dot(x, w)-b)) # Probability of having a one
...
...
@@ -77,7 +75,7 @@ Consider the logistic regression:
train = theano.function(
inputs=[x,y],
outputs=[prediction, xent],
-              updates={w:w-0.01*gw, b:b-0.01*gb},
+              updates=[(w, w-0.01*gw), (b, b-0.01*gb)],
name = "train")
predict = theano.function(inputs=[x], outputs=prediction,
name = "predict")
...
...
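The change from a dict of updates to a list of pairs makes the update order explicit and deterministic. A toy illustration of the ``(variable, new_expression)`` pair semantics with plain floats (the numbers here are made up for illustration):

```python
# gradient-descent style updates expressed as ordered (name, expression) pairs,
# mirroring the list-of-tuples form the diff switches to
w, b, gw, gb = 1.0, 0.5, 0.2, 0.1
updates = [('w', w - 0.01 * gw), ('b', b - 0.01 * gb)]

# all right-hand sides were computed from the old values, so applying them
# together behaves like Theano's simultaneous updates
new_values = dict(updates)
print(new_values['w'], new_values['b'])
```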
@@ -94,8 +92,6 @@ Consider the logistic regression:
for i in range(training_steps):
pred, err = train(D[0], D[1])
-    #print "Final model:"
-    #print w.get_value(), b.get_value()
print("target values for D")
print(D[1])
...
...
@@ -108,52 +104,11 @@ Consider the logistic regression:
:options: +ELLIPSIS
Used the cpu
-   targe
-   values for D
+   target
+   values for D
...
prediction on D
...
.. code-block:: none
Used the cpu
target values for D
[ 0. 0. 1. 0. 1. 1. 0. 1. 0. 1. 0. 1. 0. 0. 0. 0. 1. 0.
1. 0. 1. 1. 0. 1. 0. 0. 1. 1. 1. 0. 1. 0. 0. 1. 1. 1.
0. 0. 0. 0. 0. 1. 1. 1. 0. 0. 1. 1. 1. 1. 1. 0. 0. 1.
0. 1. 0. 0. 1. 1. 0. 1. 1. 1. 1. 0. 0. 1. 1. 0. 1. 1.
1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.
1. 0. 1. 0. 1. 0. 0. 0. 1. 1. 0. 1. 0. 1. 0. 1. 0. 1.
0. 1. 0. 0. 0. 1. 0. 0. 1. 1. 1. 0. 1. 1. 0. 0. 1. 0.
1. 1. 1. 0. 1. 1. 0. 0. 1. 0. 1. 0. 0. 1. 0. 0. 0. 1.
0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 0. 0. 1. 1. 0. 1. 0.
0. 0. 0. 1. 1. 1. 0. 0. 0. 1. 1. 1. 0. 1. 0. 0. 0. 0.
1. 1. 1. 1. 1. 0. 1. 1. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1.
0. 1. 1. 1. 0. 1. 1. 0. 0. 0. 1. 1. 1. 0. 0. 0. 1. 0.
0. 1. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 0. 1. 1. 0. 1. 0.
0. 1. 1. 0. 1. 1. 1. 0. 0. 1. 1. 1. 0. 1. 1. 1. 1. 0.
1. 0. 1. 0. 0. 0. 1. 0. 0. 1. 0. 0. 1. 0. 1. 0. 0. 0.
1. 0. 0. 0. 0. 0. 1. 1. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0.
1. 0. 0. 0. 1. 1. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1.
1. 1. 1. 1. 0. 0. 0. 1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 0.
1. 1. 1. 0. 1. 1. 0. 0. 1. 1. 0. 1. 0. 1. 1. 1. 1. 1.
0. 0. 0. 1. 1. 0. 0. 1. 1. 1. 0. 0. 0. 0. 1. 0. 0. 0.
0. 1. 0. 0. 0. 0. 0. 1. 1. 0. 0. 1. 1. 1. 0. 1. 1. 0.
0. 0. 1. 0. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 1. 0. 0. 1.
1. 1. 1. 1.]
prediction on D
[0 0 1 0 1 1 0 1 0 1 0 1 0 0 0 0 1 0 1 0 1 1 0 1 0 0 1 1 1 0 1 0 0 1 1 1 0
0 0 0 0 1 1 1 0 0 1 1 1 1 1 0 0 1 0 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 1 1 1 1
0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 1 1 0 1 0 1 0 1 0 1 0 1 0
0 0 1 0 0 1 1 1 0 1 1 0 0 1 0 1 1 1 0 1 1 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 0
0 1 1 1 1 1 1 0 0 1 1 0 1 0 0 0 0 1 1 1 0 0 0 1 1 1 0 1 0 0 0 0 1 1 1 1 1
0 1 1 0 0 0 0 0 1 1 1 1 1 0 1 1 1 0 1 1 0 0 0 1 1 1 0 0 0 1 0 0 1 0 1 1 1
0 1 1 1 0 0 0 1 1 0 1 0 0 1 1 0 1 1 1 0 0 1 1 1 0 1 1 1 1 0 1 0 1 0 0 0 1
0 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 0 1 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 1 0 1
0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 1 0 1 1 1 0 1 1 0 1 1 1 0 1 1 0 0 1
1 0 1 0 1 1 1 1 1 0 0 0 1 1 0 0 1 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 1 0
0 1 1 1 0 1 1 0 0 0 1 0 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 1 1 1]
Modify and execute this example to run on CPU (the default) with floatX=float32 and
time the execution using the command line ``time python file.py``. Save your code
as it will be useful later on.
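The ``prediction`` in this example is the sigmoid of ``dot(x, w) + b`` thresholded at 0.5. A minimal dependency-free sketch (a hypothetical ``predict`` helper with made-up example numbers, not the tutorial's compiled function):

```python
import math

def predict(x, w, b):
    # p(y=1|x) = sigmoid(dot(x, w) + b), thresholded at 0.5 as in `prediction`
    p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
    return int(p > 0.5)

print(predict([1.0, -2.0], [0.5, 0.25], 0.1))  # 1  (sigmoid(0.1) > 0.5)
```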
...
...
@@ -346,8 +301,9 @@ Compiling your Graph with ProfileMode
Once the ProfileMode instance is created, simply compile your graph as you
would normally, by specifying the mode parameter.
->>> # with functions
->>> f = theano.function([input1,input2],[output1], mode=profmode)
+>>> v1, v2 = T.vectors(2)
+>>> o = v1 + v2
+>>> f = theano.function([v1,v2],[o], mode=profmode)
Retrieving Timing Information
-----------------------------
...
...
@@ -361,7 +317,7 @@ regression example.
Compiling the module with ``ProfileMode`` and calling ``profmode.print_summary()``
generates the following output:
-.. testcode::
+.. code-block:: python
"""
ProfileMode.print_summary()
...
...
doc/tutorial/symbolic_graphs.txt
...
...
@@ -160,9 +160,9 @@ as we apply it. Consider the following example of optimization:
>>> f = theano.function([a], b) # compile function
>>> print f([0, 1, 2]) # prints `array([0,2,1026])`
[ 0. 2. 1026.]
->>> theano.printing.pydotprint(b, outfile="./pics/symbolic_graph_unopt.png", var_with_name_simple=True)
+>>> theano.printing.pydotprint(b, outfile="./pics/symbolic_graph_unopt.png", var_with_name_simple=True)  # doctest: +SKIP
The output file is available at ./pics/symbolic_graph_unopt.png
->>> theano.printing.pydotprint(f, outfile="./pics/symbolic_graph_opt.png", var_with_name_simple=True)
+>>> theano.printing.pydotprint(f, outfile="./pics/symbolic_graph_opt.png", var_with_name_simple=True)  # doctest: +SKIP
The output file is available at ./pics/symbolic_graph_opt.png
...
...
doc/tutorial/using_gpu.txt
...
...
@@ -54,8 +54,8 @@ file and run it.
for i in xrange(iters):
r = f()
t1 = time.time()
-    print("Looping %d times took" % iters, t1 - t0, "seconds")
-    print("Result is", r)
+    print("Looping %d times took %f seconds" % (iters, t1 - t0))
+    print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
print('Used the cpu')
else:
...
...
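The benchmark pattern used by ``check1.py`` (time a fixed number of calls, then report with one format string) in a minimal, Theano-free form (``time_calls`` is a hypothetical helper introduced here for illustration):

```python
import time

def time_calls(f, iters=1000):
    # same benchmark pattern as check1.py: wall-clock time for `iters` calls of f
    t0 = time.time()
    for _ in range(iters):
        f()
    return time.time() - t0

dt = time_calls(lambda: sum(range(100)))
print("Looping 1000 times took %f seconds" % dt)
```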
@@ -75,19 +75,11 @@ same floating-point numbers as the CPU. As a benchmark, a loop that calls ``nump
:hide:
:options: +ELLIPSIS
$ THEANO_FLAGS=mode=FAST_RUN,device=cpu,floatX=float32 python check1.py
-   [Elemwise{exp,no_inplace}(<TensorType(float32, vector)>)]
+   [Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
    Looping 1000 times took ... seconds
    Result is ...
    Used the cpu
-   $ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check1.py
-   Using gpu device 0: GeForce GTX 580
-   [GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
-   Looping 1000 times took ... seconds
-   Result is ...
-   Used the gpu
.. code-block:: none
$ THEANO_FLAGS=mode=FAST_RUN,device=cpu,floatX=float32 python check1.py
...
...
@@ -134,16 +126,16 @@ after the ``T.exp(x)`` is replaced by a GPU version of ``exp()``.
iters = 1000
rng = numpy.random.RandomState(22)
-    x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
+    x = shared(numpy.asarray(rng.rand(vlen), 'float32'))
f = function([], sandbox.cuda.basic_ops.gpu_from_host(T.exp(x)))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters):
r = f()
t1 = time.time()
-    print("Looping %d times took" % iters, t1 - t0, "seconds")
-    print("Result is", r)
-    print("Numpy result is", numpy.asarray(r))
+    print("Looping %d times took %f seconds" % (iters, t1 - t0))
+    print("Result is %s" % (r,))
+    print("Numpy result is %s" % (numpy.asarray(r),))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
print('Used the cpu')
else:
...
...
@@ -153,13 +145,12 @@ The output from this program is
.. testoutput::
:hide:
:options: +ELLIPSIS
-   :options: +ELLIPSIS
+   :options: +ELLIPSIS, +SKIP
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check2.py
Using gpu device 0: GeForce GTX 580
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>)]
Looping 1000 times took ... seconds
-   Result is <CudaNdarray object at 0x6a7a5f0>
+   Result is <CudaNdarray object at 0x...>
Numpy result is ...
Used the gpu
...
...
@@ -302,8 +293,6 @@ Consider again the logistic regression:
b = theano.shared(numpy.asarray(0., dtype=theano.config.floatX), name="b")
x.tag.test_value = D[0]
y.tag.test_value = D[1]
-    #print "Initial model:"
-    #print w.get_value(), b.get_value()
# Construct Theano expression graph
p_1 = 1 / (1 + T.exp(-T.dot(x, w)-b)) # Probability of having a one
...
...
@@ -316,7 +305,7 @@ Consider again the logistic regression:
train = theano.function(
inputs=[x,y],
outputs=[prediction, xent],
-              updates={w:w-0.01*gw, b:b-0.01*gb},
+              updates=[(w, w-0.01*gw), (b, b-0.01*gb)],
name = "train")
predict = theano.function(inputs=[x], outputs=prediction,
name = "predict")
...
...
@@ -333,8 +322,6 @@ Consider again the logistic regression:
for i in range(training_steps):
pred, err = train(D[0], D[1])
-    #print "Final model:"
-    #print w.get_value(), b.get_value()
print("target values for D")
print(D[1])
...
...
@@ -352,47 +339,6 @@ Consider again the logistic regression:
prediction on D
...
.. code-block:: none
Used the cpu
target values for D
[ 0. 1. 0. 0. 1. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 0. 1. 0.
0. 0. 0. 0. 1. 1. 0. 1. 1. 0. 0. 1. 1. 1. 1. 0. 1. 1.
0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 1. 1. 0. 0. 0. 0. 1. 0.
0. 1. 0. 0. 0. 0. 1. 1. 0. 1. 0. 0. 1. 1. 0. 1. 0. 0.
1. 1. 0. 0. 0. 0. 0. 1. 1. 1. 0. 1. 0. 0. 0. 0. 1. 0.
0. 0. 0. 1. 0. 1. 1. 0. 0. 0. 1. 1. 1. 1. 1. 1. 0. 0.
0. 1. 0. 1. 0. 1. 1. 0. 0. 1. 0. 0. 1. 0. 1. 1. 0. 1.
1. 1. 0. 1. 0. 1. 1. 0. 1. 1. 1. 0. 0. 0. 0. 0. 1. 0.
0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 1. 1. 0. 0. 0. 1. 0. 0.
0. 1. 1. 0. 1. 1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 1. 1. 0.
1. 0. 1. 1. 1. 0. 0. 0. 1. 0. 1. 1. 0. 1. 1. 0. 1. 1.
1. 1. 0. 0. 0. 0. 0. 1. 0. 1. 0. 0. 1. 1. 1. 0. 0. 0.
1. 0. 0. 0. 0. 1. 1. 1. 0. 1. 0. 1. 1. 0. 1. 0. 0. 1.
0. 0. 0. 1. 1. 0. 0. 0. 1. 1. 0. 1. 0. 1. 0. 0. 1. 0.
1. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 1. 1.
0. 0. 0. 0. 0. 0. 1. 0. 1. 1. 1. 0. 1. 1. 0. 1. 1. 1.
0. 1. 0. 1. 0. 0. 0. 1. 1. 1. 0. 1. 0. 1. 1. 1. 0. 0.
0. 1. 0. 1. 1. 0. 0. 0. 1. 0. 1. 0. 1. 0. 1. 0. 0. 0.
1. 1. 0. 1. 0. 1. 1. 1. 1. 1. 0. 1. 1. 0. 0. 0. 0. 1.
1. 1. 0. 1. 1. 1. 0. 1. 0. 1. 1. 0. 1. 0. 0. 1. 0. 1.
0. 0. 0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 0.
0. 0. 1. 1. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0. 0. 1. 1.
1. 1. 0. 1.]
prediction on D
[0 1 0 0 1 1 1 0 1 1 1 1 1 1 1 0 1 0 0 0 0 0 1 1 0 1 1 0 0 1 1 1 1 0 1 1 0
0 0 0 0 0 1 1 1 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 1 0 0 1 1 0 1 0 0 1 1
0 0 0 0 0 1 1 1 0 1 0 0 0 0 1 0 0 0 0 1 0 1 1 0 0 0 1 1 1 1 1 1 0 0 0 1 0
1 0 1 1 0 0 1 0 0 1 0 1 1 0 1 1 1 0 1 0 1 1 0 1 1 1 0 0 0 0 0 1 0 0 0 0 0
1 0 1 0 0 0 1 1 0 0 0 1 0 0 0 1 1 0 1 1 1 1 1 1 0 0 1 1 1 1 1 0 1 0 1 1 1
0 0 0 1 0 1 1 0 1 1 0 1 1 1 1 0 0 0 0 0 1 0 1 0 0 1 1 1 0 0 0 1 0 0 0 0 1
1 1 0 1 0 1 1 0 1 0 0 1 0 0 0 1 1 0 0 0 1 1 0 1 0 1 0 0 1 0 1 1 1 1 1 1 0
0 0 0 0 0 0 1 0 0 1 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 0 1 1 1 0 1 0 1 0 0 0 1
1 1 0 1 0 1 1 1 0 0 0 1 0 1 1 0 0 0 1 0 1 0 1 0 1 0 0 0 1 1 0 1 0 1 1 1 1
1 0 1 1 0 0 0 0 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 0 1 0 1 0 0 0 0 0 0 1 1 1 1
1 1 0 0 1 1 1 0 0 0 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 1 1 1 0 1]
Modify and execute this example to run on GPU with ``floatX=float32`` and
time it using the command line ``time python file.py``. (Of course, you may use some of your answer
to the exercise in section :ref:`Configuration Settings and Compiling Mode<using_modes>`.)
...
...
@@ -461,15 +407,15 @@ into a file and run it.
iters = 1000
rng = numpy.random.RandomState(22)
-    x = shared(numpy.asarray
-               (rng.rand(vlen), config.floatX))
+    x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in xrange(iters):
r = f()
t1 = time.time()
-    print("Looping %d times took" % iters, t1 - t0, "seconds")
-    print("Result is", r)
+    print("Looping %d times took %f seconds" % (iters, t1 - t0))
+    print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, tensor.Elemwise) and
('Gpu' not in type(x.op).__name__)
for x in f.maker.fgraph.toposort()]):
...
...
@@ -485,19 +431,11 @@ input *x* is stored on the GPU.
:hide:
:options: +ELLIPSIS
$ THEANO_FLAGS=device=cpu python check1.py
[Elemwise{exp,no_inplace}(<TensorType(float64, vector)>)]
Looping 1000 times took ... seconds
Result is ...
Used the cpu
-   $ THEANO_FLAGS=device=cuda0 python check1.py
-   Using device cuda0: GeForce GTX 275
-   [GpuElemwise{exp,no_inplace}(<GpuArray<float64>>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
-   Looping 1000 times took ... seconds
-   Result is ...
-   Used the gpu
.. code-block:: none
$ THEANO_FLAGS=device=cpu python check1.py
...
...
@@ -545,8 +483,8 @@ the GPU object directly. The following code is modifed to do just that.
for i in xrange(iters):
r = f()
t1 = time.time()
-    print("Looping %d times took" % iters, t1 - t0, "seconds")
-    print("Result is", numpy.asarray(r))
+    print("Looping %d times took %f seconds" % (iters, t1 - t0))
+    print("Result is %s" % (numpy.asarray(r),))
if numpy.any([isinstance(x.op, tensor.Elemwise) and
('Gpu' not in type(x.op).__name__)
for x in f.maker.fgraph.toposort()]):
...
...
@@ -742,7 +680,7 @@ you feel competent enough, you may try yourself on the corresponding exercises.
**Example: PyCUDA**
-.. testcode::
+.. code-block:: python
# (from PyCUDA's documentation)
import pycuda.autoinit
...
...
@@ -786,7 +724,7 @@ Modify and execute to work for a matrix of shape (20, 10).
**Example: Theano + PyCUDA**
-.. testcode::
+.. code-block:: python
import numpy, theano
import theano.misc.pycuda_init
...
...
@@ -828,10 +766,10 @@ Modify and execute to work for a matrix of shape (20, 10).
Use this code to test it:
>>> x = theano.tensor.fmatrix()
->>> f = theano.function([x], PyCUDADoubleOp()(x))
+>>> f = theano.function([x], PyCUDADoubleOp()(x))  # doctest: +SKIP
 >>> xv = numpy.ones((4, 5), dtype="float32")
->>> assert numpy.allclose(f(xv), xv*2)
->>> print(numpy.asarray(f(xv)))
+>>> assert numpy.allclose(f(xv), xv*2)  # doctest: +SKIP
+>>> print(numpy.asarray(f(xv)))  # doctest: +SKIP
Exercise
...
...
theano/tests/test_tutorial.py
(Diff collapsed.)