Commit b0509165 authored by hantek

Fix all doctest errors, without turning on the warning-to-error flag in the Sphinx build

Parent a8316c2c
@@ -672,13 +672,14 @@ is a :ref:`variable` we statically know the value of.
 .. doctest:: mul

+   >>> import numpy
    >>> x = double('x')
    >>> z = mul(x, 2)
    >>> f = theano.function([x], z)
    >>> f(10)
    20.0
-   >>> f(3.4)
-   6.8
+   >>> numpy.allclose(f(3.4), 6.8)
+   True

 Now the code works the way we want it to.
......
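The hunk above swaps a literal doctest output (`6.8`) for a `numpy.allclose` check. A minimal standalone illustration of why, using plain numpy and the classic rounding case rather than the Theano function itself:

```python
import numpy

# 0.1, 0.2 and 0.3 have no exact binary representation, so an exact
# doctest comparison can fail even when the math is right.
exact = (0.1 + 0.2 == 0.3)
close = numpy.allclose(0.1 + 0.2, 0.3)
print(exact)  # False: last-bit rounding breaks equality
print(close)  # True: allclose compares within rtol/atol
```

`allclose` tolerates tiny representation noise, which is exactly what platform-dependent float reprs introduce into doctest output.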
@@ -146,7 +146,7 @@ the params type.
     def make_node(self, inp):
         inp = as_scalar(inp)
-        return Apply(self, [inp], [inp.type()]
+        return Apply(self, [inp], [inp.type()])

     def perform(self, node, inputs, output_storage, params):
         # Here params is a python float so this is ok
@@ -193,7 +193,7 @@ weights.
     def make_node(self, x, y):
         x = as_scalar(x)
         y = as_scalar(y)
-        return Apply(self, [x, y], [x.type()]
+        return Apply(self, [x, y], [x.type()])

     def c_support_code_struct(self, node, name):
         return """
......
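Both hunks above add a missing closing parenthesis to `return Apply(...)`. As a rough sketch of the pattern being fixed, with toy stand-ins rather than Theano's real `Apply` and scalar classes:

```python
# Toy stand-ins for Theano's Apply node and scalar variables; the real
# classes live in theano.gof and theano.scalar.
class Apply:
    def __init__(self, op, inputs, outputs):
        self.op = op
        self.inputs = inputs
        self.outputs = outputs

class ScalarVariable:
    def type(self):
        # In Theano, calling var.type() creates a fresh variable
        # of the same type, used here for the op's output.
        return ScalarVariable()

class MulOp:
    def make_node(self, x, y):
        # One output, of the same type as the first input.
        return Apply(self, [x, y], [x.type()])

node = MulOp().make_node(ScalarVariable(), ScalarVariable())
print(len(node.inputs), len(node.outputs))  # 2 1
```

The unbalanced parenthesis in the docs would have been a `SyntaxError` if pasted verbatim, which is what the doctest run caught.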
@@ -19,7 +19,7 @@ Blas Op
 .. automodule:: theano.sandbox.cuda.blas
     :members:

-.. autofunction:: theano.sandbox.cuda.blas.batched_dot
+.. autoclass:: theano.sandbox.cuda.blas.BatchedDotOp

 Nnet Op
 =======
......
@@ -56,7 +56,7 @@ if __name__ == '__main__':
 def call_sphinx(builder, workdir, extraopts=None):
     import sphinx
     if extraopts is None:
-        extraopts = []
+        extraopts = []  # '-W']
     if not options['--cache'] and files is None:
         extraopts.append('-E')
     docpath = os.path.join(throot, 'doc')
......
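`call_sphinx` keeps the `extraopts=None` sentinel with an in-body `extraopts = []` fallback even while the `'-W'` (warning-to-error) flag stays commented out. A sketch of why that idiom matters, with a hypothetical `build` helper rather than the real script:

```python
def build(target, extraopts=None):
    # A None sentinel plus an in-body fallback gives each call a fresh
    # list; a literal `extraopts=[]` default would be one shared list,
    # mutated by every call.
    if extraopts is None:
        extraopts = []
    extraopts.append('-E')  # mutating is now safe
    return [target] + extraopts

print(build('html'))  # ['html', '-E']
print(build('html'))  # still ['html', '-E'] -- no leaked state
```

With a mutable default, the second call would return `['html', '-E', '-E']`; the sentinel pattern avoids that.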
@@ -11,6 +11,7 @@ To get us started with Theano and get a feel of what we're working with,
 let's make a simple function: add two numbers together. Here is how you do
 it:

+>>> import numpy
 >>> import theano.tensor as T
 >>> from theano import function
 >>> x = T.dscalar('x')
@@ -22,9 +23,8 @@ And now that we've created our function we can use it:
 >>> f(2, 3)
 array(5.0)
->>> f(16.3, 12.1)
-array(28.4)
+>>> numpy.allclose(f(16.3, 12.1), 28.4)
+True

 Let's break this down into several steps. The first step is to define
 two symbols (*Variables*) representing the quantities that you want
@@ -123,12 +123,13 @@ then be used like a normal Python function.
 the tutorial so far. It has the added benefit of not requiring
 you to import :func:`function` . Here is how :func:`eval` works:

+>>> import numpy
 >>> import theano.tensor as T
 >>> x = T.dscalar('x')
 >>> y = T.dscalar('y')
 >>> z = x + y
->>> z.eval({x : 16.3, y : 12.1})
-array(28.4)
+>>> numpy.allclose(z.eval({x : 16.3, y : 12.1}), 28.4)
+True

 We passed :func:`eval` a dictionary mapping symbolic theano
 variables to the values to substitute for them, and it returned
......
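The `eval` doctest above substitutes concrete values for symbolic variables via a dictionary. A hypothetical miniature of that pattern in plain Python (toy `Sym`/`Add` classes, not Theano's graph machinery), runnable without Theano:

```python
import numpy

# Symbols are dictionary keys; eval() looks each one up in the bindings
# mapping and reduces the expression tree to a number.
class Sym:
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return Add(self, other)

class Add:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self, bindings):
        return bindings[self.a] + bindings[self.b]

x, y = Sym('x'), Sym('y')
z = x + y
print(numpy.allclose(z.eval({x: 16.3, y: 12.1}), 28.4))  # True
```

As in the tutorial, nothing is computed when `z` is built; the arithmetic only happens once `eval` receives the bindings.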
@@ -207,15 +207,15 @@ Let's try it out!
 .. theano/tests/test_tutorial.py:T_examples.test_examples_8

 >>> state.get_value()
-array(0)
+0
 >>> accumulator(1)
 array(0)
 >>> state.get_value()
-array(1)
+1
 >>> accumulator(300)
 array(1)
 >>> state.get_value()
-array(301)
+301

 It is possible to reset the state. Just use the ``.set_value()`` method:
@@ -223,7 +223,7 @@ It is possible to reset the state. Just use the ``.set_value()`` method:
 >>> accumulator(3)
 array(-1)
 >>> state.get_value()
-array(2)
+2

 As we mentioned above, you can define more than one function to use the same
 shared variable. These functions can all update the value.
@@ -235,7 +235,7 @@ shared variable. These functions can all update the value.
 >>> decrementor(2)
 array(2)
 >>> state.get_value()
-array(0)
+0

 You might be wondering why the updates mechanism exists. You can always
 achieve a similar result by returning the new expressions, and working with
@@ -262,7 +262,7 @@ for the purpose of one particular function.
 >>> skip_shared(1, 3) # we're using 3 for the state, not state.value
 array(7)
 >>> state.get_value() # old state still there, but we didn't use it
-array(0)
+0

 The ``givens`` parameter can be used to replace any symbolic variable, not just a
 shared variable. You can replace constants, and expressions, in general. Be
......
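The accumulator doctests above show the key semantics of `updates`: each call returns the state *before* the update is applied. That behavior can be mimicked in plain Python with a closure (a sketch, not Theano's shared-variable mechanism):

```python
# Plain-Python sketch of the accumulator: the function returns the old
# state, then applies the update, mirroring how a Theano function
# returns its outputs before committing `updates` to shared variables.
def make_accumulator():
    state = {'value': 0}
    def accumulator(inc):
        old = state['value']
        state['value'] = old + inc
        return old
    return accumulator, state

accumulator, state = make_accumulator()
print(accumulator(1))    # 0
print(state['value'])    # 1
print(accumulator(300))  # 1
print(state['value'])    # 301
```

The printed sequence matches the doctest in the hunk: the return value always lags the stored state by one update.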
@@ -23,6 +23,7 @@ Here is the code to compute this gradient:
 .. If you modify this code, also change :
 .. theano/tests/test_tutorial.py:T_examples.test_examples_4

+>>> import numpy
 >>> import theano
 >>> import theano.tensor as T
 >>> from theano import pp
@@ -34,8 +35,8 @@ Here is the code to compute this gradient:
 >>> f = theano.function([x], gy)
 >>> f(4)
 array(8.0)
->>> f(94.2)
-array(188.4)
+>>> numpy.allclose(f(94.2), 188.4)
+True

 In this example, we can see from ``pp(gy)`` that we are computing
 the correct symbolic gradient.
......
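The doctest above claims `f(4) == 8.0` and `f(94.2)` close to `188.4` for the gradient of `x**2`. A finite-difference check (plain numpy, no Theano) confirms the same values numerically:

```python
import numpy

# Central differences approximate the derivative without any symbolic
# machinery; for x**2 they recover 2*x up to rounding error.
def numeric_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: x ** 2
print(numpy.allclose(numeric_grad(f, 4.0), 8.0))     # True
print(numpy.allclose(numeric_grad(f, 94.2), 188.4))  # True
```

This is the standard sanity check for a symbolic gradient; Theano's own `verify_grad` utilities automate the same comparison.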
@@ -50,9 +50,10 @@ def function_dump(filename, inputs, outputs=None, mode=None, updates=None,
     >>> f = theano.function(**d)  # doctest: +SKIP

     Note:
-    The parameter extra_tag_to_remove, is passed to the StripPickler used.
-    To pickle graph made by Blocks, it must be:
-    ['annotations', 'replacement_of', 'aggregation_scheme', 'rolesc']
+
+        The parameter extra_tag_to_remove, is passed to the StripPickler used.
+        To pickle graph made by Blocks, it must be:
+        ['annotations', 'replacement_of', 'aggregation_scheme', 'rolesc']
     """
     assert isinstance(filename, string_types)
......
@@ -485,12 +485,13 @@ class Variable(Node):
     Examples
     --------

+    >>> import numpy
     >>> import theano.tensor as T
     >>> x = T.dscalar('x')
     >>> y = T.dscalar('y')
     >>> z = x + y
-    >>> z.eval({x : 16.3, y : 12.1})
-    array(28.4)
+    >>> numpy.allclose(z.eval({x : 16.3, y : 12.1}), 28.4)
+    True

     We passed :func:`eval` a dictionary mapping symbolic theano
     variables to the values to substitute for them, and it returned
......
@@ -144,6 +144,7 @@ else:
             u = CompatUnpickler(fp, encoding="latin1")
         else:
             u = CompatUnpickler(fp)
+
         mat = u.load()

     """
     pass
......
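The `CompatUnpickler` branch above selects `encoding="latin1"` on Python 3 because pickles written by Python 2 need that encoding to round-trip numeric byte data. The same pattern works with the stdlib `pickle.Unpickler` directly (shown here in place of Theano's wrapper):

```python
import io
import pickle
import sys

# Python-2 pickles need encoding="latin1" when loaded on Python 3;
# on Python 2 the Unpickler takes no encoding argument at all.
data = pickle.dumps({'mat': [1.0, 2.0, 3.0]})
with io.BytesIO(data) as fp:
    if sys.version_info[0] >= 3:
        u = pickle.Unpickler(fp, encoding="latin1")
    else:
        u = pickle.Unpickler(fp)
    mat = u.load()
print(mat['mat'])  # [1.0, 2.0, 3.0]
```

The version check is what forces the duplicated construction sites in the original code, since `encoding` is a keyword-only Python-3 parameter.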