Commit b0509165 authored by hantek

fix all doctest errors, but do not turn on the warning-to-error flag in the sphinx build

Parent a8316c2c
@@ -672,13 +672,14 @@ is a :ref:`variable` we statically know the value of.
 .. doctest:: mul
+    >>> import numpy
     >>> x = double('x')
     >>> z = mul(x, 2)
     >>> f = theano.function([x], z)
     >>> f(10)
     20.0
-    >>> f(3.4)
-    6.8
+    >>> numpy.allclose(f(3.4), 6.8)
+    True
 Now the code works the way we want it to.
......
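The hunk above replaces exact floating-point doctest output with a tolerance check. The motivation, as a minimal pure-Python sketch (using the standard library's `math.isclose` in place of `numpy.allclose`, which applies the same idea elementwise):

```python
import math

# Binary floats cannot represent most decimal values exactly, so a
# doctest that compares printed output like ``6.8`` literally can fail
# depending on platform and repr behaviour.
print(0.1 + 0.2 == 0.3)              # False: exact comparison is fragile
print(math.isclose(0.1 + 0.2, 0.3))  # True: tolerance-based comparison
```

`numpy.allclose(a, b)` defaults to `rtol=1e-05, atol=1e-08`, loose enough to absorb representation error while still catching real mistakes.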
@@ -146,7 +146,7 @@ the params type.
     def make_node(self, inp):
         inp = as_scalar(inp)
-        return Apply(self, [inp], [inp.type()]
+        return Apply(self, [inp], [inp.type()])
     def perform(self, node, inputs, output_storage, params):
         # Here params is a python float so this is ok
@@ -193,7 +193,7 @@ weights.
     def make_node(self, x, y):
         x = as_scalar(x)
         y = as_scalar(y)
-        return Apply(self, [x, y], [x.type()]
+        return Apply(self, [x, y], [x.type()])
     def c_support_code_struct(self, node, name):
         return """
......
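The two hunks above fix the same missing close-parenthesis in `make_node`. Structurally, `make_node` coerces its inputs and returns an `Apply` node linking the Op to its inputs and to fresh output variables of the matching type. A plain-Python stand-in of that pattern (hypothetical classes, not the real `theano.gof` API):

```python
class Apply:
    # minimal stand-in for theano.gof.Apply: records the op, its inputs,
    # and its freshly created output variables
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs

class MulOp:
    # sketch of the make_node pattern fixed in the hunks above
    def make_node(self, x, y):
        # the real Op coerces with as_scalar() and calls x.type() to
        # create a fresh output variable; type(x)() mimics that here
        return Apply(self, [x, y], [type(x)()])

node = MulOp().make_node(1.5, 2.5)
```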
@@ -19,7 +19,7 @@ Blas Op
 .. automodule:: theano.sandbox.cuda.blas
     :members:
-.. autofunction:: theano.sandbox.cuda.blas.batched_dot
+.. autoclass:: theano.sandbox.cuda.blas.BatchedDotOp
 Nnet Op
 =======
......
@@ -56,7 +56,7 @@ if __name__ == '__main__':
 def call_sphinx(builder, workdir, extraopts=None):
     import sphinx
     if extraopts is None:
-        extraopts = []
+        extraopts = []  # '-W']
     if not options['--cache'] and files is None:
         extraopts.append('-E')
     docpath = os.path.join(throot, 'doc')
......
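The build-script hunk above leaves the `-W` (warnings-become-errors) Sphinx flag commented out, matching the commit message. The surrounding option handling reduces to this testable sketch (a standalone function, not the real script, which reads `options` and `files` from its enclosing scope):

```python
def sphinx_opts(cache, files, extraopts=None):
    # mirrors call_sphinx's option handling: mutable-default idiom,
    # with '-W' left out so doc warnings stay non-fatal
    if extraopts is None:
        extraopts = []  # add '-W' here to turn Sphinx warnings into errors
    if not cache and files is None:
        extraopts.append('-E')  # -E: rebuild the environment from scratch
    return extraopts

print(sphinx_opts(cache=False, files=None))  # ['-E']
```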
@@ -11,6 +11,7 @@ To get us started with Theano and get a feel of what we're working with,
 let's make a simple function: add two numbers together. Here is how you do
 it:
+>>> import numpy
 >>> import theano.tensor as T
 >>> from theano import function
 >>> x = T.dscalar('x')
@@ -22,9 +23,8 @@ And now that we've created our function we can use it:
 >>> f(2, 3)
 array(5.0)
->>> f(16.3, 12.1)
-array(28.4)
+>>> numpy.allclose(f(16.3, 12.1), 28.4)
+True
 Let's break this down into several steps. The first step is to define
 two symbols (*Variables*) representing the quantities that you want
@@ -123,12 +123,13 @@ then be used like a normal Python function.
 the tutorial so far. It has the added benefit of not requiring
 you to import :func:`function` . Here is how :func:`eval` works:
+>>> import numpy
 >>> import theano.tensor as T
 >>> x = T.dscalar('x')
 >>> y = T.dscalar('y')
 >>> z = x + y
->>> z.eval({x : 16.3, y : 12.1})
-array(28.4)
+>>> numpy.allclose(z.eval({x : 16.3, y : 12.1}), 28.4)
+True
 We passed :func:`eval` a dictionary mapping symbolic theano
 variables to the values to substitute for them, and it returned
......
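Both `eval` hunks in this commit make the same change. The substitution mechanism itself, mapping symbolic variables to concrete values, can be sketched with a toy expression tree (hypothetical classes, far simpler than Theano's graph machinery):

```python
class Var:
    """A named placeholder that looks itself up in the givens dict."""
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        return Add(self, other)
    def eval(self, givens):
        return givens[self]

class Add:
    """Symbolic sum: evaluates both operands, then adds the results."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self, givens):
        return self.a.eval(givens) + self.b.eval(givens)

x, y = Var('x'), Var('y')
z = x + y
result = z.eval({x: 16.3, y: 12.1})  # ~28.4, up to float rounding
```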
@@ -207,15 +207,15 @@ Let's try it out!
 .. theano/tests/test_tutorial.py:T_examples.test_examples_8
 >>> state.get_value()
-array(0)
+0
 >>> accumulator(1)
 array(0)
 >>> state.get_value()
-array(1)
+1
 >>> accumulator(300)
 array(1)
 >>> state.get_value()
-array(301)
+301
 It is possible to reset the state. Just use the ``.set_value()`` method:
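The accumulator doctests exercise a key property of Theano's ``updates`` mechanism: the function returns the value computed *before* the update is applied. That semantics can be mimicked in plain Python (hypothetical `Shared` class, not Theano's shared variables):

```python
class Shared:
    # minimal stand-in for a Theano shared variable
    def __init__(self, value):
        self._value = value
    def get_value(self):
        return self._value
    def set_value(self, value):
        self._value = value

state = Shared(0)

def accumulator(inc):
    # like theano.function([inc], state, updates=[(state, state + inc)]):
    # the old state is returned, then the update is applied
    old = state.get_value()
    state.set_value(old + inc)
    return old
```

Usage mirrors the doctest: `accumulator(1)` returns 0 and leaves `state.get_value()` at 1; `accumulator(300)` then returns 1 and leaves the state at 301.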
@@ -223,7 +223,7 @@ It is possible to reset the state. Just use the ``.set_value()`` method:
 >>> accumulator(3)
 array(-1)
 >>> state.get_value()
-array(2)
+2
 As we mentioned above, you can define more than one function to use the same
 shared variable. These functions can all update the value.
@@ -235,7 +235,7 @@ shared variable. These functions can all update the value.
 >>> decrementor(2)
 array(2)
 >>> state.get_value()
-array(0)
+0
 You might be wondering why the updates mechanism exists. You can always
 achieve a similar result by returning the new expressions, and working with
@@ -262,7 +262,7 @@ for the purpose of one particular function.
 >>> skip_shared(1, 3) # we're using 3 for the state, not state.value
 array(7)
 >>> state.get_value() # old state still there, but we didn't use it
-array(0)
+0
 The ``givens`` parameter can be used to replace any symbolic variable, not just a
 shared variable. You can replace constants, and expressions, in general. Be
......
@@ -23,6 +23,7 @@ Here is the code to compute this gradient:
 .. If you modify this code, also change :
 .. theano/tests/test_tutorial.py:T_examples.test_examples_4
+>>> import numpy
 >>> import theano
 >>> import theano.tensor as T
 >>> from theano import pp
@@ -34,8 +35,8 @@ Here is the code to compute this gradient:
 >>> f = theano.function([x], gy)
 >>> f(4)
 array(8.0)
->>> f(94.2)
-array(188.4)
+>>> numpy.allclose(f(94.2), 188.4)
+True
 In this example, we can see from ``pp(gy)`` that we are computing
 the correct symbolic gradient.
......
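The gradient doctest above differentiates the tutorial's `x ** 2` example, whose derivative is `2 * x` (hence `f(4)` is 8.0 and `f(94.2)` is 188.4). The same values can be sanity-checked with a central finite difference in pure Python, no Theano required:

```python
def grad_fd(f, x, h=1e-6):
    # central-difference approximation of df/dx
    return (f(x + h) - f(x - h)) / (2.0 * h)

square = lambda x: x ** 2
g4 = grad_fd(square, 4.0)    # ~8.0, matching f(4) in the doctest
g94 = grad_fd(square, 94.2)  # ~188.4, matching the new allclose check
```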
@@ -50,9 +50,10 @@ def function_dump(filename, inputs, outputs=None, mode=None, updates=None,
     >>> f = theano.function(**d) # doctest: +SKIP
     Note:
-    The parameter extra_tag_to_remove, is passed to the StripPickler used.
-    To pickle graph made by Blocks, it must be:
-    ['annotations', 'replacement_of', 'aggregation_scheme', 'rolesc']
+        The parameter extra_tag_to_remove, is passed to the StripPickler used.
+        To pickle graph made by Blocks, it must be:
+        ['annotations', 'replacement_of', 'aggregation_scheme', 'rolesc']
     """
     assert isinstance(filename, string_types)
......
@@ -485,12 +485,13 @@ class Variable(Node):
     Examples
     --------
+    >>> import numpy
     >>> import theano.tensor as T
     >>> x = T.dscalar('x')
     >>> y = T.dscalar('y')
     >>> z = x + y
-    >>> z.eval({x : 16.3, y : 12.1})
-    array(28.4)
+    >>> numpy.allclose(z.eval({x : 16.3, y : 12.1}), 28.4)
+    True
     We passed :func:`eval` a dictionary mapping symbolic theano
     variables to the values to substitute for them, and it returned
......
@@ -144,6 +144,7 @@ else:
         u = CompatUnpickler(fp, encoding="latin1")
     else:
         u = CompatUnpickler(fp)
+    mat = u.load()
     """
     pass
......
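The last hunk adds the missing `mat = u.load()` call to the docstring example: constructing an unpickler reads nothing by itself. The same pattern with the standard `pickle` module (Theano's `CompatUnpickler` plays the unpickler role there):

```python
import io
import pickle

# round-trip a small object to mimic the docstring's unpickling pattern
buf = io.BytesIO()
pickle.dump([1, 2, 3], buf)
buf.seek(0)

u = pickle.Unpickler(buf)  # stand-in for CompatUnpickler
mat = u.load()             # without this call, nothing is actually read
```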