Commit f07e4644 authored by Frédéric Bastien

Merge pull request #1596 from delallea/minor

Minor fixes
@@ -19,7 +19,7 @@ Acknowledgements
 `theano/misc/cpucount.py` come from the project `pyprocessing
 <http://pyprocessing.berlios.de/>`_. It is available under the same license
 as Theano.
-* Our random number generator implementation on CPU and GPU use the MRG31k3p algorithm that is described in:
+* Our random number generator implementation on CPU and GPU uses the MRG31k3p algorithm that is described in:
 P. L'Ecuyer and R. Touzin, `Fast Combined Multiple Recursive Generators with Multipliers of the form a = +/- 2^d +/- 2^e <http://www.informs-sim.org/wsc00papers/090.PDF>`_, Proceedings of the 2000 Winter Simulation Conference, Dec. 2000, 683--689.
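The paper cited in this hunk describes combined multiple recursive generators: two linear recurrences modulo large primes whose outputs are combined by modular subtraction. As a rough illustration of that general structure (not Theano's `MRG_RandomStreams` implementation, and with illustrative placeholder multipliers rather than the verified MRG31k3p constants), a pure-Python sketch:

```python
# Sketch of a combined multiple recursive generator (MRG), the general
# structure behind MRG31k3p. The moduli and multipliers below are
# illustrative placeholders in the power-of-two form the paper advocates,
# NOT the verified published MRG31k3p constants.

M1 = 2**31 - 1          # modulus of the first recurrence (placeholder)
M2 = 2**31 - 21069      # modulus of the second recurrence (placeholder)

def combined_mrg(seed1, seed2):
    """Yield integers in [0, M1) by combining two order-3 recurrences."""
    x = list(seed1)  # first component state:  [x_{n-3}, x_{n-2}, x_{n-1}]
    y = list(seed2)  # second component state: [y_{n-3}, y_{n-2}, y_{n-1}]
    while True:
        # Each component is a linear recurrence with multipliers of the
        # form +/- 2^d +/- 2^e, which makes them cheap to compute:
        xn = ((2**22) * x[1] + (2**7 + 1) * x[0]) % M1
        yn = ((2**15) * y[2] + (2**15 + 1) * y[0]) % M2
        x = [x[1], x[2], xn]
        y = [y[1], y[2], yn]
        # The two streams are combined by subtraction modulo M1.
        yield (xn - yn) % M1

gen = combined_mrg([12345, 12345, 12345], [12345, 12345, 12345])
sample = [next(gen) for _ in range(5)]
```

The same seeds always reproduce the same stream, which is what makes such generators usable for reproducible experiments on both CPU and GPU.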
@@ -36,7 +36,7 @@ Reference
 ***TODO***
-.. note:: FunctionGraph(inputs, outputs) clone the inputs by
-    default. To don't have this behavior, add the parameter
-    clone=False. This is needed as we don't want cached constant
+.. note:: FunctionGraph(inputs, outputs) clones the inputs by
+    default. To avoid this behavior, add the parameter
+    clone=False. This is needed as we do not want cached constants
     in fgraph.
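The motivation for clone-by-default can be shown with a hypothetical miniature (this is not Theano's `FunctionGraph`, just an illustration of why cloning inputs protects cached objects from later mutation of the graph):

```python
# Hypothetical miniature of the clone-by-default behavior described in the
# note above. NOT Theano's FunctionGraph: it only illustrates why taking
# copies of the inputs keeps cached constants safe from graph rewrites.
import copy

class MiniGraph:
    def __init__(self, inputs, clone=True):
        # By default take deep copies, so later rewrites of the graph
        # cannot mutate objects the caller may have cached elsewhere.
        self.inputs = copy.deepcopy(inputs) if clone else inputs

cached_constant = {"value": 42, "cached": True}

g1 = MiniGraph([cached_constant])               # default: inputs are cloned
g1.inputs[0]["value"] = 0                       # mutating the graph's copy...
assert cached_constant["value"] == 42           # ...leaves the cache intact

g2 = MiniGraph([cached_constant], clone=False)  # opt out with clone=False
assert g2.inputs[0] is cached_constant          # graph shares the object
```

With `clone=False` the graph and the caller share the same objects, which is faster but exposes exactly the cached-constant problem the note warns about.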
@@ -29,17 +29,17 @@ TODO: Give examples for how to use these things! They are pretty complicated.
 - :func:`nnet.conv2d <theano.tensor.nnet.conv.conv2d>`.
 - :func:`conv3D <theano.tensor.nnet.Conv3D.conv3D>`.
 - :func:`conv3d2d <theano.tensor.nnet.conv3d2d.conv3d>`
-  Another conv3d implementation that use the conv2d with data reshaping.
-  It is faster in some case then conv3d, specificaly on the GPU.
+  Another conv3d implementation that uses the conv2d with data reshaping.
+  It is faster in some cases than conv3d, specifically on the GPU.
 - `Faster conv2d <http://deeplearning.net/software/pylearn2/library/alex.html>`_
-  This is in Pylearn2, not very documented and use a different
-  memory layout for the input. It is important to have the input
+  This is in Pylearn2, not very documented and uses a different
+  memory layout for the input. It is important to have the input
   in the native memory layout, and not use dimshuffle on the
-  inputs, otherwise you loose much of the speed up. So this is not
+  inputs, otherwise you lose most of the speed up. So this is not
   a drop in replacement of conv2d.
-Normally those are called from the `linear transfrom
+Normally those are called from the `linear transform
 <http://deeplearning.net/software/pylearn2/library/linear.html>`_
 implementation.
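The reshaping idea behind conv3d2d can be sketched in plain NumPy: a "valid" 3D convolution decomposes into a sum of 2D convolutions, one per depth slice of the kernel. This is only an illustration of the decomposition (here written as cross-correlation); the real implementation batches the 2D convolutions to run fast on the GPU:

```python
# Sketch of the conv3d2d idea: a "valid" 3D convolution is a sum of 2D
# convolutions over the kernel's depth slices. Pure NumPy, written as
# cross-correlation; illustrative only, not Theano's implementation.
import numpy as np

def conv2d_valid(img, ker):
    """Naive 2D valid cross-correlation."""
    kh, kw = ker.shape
    H = img.shape[0] - kh + 1
    W = img.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def conv3d_via_conv2d(vol, ker):
    """3D valid cross-correlation built from 2D ones.

    out[d] = sum over kernel depth a of conv2d(vol[d + a], ker[a]).
    """
    kd = ker.shape[0]
    D = vol.shape[0] - kd + 1
    return np.stack([
        sum(conv2d_valid(vol[d + a], ker[a]) for a in range(kd))
        for d in range(D)
    ])
```

Because each depth slice becomes an independent 2D convolution, the slices can be stacked into one big batched conv2d call, which is where the GPU speedup mentioned above comes from.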
@@ -86,7 +86,7 @@ class FunctionGraph(utils.object2):
         is added via the constructor. How constructed is the FunctionGraph?
         :param clone: If true, we will clone the graph. This is
-        usefull to remove the constant cache problem.
+        useful to remove the constant cache problem.
         """
         if clone:
@@ -138,8 +138,8 @@ class FunctionGraph(utils.object2):
         if getattr(r, 'cached', False):
             raise CachedConstantError(
                 "You manually constructed a FunctionGraph, but you passed it a"
-                " graph that have cached constant. This should happen."
-                " Clone the graph before building the FunctionGraph")
+                " graph that has a cached constant. This should not happen."
+                " Clone the graph before building the FunctionGraph.")
         if (hasattr(r, 'fgraph') and
                 r.fgraph is not None and
                 r.fgraph is not self):
@@ -89,8 +89,8 @@ def shape_of_variables(fgraph, input_shapes):
     if any([i not in fgraph.inputs for i in input_shapes.keys()]):
         raise ValueError(
             "input_shapes keys aren't in the fgraph.inputs. FunctionGraph()"
-            " interface changed. Now by default, it clone the graph it receive."
-            " To have the old behavior, give him this new parameter `clone=False`.")
+            " interface changed. Now by default, it clones the graph it receives."
+            " To have the old behavior, give it this new parameter `clone=False`.")
     numeric_input_dims = [dim for inp in fgraph.inputs
                           for dim in input_shapes[inp]]
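The pitfall this `ValueError` guards against comes down to object identity: once a graph clones its inputs, the variables the caller kept as dictionary keys are no longer the same objects as the graph's inputs. A hypothetical miniature (not Theano code) of that failure mode:

```python
# Hypothetical miniature of the identity pitfall behind the ValueError
# above: a container that clones its inputs breaks dict-key lookups that
# the caller performs with the original variable objects. Not Theano code.
import copy

class Var:
    """Stand-in for a graph variable; hashed by identity, like objects
    with default __hash__/__eq__."""
    pass

x = Var()
input_shapes = {x: (3, 4)}              # caller keys a dict by the variable

cloned_inputs = [copy.copy(x)]          # what clone=True does, conceptually
assert any(i not in input_shapes for i in cloned_inputs)  # lookup now fails

shared_inputs = [x]                     # clone=False preserves identity
assert all(i in input_shapes for i in shared_inputs)      # lookup succeeds
```

Passing `clone=False` (as the error message suggests) keeps the caller's variables identical to `fgraph.inputs`, so the `input_shapes` dictionary lookups work again.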