Commit 6c273415 authored by nouiz

Merge pull request #97 from delallea/docfix

Docfix
......@@ -472,9 +472,12 @@ import theano and print the config variable, as in:
augmented with test values, by writing to their ``'tag.test_value'``
attribute (e.g. x.tag.test_value = numpy.random.rand(5,4)).
-``'warn'`` will result in a UserWarning being raised when some Op inputs
-do not contain an appropriate test value. ``'raise'`` will instead raise
-an Exception when a problem is encountered during this debugging phase.
+When not ``'off'``, the value of this option dictates what happens when
+an Op's inputs do not provide appropriate test values:
+  - ``'ignore'`` will silently skip the debug mechanism for this Op
+  - ``'warn'`` will raise a UserWarning and skip the debug mechanism for
+    this Op
+  - ``'raise'`` will raise an Exception
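In practice this option is usually set before any graph is built. A minimal sketch of setting it from the shell for a single run (``my_script.py`` is a hypothetical script name, not part of the original docs):

```shell
# Enable test-value warnings for one run via the THEANO_FLAGS
# environment variable (my_script.py is a hypothetical script name).
THEANO_FLAGS='compute_test_value=warn' python my_script.py
```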
.. attribute:: config.exception_verbosity
......
......@@ -24,13 +24,13 @@ We describe the details of the compressed sparse matrix types.
is faster if we are modifying the array. After initial inserts,
we can then convert to the appropriate sparse matrix format.
-Their is also those type that exist:
+The following types also exist:
``dok_matrix``
Dictionary of Keys format. From their doc: This is an efficient structure for constructing sparse matrices incrementally.
``coo_matrix``
Coordinate format. From the lil_matrix doc: consider using the COO format when constructing large matrices.
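The workflow described above, building the matrix incrementally in a flexible format and then converting to a compressed format, can be sketched with ``scipy.sparse``; a minimal example, assuming a standard SciPy installation:

```python
from scipy import sparse

# Build incrementally in LIL format, which supports cheap element inserts.
m = sparse.lil_matrix((3, 3))
m[0, 1] = 2.0
m[2, 0] = 5.0

# Convert to CSR once construction is done, for fast arithmetic and slicing.
csr = m.tocsr()
print(csr.nnz)  # 2 stored nonzero entries
```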
-Their seam new format planed for scipy 0.7.x:
+There seems to be a new format planned for scipy 0.7.x:
``bsr_matrix``
Block Compressed Row (BSR). From their doc: The Block Compressed Row (BSR) format is very similar to the Compressed Sparse Row (CSR) format. BSR is appropriate for sparse matrices with dense sub matrices like the last example below. Block matrices often arise in vector-valued finite element discretizations. In such cases, BSR is considerably more efficient than CSR and CSC for many sparse arithmetic operations.
``dia_matrix``
......
......@@ -159,7 +159,7 @@ For the transparent use of different type of optimization Theano can make,
there is the policy that get_value() by default always returns the same object type
it received when the shared variable was created. So if you manually created data on
the gpu and created a shared variable on the gpu with this data, get_value will always
-return gpu data event when return_internal_type=False.
+return gpu data even when return_internal_type=False.
*Take home message:*
......