Commit 05eed162 authored by goodfeli

Merge pull request #30 from nouiz/doc

Doc
......@@ -70,6 +70,31 @@ then go to your fork's github page on the github website, select your feature
branch and hit the "Pull Request" button in the top right corner.
If you don't get any feedback, bug us on the theano-dev mailing list.
When your pull request has been merged, you can delete the branch
from the github list of branches. This is useful to avoid accumulating
stale branches there:
.. code-block:: bash

    git push origin :my_shiny_feature
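After deleting the remote branch, you may also want to drop the local copy. A minimal sketch in a throwaway repository (the repo and branch name here are hypothetical, following the example above):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo a > f; git add f; git commit -qm init
base=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version
git checkout -qb my_shiny_feature
echo b >> f; git commit -qam feature
git checkout -q "$base"
git merge -q my_shiny_feature
git branch -d my_shiny_feature       # -d is safe: it refuses to delete unmerged branches
git branch --list my_shiny_feature   # prints nothing: the branch is gone
```

Note that ``-d`` (lowercase) only deletes branches that are already merged; ``-D`` forces deletion.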
You can keep your local repo up to date with central/master with these commands:
.. code-block:: bash

    git checkout master
    git fetch central
    git merge central/master
If you want to fix a commit in a pull request (e.g. fix a small
typo) and keep the history clean, you can do it like this:
.. code-block:: bash

    git checkout my_shiny_feature
    git commit --amend
    # the amend rewrote history, so a forced push is needed
    git push -f origin my_shiny_feature
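A throwaway-repo sketch of what ``--amend`` does (repo, file, and messages here are hypothetical): it replaces the last commit in place instead of adding a new one.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo a > f; git add f; git commit -qm "first"
# stage a small fix, then fold it into the previous commit
echo "fix typo" > f
git add f
git commit -q --amend -m "first (typo fixed)"
git rev-list --count HEAD   # prints 1: still a single commit
git log -1 --format=%s      # prints "first (typo fixed)"
```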
Cleaning up history
-------------------
......@@ -86,6 +111,17 @@ this. In summary:
* There are other tools that are useful if your branch is too big for one squash.
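The usual tool for squashing is interactive rebase (``git rebase -i``); the same effect can be sketched non-interactively with a soft reset in a throwaway repo (all names here are hypothetical):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo a > f; git add f; git commit -qm "first"
echo b >> f; git commit -qam "wip"
echo c >> f; git commit -qam "more wip"
# Squash the two "wip" commits into one; the file contents are unchanged.
git reset -q --soft HEAD~2
git commit -qm "my feature, squashed"
git rev-list --count HEAD   # prints 2
```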
To check out another user's branch from their repo:
.. code-block:: bash

    git remote add REPO_NAME THEIR_REPO_PATH
    git checkout -b LOCAL_BRANCH_NAME REPO_NAME/REMOTE_BRANCH_NAME
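A self-contained sketch of this workflow, using a second local repository to stand in for the other user's remote (all paths and names are hypothetical):

```shell
set -e
tmp=$(mktemp -d)
# "Their" repo, with a feature branch.
git init -q "$tmp/theirs"
cd "$tmp/theirs"
git config user.email them@example.com
git config user.name them
echo x > f; git add f; git commit -qm base
git checkout -qb feature
echo y >> f; git commit -qam "their feature"
# "Our" repo: add theirs as a remote, fetch, and check out their branch locally.
git init -q "$tmp/ours"
cd "$tmp/ours"
git remote add theirrepo "$tmp/theirs"
git fetch -q theirrepo
git checkout -qb my_local_copy theirrepo/feature
git log -1 --format=%s   # prints "their feature"
```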
You can find more information and tips in the `numpy development
<http://docs.scipy.org/doc/numpy/dev/gitwash/development_workflow.html>`_
page.
Details about ``PYTHONPATH``
----------------------------
......
......@@ -1137,4 +1137,51 @@ Gradient / Differentiation
R op
====
See the tutorial for the R op documentation.
List of ops that support R-op:
* with test [most are in tensor/tests/test_rop.py]
* SpecifyShape
* MaxAndArgmax
* Subtensor
* IncSubtensor (set_subtensor too)
* Alloc
* Dot
* Elemwise
* Sum
* Softmax
* Shape
* Join
* Rebroadcast
* Reshape
* Flatten
* DimShuffle
* Scan [In scan_module/tests/test_scan.test_rop]
* without test
* Split
* ARange
* ScalarFromTensor
* AdvancedSubtensor1
* AdvancedIncSubtensor1
* AdvancedIncSubtensor
Partial list of ops without support for R-op:
* All sparse ops
* All linear algebra ops.
* PermuteRowElements
* Tile
* AdvancedSubtensor
* TensorDot
* Outer
* Prod
* MulwithoutZeros
* ProdWithoutZeros
* CAReduce (for max, ...; done for the MaxAndArgmax op)
* MaxAndArgmax (only for matrix on axis 0 or 1)
......@@ -3,53 +3,7 @@
Tests for the R operator / L operator
Ops without R-op support:
PermuteRowElements
Tile
AdvancedSubtensor
TensorDot
Outer
Prod
MulwithoutZeros
ProdWithoutZeros
CAReduce (for max, ...; done for the MaxAndArgmax op)
MaxAndArgmax (only for matrix on axis 0 or 1)
list of ops that support R-op:
* with test
* SpecifyShape
* MaxAndArgmax
* Subtensor
* IncSubtensor (set_subtensor too)
* Alloc
* Dot
* Elemwise
* Sum
* Softmax
* Shape
* Join
* Rebroadcast
* Reshape
* Flatten
* DimShuffle
* Scan [ RP: scan has a test in scan_module/tests/test_scan.test_rop ]
* without test
* Split
* ARange
* ScalarFromTensor
* AdvancedSubtensor1
* AdvancedIncSubtensor1
* AdvancedIncSubtensor
For the list of ops with R op defined, with or without tests, see this file.
"""
......