- 13 Jan 2026, 2 commits
-
-
Committed by Ricardo Vieira
Mainly, joining zero axes is equivalent to inserting a new dimension. This mirrors how splitting a single axis into an empty shape is equivalent to squeezing it.
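The symmetry described above can be sketched with plain NumPy (an illustrative analogy, not the project's actual API): merging an empty set of axes multiplies no lengths together, so the result is a new length-1 dimension, and the reverse split removes it.

```python
import numpy as np

x = np.zeros((2, 3))

# "Joining zero axes" at a position behaves like inserting a new
# length-1 dimension there (np.expand_dims):
joined = np.expand_dims(x, axis=1)
print(joined.shape)  # (2, 1, 3)

# The mirror image: splitting a length-1 axis into an empty shape
# behaves like squeezing that axis away (np.squeeze):
split = np.squeeze(joined, axis=1)
print(split.shape)  # (2, 3)
```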
-
Committed by jessegrabowski
Also:
* Allow default `None` on unpack
-
- 12 Jan 2026, 7 commits
-
-
-
Committed by ricardoV94
-
Committed by Ricardo Vieira
-
Committed by Ricardo Vieira
-
Committed by ricardoV94
Remove bad type-hints (no type-hints are better than bad type-hints)
-
Committed by ricardoV94
-
Committed by ricardoV94
-
- 11 Jan 2026, 2 commits
-
-
Committed by Ricardo Vieira
Inline when only used in one place, or remove if altogether unused
-
Committed by Ricardo Vieira
-
- 09 Jan 2026, 4 commits
-
-
Committed by Ricardo Vieira
* Do not coerce gradients to TensorVariable. This could cause spurious disconnected errors, because the tensorified variable was not in the graph of the cost.
* Type-consistent checks

Co-authored-by: jessegrabowski <jessegrabowski@gmail.com>
-
Committed by jessegrabowski
-
Committed by Eby Elanjikal
-
Committed by Eby Elanjikal
-
- 08 Jan 2026, 2 commits
-
-
Committed by Ricardo Vieira
This circumvents a bug when DimShuffle of a scalar shows up inside a Blockwise, as the outer indexing yields a float (as opposed to a numpy scalar) which has no `.shape` attribute.
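The distinction behind this bug can be shown with plain NumPy (illustrative only, not the fix itself): scalar indexing into an array yields a NumPy scalar, which still carries a `.shape` attribute, whereas a plain Python float does not.

```python
import numpy as np

a = np.arange(3.0)

# Indexing returns a NumPy scalar, which keeps array-like metadata:
print(hasattr(a[0], "shape"))         # True: np.float64 has .shape == ()

# A builtin Python float has no such attribute, so code that expects
# `.shape` breaks when indexing yields a plain float instead:
print(hasattr(float(a[0]), "shape"))  # False
```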
-
Committed by Ricardo Vieira
-
- 07 Jan 2026, 2 commits
-
-
Committed by Ricardo Vieira
-
Committed by Ricardo Vieira
Removes deprecated `create_tuple_creator`
-
- 06 Jan 2026, 4 commits
-
-
Committed by Ben Mares
-
Committed by Ricardo Vieira
-
Committed by Ricardo Vieira
-
Committed by Ricardo Vieira
-
- 05 Jan 2026, 3 commits
-
-
Committed by Ricardo Vieira
-
Committed by ricardoV94
-
Committed by Jesse Grabowski
* Split blockwise and subtensor tests into a separate CI job
* Skip dimshuffle memory leak test in numba CI
-
- 04 Jan 2026, 6 commits
-
-
Committed by Jesse Grabowski
* `local_subtensor_of_squeeze` bugfix
* Test graph correctness
* Follow template for test
-
Committed by jessegrabowski
-
Committed by ricardoV94
-
Committed by ricardoV94
-
Committed by ricardoV94
-
Committed by Jesse Grabowski
* Remove deprecated AbstractConv Ops and tests
* Remove tensor/conv from test CI
* Remove conv.rst
-
- 03 Jan 2026, 8 commits
-
-
Committed by ricardoV94
-
Committed by ricardoV94
-
Committed by ricardoV94
-
Committed by ricardoV94
-
Committed by ricardoV94
-
Committed by Jesse Grabowski
* Add machinery to enable numba caching of function pointers
* Numba cache for Cholesky
* Numba cache for SolveTriangular
* Numba cache for CholeskySolve
* Numba cache for solve helpers
* Numba cache for GECON
* Numba cache for lu_factor
* Numba cache for Solve when assume_a="gen"
* Numba cache for Solve when assume_a="sym"
* Numba cache for Solve when assume_a="pos"
* Numba cache for Solve when assume_a='tri'
* Numba cache for QR
* Clean up obsolete code
* Feedback
* More feedback
* Rename `cache_key_lit` -> `unique_func_name_lit`
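The core caching idea can be sketched in a few lines of plain Python (hypothetical helper names; the real machinery lives in the numba backend): a compiled object is stored under a unique function name, echoing the `unique_func_name_lit` rename above, so repeated lookups reuse the cached result instead of recompiling.

```python
# Hypothetical sketch of name-keyed caching of compiled functions.
# `unique_func_name` stands in for `unique_func_name_lit` above, and
# `compile_fn` stands in for an expensive compilation step.
_FUNC_CACHE: dict = {}

def get_or_compile(unique_func_name, compile_fn):
    """Return the cached compiled object, compiling at most once per name."""
    if unique_func_name not in _FUNC_CACHE:
        _FUNC_CACHE[unique_func_name] = compile_fn()
    return _FUNC_CACHE[unique_func_name]

calls = []
f1 = get_or_compile("cholesky_f64", lambda: calls.append(1) or "fn_ptr")
f2 = get_or_compile("cholesky_f64", lambda: calls.append(1) or "fn_ptr")
assert f1 is f2 and len(calls) == 1  # compiled once, reused afterwards
```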
-
Committed by Jesse Grabowski
* Implement `L_Op` for `join_dims` and `split_dims`
* Improve type hints for `join_dims` and `split_dims`
* Feedback
-
Committed by Ricardo Vieira
-