- 04 Nov, 2016 2 commits
-
-
By abergeron
Gpuarray pool grad grad
-
By Frédéric Bastien
Fix some failing tests in DebugMode
-
- 03 Nov, 2016 11 commits
-
-
By Frédéric Bastien
[REG,BUG] Use _props_dict() from the right Op.
-
By Pascal Lamblin
-
By Pascal Lamblin
-
By Pascal Lamblin
CudaNdarray.filter rejects broadcastable dimensions with non-zero strides
-
By Alexander Matyasko
-
By Alexander Matyasko
-
By Alexander Matyasko
Add Max3dPoolGradGrad, which accepts 5d input and pools over the last 3 dimensions. Also add a padding parameter; make_node now accepts tensor variables, which removes the need to check for constant args. Local optimizations are updated accordingly and now work for both 2d and 3d pooling.
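The pooling behavior this commit describes, a 5d input pooled over its last 3 dimensions, can be illustrated with a minimal NumPy sketch. This is not Theano's implementation; `max_pool_3d` is a hypothetical helper assuming non-overlapping windows that divide the input evenly, with no padding.

```python
import numpy as np

def max_pool_3d(x, ws):
    """Max-pool a 5d array (batch, channel, depth, height, width)
    over its last 3 dimensions with non-overlapping windows `ws`.
    Assumes each spatial dimension is divisible by its window size."""
    b, c, d, h, w = x.shape
    wd, wh, ww = ws
    # Split each spatial axis into (n_windows, window_size) pairs,
    # then take the max within each window.
    y = x.reshape(b, c, d // wd, wd, h // wh, wh, w // ww, ww)
    return y.max(axis=(3, 5, 7))

x = np.arange(2 * 3 * 4 * 4 * 4, dtype='float32').reshape(2, 3, 4, 4, 4)
out = max_pool_3d(x, (2, 2, 2))
print(out.shape)  # (2, 3, 2, 2, 2)
```

The same reshape-and-reduce trick generalizes the 2d case to 3d, which is the spirit of having one code path handle both.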
-
By Alexander Matyasko
-
By Alexander Matyasko
-
By Alexander Matyasko
-
By Pascal Lamblin
It was using the one from GpuFromHost instead of AdvancedIncSubtensor1.
-
- 02 Nov, 2016 6 commits
-
-
By Frédéric Bastien
Fix some problems in float16.
-
By abergeron
[ENH] Fix opt error/skip and better error printing.
-
By Frédéric Bastien
Correct blocks/threads for gpuarray CorrMM and Corr3DMM
-
By Arnaud Bergeron
-
By Arnaud Bergeron
-
By Arnaud Bergeron
We do it the normal way now.
-
- 01 Nov, 2016 17 commits
-
-
By Frédéric Bastien
Doc: Fix library path. Closes #5156.
-
By Gijs van Tulder
Number of blocks and number of threads were swapped.
-
By Greg Ciccarelli
-
By Pascal Lamblin
Use floatX in gpuarray dnn tests
-
By Pascal Lamblin
Pool 2d rename
-
By abergeron
Add profiling of which node's make_thunk takes time.
-
By Pascal Lamblin
Fix issue 5008
-
By Arnaud Bergeron
-
By Arnaud Bergeron
-
By Arnaud Bergeron
-
By Arnaud Bergeron
-
By Frederic Bastien
-
By Frederic Bastien
-
By Frederic Bastien
-
By Pascal Lamblin
Add bool dtype in scalar and tensor.
-
By Frédéric Bastien
Fix typo in config doc
-
By abergeron
Make var() return float16 when input is float16.
-
- 31 Oct, 2016 3 commits
-
-
By slefrancois
-
By notoraptor
-
By hsintone
-
- 29 Oct, 2016 1 commit
-
-
By notoraptor
Use "PyArray_Transpose" in the gemm implementation. Add a test to test_blas.py:TestBlasStrides to check whether gemm works well with non-contiguous matrices (A, B, C are all passed as non-contiguous). All tests passed!

Recall: these are the tests executed:

theano-cache purge && THEANO_FLAGS=blas.ldflags= nosetests theano/tensor/nnet/tests/test_abstract_conv.py:TestAbstractConvNoOptim
theano-cache purge && THEANO_FLAGS=blas.ldflags= nosetests theano/tensor/nnet/tests/test_abstract_conv.py:TestBilinearUpsampling
theano-cache purge && THEANO_FLAGS=blas.ldflags= nosetests theano/tensor/nnet/tests/test_abstract_conv.py:TestCorrConv2d
theano-cache purge && THEANO_FLAGS=blas.ldflags= nosetests theano/tensor/nnet/tests/test_abstract_conv.py:TestCorrConv3d
theano-cache purge && THEANO_FLAGS=blas.ldflags= nosetests theano/tensor/nnet/tests/test_abstract_conv.py:TestCpuConv2d
theano-cache purge && THEANO_FLAGS=blas.ldflags= nosetests theano/tensor/nnet/tests/test_abstract_conv.py:TestCpuConv3d
theano-cache purge && THEANO_FLAGS=blas.ldflags= nosetests theano/tensor/tests/test_blas_c.py
theano-cache purge && THEANO_FLAGS=blas.ldflags= nosetests theano/tensor/tests/test_blas.py
theano-cache purge && THEANO_FLAGS=blas.ldflags= nosetests theano/tensor/tests/test_blas_scipy.py
theano-cache purge && THEANO_FLAGS=optdb.max_use_ratio=10,blas.ldflags= nosetests theano/tensor/nnet/tests/test_corr3d.py
theano-cache purge && THEANO_FLAGS=optdb.max_use_ratio=7,blas.ldflags= nosetests theano/tensor/nnet/tests/test_corr.py
-
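The non-contiguity that the gemm commit above exercises can be illustrated in plain NumPy (this is not the Theano test itself, just a sketch of the property being checked): transposing a C-contiguous array yields a non-contiguous view, and a gemm-style update `alpha * A.dot(B) + beta * C` must give the same result whether its operands are contiguous or not.

```python
import numpy as np

rng = np.random.RandomState(42)

# Transposes of C-contiguous arrays are non-contiguous views.
A = rng.rand(4, 3).T   # shape (3, 4), non-contiguous
B = rng.rand(5, 4).T   # shape (4, 5), non-contiguous
C = rng.rand(5, 3).T   # shape (3, 5), non-contiguous
assert not A.flags['C_CONTIGUOUS']
assert not B.flags['C_CONTIGUOUS']
assert not C.flags['C_CONTIGUOUS']

alpha, beta = 0.5, 0.25

# gemm-style update on the non-contiguous operands...
out_noncontig = alpha * A.dot(B) + beta * C

# ...must match the same update on contiguous copies.
out_contig = (alpha * np.ascontiguousarray(A).dot(np.ascontiguousarray(B))
              + beta * np.ascontiguousarray(C))

assert np.allclose(out_noncontig, out_contig)
```

A BLAS-backed gemm typically handles transposed inputs by flipping its transpose flags instead of copying, which is why a transpose-based test is a natural way to cover the non-contiguous paths.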