- 18 Nov 2014, 3 commits
- 14 Nov 2014, 1 commit
  - Committed by f0k
- 13 Nov 2014, 7 commits
  - Committed by Frédéric Bastien
    Implemented grad for cudnn softmax.
  - Committed by Dustin Webb
  - Committed by Dustin Webb
  - Committed by Dustin Webb
    Added a test to ensure the SoftmaxGrad to DnnSoftmaxGrad conversion is not applied when cudnn is excluded from optimizations.
  - Committed by Dustin Webb
    Added an optimization that converts SoftmaxGrad to DnnSoftmaxGrad, and an associated test to make sure it is applied correctly.
  - Committed by Dustin Webb
  - Committed by Dustin Webb
- 12 Nov 2014, 1 commit
  - Committed by Frédéric Bastien
    Allow the cxx flag to be a full path to the compiler.
- 11 Nov 2014, 1 commit
  - Committed by cocu
- 10 Nov 2014, 1 commit
  - Committed by cocu
- 09 Nov 2014, 2 commits
- 08 Nov 2014, 1 commit
  - Committed by abergeron
    Dnn default and doc
- 07 Nov 2014, 23 commits
  - Committed by Frédéric Bastien
    Prevent computations in float16 in scalar and elemwise ops.
  - Committed by Pascal Lamblin
  - Committed by Pascal Lamblin
  - Committed by Frederic
  - Committed by Frédéric Bastien
    Also catch the exception raised if open() fails.
  - Committed by Frederic
  - Committed by Pascal Lamblin
  - Committed by Pascal Lamblin
  - Committed by Pascal Lamblin
  - Committed by Pascal Lamblin
  - Committed by Pascal Lamblin
    Also:
    - add tests for the "inv" op
    - do not test inplace if the input is int and the output is float
    - remove a couple of redundant dicts
  - Committed by Pascal Lamblin
    Otherwise, the expected values could be computed in float16, which would not be precise enough, and which is not supported as a dtype by Theano.
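  The precision concern in the commit above is easy to demonstrate with NumPy (a hypothetical illustration, not part of the commit): float16 has an 11-bit significand, so integers above 2048 can no longer all be represented exactly, which makes it too imprecise for computing expected test values.

  ```python
  import numpy as np

  # float16 carries a 10-bit explicit significand (11 bits with the
  # implicit leading 1), so every integer up to 2048 is exact, but the
  # representable values then step by 2: 2049 rounds (ties-to-even)
  # down to 2048, silently losing precision.
  exact = np.float16(2048)
  rounded = np.float16(2049)

  print(float(exact))    # 2048.0
  print(float(rounded))  # 2048.0 -- precision already lost
  ```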
  - Committed by Pascal Lamblin
    Since many elemwise ops do not call the scalar op's impl() method, we also needed to change the perform method when numpy built-in ufuncs are used.
  - Committed by Pascal Lamblin
  - Committed by Pascal Lamblin
    Ops defined with upcast_to_float should upcast to float32 or float64 at minimum. Added tests for the cases where the inputs were int8, which is when float16 values appeared.
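  The int8-to-float16 behaviour guarded against above mirrors NumPy's own ufunc type resolution, where an int8 input selects the float16 loop. A quick sketch (a hypothetical illustration under that assumption, not code from the commit):

  ```python
  import numpy as np

  # For a floating-point ufunc like sqrt, NumPy picks the smallest
  # float loop that int8 can be safely cast to, which is float16 --
  # exactly the dtype the commit wants upcast to float32/float64.
  x = np.array([4], dtype=np.int8)
  print(np.sqrt(x).dtype)  # float16
  ```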
  - Committed by Pascal Lamblin
  - Committed by cocu
  - Committed by Pascal Lamblin
    Drop support for Python 2.4.
  - Committed by Frederic
  - Committed by Frederic
  - Committed by Frederic
  - Committed by Frederic
  - Committed by Frederic