- 14 Jan 2010, 10 commits
-
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
In profile mode, now print the total time since import and the time spent outside ops, as well as compile time.
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
-
Committed by James Bergstra
-
Committed by Frederic Bastien
-
- 13 Jan 2010, 2 commits
-
-
Committed by Frederic Bastien
-
Committed by Pierre-Antoine Manzagol
-
- 12 Jan 2010, 9 commits
-
-
Committed by james@crane
-
Committed by Pascal Lamblin
-
Committed by James Bergstra
-
Committed by James Bergstra
Minor changes to complete the implementation of the local_advanced_indexing_crossentropy_onehot optimization
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
-
Committed by James Bergstra
-
- 11 Jan 2010, 8 commits
-
-
Committed by Pascal Lamblin
-
Committed by Pascal Lamblin
-
Committed by Pascal Lamblin
crossentropy_softmax_1hot_with_bias_dx
-
Committed by Pascal Lamblin
-
Committed by Pascal Lamblin
-
Committed by Pascal Lamblin
-
Committed by Pascal Lamblin
-
Committed by James Bergstra
These two functions are used by as_tensor_variable to determine how to turn Python ints and floats into ndarrays for TensorConstants. This provides an (I hope not too hacky) way for config.floatX == 'float32' to ensure that Python literals like 1.1 do not force an upcast in expressions like (fvector() + 1.1). Another option would have been to leave the downcast of 1.1 in the graph as a symbolic node to be pre-computed at compile time, but I think that would behave much the same while further burdening the optimizer.
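As an illustration of the upcast this commit avoids, here is a minimal NumPy sketch. It does not reproduce Theano's `as_tensor_variable` or `TensorConstant` machinery; the array names are made up for the example.

```python
import numpy as np

# A float32 vector, standing in for Theano's fvector().
x = np.zeros(3, dtype=np.float32)

# A Python literal such as 1.1 is a double by default; materialized as a
# float64 constant, it upcasts the whole expression to float64.
c64 = np.full(3, 1.1, dtype=np.float64)
print((x + c64).dtype)  # float64

# Downcasting the constant up front (analogous to what the commit does
# when config.floatX == 'float32') keeps the expression in float32.
c32 = np.asarray(c64, dtype=np.float32)
print((x + c32).dtype)  # float32
```

The trade-off named in the message is visible here: the downcast happens once, when the constant is built, rather than living on as an extra node the optimizer would have to reason about.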
-
- 09 Jan 2010, 11 commits
-
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by James Bergstra
the optimization they register. This way they can be used as decorators without accidentally making their optimization disappear from the defining module.
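The decorator behavior this commit relies on can be sketched in plain Python; the registry and names below are hypothetical stand-ins, not Theano's actual optimizer API. The point is that a registration function used as a decorator must return the object it registers, otherwise the decorated module-level name is rebound to None.

```python
# Hypothetical registry; Theano's real optimizer database is more involved.
_registered_opts = {}

def register(opt):
    """Register an optimization and return it unchanged.

    Returning `opt` lets this function double as a decorator: the name
    defined in the module stays bound to the optimization, instead of
    being replaced by the implicit None of a registrar that returns
    nothing.
    """
    _registered_opts[opt.__name__] = opt
    return opt

@register
def local_example_opt(node):
    """Placeholder local optimization (illustrative only)."""
    return None

# The decorated name still refers to the optimization itself.
print(local_example_opt is _registered_opts["local_example_opt"])  # True
```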
-
Committed by James Bergstra
-
Committed by James Bergstra
cross-entropy
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by James Bergstra
-