- 12 Jan, 2010 (2 commits)
- Committed by Frederic Bastien
- Committed by James Bergstra
- 11 Jan, 2010 (8 commits)
- Committed by Pascal Lamblin
- Committed by Pascal Lamblin
- Committed by Pascal Lamblin: crossentropy_softmax_1hot_with_bias_dx
- Committed by Pascal Lamblin
- Committed by Pascal Lamblin
- Committed by Pascal Lamblin
- Committed by Pascal Lamblin
- Committed by James Bergstra: These two functions are used by as_tensor_variable to decide how to turn Python ints and floats into ndarrays for TensorConstants. This provides a (hopefully not too hacky) way for config.floatX == 'float32' to keep Python literals like 1.1 from forcing an upcast in expressions like (fvector() + 1.1). Another option would have been to leave the downcast of 1.1 in the graph as a symbolic node precomputed at compile time, but that would behave much the same while further burdening the optimizer.
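The eager-downcast idea in the commit message above can be sketched with plain NumPy (a minimal illustration only; `FLOATX` and `as_ndarray_constant` are hypothetical stand-ins, not Theano's actual config or API):

```python
import numpy as np

FLOATX = "float32"  # hypothetical stand-in for config.floatX

def as_ndarray_constant(value):
    # Turn a Python scalar into an ndarray. Floats are downcast to
    # FLOATX eagerly, so a literal like 1.1 cannot force an upcast
    # of a float32 expression to float64.
    if isinstance(value, float):
        return np.asarray(value, dtype=FLOATX)
    if isinstance(value, int):
        return np.asarray(value)
    raise TypeError("expected a Python int or float")

fvec = np.zeros(3, dtype="float32")
result = fvec + as_ndarray_constant(1.1)
print(result.dtype)  # float32: the literal was downcast before the add
```

The alternative the message mentions, keeping the cast as a symbolic node, would defer this conversion to compile time instead of doing it at constant-construction time.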
- 09 Jan, 2010 (30 commits)
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra: the optimization they register. This way they can be used as decorators without accidentally making their optimization disappear from the defining module.
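The decorator pitfall this commit addresses can be shown with a small registry sketch (the `REGISTRY` dict and `register_opt` name are hypothetical illustrations, not Theano's actual registration API): a registering function used as a decorator must return what it registers, or the decorated name gets rebound to None in its module.

```python
REGISTRY = {}

def register_opt(fn):
    # Register the optimization, then return it so that the decorated
    # name stays bound to the function in the defining module.
    REGISTRY[fn.__name__] = fn
    return fn

@register_opt
def local_useless_add(node):
    """Toy optimization placeholder."""
    return node

print(local_useless_add.__name__)  # local_useless_add, not None
```

Had `register_opt` returned nothing, `local_useless_add` would be None after decoration, which is exactly the disappearance the commit message describes.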
- Committed by James Bergstra
- Committed by James Bergstra: cross-entropy
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by Frederic Bastien
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by Frederic Bastien
- Committed by Frederic Bastien
- Committed by Frederic Bastien: Added an unused argument to DebugMode.__init__ so that it accepts the same arguments as the other Modes. This allows it to be used in a more generic way.
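The pattern of accepting an argument purely for signature compatibility can be sketched like this (the class bodies and `build_mode` helper below are hypothetical simplifications, not Theano's actual Mode implementation):

```python
class Mode:
    def __init__(self, linker=None, optimizer="default"):
        self.linker = linker
        self.optimizer = optimizer

class DebugMode(Mode):
    def __init__(self, linker=None, optimizer="default"):
        # 'linker' is accepted only so DebugMode matches the signature
        # of the other Modes; it is deliberately ignored, because this
        # mode always uses its own checking linker.
        super().__init__(linker="debug", optimizer=optimizer)

def build_mode(mode_class, linker):
    # Generic code can now construct any Mode the same way,
    # without special-casing DebugMode.
    return mode_class(linker=linker)

m = build_mode(DebugMode, linker="c|py")
print(m.linker)  # 'debug': the passed-in linker was accepted but ignored
```

The unused parameter buys uniformity: callers that instantiate modes generically no longer need to know which constructor arguments each subclass actually honors.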
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by Frederic Bastien
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by James Bergstra
- Committed by Frederic Bastien
- Committed by Frederic Bastien
- Committed by Frederic Bastien