- 30 Sep, 2009: 1 commit
-
-
Committed by Olivier Delalleau
-
- 25 Sep, 2009: 1 commit
-
-
Committed by Frederic Bastien
-
- 24 Sep, 2009: 24 commits
-
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by James Bergstra
[recursive] handling of code versioning for ops like Elemwise, CAReduce, etc. Changed the default cache version of Op to (). Added c_code_cache_version() functions to many Ops that were using the default before.
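The tuple-based cache-versioning scheme described in this commit can be sketched as follows. This is a simplified illustration under assumptions, not Theano's actual implementation: the `Scale` op and the `cache_key` helper are hypothetical, while the convention that `c_code_cache_version()` returns a tuple (empty meaning "unversioned, never cache") matches the commit message.

```python
class Op:
    """Minimal stand-in for an Op with C-code cache versioning."""

    def c_code_cache_version(self):
        # Default changed to the empty tuple: no version declared,
        # so generated C code must never be reused from the cache.
        return ()


class Scale(Op):
    """Hypothetical op that declares an explicit cache version."""

    def __init__(self, factor):
        self.factor = factor

    def c_code_cache_version(self):
        # Bump this tuple whenever the generated C code changes.
        return (1, 0)


def cache_key(op):
    """Return a cache key for op's compiled code, or None if uncacheable."""
    version = op.c_code_cache_version()
    if version == ():  # unversioned ops are never cached
        return None
    return (type(op).__name__, version)


print(cache_key(Scale(2.0)))  # ('Scale', (1, 0))
print(cache_key(Op()))        # None
```

The point of the empty-tuple default is that forgetting to declare a version disables caching rather than silently serving stale compiled code.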
-
Committed by James Bergstra
-
Committed by James Bergstra
scalar/basic. There is also a cast function, which is used in the grad() of ops that might upcast their arguments, to downcast the corresponding gradients.
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by James Bergstra
types.
-
Committed by James Bergstra
the deprecation warning on startup.
-
Committed by James Bergstra
-
Committed by James Bergstra
problem.
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
-
Committed by Frederic Bastien
-
Committed by James Bergstra
-
Committed by James Bergstra
complex->real is forbidden by the cast() function.
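The rule that complex-to-real conversion is refused could look like the sketch below. This is a hypothetical `cast` helper over dtype-name strings, not the actual `theano.tensor` implementation; the dtype sets and error message are assumptions for illustration.

```python
REAL = {"int8", "int16", "int32", "int64", "float32", "float64"}
COMPLEX = {"complex64", "complex128"}


def cast(value, from_dtype, to_dtype):
    """Cast between dtypes, refusing the ambiguous complex -> real direction."""
    if from_dtype in COMPLEX and to_dtype in REAL:
        raise TypeError(
            "casting complex to real is ambiguous; "
            "take real() or imag() explicitly instead"
        )
    return value  # the actual numeric conversion is elided in this sketch


cast(1.5, "float32", "float64")  # upcast within real dtypes: allowed
# cast(1 + 2j, "complex64", "float64") would raise TypeError
```

Forcing the caller to pick the real or imaginary part makes the lossy step explicit instead of silently discarding half the value.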
-
Committed by James Bergstra
-
Message.
-
- 23 Sep, 2009: 14 commits
-
-
Committed by James Bergstra
bumped version numbers on Elemwise and CAReduce. Not sure why it was necessary, but it fixed my problem. Probably something got changed earlier and the versions weren't raised.
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by Philippe Hamel
-
Committed by James Bergstra
makes it play nicer with the new rule that update values must have the same type as their shared vars. An alternative fix to several tests would have been to implement a Tensor->Scalar cast, but I didn't do that because it's nice to just use Tensors.
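The rule mentioned above, that an update value must have the same type as its shared variable, can be sketched as a type check. The `Shared` class and `check_update` helper below are hypothetical illustrations, not Theano's actual API; real Theano types also carry broadcast patterns, which this sketch reduces to a dtype and a number of dimensions.

```python
class Shared:
    """Toy shared variable carrying a dtype and a number of dimensions."""

    def __init__(self, value, dtype, ndim):
        self.value, self.dtype, self.ndim = value, dtype, ndim


def check_update(shared, new_dtype, new_ndim):
    """Reject an update expression whose type differs from its shared var."""
    if (new_dtype, new_ndim) != (shared.dtype, shared.ndim):
        raise TypeError(
            f"update of type ({new_dtype}, {new_ndim}d) does not match "
            f"shared variable of type ({shared.dtype}, {shared.ndim}d)"
        )


s = Shared(0.0, "float64", 0)
check_update(s, "float64", 0)  # same type: accepted
# check_update(s, "float32", 0) would raise TypeError
```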
-
Committed by James Bergstra
-
Committed by James Bergstra
The reason is that there is a problem with the way we deal with strictness. Strictness requires that a value has exactly the right type, while non-strictness permits just about any kind of casting, including inexact casting. What I would like the default to be is something that means casting is OK as long as it is exact.
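The middle ground wished for above, between strict (exact type only) and fully permissive casting, could be sketched as accepting only provably lossless casts. The `accepts_cast` helper and the lossless table below are illustrative assumptions, not Theano's actual policy (for instance, `int32 -> float64` is exact because a float64 mantissa holds 53 bits, while `int64 -> float64` is not).

```python
# Hypothetical table: for each source dtype, the targets that can
# represent every one of its values exactly.
LOSSLESS = {
    "int8": {"int8", "int16", "int32", "int64", "float32", "float64"},
    "int16": {"int16", "int32", "int64", "float32", "float64"},
    "int32": {"int32", "int64", "float64"},
    "int64": {"int64"},
    "float32": {"float32", "float64"},
    "float64": {"float64"},
}


def accepts_cast(value_dtype, target_dtype, strict=False):
    """Accept a value if it can be cast exactly to target_dtype."""
    if strict:
        return value_dtype == target_dtype
    return target_dtype in LOSSLESS.get(value_dtype, set())


assert accepts_cast("int16", "float64")        # exact upcast: accepted
assert not accepts_cast("float64", "float32")  # inexact downcast: rejected
assert not accepts_cast("float32", "float64", strict=True)  # strict: exact type only
```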
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by James Bergstra
-
Committed by James Bergstra
had been upcast. I did this for add, sub, div. Mul looked tricky so I didn't do it.
-
Committed by James Bergstra
function. This corresponds to what is already in theano.tensor.
-
Committed by James Bergstra
-
Committed by James Bergstra
when initializing a shared variable with a value of 0, because subsequent floating-point assignments will be silently downcast to 0. At the same time, it is slightly more annoying when it comes to float32 vs. float64 differences.
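The pitfall described above, where the type inferred from an initial value of 0 silently truncates later floating-point assignments, can be demonstrated with a toy model. The `Shared` class here is a hypothetical stand-in using Python's `int`/`float` in place of tensor dtypes; it is not Theano's shared-variable implementation.

```python
class Shared:
    """Toy shared variable whose dtype is inferred from its initial value."""

    def __init__(self, value):
        # Infer the "dtype" from the initial value's Python type.
        self.cast = int if isinstance(value, int) else float
        self.value = self.cast(value)

    def set(self, new_value):
        # Assignments are silently cast to the inferred dtype.
        self.value = self.cast(new_value)


s = Shared(0)   # dtype inferred as integer from the literal 0
s.set(0.9)      # the float assignment is silently truncated
print(s.value)  # 0

t = Shared(0.0)  # initializing with 0.0 instead keeps a float dtype
t.set(0.9)
print(t.value)   # 0.9
```

Initializing with `0.0` (or an explicit dtype) avoids the truncation, which is why catching this case eagerly is worth the extra noise around float32 vs. float64.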
-