Commit 1086f062 authored by James Bergstra

fixed cost fn

Parent c720a3fb
@@ -172,7 +172,7 @@ training by simple gradient descent.
 # REGRESSION MODEL AND COSTS TO MINIMIZE
 prediction = T.softmax(T.dot(x, w) + b)
-cross_entropy = T.sum(y * T.log(prediction) + (1-y) * T.log(1.0 - prediction), axis=1)
+cross_entropy = T.sum(y * T.log(prediction), axis=1)
 cost = T.sum(cross_entropy) + l2_coef * T.sum(T.sum(w*w))
 # GET THE GRADIENTS NECESSARY TO FIT OUR PARAMETERS
...
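The fix drops the `(1-y) * T.log(1.0 - prediction)` term: that term belongs to the *binary* cross-entropy used with independent sigmoid outputs, while a softmax output already sums to 1 across classes, so the categorical cross-entropy `-sum(y * log(p))` is the appropriate cost. A minimal NumPy sketch (not the original Theano code; the toy logits and one-hot targets are made up for illustration) comparing the two losses:

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# hypothetical toy batch: 2 samples, 3 classes
z = np.array([[2.0, 1.0, 0.1],
              [0.5, 2.5, 0.3]])
y = np.array([[1, 0, 0],
              [0, 1, 0]], dtype=float)  # one-hot targets
p = softmax(z)

# categorical cross-entropy (matches the corrected line, up to sign):
cat_xent = -np.sum(y * np.log(p), axis=1)

# the old binary-style cost adds a (1-y)*log(1-p) penalty on every
# non-target class, double-counting mass that softmax already constrains
bin_xent = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p), axis=1)

print(cat_xent)  # per-sample categorical loss
print(bin_xent)  # strictly larger: extra penalty per non-target class
```

Note that the committed Theano line omits the leading minus sign, so there `cost` is the (negated) log-likelihood plus the L2 term; the NumPy sketch uses the conventional positive-loss sign.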