Commit fc5fe33a, authored by AdeB

Replace sparse_block_dot by tensor_block when full output is requested.

parent 58cddb74
...
@@ -2293,14 +2293,8 @@ def h_softmax(x, batch_size, n_outputs, n_classes, n_outputs_per_class,
     if target is None:  # Computes the probabilites of all the outputs
-        class_ids = tensor.tile(
-            tensor.arange(n_classes, dtype="int32")[None, :], (batch_size, 1))
         # Second softmax that computes the output probabilities
-        activations = sparse_block_dot(
-            W2[None, :, :, :], x[:, None, :],
-            tensor.zeros((batch_size, 1), dtype='int32'), b2, class_ids)
+        activations = tensor.tensordot(x, W2, (1, 1)) + b2
         output_probs = theano.tensor.nnet.softmax(
             activations.reshape((-1, n_outputs_per_class)))
         output_probs = output_probs.reshape((batch_size, n_classes, -1))
...
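For reference, the shape logic of the new code path can be sketched in plain NumPy (not Theano): contracting the feature axis of `x` with the feature axis of `W2` produces activations for every class at once, which is why the per-class `class_ids`/`sparse_block_dot` machinery is no longer needed when the full output is requested. The sizes below are illustrative, not taken from the commit.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the commit).
batch_size, dim = 4, 5
n_classes, n_outputs_per_class = 3, 6

rng = np.random.default_rng(0)
x = rng.standard_normal((batch_size, dim))                    # (batch, dim)
W2 = rng.standard_normal((n_classes, dim, n_outputs_per_class))
b2 = rng.standard_normal((n_classes, n_outputs_per_class))

# Contract x's axis 1 with W2's axis 1, mirroring
# tensor.tensordot(x, W2, (1, 1)) + b2 in the diff.
# Result shape: (batch_size, n_classes, n_outputs_per_class).
activations = np.tensordot(x, W2, axes=(1, 1)) + b2

# Row-wise softmax over the per-class outputs, mirroring the
# reshape -> softmax -> reshape sequence kept by the commit.
flat = activations.reshape(-1, n_outputs_per_class)
e = np.exp(flat - flat.max(axis=1, keepdims=True))
output_probs = (e / e.sum(axis=1, keepdims=True)).reshape(
    batch_size, n_classes, -1)

print(output_probs.shape)
```

Each row of `output_probs[i, c]` sums to 1, one softmax per (example, class) pair, as in the original code.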