Commit 6279b842 authored by James Bergstra

removed some benchmarking code to DeepLearningBenchmarks on github

Parent fb491a08
The benchmarking folder contains efforts to benchmark Theano against various
other systems. Each subfolder corresponds to a particular type of computation,
and each sub-subfolder corresponds to the implementation of that computation
with a particular software package.
Since there is a variety of benchmark problems and of software systems, there
isn't a standard for how to run the benchmark suite.
There is however a standard for how each benchmark should produce results.
Every benchmark run should produce one or more files with the results of
benchmarking. These files must end with the extension '.bmark', and each must
have at least three lines:
1) line 1 - description of computation/problem
2) line 2 - description of implementation/platform
3) line 3 - time required (in seconds)
4) line 4 - [optional] an estimated number of FLOPS performed (not necessarily same for all implementations of problem)
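The '.bmark' convention above can be sketched as a small helper. This is an illustrative sketch, not code from the repository; the function name `write_bmark` and the file name used below are hypothetical.

```python
import time


def write_bmark(path, problem, platform, seconds, flops=None):
    """Write a results file in the '.bmark' convention described above:
    line 1: problem description, line 2: implementation/platform,
    line 3: time in seconds, line 4 (optional): estimated FLOPS."""
    lines = [problem, platform, "%f" % seconds]
    if flops is not None:
        lines.append("%g" % flops)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")


# Hypothetical usage: time a stand-in workload and record the result.
t0 = time.time()
sum(i * i for i in range(100000))  # stand-in for the benchmarked computation
write_bmark("mlp_784_500_10.bmark", "mlp 784-500-10", "python-stdlib",
            time.time() - t0)
```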
require "lab"
require "os"
require "nn"

n_examples = 1000
inputs = 1000
outputs = 12
HUs = 500

dataset = {}
function dataset:size() return n_examples end -- 1000 examples
for i = 1, dataset:size() do
   local input = lab.randn(inputs)   -- normally distributed input vector
   local output = lab.randn(outputs) -- random regression target
   dataset[i] = {input, output}
end

mlp = nn.Sequential() -- make a multi-layer perceptron
mlp:add(nn.Linear(inputs, HUs))
mlp:add(nn.Tanh())
mlp:add(nn.Linear(HUs, outputs))

criterion = nn.MSECriterion()
trainer = nn.StochasticGradient(mlp, criterion)
trainer.learningRate = 0.01
trainer.shuffleIndices = true
trainer.maxIteration = 4
trainer:train(dataset)
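For comparison with the Torch script above, the same computation (a one-hidden-layer tanh MLP trained with per-example SGD on an MSE criterion) can be sketched in NumPy. This is an illustrative sketch, not the repository's Theano implementation; the function `mlp_benchmark` and its weight-initialization scale are assumptions.

```python
import time
import numpy as np


def mlp_benchmark(n_examples, inputs, outputs, HUs, epochs=4, lr=0.01, seed=0):
    """Train a one-hidden-layer tanh MLP with per-example SGD on mean
    squared error, mirroring the Torch script; returns (final_mse, seconds)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_examples, inputs))
    Y = rng.standard_normal((n_examples, outputs))
    W1 = rng.standard_normal((inputs, HUs)) * 0.01   # assumed init scale
    b1 = np.zeros(HUs)
    W2 = rng.standard_normal((HUs, outputs)) * 0.01
    b2 = np.zeros(outputs)

    t0 = time.time()
    for _ in range(epochs):                          # trainer.maxIteration
        for i in rng.permutation(n_examples):        # shuffleIndices = true
            x, y = X[i], Y[i]
            h = np.tanh(x @ W1 + b1)                 # forward pass
            pred = h @ W2 + b2
            g_pred = 2.0 * (pred - y) / outputs      # d(mean sq. error)/d(pred)
            g_h = (g_pred @ W2.T) * (1.0 - h * h)    # backprop through tanh
            W2 -= lr * np.outer(h, g_pred)
            b2 -= lr * g_pred
            W1 -= lr * np.outer(x, g_h)
            b1 -= lr * g_h
    seconds = time.time() - t0

    H = np.tanh(X @ W1 + b1)
    mse = float(np.mean((H @ W2 + b2 - Y) ** 2))
    return mse, seconds


# Quick smoke run with small sizes; the Lua script uses
# (n_examples, inputs, outputs, HUs) = (1000, 1000, 12, 500).
mse, secs = mlp_benchmark(50, 20, 4, 16, epochs=2)
```

The elapsed `seconds` value is what a '.bmark' results file would record on its third line.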