
Eval Suites

Community benchmark suites for evaluating local LLM quality. Submit results via the API.
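A minimal submission sketch in Python, assuming a JSON REST endpoint with bearer-token auth; the URL, payload fields, and auth scheme here are illustrative placeholders, not the documented contract (see the API docs for the real schema):

```python
# Hypothetical sketch: the endpoint path, auth scheme, and payload fields
# are assumptions for illustration -- consult the API docs for the real API.
import requests

payload = {
    "suite": "humaneval",            # suite slug (assumed)
    "suite_version": "v1.0",
    "harness": "lm-eval-harness",
    "model": "your-org/your-model",  # placeholder model identifier
    "metrics": {"pass@1": 0.317},    # example number, not a real result
}

resp = requests.post(
    "https://example.com/api/v1/eval-runs",  # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```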

MBPP (Official)
v1.0 · lm-eval-harness

Mostly Basic Python Problems: the 500-problem test split of a crowd-sourced Python benchmark, each task paired with automated test cases. Broader coverage than HumanEval.

coding · 0 runs
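For reference, a hedged sketch of scoring a local model on MBPP through the harness's Python API. The model path is a placeholder, the HF_ALLOW_CODE_EVAL opt-in is an assumption about your harness version, and the exact metric key varies across releases:

```python
import os

# MBPP executes model-generated code; the HuggingFace code_eval metric the
# harness builds on requires this explicit opt-in (version-dependent).
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=your-org/your-model",  # placeholder model
    tasks=["mbpp"],
    batch_size=8,
)
# Metric key names differ by harness version; inspect the dict to find pass@1.
print(results["results"]["mbpp"])
```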
HumanEval (Official)
v1.0 · lm-eval-harness

OpenAI's Python function-completion benchmark: 164 hand-written problems, each with unit tests, scored as pass@1 code-synthesis accuracy.

coding · 0 runs
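The pass@1 number comes from the unbiased pass@k estimator in the HumanEval paper (Chen et al., 2021): sample n completions per problem, count the c that pass all unit tests, and estimate the chance that a size-k draw contains at least one pass. A direct numpy implementation:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al. (2021).

    n: completions sampled per problem, c: completions that pass all unit
    tests, k: evaluation budget (pass@1 with greedy decoding is n=1, k=1).
    """
    if n - c < k:
        return 1.0  # every size-k sample must include a passing completion
    # 1 - C(n-c, k) / C(n, k), expanded as a numerically stable product
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 samples per problem, 52 pass -> estimated pass@10
print(pass_at_k(n=200, c=52, k=10))
```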
HumanEval 0-shot
v1.0 · lm-eval-harness

OpenAI HumanEval run through the EleutherAI lm-evaluation-harness humaneval task: 0-shot prompting with pass@k code-generation scoring.

coding · 0 runs
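The same Python API covers this variant; a short sketch pinning the shot count explicitly (placeholder model name, the opt-in env var is assumed to apply to your harness version, and the metric key under the results dict depends on the release):

```python
import os

os.environ["HF_ALLOW_CODE_EVAL"] = "1"  # opt in to executing generated code

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=your-org/your-model",  # placeholder model
    tasks=["humaneval"],
    num_fewshot=0,  # explicit 0-shot, matching this suite's definition
)
# Key names such as "pass@1,create_test" vary by version; inspect the dict.
print(results["results"]["humaneval"])
```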