Community benchmark suites for evaluating local LLM quality. Submit results via the API.
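A minimal submission sketch is shown below. The endpoint URL, payload fields, and auth header are illustrative placeholders, not the actual API schema; consult the API docs for the real contract.

```python
# Hypothetical result submission: the URL, payload fields, and auth
# header below are illustrative assumptions, not the real API schema.
import requests

payload = {
    "model": "my-local-llm-7b",   # model identifier (placeholder)
    "benchmark": "humaneval",     # suite name (placeholder)
    "metric": "pass@1",
    "score": 0.372,
}

resp = requests.post(
    "https://example.com/api/v1/results",  # placeholder URL
    json=payload,
    headers={"Authorization": "Bearer <YOUR_TOKEN>"},  # placeholder token
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```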
Mostly Basic Python Problems (MBPP): crowd-sourced Python programming problems with automated test cases, 974 in the full set, of which 500 form the standard test split. Broader coverage than HumanEval. A sketch of the checking scheme follows.
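Each MBPP problem pairs a natural-language prompt with a list of assert statements; a completion passes when all asserts hold. The sketch below shows that checking scheme with an invented example problem (not taken from the dataset); real evaluation should sandbox the `exec` calls, since they run untrusted generated code.

```python
# MBPP-style automated checking: a completion passes the problem if
# every bundled assert statement succeeds. Example problem is invented.
problem = {
    "prompt": "Write a function to find the minimum of two numbers.",
    "code": "def min_of_two(a, b):\n    return a if a < b else b",
    "tests": [
        "assert min_of_two(1, 2) == 1",
        "assert min_of_two(-3, 0) == -3",
    ],
}

def passes(code: str, tests: list[str]) -> bool:
    env: dict = {}
    try:
        exec(code, env)   # define the candidate function (unsandboxed here)
        for t in tests:
            exec(t, env)  # run each assert against it
        return True
    except Exception:
        return False

print(passes(problem["code"], problem["tests"]))  # True
```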
HumanEval: OpenAI's Python function-completion benchmark. 164 hand-crafted problems, each with unit tests, typically scored as pass@1 code-synthesis accuracy; see the estimator sketch below.
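pass@1 is the k=1 case of the unbiased pass@k estimator from the HumanEval/Codex paper (Chen et al., 2021): draw n samples per problem, count the c that pass, and estimate the probability that at least one of k draws succeeds as 1 - C(n-c, k)/C(n, k), averaged over problems.

```python
# Unbiased pass@k estimator from Chen et al., 2021:
# pass@k = 1 - C(n - c, k) / C(n, k), computed in a numerically
# stable product form.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n samples drawn, c of them correct: chance >=1 of k draws passes."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# e.g. 200 samples with 50 passing gives pass@1 = 50/200 = 0.25
print(pass_at_k(200, 50, 1))
```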
OpenAI HumanEval run through EleutherAI's lm-evaluation-harness (task humaneval), 0-shot, scored with pass@k code generation.
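A sketch of invoking that task programmatically, assuming lm-evaluation-harness 0.4.x; the model checkpoint is a placeholder, and flag names can differ across harness versions, so check the version you have installed.

```python
# Running the harness's humaneval task: a sketch assuming lm-eval 0.4.x.
import os
import lm_eval

# HumanEval executes generated code; the underlying HF code_eval metric
# refuses to run unless this opt-in variable is set.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<your-model>",  # placeholder checkpoint
    tasks=["humaneval"],
    num_fewshot=0,                         # 0-shot, as scored here
    # Recent harness releases may also require confirming unsafe code
    # execution (confirm_run_unsafe_code); check your installed version.
)
print(results["results"]["humaneval"])     # metrics include pass@1
```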