
Eval Suites

Community benchmark suites for evaluating local LLM quality. Submit results via the API.
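A submission is a small JSON body POSTed to the results endpoint. A minimal sketch of building and sending one, assuming a hypothetical endpoint URL and field names (`suite`, `model`, `score`) -- the actual API Docs define the real schema:

```python
import json
import urllib.request

# Placeholder URL and payload schema; consult the API Docs for the real ones.
API_URL = "https://example.com/api/v1/eval-results"

def build_submission(suite: str, model: str, score: float) -> bytes:
    """Serialize one eval result as a JSON request body."""
    payload = {"suite": suite, "model": model, "score": score}
    return json.dumps(payload).encode("utf-8")

body = build_submission("mmlu-5shot", "my-local-model", 0.642)
req = urllib.request.Request(
    API_URL, data=body, headers={"Content-Type": "application/json"}
)
# urllib.request.urlopen(req)  # uncomment with a real endpoint and credentials
```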

MMLU 5-shot
v1.0 · lm-eval-harness

Massive Multitask Language Understanding, run via the EleutherAI lm-evaluation-harness (`mmlu` task, 5-shot), scored with exact-match/accuracy-style scoring.
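Exact-match accuracy means a prediction counts only if it matches the reference answer exactly. The harness's own scoring pipeline is more involved, but the core metric reduces to a sketch like this:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions that exactly equal the reference answer
    (after trimming whitespace) -- accuracy-style scoring for
    multiple-choice tasks like MMLU."""
    if not references:
        return 0.0
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

# Multiple-choice answers expressed as option letters:
print(exact_match_accuracy(["A", "C", "B"], ["A", "B", "B"]))  # → 0.6666666666666666
```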

knowledge · 0 runs
Probe
v1.0 · lm-eval-harness
knowledge · 1 run