
Eval Suites

Community benchmark suites for evaluating local LLM quality. Submit results via the API.
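The page only says results are submitted "via the API", so the endpoint path (`/api/evals/results`), field names, and base URL below are all assumptions, not a documented schema. A minimal Python sketch of assembling and preparing a submission under those assumptions:

```python
import json
from urllib import request

def build_submission(suite: str, version: str, model: str, scores: dict) -> dict:
    """Assemble a result-submission payload for an eval suite.

    All field names here are guesses at what a results API might accept.
    """
    return {
        "suite": suite,      # e.g. "hellaswag" -- slug format is assumed
        "version": version,  # e.g. "v1.0", matching the suite version shown
        "model": model,      # identifier of the locally served model
        "scores": scores,    # per-task metric values from lm-eval-harness
    }

def submit(payload: dict, base_url: str = "http://localhost:8000") -> request.Request:
    """Prepare (but do not send) a POST to the assumed results endpoint."""
    return request.Request(
        f"{base_url}/api/evals/results",  # assumed endpoint path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

The sketch only constructs the request; actually sending it (and any required authentication) depends on API details not shown on this page.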

Open LLM Leaderboard (Official)
v1.0 · lm-eval-harness

The canonical HuggingFace Open LLM Leaderboard suite: MMLU, ARC Challenge, HellaSwag, WinoGrande, TruthfulQA MC2, and GSM8K with official few-shot settings. The aggregate score is a weighted mean across tasks.

reasoning · 0 runs
DROP (Official)
v1.0 · lm-eval-harness

Discrete Reasoning Over Paragraphs. Reading-comprehension benchmark requiring numerical and symbolic reasoning over passages.

reasoning · 0 runs
Big-Bench Hard (Official)
v1.0 · lm-eval-harness

A collection of challenging BIG-Bench tasks selected because prior models performed poorly. Covers symbolic reasoning, algorithmic reasoning, and language understanding.

reasoning · 0 runs
GPQA Diamond (Official)
v1.0 · lm-eval-harness

Graduate-level Google-proof Q&A benchmark focused on biology, physics, and chemistry. The Diamond split is the highest-quality expert-validated subset.

reasoning · 0 runs
WinoGrande (Official)
v1.0 · lm-eval-harness

Large-scale Winograd schema challenge for commonsense reasoning. Fill-in-the-blank pronoun resolution requiring world knowledge.

reasoning · 0 runs
HellaSwag (Official)
v1.0 · lm-eval-harness

Sentence completion benchmark testing grounded commonsense inference. Models must pick the most plausible continuation of an activity description.

reasoning · 1 run
ARC Challenge (Official)
v1.0 · lm-eval-harness

AI2 Reasoning Challenge (Challenge set) — grade-school science questions that require reasoning beyond simple retrieval. Harder subset of ARC.

reasoning · 0 runs
MMLU (Official)
v1.0 · lm-eval-harness

Massive Multitask Language Understanding — 57-subject academic exam covering STEM, humanities, social sciences, and more. The gold-standard broad-knowledge benchmark.

reasoning · 1 run
Local Reasoning Mini (Official)
v1.0 · Custom

A lightweight 10-question sanity check for locally served models. Designed for the trusted /api/evals/execute path.

reasoning · 2 runs
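The trusted `/api/evals/execute` path is named but not documented on this page, so the request shape below is entirely an assumption: the suite slug `local-reasoning-mini`, the `target` fields, and the OpenAI-compatible `api_format` are all guesses. A sketch of what an execute request for this 10-question sanity check could look like:

```python
import json

def build_execute_request(model_endpoint: str) -> dict:
    """Build a hypothetical request body asking the server to run
    Local Reasoning Mini against a locally served model endpoint.

    Field names and the suite slug are assumptions, not a documented API.
    """
    return {
        "suite": "local-reasoning-mini",  # assumed slug for the suite above
        "version": "v1.0",
        "target": {
            "endpoint": model_endpoint,   # e.g. a local inference server URL
            "api_format": "openai",       # assumed supported request format
        },
        "limit": 10,  # the suite is described as a fixed 10-question check
    }

# Serialize for a POST to the (assumed) /api/evals/execute endpoint.
body = json.dumps(build_execute_request("http://localhost:8080/v1"))
```

Because this path is described as trusted, it presumably runs server-side against the named endpoint rather than accepting self-reported scores; the actual authentication and response format would come from the API docs.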