
Eval Suites

Community benchmark suites for evaluating local LLM quality. Submit results via the API.
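Result submission is a plain HTTP call. Below is a minimal Python sketch, assuming a JSON POST with bearer-token auth; the URL, payload field names, and auth scheme are placeholders rather than the documented schema, so consult the API documentation for the actual contract.

```python
import requests

# Hypothetical sketch only: the submission URL, payload fields, and auth
# header below are placeholders, not the site's documented schema.
SUBMIT_URL = "https://<host>/api/evals/submit"   # replace with the documented endpoint
API_KEY = "YOUR_API_KEY"                         # assumed auth scheme

payload = {
    "suite": "gsm8k",                    # suite identifier (assumed field name)
    "model": "llama-3.1-8b-instruct",    # model that was evaluated (assumed field name)
    "score": 0.742,                      # aggregate metric (assumed field name)
    "runner": "lm-eval-harness",         # framework that produced the run (assumed field name)
}

resp = requests.post(
    SUBMIT_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```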

Tech Greenpost (Official)
v1.0 · Custom

A five-prompt creative writing eval in which models draft short, tech-themed 4chan-style greenposts. A DeepSeek judge scores each response on format compliance, reasonable length, tech relevance, coherence, and humor.

writing · 1 run
Open LLM Leaderboard (Official)
v1.0 · lm-eval-harness

The canonical Hugging Face Open LLM Leaderboard suite: MMLU, ARC Challenge, HellaSwag, WinoGrande, TruthfulQA MC2, and GSM8K, each run with its official few-shot setting. The overall score is a weighted mean across the six tasks; a reproduction sketch follows this entry.

reasoning · 0 runs
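A run of the lm-eval-harness suites on this page can be reproduced from Python. The sketch below assumes lm-eval-harness v0.4's simple_evaluate entry point and the original Open LLM Leaderboard few-shot convention (25/10/5/0/5/5); the model name is only an example, and the metric keys in the returned dict vary by task.

```python
from lm_eval import simple_evaluate

# Few-shot counts follow the original Open LLM Leaderboard convention; this is
# an assumption about what "official few-shot settings" means for this suite.
FEWSHOT = {
    "arc_challenge": 25,
    "hellaswag": 10,
    "mmlu": 5,
    "truthfulqa_mc2": 0,
    "winogrande": 5,
    "gsm8k": 5,
}

per_task = {}
for task, shots in FEWSHOT.items():
    out = simple_evaluate(
        model="hf",
        model_args="pretrained=meta-llama/Llama-3.1-8B-Instruct",  # example model only
        tasks=[task],
        num_fewshot=shots,
    )
    # Primary metric keys differ per task (acc, acc_norm, exact_match, ...).
    per_task[task] = out["results"][task]

# Aggregate as a weighted mean over the six tasks once the primary metric has
# been pulled out of per_task; equal weights reduce this to a simple average.
```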
MATH (Official)
v1.0 · lm-eval-harness

Competition math problems spanning algebra, counting and probability, geometry, intermediate algebra, number theory, prealgebra, and precalculus.

math · 0 runs
DROP (Official)
v1.0 · lm-eval-harness

Discrete Reasoning Over Paragraphs. Reading-comprehension benchmark requiring numerical and symbolic reasoning over passages.

reasoning · 0 runs
Big-Bench Hard (Official)
v1.0 · lm-eval-harness

A collection of challenging BIG-Bench tasks selected because prior models performed poorly. Covers symbolic reasoning, algorithmic reasoning, and language understanding.

reasoning · 0 runs
GPQA Diamond (Official)
v1.0 · lm-eval-harness

Graduate-level Google-proof Q&A benchmark focused on biology, physics, and chemistry. The Diamond split is the highest-quality expert-validated subset.

reasoning · 0 runs
MBPP (Official)
v1.0 · lm-eval-harness

Mostly Basic Python Problems — 500 crowd-sourced Python programming problems with automated test cases (see the checking sketch after this entry). Broader coverage than HumanEval.

coding · 0 runs
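As a rough illustration of how the automated test cases are applied, the sketch below executes a candidate completion against MBPP-style assert statements. The problem and tests are illustrative rather than drawn from the real dataset, and real harnesses add sandboxing and timeouts that are omitted here.

```python
# Illustrative problem and tests, not taken from the actual MBPP dataset.
candidate = """
def remove_odd(nums):
    return [n for n in nums if n % 2 == 0]
"""
tests = [
    "assert remove_odd([1, 2, 3, 4]) == [2, 4]",
    "assert remove_odd([5, 7]) == []",
]

namespace = {}
exec(candidate, namespace)   # real harnesses sandbox this; never exec untrusted code directly
passed = True
for test in tests:
    try:
        exec(test, namespace)
    except Exception:
        passed = False
        break

print("pass" if passed else "fail")
```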
HumanEval (Official)
v1.0 · lm-eval-harness

OpenAI's Python function completion benchmark. 164 hand-crafted problems with unit tests measuring pass@1 code synthesis accuracy; a pass@k estimator sketch follows this entry.

coding · 0 runs
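pass@1 is estimated per problem and averaged over the 164 tasks. The sketch below uses the unbiased pass@k estimator from the original HumanEval paper; the (n, c) sample counts are illustrative.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator, 1 - C(n-c, k) / C(n, k), computed as a stable product.
    n: completions sampled per problem, c: completions that pass all unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative per-problem (n, c) counts; the suite score is the mean over all 164 problems.
samples = [(10, 3), (10, 0), (10, 10)]
print(sum(pass_at_k(n, c, k=1) for n, c in samples) / len(samples))
```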
GSM8K (Official)
v1.0 · lm-eval-harness

Grade School Math 8K — 8,500 grade-school math word problems requiring multi-step arithmetic reasoning. Standard benchmark for math reasoning capability.

math · 1 run
TruthfulQA (Official)
v1.0 · lm-eval-harness

Tests whether models generate truthful answers to questions that humans often answer incorrectly due to misconceptions or false beliefs.

truthfulness · 0 runs
WinoGrande (Official)
v1.0 · lm-eval-harness

Large-scale Winograd schema challenge for commonsense reasoning. Fill-in-the-blank pronoun resolution requiring world knowledge.

reasoning · 0 runs
HellaSwag (Official)
v1.0 · lm-eval-harness

Sentence completion benchmark testing grounded commonsense inference. Models must pick the most plausible continuation of an activity description.

reasoning · 1 run
ARC Challenge (Official)
v1.0 · lm-eval-harness

AI2 Reasoning Challenge (Challenge set) — grade-school science questions that require reasoning beyond simple retrieval. Harder subset of ARC.

reasoning · 0 runs
MMLU (Official)
v1.0 · lm-eval-harness

Massive Multitask Language Understanding — 57-subject academic exam covering STEM, humanities, social sciences, and more. The gold-standard broad-knowledge benchmark.

reasoning · 1 run
Local Reasoning Mini (Official)
v1.0 · Custom

A lightweight 10-question sanity check for locally served models. Designed for the trusted /api/evals/execute path; an example request follows this entry.

reasoning · 2 runs
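The /api/evals/execute path is the only endpoint named on this page; everything else in the sketch below (host, auth header, payload field names, and the locally served OpenAI-compatible endpoint) is an assumption for illustration.

```python
import requests

# Only the /api/evals/execute path comes from this page; the host, auth
# header, and payload field names are placeholders.
resp = requests.post(
    "http://localhost:8080/api/evals/execute",        # assumed host and port
    json={
        "suite": "local-reasoning-mini",              # assumed suite slug
        "model": "my-local-model",                    # locally served model name (assumed)
        "endpoint": "http://localhost:11434/v1",      # assumed OpenAI-compatible local endpoint
    },
    headers={"Authorization": "Bearer YOUR_API_KEY"}, # assumed auth scheme
    timeout=600,
)
resp.raise_for_status()
print(resp.json())
```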