
Eval Suites

Community benchmark suites for evaluating local LLM quality. Submit results via the API.
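The submission API itself is not documented here, so the following is a minimal sketch of what a result submission might look like. The endpoint URL, field names (`suite`, `suite_version`, `model`, `metrics`), and bearer-token auth are all assumptions, not the site's actual API.

```python
import json
from urllib import request

# Hypothetical endpoint -- the real URL would come from the API docs.
API_URL = "https://example.com/api/v1/evals/results"


def build_submission(suite, version, model, metrics):
    """Assemble a result payload. All field names here are assumptions."""
    return {
        "suite": suite,
        "suite_version": version,
        "model": model,
        "metrics": metrics,
    }


def submit(payload, token):
    """POST the payload as JSON with a bearer token (assumed auth scheme)."""
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


# Example payload for a TruthfulQA run (metric names are illustrative).
payload = build_submission(
    suite="truthfulqa",
    version="1.0",
    model="my-local-model",
    metrics={"mc1_acc": 0.41, "mc2_acc": 0.58},
)
```

In practice the metric keys would match whatever the harness reports for the suite being submitted.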

TruthfulQA · Official
v1.0 · lm-eval-harness

Tests whether models generate truthful answers to questions that humans often answer incorrectly due to misconceptions or false beliefs.

Tags: truthfulness · 0 runs