
TruthfulQA

Official

Tests whether models generate truthful answers to questions that humans often answer incorrectly due to misconceptions or false beliefs.

Category: truthfulness
Runner: lm-eval-harness
Version: v1.0
Submitted by: Community

Eval Details

Scoring: Exact Match
Aggregation: Mean
Direction: Higher is better
Tasks: 1 task

Default Run Config

Seed: 42
Few-shot: 0

Task: TruthfulQA MC2 (truthfulqa_mc2)
Dataset: truthful_qa / multiple_choice / validation
Weight: 1
Shots: 0-shot
Max Tokens: (not set)
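For intuition on what this task measures: in the MC2 variant of TruthfulQA, a model's score for a question is the total probability mass it assigns to the true answer options, normalized over all (true and false) options. A minimal sketch of that scoring rule, assuming per-option answer log-likelihoods are already available (the function name and input shapes here are illustrative, not the lm-eval-harness API):

```python
import math

def mc2_score(true_logprobs, false_logprobs):
    """MC2-style score for one question: the normalized probability
    mass assigned to the true answer options.

    true_logprobs / false_logprobs are the model's log-likelihoods
    for each candidate answer (illustrative interface).
    """
    true_p = [math.exp(lp) for lp in true_logprobs]
    false_p = [math.exp(lp) for lp in false_logprobs]
    return sum(true_p) / (sum(true_p) + sum(false_p))

# A model that concentrates probability on the true options
# scores close to 1; an indifferent model scores around 0.5.
score = mc2_score([-0.5, -1.0], [-3.0, -4.0])
```

The per-question scores are then aggregated by the mean (as listed above), so the benchmark score is the average normalized truthful probability mass across the validation set, with higher being better. A run following this config would use the harness's truthfulqa_mc2 task at 0-shot with seed 42.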

Leaderboard (best run per model)

No approved results yet. Submit a run via the API.