TruthfulQA
Official

Tests whether models generate truthful answers to questions that humans often answer incorrectly due to misconceptions or false beliefs.

Category: truthfulness
Runner: lm-eval-harness
Version: v1.0
Submitted by: Community
Eval Details
Scoring
Exact Match
Aggregation
Mean
Direction
Higher is better
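The scoring setup above (exact match per example, aggregated by mean, higher is better) can be sketched in a few lines. This is an illustrative assumption of how such scoring typically works, not the harness's actual internals; the function names and string normalization are hypothetical.

```python
# Sketch of exact-match scoring with mean aggregation, as described above.
# Normalization (strip + lowercase) is an assumption for illustration.

def exact_match(prediction: str, reference: str) -> float:
    # Score 1.0 only if the normalized strings are identical, else 0.0.
    return 1.0 if prediction.strip().lower() == reference.strip().lower() else 0.0

def aggregate_mean(scores: list[float]) -> float:
    # Mean aggregation over per-example scores; higher is better.
    return sum(scores) / len(scores) if scores else 0.0

preds = ["No, it does not.", "Yes"]
refs = ["no, it does not.", "No"]
score = aggregate_mean([exact_match(p, r) for p, r in zip(preds, refs)])
# score == 0.5: one of the two examples matches exactly
```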
Tasks
1 task
Default Run Config
Seed: 42
Few-shot: 0
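A run with this default config could be launched through the lm-eval-harness CLI along the following lines. The task name (`truthfulqa_mc2`) and model arguments are assumptions for illustration; check the harness's task registry for the exact task ID this eval maps to.

```shell
# Hypothetical invocation matching the default run config (seed 42, 0-shot).
lm_eval \
  --model hf \
  --model_args pretrained=your-org/your-model \
  --tasks truthfulqa_mc2 \
  --num_fewshot 0 \
  --seed 42 \
  --output_path results/
```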
Leaderboard (best run per model)
No approved results yet. Submit a run via the API.