HellaSwag
Official
Sentence completion benchmark testing grounded commonsense inference. Models must pick the most plausible continuation of an activity description.
Category: reasoning
Runner: lm-eval-harness
Version: v1.0
Submitted by: Community
Eval Details
Scoring
Exact Match
Aggregation
Mean
Direction
Higher is better
Tasks
1 task
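The scoring described above (per-example exact match, aggregated by mean, higher is better) can be sketched in a few lines. The prediction and gold values below are hypothetical, just to illustrate the pipeline:

```python
# Minimal sketch of the scoring pipeline: each example scores 1.0 if the
# model's chosen ending exactly matches the gold ending (exact match),
# and per-example scores are aggregated by taking the mean.
def exact_match(pred: int, gold: int) -> float:
    return 1.0 if pred == gold else 0.0

def aggregate(scores: list[float]) -> float:
    return sum(scores) / len(scores)

preds = [2, 0, 3, 1]   # hypothetical model choices among four endings
golds = [2, 0, 1, 1]
scores = [exact_match(p, g) for p, g in zip(preds, golds)]
print(aggregate(scores))  # 0.75
```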
Default Run Config
Seed: 42
Few-shot: 10
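As a sketch, the default run config above maps onto an lm-eval-harness CLI invocation. The helper below just assembles the argument list; the model name is a placeholder, and the exact flag spellings should be checked against your installed harness version:

```python
# Build a hypothetical lm-eval-harness command line from this eval's
# default run config (seed 42, 10 few-shot examples, task: hellaswag).
config = {"tasks": "hellaswag", "num_fewshot": 10, "seed": 42}

def build_cmd(model_args: str, cfg: dict) -> list[str]:
    cmd = ["lm_eval", "--model", "hf", "--model_args", model_args]
    for key, val in cfg.items():
        cmd += [f"--{key}", str(val)]
    return cmd

# Placeholder model; substitute your own.
print(" ".join(build_cmd("pretrained=EleutherAI/pythia-1b", config)))
```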
Leaderboard (best run per model)
No approved results yet. Submit a run via the API.