
HellaSwag

Official

Sentence completion benchmark testing grounded commonsense inference. Models must pick the most plausible continuation of an activity description.

Source
Category: reasoning
Runner: lm-eval-harness
Version: v1.0
Submitted by: Community
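Each HellaSwag item pairs an activity context with four candidate endings; the model scores every ending and the highest-scoring one is compared against the gold label. A minimal sketch of that selection step (the item text, the `pick_ending` helper, and the fixed scores below are illustrative stand-ins, not the lm-eval-harness API):

```python
# Sketch of HellaSwag-style multiple-choice selection.
# `score_ending` stands in for a model's log-likelihood of an
# ending given the context (hypothetical, for illustration).

def pick_ending(ctx: str, endings: list[str], score_ending) -> int:
    """Return the index of the highest-scoring ending."""
    scores = [score_ending(ctx, e) for e in endings]
    return max(range(len(endings)), key=scores.__getitem__)

item = {
    "ctx": "A man is standing in a kitchen. He",
    "endings": [
        "flies out of the window.",
        "cracks an egg into a bowl.",
        "dissolves into the counter.",
        "turns into a whisk.",
    ],
    "label": 1,
}

# Fixed fake log-likelihoods in place of a real model.
fake_scores = dict(zip(item["endings"], [-4.2, -1.3, -5.0, -4.8]))
pred = pick_ending(item["ctx"], item["endings"], lambda c, e: fake_scores[e])
print(pred == item["label"])  # True
```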

Eval Details

Scoring
Exact Match
Aggregation
Mean
Direction
Higher is better
Tasks
1 task
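With Exact Match scoring, Mean aggregation, and higher-is-better direction, the final number reduces to per-item accuracy averaged over the split. A minimal sketch, assuming predictions and gold labels are parallel lists of ending indices:

```python
def exact_match_mean(preds: list[int], golds: list[int]) -> float:
    """Mean exact-match accuracy over a split; higher is better."""
    if len(preds) != len(golds):
        raise ValueError("prediction/gold length mismatch")
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

print(exact_match_mean([1, 0, 2, 3], [1, 0, 2, 1]))  # 0.75
```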

Default Run Config

Seed: 42
Few-shot: 10
Task: HellaSwag (hellaswag)
Dataset: Rowan/hellaswag / validation
Weight: 1
Shots: 10-shot
Max Tokens: —
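The default config above maps onto a standard lm-eval-harness invocation. A sketch of that command (the `<model>` path is a placeholder you supply; check your installed harness version for exact flag names):

```shell
lm_eval --model hf \
  --model_args pretrained=<model> \
  --tasks hellaswag \
  --num_fewshot 10 \
  --seed 42
```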

Leaderboard — best run per model

No approved results yet. Submit a run via the API.