Big-Bench Hard
Official
A collection of challenging BIG-Bench tasks selected because prior models performed poorly on them. Covers symbolic reasoning, algorithmic reasoning, and language understanding.
Category: reasoning
Runner: lm-eval-harness
Version: v1.0
Submitted by: Community
Eval Details
Scoring
Exact Match
Aggregation
Mean
Direction
Higher is better
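The scoring scheme above (per-example exact match, aggregated by mean, higher is better) can be sketched as follows. This is a minimal illustration of the metric, not the runner's actual implementation; the whitespace stripping is an assumption about how normalization is typically done.

```python
def exact_match(prediction: str, reference: str) -> float:
    """Score 1.0 iff the prediction exactly matches the reference.
    Leading/trailing whitespace is stripped (an assumed normalization)."""
    return 1.0 if prediction.strip() == reference.strip() else 0.0

def aggregate(scores: list[float]) -> float:
    """Mean aggregation over per-example scores; higher is better."""
    return sum(scores) / len(scores)

# Hypothetical predictions vs. references for three examples.
preds = ["(A)", "(B)", "(C)"]
refs = ["(A)", "(C)", "(C)"]
scores = [exact_match(p, r) for p, r in zip(preds, refs)]
print(aggregate(scores))
```

Two of the three hypothetical examples match, so the aggregated score is 2/3.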
Tasks
1 task
Default Run Config
Seed: 42
Few-shot: 3
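Since the runner is lm-eval-harness, a run with this config might look like the command below. This is a hedged sketch: the model placeholder and exact flag spelling should be checked against your installed lm-eval-harness version, and the `bbh` task name is assumed to be the registered task group for this eval.

```shell
# Illustrative invocation only; <model> is a placeholder.
lm_eval --model hf \
  --model_args pretrained=<model> \
  --tasks bbh \
  --num_fewshot 3 \
  --seed 42
```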
Leaderboard (best run per model)
No approved results yet. Submit a run via the API.