
Big-Bench Hard

Official

A collection of challenging BIG-Bench tasks selected because prior models performed poorly. Covers symbolic reasoning, algorithmic reasoning, and language understanding.

Source
Category: reasoning
Runner: lm-eval-harness
Version: v1.0
Submitted by: Community

Eval Details

Scoring: Exact Match
Aggregation: Mean
Direction: Higher is better
Tasks: 1 task
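The scoring scheme above (Exact Match per example, aggregated by Mean) can be sketched as follows. This is a minimal illustration, not the runner's actual implementation; the function names and whitespace normalization are assumptions:

```python
def exact_match(prediction: str, target: str) -> float:
    # Assumed normalization: strip surrounding whitespace, then
    # score 1.0 only if the strings match exactly.
    return 1.0 if prediction.strip() == target.strip() else 0.0

def aggregate_mean(scores: list[float]) -> float:
    # Mean aggregation over per-example scores; higher is better.
    return sum(scores) / len(scores)

# Toy example: 2 of 3 predictions match their targets exactly.
preds = ["(A)", "valid", "42"]
targets = ["(A)", "invalid", "42"]
scores = [exact_match(p, t) for p, t in zip(preds, targets)]
print(aggregate_mean(scores))
```

With exact-match scoring, any deviation from the reference string scores zero, which is why the mean over examples is the natural aggregate for this task.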

Default Run Config

Seed: 42
FewShot: 3
Task: Big-Bench Hard
Dataset: bbh
Weight: 1
Shots: 3-shot
Max Tokens: Not specified
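Assuming the lm-eval-harness runner named above, a run matching this config could be launched roughly as follows. The model path is a placeholder and exact flag support varies by harness version:

```shell
# Hypothetical invocation of lm-eval-harness for this config;
# pretrained=YOUR_MODEL is a placeholder, not a real model name.
lm_eval --model hf \
  --model_args pretrained=YOUR_MODEL \
  --tasks bbh \
  --num_fewshot 3 \
  --seed 42 \
  --output_path results/bbh
```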

Leaderboard (best run per model)

No approved results yet. Submit a run via the API.