HumanEval
Official
OpenAI's Python function completion benchmark: 164 hand-crafted problems, each with unit tests, measuring pass@1 code synthesis accuracy.
Category: coding
Runner: lm-eval-harness
Version: v1.0
Submitted by: Community
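Each problem follows the dataset's prompt-plus-check format: the model is shown a function signature and docstring and must complete the body, which is then run against hidden unit tests. The example below is an invented illustration in that format, not an actual HumanEval item:

# Prompt: the model sees the signature and docstring and writes the body.
def add_elements(numbers: list[int], delta: int) -> list[int]:
    """Return a new list with delta added to every element.

    >>> add_elements([1, 2, 3], 10)
    [11, 12, 13]
    """
    return [x + delta for x in numbers]  # a correct model completion

# Grading: HumanEval checks completions with a check(candidate) function of asserts.
def check(candidate):
    assert candidate([1, 2, 3], 10) == [11, 12, 13]
    assert candidate([], 5) == []
    assert candidate([-1, 0], 1) == [0, 1]

check(add_elements)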
Eval Details
Scoring
pass@k
Aggregation
Mean
Direction
Higher is better
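pass@k here is the unbiased estimator from the HumanEval paper (Chen et al., 2021): draw n completions per problem, count the c that pass the unit tests, estimate 1 - C(n-c, k)/C(n, k), and take the mean over the 164 problems (the Aggregation and Direction fields above). A minimal sketch; the n=20 in the usage line is an arbitrary illustration, not this eval's sampling budget:

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k) (Chen et al., 2021)."""
    if n - c < k:
        return 1.0  # every size-k subset contains a passing completion
    # Product form avoids computing huge binomial coefficients directly.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 3 of 20 samples pass the tests -> pass@1 = 3/20.
print(pass_at_k(n=20, c=3, k=1))  # ~0.15
# The benchmark score is the mean of per-problem pass@k across all 164 problems.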
Tasks
1 task
Default Run Config
Seed: 42
Few-shot: 0
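Since the runner is lm-eval-harness, the default config above corresponds roughly to the invocation sketched below. This is a hedged sketch, not a verified recipe: the backend and checkpoint are placeholders, the seed argument name varies across harness versions, and executing generated code may require an explicit opt-in such as the HF_ALLOW_CODE_EVAL environment variable.

# Sketch of the default run config with lm-eval-harness
# (argument names may differ by version; model values are placeholders).
import os
os.environ["HF_ALLOW_CODE_EVAL"] = "1"  # opt in to executing generated code

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                          # HuggingFace backend (placeholder choice)
    model_args="pretrained=YOUR_MODEL",  # placeholder checkpoint
    tasks=["humaneval"],
    num_fewshot=0,   # Few-shot: 0
    random_seed=42,  # Seed: 42 (name varies across harness versions)
)
print(results["results"]["humaneval"])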
Leaderboard (best run per model)
No approved results yet. Submit a run via the API.
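The submission API is not documented on this page, so the sketch below is entirely hypothetical: the endpoint, payload schema, and auth header are all assumptions, shown only to indicate the shape such a submission might take.

# Hypothetical only: endpoint, payload fields, and auth scheme are assumptions.
import requests

payload = {
    "eval": "humaneval",                  # assumed eval identifier
    "model": "YOUR_MODEL",                # placeholder
    "config": {"seed": 42, "fewshot": 0}, # the default run config above
}
resp = requests.post(
    "https://<platform>/api/runs",        # hypothetical endpoint
    json=payload,
    headers={"Authorization": "Bearer YOUR_TOKEN"},  # assumed auth scheme
)
resp.raise_for_status()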