
HumanEval (Official)

OpenAI's Python function-completion benchmark: 164 hand-written problems, each giving a function signature and docstring and graded by unit tests, measuring pass@1 code-synthesis accuracy.

Category: coding
Runner: lm-eval-harness
Version: v1.0
Submitted by: Community
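Each HumanEval problem supplies a function signature and docstring for the model to complete, and the completion is judged by executing unit tests. The sketch below shows the shape of such an item; the problem and its tests are hypothetical, not taken from the dataset.

```python
# Hypothetical HumanEval-style item (illustrative only, not from the dataset).
# The model sees the signature and docstring and must write the body;
# check() stands in for the hidden unit tests that decide pass/fail.

def running_max(nums: list[int]) -> list[int]:
    """Return a list where element i is the maximum of nums[0..i]."""
    # Reference completion; in the benchmark this body is what the model produces.
    out: list[int] = []
    for i, x in enumerate(nums):
        out.append(x if i == 0 else max(out[-1], x))
    return out

def check(candidate):
    assert candidate([1, 3, 2, 5]) == [1, 3, 3, 5]
    assert candidate([4]) == [4]
    assert candidate([]) == []

check(running_max)  # a completion passes only if no assertion fails
```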

Eval Details

Scoring: Pass@k
Aggregation: Mean
Direction: Higher is better
Tasks: 1 task
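Pass@k here is the unbiased estimator introduced with HumanEval: draw n samples per problem, count the c that pass the unit tests, and estimate pass@k = 1 - C(n-c, k)/C(n, k); the per-problem values are then averaged (the Mean aggregation above). A minimal sketch:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n-c, k) / C(n, k).

    n: samples generated for one problem
    c: samples that passed the unit tests
    k: samples the metric is allowed to consider
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# With k = 1 this reduces to the fraction of passing samples:
print(pass_at_k(n=20, c=5, k=1))  # ~0.25
```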

Default Run Config

Seed: 42
Few-shot: 0
Task: HumanEval (pass@1)
Dataset: humaneval (openai_humaneval / test split)
Weight: 1
Shots: 0-shot
Max Tokens: not specified
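A minimal sketch of reproducing this config with the lm-eval-harness runner's Python entry point. The model id is a placeholder, and the exact keyword arguments (including seed handling and any opt-in flag for executing generated code) vary between harness versions.

```python
import lm_eval

# Assumes lm-eval-harness's top-level simple_evaluate() helper; argument
# names may differ across versions, and code-execution tasks may require
# an explicit opt-in flag in newer releases.
results = lm_eval.simple_evaluate(
    model="hf",                             # HuggingFace backend
    model_args="pretrained=YOUR_MODEL_ID",  # placeholder model identifier
    tasks=["humaneval"],                    # the single task in this eval (weight 1)
    num_fewshot=0,                          # matches the 0-shot default above
    random_seed=42,                         # the config's seed; kwarg name may vary
)
print(results["results"]["humaneval"])      # contains the pass@1 score
```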

Leaderboard (best run per model)

No approved results yet. Submit a run via the API.