
MBPP

Official

Mostly Basic Python Problems — 500 crowd-sourced Python programming problems, each paired with automated test cases. Offers broader task coverage than HumanEval.
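To make "automated test cases" concrete, here is a minimal sketch of how an MBPP completion is typically graded: the model's code is executed, then each assert-style test is run against it. Names here are illustrative, not this leaderboard's actual harness, and a real runner would sandbox the `exec` calls rather than run untrusted code directly.

```python
def passes_tests(candidate_code: str, test_cases: list[str]) -> bool:
    """Return True if the completion defines code that satisfies every assert."""
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)   # define the candidate function(s)
        for test in test_cases:
            exec(test, namespace)         # any AssertionError means failure
    except Exception:
        return False                      # syntax error, runtime error, or failed assert
    return True

# A problem in the style of MBPP's sanitized split (hypothetical example):
solution = "def remove_odd(nums):\n    return [n for n in nums if n % 2 == 0]"
tests = [
    "assert remove_odd([1, 2, 3]) == [2]",
    "assert remove_odd([2, 4, 6]) == [2, 4, 6]",
]
```

A problem counts as solved only when all of its test cases pass, which is what feeds into the pass@k score below.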

Category: coding
Runner: lm-eval-harness
Version: v1.0
Submitted by: Community

Eval Details

Scoring
Pass@k
Aggregation
Mean
Direction
Higher is better
Tasks
1 task
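The scoring above (pass@k, aggregated as a mean over problems) is conventionally computed with the unbiased estimator from the Codex paper (Chen et al., 2021): given n samples per problem of which c pass, estimate the probability that at least one of k draws is correct. A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: P(at least one of k samples passes), given n samples, c correct."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws, so a pass is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

def mean_pass_at_k(results: list[tuple[int, int]], k: int = 1) -> float:
    """Mean aggregation over problems; each problem contributes an (n, c) pair."""
    return sum(pass_at_k(n, c, k) for n, c in results) / len(results)
```

For pass@1 with a single sample per problem (n = 1), this reduces to the plain fraction of problems solved.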

Default Run Config

Seed: 42
FewShot: 3
Task: MBPP (pass@1) — task id mbpp
Dataset: google-research-datasets/mbpp (sanitized split, test set)
Weight: 1
Shots: 3-shot
Max Tokens: —
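Since the runner is lm-eval-harness, the default config above roughly corresponds to an invocation like the following. This is a sketch, not this leaderboard's exact command; the model arguments are placeholders, and recent harness versions may also require an explicit opt-in flag for tasks that execute generated code.

```shell
lm_eval \
  --model hf \
  --model_args pretrained=<your-model> \
  --tasks mbpp \
  --num_fewshot 3 \
  --seed 42
```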

Leaderboard — best run per model

No approved results yet. Submit a run via the API.