
LocalMaxxing

Community benchmarks for local LLM inference. Track speed, compare hardware, and find your optimal setup.

- Top inference speeds across all hardware
- Browse every community benchmark run
- Quality evaluations and accuracy scores
- Rent community inference endpoints by the token