Community benchmarks for local LLM inference. Track speed, compare hardware, and find your optimal setup.
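The speed figure reported in benchmarks like these is typically generation throughput in tokens per second. A minimal sketch of how that number can be computed locally (the `measure` helper and the stand-in generator are hypothetical, for illustration only):

```python
import time

def tokens_per_second(token_count: int, elapsed_s: float) -> float:
    # Throughput = generated tokens / wall-clock seconds.
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return token_count / elapsed_s

def measure(generate, prompt: str) -> float:
    # Time a single generation call and report tokens/sec.
    start = time.perf_counter()
    tokens = generate(prompt)  # hypothetical generator returning a list of tokens
    elapsed = time.perf_counter() - start
    return tokens_per_second(len(tokens), elapsed)

# Example: 256 tokens generated in 8 seconds of wall-clock time.
print(tokens_per_second(256, 8.0))  # 32.0
```

In practice, runs are usually repeated and averaged, and prompt processing (prefill) is timed separately from generation (decode), since the two stress hardware differently.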