59 model groups · 197 total
Qwen/Qwen3.6-27B
Qwen/Qwen3.6-35B-A3B
Qwen/Qwen3.5-27B
MiniMaxAI/MiniMax-M2.7
Qwen/Qwen3.5-9B-Base
Qwen/Qwen3.5-35B-A3B-Base
google/gemma-4-26B-A4B
Qwen/Qwen3.5-122B-A10B
Qwen/Qwen3-Coder-30B-A3B-Instruct
google/gemma-4-31B
Qwen/Qwen3-Coder-Next
Qwen/Qwen3.5-4B-Base
z-lab/Qwen3.6-27B-DFlash
inclusionAI/Ling-2.6-flash
google/gemma-4-E4B
meta-llama/Llama-3.1-8B
google/gemma-4-E2B
zai-org/GLM-4.7-Flash
nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16
nvidia/Nemotron-Cascade-2-30B-A3B
nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4
deepseek-ai/DeepSeek-V4-Flash
zai-org/GLM-5.1
mistralai/Mistral-Medium-3.5-128B
openai/gpt-oss-20b
openai/gpt-oss-120b
Qwen/Qwen2.5-72B
Qwen/Qwen3-8B-Base
Jackrong/Gemopus-4-26B-A4B-it
nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16
Qwen/Qwen3-32B
MiniMaxAI/MiniMax-M2.5
Qwen/Qwen3.5-0.8B-Base
Qwen/Qwen2.5-7B
moonshotai/Kimi-K2.5
ibm-granite/granite-4.1-30b
mistralai/Mistral-Small-3.1-24B-Base-2503
MiniMaxAI/MiniMax-M2.1
MiniMaxAI/MiniMax-M2
Qwen/Qwen3-VL-30B-A3B-Instruct
mistralai/Ministral-3-3B-Base-2512
meta-llama/Llama-3.1-70B
Qwen/Qwen3.5-122B-A10B-GPTQ-Int4
meta-llama/Llama-2-7b
Qwen/Qwen2.5-32B
Qwen/Qwen3-VL-8B-Instruct
meta-llama/Llama-3.2-3B-Instruct
Qwen/Qwen3-30B-A3B-Base
prism-ml/Ternary-Bonsai-8B-unpacked
nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16
mlx-community/Qwen3.5-35B-A3B-4bit
google/gemma-3-4b-pt
zai-org/GLM-5
mlx-community/DeepSeek-V4-Flash-2bit-DQ
Qwen/Qwen3-VL-2B-Instruct
Qwen/Qwen3-30B-A3B-Instruct-2507
Jackrong/Gemopus-4-26B-A4B-it-GGUF
LiquidAI/LFM2-24B-A2B
lmstudio-community/LFM2-24B-A2B-GGUF