Quick Answer: The NVIDIA A6000 offers 48GB of VRAM and starts around $4,699. It delivers approximately 111 tokens/sec on meta-llama/Llama-3.2-1B-Instruct (Q4, estimated) and typically draws 300W under load.
This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.
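As a rough guide to how the VRAM figures in the table below behave, the sketch that follows estimates weight memory from parameter count and quantization bit-width, then adds a flat overhead for KV cache and runtime buffers. The 20% overhead factor and the example model sizes are illustrative assumptions, not the exact methodology behind these numbers.

```python
# Rough VRAM estimate for a quantized model -- illustrative heuristic only.
# The overhead factor is an assumption, not the methodology used for the table.

def estimate_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Weight memory = params * bits / 8 bytes, inflated by `overhead`
    to leave headroom for KV cache and runtime buffers."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~= 1 GB
    return weight_gb * overhead

def fits_on_a6000(params_billion: float, bits_per_weight: int, vram_gb: float = 48.0) -> bool:
    """Check whether the estimated footprint fits in the A6000's 48GB."""
    return estimate_vram_gb(params_billion, bits_per_weight) <= vram_gb

if __name__ == "__main__":
    # Example sizes are approximate parameter counts, chosen for illustration.
    for name, size_b in [("Llama-3.2-1B", 1.2), ("Qwen2.5-32B", 32.5), ("Llama-3.1-70B", 70.0)]:
        for bits in (4, 8):
            need = estimate_vram_gb(size_b, bits)
            verdict = "fits" if fits_on_a6000(size_b, bits) else "does not fit"
            print(f"{name} Q{bits}: ~{need:.1f} GB -> {verdict} in 48 GB")
```

Actual usage depends on context length, batch size, and the inference runtime, so treat the output as a first-pass filter rather than a guarantee.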
| Model | Quantization | Tokens/sec | VRAM used |
|---|---|---|---|
| meta-llama/Llama-3.2-1B-Instruct | Q4 | 110.71 tok/sEstimated Auto-generated benchmark | 1GB |
| apple/OpenELM-1_1B-Instruct | Q4 | 109.94 tok/sEstimated Auto-generated benchmark | 1GB |
| meta-llama/Llama-Guard-3-1B | Q4 | 107.89 tok/sEstimated Auto-generated benchmark | 1GB |
| unsloth/Llama-3.2-1B-Instruct | Q4 | 106.57 tok/sEstimated Auto-generated benchmark | 1GB |
| meta-llama/Llama-3.2-1B | Q4 | 106.24 tok/sEstimated Auto-generated benchmark | 1GB |
| unsloth/gemma-3-1b-it | Q4 | 104.34 tok/sEstimated Auto-generated benchmark | 1GB |
| google/gemma-3-1b-it | Q4 | 103.17 tok/sEstimated Auto-generated benchmark | 1GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 102.48 tok/sEstimated Auto-generated benchmark | 1GB |
| allenai/OLMo-2-0425-1B | Q4 | 96.84 tok/sEstimated Auto-generated benchmark | 1GB |
| google/gemma-2b | Q4 | 89.87 tok/sEstimated Auto-generated benchmark | 1GB |
| ibm-granite/granite-3.3-2b-instruct | Q4 | 89.66 tok/sEstimated Auto-generated benchmark | 1GB |
| google/gemma-2-2b-it | Q4 | 89.25 tok/sEstimated Auto-generated benchmark | 1GB |
| LiquidAI/LFM2-1.2B | Q4 | 88.86 tok/sEstimated Auto-generated benchmark | 1GB |
| allenai/OLMo-2-0425-1B | Q8 | 78.14 tok/sEstimated Auto-generated benchmark | 1GB |
| meta-llama/Llama-3.2-1B-Instruct | Q8 | 77.44 tok/sEstimated Auto-generated benchmark | 1GB |
| unsloth/Llama-3.2-1B-Instruct | Q8 | 77.43 tok/sEstimated Auto-generated benchmark | 1GB |
| deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 77.31 tok/sEstimated Auto-generated benchmark | 2GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q8 | 74.92 tok/sEstimated Auto-generated benchmark | 1GB |
| unsloth/Llama-3.2-3B-Instruct | Q4 | 74.05 tok/sEstimated Auto-generated benchmark | 2GB |
| meta-llama/Llama-Guard-3-1B | Q8 | 73.78 tok/sEstimated Auto-generated benchmark | 1GB |
| meta-llama/Llama-3.2-1B | Q8 | 73.60 tok/sEstimated Auto-generated benchmark | 1GB |
| bigcode/starcoder2-3b | Q4 | 73.48 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen3-4B-Thinking-2507 | Q4 | 70.83 tok/sEstimated Auto-generated benchmark | 2GB |
| apple/OpenELM-1_1B-Instruct | Q8 | 69.86 tok/sEstimated Auto-generated benchmark | 1GB |
| meta-llama/Llama-3.2-3B-Instruct | Q4 | 69.37 tok/sEstimated Auto-generated benchmark | 2GB |
| unsloth/gemma-3-1b-it | Q8 | 69.32 tok/sEstimated Auto-generated benchmark | 1GB |
| google/gemma-3-1b-it | Q8 | 68.62 tok/sEstimated Auto-generated benchmark | 1GB |
| google-t5/t5-3b | Q4 | 68.15 tok/sEstimated Auto-generated benchmark | 2GB |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 67.82 tok/sEstimated Auto-generated benchmark | 2GB |
| inference-net/Schematron-3B | Q4 | 67.62 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen3-4B-Base | Q4 | 67.52 tok/sEstimated Auto-generated benchmark | 2GB |
| ibm-research/PowerMoE-3b | Q4 | 66.27 tok/sEstimated Auto-generated benchmark | 2GB |
| meta-llama/Llama-3.2-3B | Q4 | 66.24 tok/sEstimated Auto-generated benchmark | 2GB |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-8bit | Q4 | 66.20 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen2.5-3B-Instruct | Q4 | 65.93 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen2.5-3B | Q4 | 65.86 tok/sEstimated Auto-generated benchmark | 2GB |
| microsoft/VibeVoice-1.5B | Q4 | 65.40 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen2.5-1.5B | Q4 | 64.77 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen2-0.5B-Instruct | Q4 | 63.99 tok/sEstimated Auto-generated benchmark | 3GB |
| Alibaba-NLP/gte-Qwen2-1.5B-instruct | Q4 | 62.94 tok/sEstimated Auto-generated benchmark | 3GB |
| kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | Q4 | 61.95 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen2.5-0.5B | Q4 | 61.77 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen2.5-1.5B-Instruct | Q4 | 61.61 tok/sEstimated Auto-generated benchmark | 3GB |
| google/gemma-2b | Q8 | 61.20 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen3-0.6B-Base | Q4 | 61.15 tok/sEstimated Auto-generated benchmark | 3GB |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-4bit | Q4 | 61.10 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen3-4B | Q4 | 61.08 tok/sEstimated Auto-generated benchmark | 2GB |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | Q4 | 61.08 tok/sEstimated Auto-generated benchmark | 2GB |
| LiquidAI/LFM2-1.2B | Q8 | 60.96 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen3-Reranker-0.6B | Q4 | 60.77 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen3-Embedding-4B | Q4 | 60.50 tok/sEstimated Auto-generated benchmark | 2GB |
| google/gemma-2-2b-it | Q8 | 60.04 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen2-0.5B | Q4 | 59.66 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen3-4B-Instruct-2507 | Q4 | 58.72 tok/sEstimated Auto-generated benchmark | 2GB |
| Qwen/Qwen3-4B-Thinking-2507-FP8 | Q4 | 58.36 tok/sEstimated Auto-generated benchmark | 2GB |
| huggyllama/llama-7b | Q4 | 57.68 tok/sEstimated Auto-generated benchmark | 4GB |
| EleutherAI/gpt-neo-125m | Q4 | 57.65 tok/sEstimated Auto-generated benchmark | 4GB |
| MiniMaxAI/MiniMax-M2 | Q4 | 57.62 tok/sEstimated Auto-generated benchmark | 4GB |
| microsoft/Phi-3.5-vision-instruct | Q4 | 57.62 tok/sEstimated Auto-generated benchmark | 4GB |
| openai-community/gpt2-large | Q4 | 57.44 tok/sEstimated Auto-generated benchmark | 4GB |
| microsoft/Phi-3-mini-4k-instruct | Q4 | 57.29 tok/sEstimated Auto-generated benchmark | 4GB |
| trl-internal-testing/tiny-LlamaForCausalLM-3.2 | Q4 | 57.24 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-0.6B | Q4 | 57.24 tok/sEstimated Auto-generated benchmark | 3GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | Q4 | 57.07 tok/sEstimated Auto-generated benchmark | 3GB |
| rednote-hilab/dots.ocr | Q4 | 56.78 tok/sEstimated Auto-generated benchmark | 4GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | Q4 | 56.69 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Llama-2-7b-hf | Q4 | 56.61 tok/sEstimated Auto-generated benchmark | 4GB |
| Gensyn/Qwen2.5-0.5B-Instruct | Q4 | 56.57 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen2-1.5B-Instruct | Q4 | 56.36 tok/sEstimated Auto-generated benchmark | 3GB |
| microsoft/Phi-4-mini-instruct | Q4 | 56.04 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen2.5-0.5B-Instruct | Q4 | 55.99 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen2.5-Coder-1.5B | Q4 | 55.97 tok/sEstimated Auto-generated benchmark | 3GB |
| openai-community/gpt2-medium | Q4 | 55.91 tok/sEstimated Auto-generated benchmark | 4GB |
| microsoft/phi-2 | Q4 | 55.64 tok/sEstimated Auto-generated benchmark | 4GB |
| deepseek-ai/DeepSeek-V3.1 | Q4 | 55.62 tok/sEstimated Auto-generated benchmark | 4GB |
| microsoft/DialoGPT-medium | Q4 | 55.47 tok/sEstimated Auto-generated benchmark | 4GB |
| mistralai/Mistral-7B-Instruct-v0.1 | Q4 | 55.37 tok/sEstimated Auto-generated benchmark | 4GB |
| deepseek-ai/deepseek-coder-1.3b-instruct | Q8 | 54.75 tok/sEstimated Auto-generated benchmark | 3GB |
| ibm-granite/granite-3.3-2b-instruct | Q8 | 54.71 tok/sEstimated Auto-generated benchmark | 2GB |
| microsoft/phi-4 | Q4 | 54.66 tok/sEstimated Auto-generated benchmark | 4GB |
| ibm-research/PowerMoE-3b | Q8 | 54.64 tok/sEstimated Auto-generated benchmark | 3GB |
| google/gemma-3-270m-it | Q4 | 54.61 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Meta-Llama-3-8B-Instruct | Q4 | 54.56 tok/sEstimated Auto-generated benchmark | 4GB |
| unsloth/mistral-7b-v0.3-bnb-4bit | Q4 | 54.33 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen2.5-Math-1.5B | Q4 | 54.32 tok/sEstimated Auto-generated benchmark | 3GB |
| dicta-il/dictalm2.0-instruct | Q4 | 54.24 tok/sEstimated Auto-generated benchmark | 4GB |
| llamafactory/tiny-random-Llama-3 | Q4 | 53.96 tok/sEstimated Auto-generated benchmark | 4GB |
| deepseek-ai/DeepSeek-R1-0528 | Q4 | 53.96 tok/sEstimated Auto-generated benchmark | 4GB |
| bigscience/bloomz-560m | Q4 | 53.92 tok/sEstimated Auto-generated benchmark | 4GB |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q4 | 53.82 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen2.5-3B | Q8 | 53.68 tok/sEstimated Auto-generated benchmark | 3GB |
| unsloth/Llama-3.2-3B-Instruct | Q8 | 53.61 tok/sEstimated Auto-generated benchmark | 3GB |
| trl-internal-testing/tiny-random-LlamaForCausalLM | Q4 | 53.60 tok/sEstimated Auto-generated benchmark | 4GB |
| inference-net/Schematron-3B | Q8 | 53.30 tok/sEstimated Auto-generated benchmark | 3GB |
| numind/NuExtract-1.5 | Q4 | 53.11 tok/sEstimated Auto-generated benchmark | 4GB |
| zai-org/GLM-4.5-Air | Q4 | 53.08 tok/sEstimated Auto-generated benchmark | 4GB |
| facebook/opt-125m | Q4 | 53.05 tok/sEstimated Auto-generated benchmark | 4GB |
| deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | Q4 | 52.92 tok/sEstimated Auto-generated benchmark | 4GB |
| skt/kogpt2-base-v2 | Q4 | 52.91 tok/sEstimated Auto-generated benchmark | 4GB |
| vikhyatk/moondream2 | Q4 | 52.88 tok/sEstimated Auto-generated benchmark | 4GB |
| HuggingFaceM4/tiny-random-LlamaForCausalLM | Q4 | 52.76 tok/sEstimated Auto-generated benchmark | 4GB |
| liuhaotian/llava-v1.5-7b | Q4 | 52.72 tok/sEstimated Auto-generated benchmark | 4GB |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-8bit | Q4 | 52.71 tok/sEstimated Auto-generated benchmark | 4GB |
| nvidia/NVIDIA-Nemotron-Nano-9B-v2 | Q4 | 52.68 tok/sEstimated Auto-generated benchmark | 5GB |
| Qwen/Qwen3-8B | Q4 | 52.50 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen2.5-Coder-7B-Instruct | Q4 | 52.46 tok/sEstimated Auto-generated benchmark | 4GB |
| rinna/japanese-gpt-neox-small | Q4 | 52.40 tok/sEstimated Auto-generated benchmark | 4GB |
| lmsys/vicuna-7b-v1.5 | Q4 | 52.39 tok/sEstimated Auto-generated benchmark | 4GB |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q8 | 52.35 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen2-7B-Instruct | Q4 | 52.33 tok/sEstimated Auto-generated benchmark | 4GB |
| distilbert/distilgpt2 | Q4 | 52.30 tok/sEstimated Auto-generated benchmark | 4GB |
| deepseek-ai/DeepSeek-R1 | Q4 | 51.66 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-Embedding-0.6B | Q4 | 51.61 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen3-8B-Base | Q4 | 51.56 tok/sEstimated Auto-generated benchmark | 4GB |
| zai-org/GLM-4.6-FP8 | Q4 | 51.07 tok/sEstimated Auto-generated benchmark | 4GB |
| NousResearch/Meta-Llama-3.1-8B-Instruct | Q4 | 51.05 tok/sEstimated Auto-generated benchmark | 4GB |
| microsoft/Phi-3.5-mini-instruct | Q4 | 50.91 tok/sEstimated Auto-generated benchmark | 4GB |
| EleutherAI/pythia-70m-deduped | Q4 | 50.88 tok/sEstimated Auto-generated benchmark | 4GB |
| HuggingFaceH4/zephyr-7b-beta | Q4 | 50.83 tok/sEstimated Auto-generated benchmark | 4GB |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit | Q4 | 50.73 tok/sEstimated Auto-generated benchmark | 4GB |
| deepseek-ai/DeepSeek-V3 | Q4 | 50.59 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Llama-3.2-3B-Instruct | Q8 | 50.53 tok/sEstimated Auto-generated benchmark | 3GB |
| swiss-ai/Apertus-8B-Instruct-2509 | Q4 | 50.47 tok/sEstimated Auto-generated benchmark | 4GB |
| google-t5/t5-3b | Q8 | 50.26 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen3-1.7B-Base | Q4 | 50.16 tok/sEstimated Auto-generated benchmark | 4GB |
| ibm-granite/granite-docling-258M | Q4 | 50.02 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-Embedding-8B | Q4 | 49.88 tok/sEstimated Auto-generated benchmark | 4GB |
| unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Q4 | 49.83 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-8B-FP8 | Q4 | 49.77 tok/sEstimated Auto-generated benchmark | 4GB |
| mistralai/Mistral-7B-Instruct-v0.2 | Q4 | 49.72 tok/sEstimated Auto-generated benchmark | 4GB |
| microsoft/DialoGPT-small | Q4 | 49.67 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-1.7B | Q4 | 49.48 tok/sEstimated Auto-generated benchmark | 4GB |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | Q8 | 49.37 tok/sEstimated Auto-generated benchmark | 4GB |
| parler-tts/parler-tts-large-v1 | Q4 | 49.36 tok/sEstimated Auto-generated benchmark | 4GB |
| GSAI-ML/LLaDA-8B-Instruct | Q4 | 49.22 tok/sEstimated Auto-generated benchmark | 4GB |
| hmellor/tiny-random-LlamaForCausalLM | Q4 | 49.10 tok/sEstimated Auto-generated benchmark | 4GB |
| microsoft/Phi-4-multimodal-instruct | Q4 | 49.08 tok/sEstimated Auto-generated benchmark | 4GB |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | Q4 | 48.91 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen2.5-7B-Instruct | Q4 | 48.82 tok/sEstimated Auto-generated benchmark | 4GB |
| openai-community/gpt2-xl | Q4 | 48.72 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Llama-2-7b-chat-hf | Q4 | 48.71 tok/sEstimated Auto-generated benchmark | 4GB |
| deepseek-ai/DeepSeek-V3-0324 | Q4 | 48.59 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Llama-3.1-8B | Q4 | 48.55 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-Embedding-4B | Q8 | 48.54 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-4B-Thinking-2507-FP8 | Q8 | 48.48 tok/sEstimated Auto-generated benchmark | 4GB |
| petals-team/StableBeluga2 | Q4 | 48.36 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen2.5-7B | Q4 | 48.32 tok/sEstimated Auto-generated benchmark | 4GB |
| sshleifer/tiny-gpt2 | Q4 | 48.01 tok/sEstimated Auto-generated benchmark | 4GB |
| BSC-LT/salamandraTA-7b-instruct | Q4 | 47.95 tok/sEstimated Auto-generated benchmark | 4GB |
| IlyaGusev/saiga_llama3_8b | Q4 | 47.95 tok/sEstimated Auto-generated benchmark | 4GB |
| GSAI-ML/LLaDA-8B-Base | Q4 | 47.89 tok/sEstimated Auto-generated benchmark | 4GB |
| mistralai/Mistral-7B-v0.1 | Q4 | 47.80 tok/sEstimated Auto-generated benchmark | 4GB |
| HuggingFaceTB/SmolLM2-135M | Q4 | 47.66 tok/sEstimated Auto-generated benchmark | 4GB |
| microsoft/Phi-3-mini-128k-instruct | Q4 | 47.55 tok/sEstimated Auto-generated benchmark | 4GB |
| HuggingFaceTB/SmolLM-135M | Q4 | 47.50 tok/sEstimated Auto-generated benchmark | 4GB |
| openai-community/gpt2 | Q4 | 47.49 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Llama-Guard-3-8B | Q4 | 47.17 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Meta-Llama-3-8B | Q4 | 47.08 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen2.5-3B-Instruct | Q8 | 46.84 tok/sEstimated Auto-generated benchmark | 3GB |
| bigcode/starcoder2-3b | Q8 | 46.63 tok/sEstimated Auto-generated benchmark | 3GB |
| ibm-granite/granite-3.3-8b-instruct | Q4 | 46.34 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Llama-3.2-3B | Q8 | 45.89 tok/sEstimated Auto-generated benchmark | 3GB |
| Qwen/Qwen3-4B-Base | Q8 | 45.87 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Llama-3.1-8B-Instruct | Q4 | 45.54 tok/sEstimated Auto-generated benchmark | 4GB |
| unsloth/Meta-Llama-3.1-8B-Instruct | Q4 | 45.18 tok/sEstimated Auto-generated benchmark | 4GB |
| kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | Q8 | 45.12 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-4B-Instruct-2507 | Q8 | 44.69 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen2.5-1.5B-Instruct | Q8 | 44.60 tok/sEstimated Auto-generated benchmark | 5GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | Q8 | 44.60 tok/sEstimated Auto-generated benchmark | 5GB |
| Qwen/Qwen2-0.5B-Instruct | Q8 | 44.38 tok/sEstimated Auto-generated benchmark | 5GB |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-4bit | Q8 | 44.19 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-4B | Q8 | 43.91 tok/sEstimated Auto-generated benchmark | 4GB |
| Alibaba-NLP/gte-Qwen2-1.5B-instruct | Q8 | 43.65 tok/sEstimated Auto-generated benchmark | 5GB |
| Qwen/Qwen3-14B-Base | Q4 | 43.20 tok/sEstimated Auto-generated benchmark | 7GB |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-8bit | Q8 | 43.03 tok/sEstimated Auto-generated benchmark | 4GB |
| meta-llama/Llama-2-13b-chat-hf | Q4 | 42.45 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen3-4B-Thinking-2507 | Q8 | 42.15 tok/sEstimated Auto-generated benchmark | 4GB |
| Qwen/Qwen3-14B | Q4 | 42.13 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2.5-14B | Q4 | 41.99 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2.5-0.5B-Instruct | Q8 | 41.74 tok/sEstimated Auto-generated benchmark | 5GB |
| Qwen/Qwen3-0.6B-Base | Q8 | 41.69 tok/sEstimated Auto-generated benchmark | 6GB |
| Gensyn/Qwen2.5-0.5B-Instruct | Q8 | 41.23 tok/sEstimated Auto-generated benchmark | 5GB |
| Qwen/Qwen3-0.6B | Q8 | 41.03 tok/sEstimated Auto-generated benchmark | 6GB |
| mistralai/Mistral-7B-Instruct-v0.1 | Q8 | 40.39 tok/sEstimated Auto-generated benchmark | 7GB |
| trl-internal-testing/tiny-LlamaForCausalLM-3.2 | Q8 | 40.13 tok/sEstimated Auto-generated benchmark | 7GB |
| microsoft/Phi-3-mini-128k-instruct | Q8 | 39.99 tok/sEstimated Auto-generated benchmark | 7GB |
| ai-forever/ruGPT-3.5-13B | Q4 | 39.93 tok/sEstimated Auto-generated benchmark | 7GB |
| mistralai/Mistral-7B-Instruct-v0.2 | Q8 | 39.86 tok/sEstimated Auto-generated benchmark | 7GB |
| microsoft/VibeVoice-1.5B | Q8 | 39.76 tok/sEstimated Auto-generated benchmark | 5GB |
| huggyllama/llama-7b | Q8 | 39.75 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen3-Reranker-0.6B | Q8 | 39.72 tok/sEstimated Auto-generated benchmark | 6GB |
| EleutherAI/pythia-70m-deduped | Q8 | 39.68 tok/sEstimated Auto-generated benchmark | 7GB |
| deepseek-ai/DeepSeek-V3.1 | Q8 | 39.68 tok/sEstimated Auto-generated benchmark | 7GB |
| skt/kogpt2-base-v2 | Q8 | 39.60 tok/sEstimated Auto-generated benchmark | 7GB |
| lmsys/vicuna-7b-v1.5 | Q8 | 39.55 tok/sEstimated Auto-generated benchmark | 7GB |
| microsoft/DialoGPT-medium | Q8 | 39.53 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2.5-7B | Q8 | 39.47 tok/sEstimated Auto-generated benchmark | 7GB |
| zai-org/GLM-4.6-FP8 | Q8 | 39.33 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen3-1.7B | Q8 | 39.33 tok/sEstimated Auto-generated benchmark | 7GB |
| llamafactory/tiny-random-Llama-3 | Q8 | 39.28 tok/sEstimated Auto-generated benchmark | 7GB |
| facebook/opt-125m | Q8 | 39.23 tok/sEstimated Auto-generated benchmark | 7GB |
| HuggingFaceTB/SmolLM-135M | Q8 | 39.15 tok/sEstimated Auto-generated benchmark | 7GB |
| BSC-LT/salamandraTA-7b-instruct | Q8 | 39.04 tok/sEstimated Auto-generated benchmark | 7GB |
| rinna/japanese-gpt-neox-small | Q8 | 38.71 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2.5-1.5B | Q8 | 38.70 tok/sEstimated Auto-generated benchmark | 5GB |
| petals-team/StableBeluga2 | Q8 | 38.67 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2-0.5B | Q8 | 38.64 tok/sEstimated Auto-generated benchmark | 5GB |
| ibm-granite/granite-docling-258M | Q8 | 38.55 tok/sEstimated Auto-generated benchmark | 7GB |
| deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | Q8 | 38.45 tok/sEstimated Auto-generated benchmark | 7GB |
| microsoft/Phi-4-mini-instruct | Q8 | 38.39 tok/sEstimated Auto-generated benchmark | 7GB |
| deepseek-ai/DeepSeek-R1 | Q8 | 38.37 tok/sEstimated Auto-generated benchmark | 7GB |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q8 | 38.35 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2.5-Coder-7B-Instruct | Q8 | 38.33 tok/sEstimated Auto-generated benchmark | 7GB |
| deepseek-ai/DeepSeek-R1-0528 | Q8 | 38.31 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2.5-Math-1.5B | Q8 | 38.27 tok/sEstimated Auto-generated benchmark | 5GB |
| Qwen/Qwen2.5-Coder-1.5B | Q8 | 38.27 tok/sEstimated Auto-generated benchmark | 5GB |
| HuggingFaceH4/zephyr-7b-beta | Q8 | 38.17 tok/sEstimated Auto-generated benchmark | 7GB |
| microsoft/phi-2 | Q8 | 38.10 tok/sEstimated Auto-generated benchmark | 7GB |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | Q8 | 38.03 tok/sEstimated Auto-generated benchmark | 8GB |
| Qwen/Qwen2.5-0.5B | Q8 | 38.03 tok/sEstimated Auto-generated benchmark | 5GB |
| OpenPipe/Qwen3-14B-Instruct | Q4 | 38.01 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2-1.5B-Instruct | Q8 | 37.98 tok/sEstimated Auto-generated benchmark | 5GB |
| Qwen/Qwen3-8B-FP8 | Q8 | 37.96 tok/sEstimated Auto-generated benchmark | 8GB |
| Qwen/Qwen2.5-7B-Instruct | Q8 | 37.93 tok/sEstimated Auto-generated benchmark | 7GB |
| microsoft/Phi-4-multimodal-instruct | Q8 | 37.76 tok/sEstimated Auto-generated benchmark | 7GB |
| sshleifer/tiny-gpt2 | Q8 | 37.71 tok/sEstimated Auto-generated benchmark | 7GB |
| meta-llama/Meta-Llama-3-8B | Q8 | 37.56 tok/sEstimated Auto-generated benchmark | 8GB |
| dicta-il/dictalm2.0-instruct | Q8 | 37.48 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2-7B-Instruct | Q8 | 37.46 tok/sEstimated Auto-generated benchmark | 7GB |
| unsloth/mistral-7b-v0.3-bnb-4bit | Q8 | 37.45 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen2.5-14B-Instruct | Q4 | 37.31 tok/sEstimated Auto-generated benchmark | 7GB |
| microsoft/DialoGPT-small | Q8 | 37.24 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen3-Embedding-8B | Q8 | 37.07 tok/sEstimated Auto-generated benchmark | 8GB |
| numind/NuExtract-1.5 | Q8 | 37.04 tok/sEstimated Auto-generated benchmark | 7GB |
| hmellor/tiny-random-LlamaForCausalLM | Q8 | 36.87 tok/sEstimated Auto-generated benchmark | 7GB |
| google/gemma-3-270m-it | Q8 | 36.81 tok/sEstimated Auto-generated benchmark | 7GB |
| deepseek-ai/DeepSeek-V3-0324 | Q8 | 36.71 tok/sEstimated Auto-generated benchmark | 7GB |
| meta-llama/Llama-Guard-3-8B | Q8 | 36.56 tok/sEstimated Auto-generated benchmark | 8GB |
| openai-community/gpt2-medium | Q8 | 36.50 tok/sEstimated Auto-generated benchmark | 7GB |
| nvidia/NVIDIA-Nemotron-Nano-9B-v2 | Q8 | 36.11 tok/sEstimated Auto-generated benchmark | 9GB |
| distilbert/distilgpt2 | Q8 | 36.09 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen3-Embedding-0.6B | Q8 | 36.03 tok/sEstimated Auto-generated benchmark | 6GB |
| liuhaotian/llava-v1.5-7b | Q8 | 35.99 tok/sEstimated Auto-generated benchmark | 7GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | Q8 | 35.82 tok/sEstimated Auto-generated benchmark | 7GB |
| trl-internal-testing/tiny-random-LlamaForCausalLM | Q8 | 35.77 tok/sEstimated Auto-generated benchmark | 7GB |
| ibm-granite/granite-3.3-8b-instruct | Q8 | 35.76 tok/sEstimated Auto-generated benchmark | 8GB |
| meta-llama/Llama-3.1-8B-Instruct | Q8 | 35.73 tok/sEstimated Auto-generated benchmark | 8GB |
| microsoft/Phi-3.5-mini-instruct | Q8 | 35.71 tok/sEstimated Auto-generated benchmark | 7GB |
| deepseek-ai/DeepSeek-V3 | Q8 | 35.56 tok/sEstimated Auto-generated benchmark | 7GB |
| rednote-hilab/dots.ocr | Q8 | 35.44 tok/sEstimated Auto-generated benchmark | 7GB |
| swiss-ai/Apertus-8B-Instruct-2509 | Q8 | 35.29 tok/sEstimated Auto-generated benchmark | 8GB |
| NousResearch/Meta-Llama-3.1-8B-Instruct | Q8 | 35.23 tok/sEstimated Auto-generated benchmark | 8GB |
| openai-community/gpt2-xl | Q8 | 35.15 tok/sEstimated Auto-generated benchmark | 7GB |
| meta-llama/Llama-2-7b-chat-hf | Q8 | 35.13 tok/sEstimated Auto-generated benchmark | 7GB |
| microsoft/Phi-3.5-vision-instruct | Q8 | 35.11 tok/sEstimated Auto-generated benchmark | 7GB |
| HuggingFaceTB/SmolLM2-135M | Q8 | 35.09 tok/sEstimated Auto-generated benchmark | 7GB |
| bigscience/bloomz-560m | Q8 | 35.08 tok/sEstimated Auto-generated benchmark | 7GB |
| HuggingFaceM4/tiny-random-LlamaForCausalLM | Q8 | 35.07 tok/sEstimated Auto-generated benchmark | 7GB |
| vikhyatk/moondream2 | Q8 | 34.87 tok/sEstimated Auto-generated benchmark | 7GB |
| Qwen/Qwen3-1.7B-Base | Q8 | 34.67 tok/sEstimated Auto-generated benchmark | 7GB |
| openai-community/gpt2 | Q8 | 34.37 tok/sEstimated Auto-generated benchmark | 7GB |
| meta-llama/Llama-2-7b-hf | Q8 | 34.26 tok/sEstimated Auto-generated benchmark | 7GB |
| MiniMaxAI/MiniMax-M2 | Q8 | 34.21 tok/sEstimated Auto-generated benchmark | 7GB |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit | Q8 | 34.21 tok/sEstimated Auto-generated benchmark | 8GB |
| zai-org/GLM-4.5-Air | Q8 | 34.17 tok/sEstimated Auto-generated benchmark | 7GB |
| unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Q8 | 34.11 tok/sEstimated Auto-generated benchmark | 8GB |
| meta-llama/Meta-Llama-3-8B-Instruct | Q8 | 34.07 tok/sEstimated Auto-generated benchmark | 8GB |
| parler-tts/parler-tts-large-v1 | Q8 | 33.90 tok/sEstimated Auto-generated benchmark | 7GB |
| microsoft/phi-4 | Q8 | 33.89 tok/sEstimated Auto-generated benchmark | 7GB |
| unsloth/gpt-oss-20b-unsloth-bnb-4bit | Q4 | 33.83 tok/sEstimated Auto-generated benchmark | 10GB |
| microsoft/Phi-3-mini-4k-instruct | Q8 | 33.72 tok/sEstimated Auto-generated benchmark | 7GB |
| EleutherAI/gpt-neo-125m | Q8 | 33.65 tok/sEstimated Auto-generated benchmark | 7GB |
| mlx-community/gpt-oss-20b-MXFP4-Q8 | Q4 | 33.63 tok/sEstimated Auto-generated benchmark | 10GB |
| Qwen/Qwen3-8B | Q8 | 33.55 tok/sEstimated Auto-generated benchmark | 8GB |
| openai-community/gpt2-large | Q8 | 33.44 tok/sEstimated Auto-generated benchmark | 7GB |
| unsloth/gpt-oss-20b-BF16 | Q4 | 33.23 tok/sEstimated Auto-generated benchmark | 10GB |
| mistralai/Mistral-7B-v0.1 | Q8 | 33.17 tok/sEstimated Auto-generated benchmark | 7GB |
| openai/gpt-oss-20b | Q4 | 33.15 tok/sEstimated Auto-generated benchmark | 10GB |
| Qwen/Qwen3-8B-Base | Q8 | 32.81 tok/sEstimated Auto-generated benchmark | 8GB |
| IlyaGusev/saiga_llama3_8b | Q8 | 32.77 tok/sEstimated Auto-generated benchmark | 8GB |
| unsloth/Meta-Llama-3.1-8B-Instruct | Q8 | 32.75 tok/sEstimated Auto-generated benchmark | 8GB |
| GSAI-ML/LLaDA-8B-Base | Q8 | 32.56 tok/sEstimated Auto-generated benchmark | 8GB |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-8bit | Q8 | 32.24 tok/sEstimated Auto-generated benchmark | 8GB |
| Qwen/Qwen3-30B-A3B-Instruct-2507 | Q4 | 32.18 tok/sEstimated Auto-generated benchmark | 15GB |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-8bit | Q4 | 32.12 tok/sEstimated Auto-generated benchmark | 15GB |
| meta-llama/Llama-3.1-8B | Q8 | 32.08 tok/sEstimated Auto-generated benchmark | 8GB |
| Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 | Q4 | 32.06 tok/sEstimated Auto-generated benchmark | 15GB |
| GSAI-ML/LLaDA-8B-Instruct | Q8 | 31.64 tok/sEstimated Auto-generated benchmark | 8GB |
| Qwen/Qwen2.5-14B-Instruct | Q8 | 30.99 tok/sEstimated Auto-generated benchmark | 14GB |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-4bit | Q4 | 30.81 tok/sEstimated Auto-generated benchmark | 15GB |
| Qwen/Qwen3-30B-A3B-Thinking-2507 | Q4 | 30.29 tok/sEstimated Auto-generated benchmark | 15GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | Q4 | 30.24 tok/sEstimated Auto-generated benchmark | 16GB |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-6bit | Q4 | 30.20 tok/sEstimated Auto-generated benchmark | 15GB |
| Qwen/Qwen3-32B | Q4 | 30.14 tok/sEstimated Auto-generated benchmark | 16GB |
| Qwen/Qwen2.5-14B | Q8 | 29.95 tok/sEstimated Auto-generated benchmark | 14GB |
| OpenPipe/Qwen3-14B-Instruct | Q8 | 29.60 tok/sEstimated Auto-generated benchmark | 14GB |
| Qwen/Qwen3-Coder-30B-A3B-Instruct | Q4 | 29.49 tok/sEstimated Auto-generated benchmark | 15GB |
| Qwen/Qwen3-14B-Base | Q8 | 29.30 tok/sEstimated Auto-generated benchmark | 14GB |
| Qwen/Qwen2.5-32B-Instruct | Q4 | 28.93 tok/sEstimated Auto-generated benchmark | 16GB |
| unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit | Q4 | 28.85 tok/sEstimated Auto-generated benchmark | 16GB |
| meta-llama/Llama-2-13b-chat-hf | Q8 | 28.49 tok/sEstimated Auto-generated benchmark | 13GB |
| Qwen/Qwen2.5-32B | Q4 | 28.35 tok/sEstimated Auto-generated benchmark | 16GB |
| Qwen/Qwen3-14B | Q8 | 27.69 tok/sEstimated Auto-generated benchmark | 14GB |
| ai-forever/ruGPT-3.5-13B | Q8 | 27.49 tok/sEstimated Auto-generated benchmark | 13GB |
| Qwen/Qwen3-30B-A3B | Q4 | 27.07 tok/sEstimated Auto-generated benchmark | 15GB |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-5bit | Q4 | 26.88 tok/sEstimated Auto-generated benchmark | 15GB |
| codellama/CodeLlama-34b-hf | Q4 | 26.85 tok/sEstimated Auto-generated benchmark | 17GB |
| baichuan-inc/Baichuan-M2-32B | Q4 | 26.63 tok/sEstimated Auto-generated benchmark | 16GB |
| dphn/dolphin-2.9.1-yi-1.5-34b | Q4 | 26.45 tok/sEstimated Auto-generated benchmark | 17GB |
| unsloth/gpt-oss-20b-unsloth-bnb-4bit | Q8 | 25.80 tok/sEstimated Auto-generated benchmark | 20GB |
| openai/gpt-oss-20b | Q8 | 23.52 tok/sEstimated Auto-generated benchmark | 20GB |
| mlx-community/gpt-oss-20b-MXFP4-Q8 | Q8 | 23.28 tok/sEstimated Auto-generated benchmark | 20GB |
| unsloth/gpt-oss-20b-BF16 | Q8 | 22.50 tok/sEstimated Auto-generated benchmark | 20GB |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-4bit | Q8 | 22.47 tok/sEstimated Auto-generated benchmark | 30GB |
| Qwen/Qwen2.5-32B-Instruct | Q8 | 21.55 tok/sEstimated Auto-generated benchmark | 32GB |
| Qwen/Qwen2.5-32B | Q8 | 21.29 tok/sEstimated Auto-generated benchmark | 32GB |
| Qwen/Qwen3-32B | Q8 | 21.06 tok/sEstimated Auto-generated benchmark | 32GB |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-5bit | Q8 | 20.93 tok/sEstimated Auto-generated benchmark | 30GB |
| baichuan-inc/Baichuan-M2-32B | Q8 | 20.71 tok/sEstimated Auto-generated benchmark | 32GB |
| Qwen/Qwen3-Coder-30B-A3B-Instruct | Q8 | 20.69 tok/sEstimated Auto-generated benchmark | 30GB |
| meta-llama/Llama-3.3-70B-Instruct | Q4 | 20.65 tok/sEstimated Auto-generated benchmark | 35GB |
| Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 | Q8 | 20.18 tok/sEstimated Auto-generated benchmark | 30GB |
| unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit | Q8 | 20.02 tok/sEstimated Auto-generated benchmark | 32GB |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-6bit | Q8 | 19.94 tok/sEstimated Auto-generated benchmark | 30GB |
| meta-llama/Meta-Llama-3-70B-Instruct | Q4 | 19.87 tok/sEstimated Auto-generated benchmark | 35GB |
| Qwen/Qwen3-30B-A3B-Instruct-2507 | Q8 | 19.85 tok/sEstimated Auto-generated benchmark | 30GB |
| AI-MO/Kimina-Prover-72B | Q4 | 19.50 tok/sEstimated Auto-generated benchmark | 36GB |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 | Q4 | 19.49 tok/sEstimated Auto-generated benchmark | 35GB |
| RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic | Q4 | 19.37 tok/sEstimated Auto-generated benchmark | 35GB |
| Qwen/Qwen3-30B-A3B | Q8 | 19.23 tok/sEstimated Auto-generated benchmark | 30GB |
| meta-llama/Llama-3.1-70B-Instruct | Q4 | 19.05 tok/sEstimated Auto-generated benchmark | 35GB |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-8bit | Q8 | 18.92 tok/sEstimated Auto-generated benchmark | 30GB |
| codellama/CodeLlama-34b-hf | Q8 | 18.87 tok/sEstimated Auto-generated benchmark | 34GB |
| Qwen/Qwen3-Next-80B-A3B-Thinking-FP8 | Q4 | 18.80 tok/sEstimated Auto-generated benchmark | 40GB |
| Qwen/Qwen3-30B-A3B-Thinking-2507 | Q8 | 18.62 tok/sEstimated Auto-generated benchmark | 30GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | Q8 | 18.49 tok/sEstimated Auto-generated benchmark | 32GB |
| Qwen/Qwen2.5-72B-Instruct | Q4 | 17.76 tok/sEstimated Auto-generated benchmark | 36GB |
| dphn/dolphin-2.9.1-yi-1.5-34b | Q8 | 17.61 tok/sEstimated Auto-generated benchmark | 34GB |
| Qwen/Qwen3-Next-80B-A3B-Instruct | Q4 | 17.58 tok/sEstimated Auto-generated benchmark | 40GB |
| Qwen/Qwen3-Next-80B-A3B-Thinking | Q4 | 16.91 tok/sEstimated Auto-generated benchmark | 40GB |
| Qwen/Qwen3-Next-80B-A3B-Instruct-FP8 | Q4 | 16.45 tok/sEstimated Auto-generated benchmark | 40GB |
| RedHatAI/Llama-3.2-90B-Vision-Instruct-FP8-dynamic | Q4 | 16.44 tok/sEstimated Auto-generated benchmark | 45GB |
Note: These performance figures are estimates produced by a calculation, not measured benchmarks; real results may vary.
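If you want to sanity-check an estimate on your own A6000, a minimal timing loop against a local OpenAI-compatible server (vLLM, llama.cpp's server, LM Studio, and similar tools expose this API) is enough to get a real tokens/sec number. The endpoint URL and model name below are placeholders for whatever you are actually running.

```python
# Minimal throughput check against a local OpenAI-compatible endpoint.
# ENDPOINT and MODEL are placeholders -- point them at your own server.
import time
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local server address
MODEL = "meta-llama/Llama-3.2-1B-Instruct"              # whichever model you have loaded

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Write a short paragraph about GPUs."}],
    "max_tokens": 256,
    "temperature": 0.7,
}

start = time.time()
resp = requests.post(ENDPOINT, json=payload, timeout=120)
resp.raise_for_status()
elapsed = time.time() - start

# The usage block is part of the OpenAI-compatible response format.
completion_tokens = resp.json()["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} tok/s")
```

A single non-streamed request folds prompt processing into the timing, so this slightly understates pure decode speed; averaging a few runs gives a steadier figure. The compatibility table below applies the same VRAM estimates against the card's 48GB to flag which quantization of each model fits.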
| Model | Quantization | Verdict | Estimated speed | VRAM needed |
|---|---|---|---|---|
| Qwen/Qwen3-Next-80B-A3B-Instruct-FP8 | Q8 | Not supported | — | 80GB (have 48GB) |
| Qwen/Qwen3-Next-80B-A3B-Instruct-FP8 | Q4 | Fits comfortably | 16.45 tok/sEstimated | 40GB (have 48GB) |
| ai-forever/ruGPT-3.5-13B | Q8 | Fits comfortably | 27.49 tok/sEstimated | 13GB (have 48GB) |
| ai-forever/ruGPT-3.5-13B | Q4 | Fits comfortably | 39.93 tok/sEstimated | 7GB (have 48GB) |
| baichuan-inc/Baichuan-M2-32B | Q8 | Fits comfortably | 20.71 tok/sEstimated | 32GB (have 48GB) |
| baichuan-inc/Baichuan-M2-32B | Q4 | Fits comfortably | 26.63 tok/sEstimated | 16GB (have 48GB) |
| HuggingFaceM4/tiny-random-LlamaForCausalLM | Q8 | Fits comfortably | 35.07 tok/sEstimated | 7GB (have 48GB) |
| HuggingFaceM4/tiny-random-LlamaForCausalLM | Q4 | Fits comfortably | 52.76 tok/sEstimated | 4GB (have 48GB) |
| ibm-granite/granite-3.3-8b-instruct | Q8 | Fits comfortably | 35.76 tok/sEstimated | 8GB (have 48GB) |
| ibm-granite/granite-3.3-8b-instruct | Q4 | Fits comfortably | 46.34 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-1.7B-Base | Q8 | Fits comfortably | 34.67 tok/sEstimated | 7GB (have 48GB) |
| Qwen/Qwen3-1.7B-Base | Q4 | Fits comfortably | 50.16 tok/sEstimated | 4GB (have 48GB) |
| unsloth/gpt-oss-20b-unsloth-bnb-4bit | Q8 | Fits comfortably | 25.80 tok/sEstimated | 20GB (have 48GB) |
| unsloth/gpt-oss-20b-unsloth-bnb-4bit | Q4 | Fits comfortably | 33.83 tok/sEstimated | 10GB (have 48GB) |
| BSC-LT/salamandraTA-7b-instruct | Q8 | Fits comfortably | 39.04 tok/sEstimated | 7GB (have 48GB) |
| BSC-LT/salamandraTA-7b-instruct | Q4 | Fits comfortably | 47.95 tok/sEstimated | 4GB (have 48GB) |
| dicta-il/dictalm2.0-instruct | Q8 | Fits comfortably | 37.48 tok/sEstimated | 7GB (have 48GB) |
| dicta-il/dictalm2.0-instruct | Q4 | Fits comfortably | 54.24 tok/sEstimated | 4GB (have 48GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-6bit | Q8 | Fits comfortably | 19.94 tok/sEstimated | 30GB (have 48GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-6bit | Q4 | Fits comfortably | 30.20 tok/sEstimated | 15GB (have 48GB) |
| GSAI-ML/LLaDA-8B-Base | Q8 | Fits comfortably | 32.56 tok/sEstimated | 8GB (have 48GB) |
| GSAI-ML/LLaDA-8B-Base | Q4 | Fits comfortably | 47.89 tok/sEstimated | 4GB (have 48GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-8bit | Q8 | Fits comfortably | 18.92 tok/sEstimated | 30GB (have 48GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-8bit | Q4 | Fits comfortably | 32.12 tok/sEstimated | 15GB (have 48GB) |
| Qwen/Qwen2-0.5B-Instruct | Q8 | Fits comfortably | 44.38 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2-0.5B-Instruct | Q4 | Fits comfortably | 63.99 tok/sEstimated | 3GB (have 48GB) |
| deepseek-ai/DeepSeek-V3 | Q8 | Fits comfortably | 35.56 tok/sEstimated | 7GB (have 48GB) |
| deepseek-ai/DeepSeek-V3 | Q4 | Fits comfortably | 50.59 tok/sEstimated | 4GB (have 48GB) |
| Alibaba-NLP/gte-Qwen2-1.5B-instruct | Q8 | Fits comfortably | 43.65 tok/sEstimated | 5GB (have 48GB) |
| Alibaba-NLP/gte-Qwen2-1.5B-instruct | Q4 | Fits comfortably | 62.94 tok/sEstimated | 3GB (have 48GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-5bit | Q8 | Fits comfortably | 20.93 tok/sEstimated | 30GB (have 48GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-5bit | Q4 | Fits comfortably | 26.88 tok/sEstimated | 15GB (have 48GB) |
| Qwen/Qwen3-30B-A3B-Thinking-2507 | Q8 | Fits comfortably | 18.62 tok/sEstimated | 30GB (have 48GB) |
| Qwen/Qwen3-30B-A3B-Thinking-2507 | Q4 | Fits comfortably | 30.29 tok/sEstimated | 15GB (have 48GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-4bit | Q8 | Fits comfortably | 22.47 tok/sEstimated | 30GB (have 48GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-4bit | Q4 | Fits comfortably | 30.81 tok/sEstimated | 15GB (have 48GB) |
| AI-MO/Kimina-Prover-72B | Q8 | Not supported | — | 72GB (have 48GB) |
| AI-MO/Kimina-Prover-72B | Q4 | Fits comfortably | 19.50 tok/sEstimated | 36GB (have 48GB) |
| apple/OpenELM-1_1B-Instruct | Q8 | Fits comfortably | 69.86 tok/sEstimated | 1GB (have 48GB) |
| apple/OpenELM-1_1B-Instruct | Q4 | Fits comfortably | 109.94 tok/sEstimated | 1GB (have 48GB) |
| NousResearch/Meta-Llama-3.1-8B-Instruct | Q8 | Fits comfortably | 35.23 tok/sEstimated | 8GB (have 48GB) |
| NousResearch/Meta-Llama-3.1-8B-Instruct | Q4 | Fits comfortably | 51.05 tok/sEstimated | 4GB (have 48GB) |
| nvidia/NVIDIA-Nemotron-Nano-9B-v2 | Q8 | Fits comfortably | 36.11 tok/sEstimated | 9GB (have 48GB) |
| nvidia/NVIDIA-Nemotron-Nano-9B-v2 | Q4 | Fits comfortably | 52.68 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2.5-3B | Q8 | Fits comfortably | 53.68 tok/sEstimated | 3GB (have 48GB) |
| Qwen/Qwen2.5-3B | Q4 | Fits comfortably | 65.86 tok/sEstimated | 2GB (have 48GB) |
| lmsys/vicuna-7b-v1.5 | Q8 | Fits comfortably | 39.55 tok/sEstimated | 7GB (have 48GB) |
| lmsys/vicuna-7b-v1.5 | Q4 | Fits comfortably | 52.39 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Llama-2-13b-chat-hf | Q8 | Fits comfortably | 28.49 tok/sEstimated | 13GB (have 48GB) |
| meta-llama/Llama-2-13b-chat-hf | Q4 | Fits comfortably | 42.45 tok/sEstimated | 7GB (have 48GB) |
| Qwen/Qwen3-Next-80B-A3B-Thinking | Q8 | Not supported | — | 80GB (have 48GB) |
| Qwen/Qwen3-Next-80B-A3B-Thinking | Q4 | Fits comfortably | 16.91 tok/sEstimated | 40GB (have 48GB) |
| unsloth/gemma-3-1b-it | Q8 | Fits comfortably | 69.32 tok/sEstimated | 1GB (have 48GB) |
| unsloth/gemma-3-1b-it | Q4 | Fits comfortably | 104.34 tok/sEstimated | 1GB (have 48GB) |
| bigcode/starcoder2-3b | Q8 | Fits comfortably | 46.63 tok/sEstimated | 3GB (have 48GB) |
| bigcode/starcoder2-3b | Q4 | Fits comfortably | 73.48 tok/sEstimated | 2GB (have 48GB) |
| Qwen/Qwen3-Next-80B-A3B-Thinking-FP8 | Q8 | Not supported | — | 80GB (have 48GB) |
| Qwen/Qwen3-Next-80B-A3B-Thinking-FP8 | Q4 | Fits comfortably | 18.80 tok/sEstimated | 40GB (have 48GB) |
| ibm-granite/granite-docling-258M | Q8 | Fits comfortably | 38.55 tok/sEstimated | 7GB (have 48GB) |
| ibm-granite/granite-docling-258M | Q4 | Fits comfortably | 50.02 tok/sEstimated | 4GB (have 48GB) |
| skt/kogpt2-base-v2 | Q8 | Fits comfortably | 39.60 tok/sEstimated | 7GB (have 48GB) |
| skt/kogpt2-base-v2 | Q4 | Fits comfortably | 52.91 tok/sEstimated | 4GB (have 48GB) |
| google/gemma-3-270m-it | Q8 | Fits comfortably | 36.81 tok/sEstimated | 7GB (have 48GB) |
| google/gemma-3-270m-it | Q4 | Fits comfortably | 54.61 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-4B-Thinking-2507-FP8 | Q8 | Fits comfortably | 48.48 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-4B-Thinking-2507-FP8 | Q4 | Fits comfortably | 58.36 tok/sEstimated | 2GB (have 48GB) |
| Qwen/Qwen2.5-32B | Q8 | Fits comfortably | 21.29 tok/sEstimated | 32GB (have 48GB) |
| Qwen/Qwen2.5-32B | Q4 | Fits comfortably | 28.35 tok/sEstimated | 16GB (have 48GB) |
| parler-tts/parler-tts-large-v1 | Q8 | Fits comfortably | 33.90 tok/sEstimated | 7GB (have 48GB) |
| parler-tts/parler-tts-large-v1 | Q4 | Fits comfortably | 49.36 tok/sEstimated | 4GB (have 48GB) |
| EleutherAI/pythia-70m-deduped | Q8 | Fits comfortably | 39.68 tok/sEstimated | 7GB (have 48GB) |
| EleutherAI/pythia-70m-deduped | Q4 | Fits comfortably | 50.88 tok/sEstimated | 4GB (have 48GB) |
| microsoft/VibeVoice-1.5B | Q8 | Fits comfortably | 39.76 tok/sEstimated | 5GB (have 48GB) |
| microsoft/VibeVoice-1.5B | Q4 | Fits comfortably | 65.40 tok/sEstimated | 3GB (have 48GB) |
| ibm-granite/granite-3.3-2b-instruct | Q8 | Fits comfortably | 54.71 tok/sEstimated | 2GB (have 48GB) |
| ibm-granite/granite-3.3-2b-instruct | Q4 | Fits comfortably | 89.66 tok/sEstimated | 1GB (have 48GB) |
| Qwen/Qwen2.5-72B-Instruct | Q8 | Not supported | — | 72GB (have 48GB) |
| Qwen/Qwen2.5-72B-Instruct | Q4 | Fits comfortably | 17.76 tok/sEstimated | 36GB (have 48GB) |
| liuhaotian/llava-v1.5-7b | Q8 | Fits comfortably | 35.99 tok/sEstimated | 7GB (have 48GB) |
| liuhaotian/llava-v1.5-7b | Q4 | Fits comfortably | 52.72 tok/sEstimated | 4GB (have 48GB) |
| google/gemma-2b | Q8 | Fits comfortably | 61.20 tok/sEstimated | 2GB (have 48GB) |
| google/gemma-2b | Q4 | Fits comfortably | 89.87 tok/sEstimated | 1GB (have 48GB) |
| trl-internal-testing/tiny-LlamaForCausalLM-3.2 | Q8 | Fits comfortably | 40.13 tok/sEstimated | 7GB (have 48GB) |
| trl-internal-testing/tiny-LlamaForCausalLM-3.2 | Q4 | Fits comfortably | 57.24 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-235B-A22B | Q8 | Not supported | — | 235GB (have 48GB) |
| Qwen/Qwen3-235B-A22B | Q4 | Not supported | — | 118GB (have 48GB) |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-8bit | Q8 | Fits comfortably | 32.24 tok/sEstimated | 8GB (have 48GB) |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-8bit | Q4 | Fits comfortably | 52.71 tok/sEstimated | 4GB (have 48GB) |
| microsoft/Phi-4-mini-instruct | Q8 | Fits comfortably | 38.39 tok/sEstimated | 7GB (have 48GB) |
| microsoft/Phi-4-mini-instruct | Q4 | Fits comfortably | 56.04 tok/sEstimated | 4GB (have 48GB) |
| llamafactory/tiny-random-Llama-3 | Q8 | Fits comfortably | 39.28 tok/sEstimated | 7GB (have 48GB) |
| llamafactory/tiny-random-Llama-3 | Q4 | Fits comfortably | 53.96 tok/sEstimated | 4GB (have 48GB) |
| HuggingFaceH4/zephyr-7b-beta | Q8 | Fits comfortably | 38.17 tok/sEstimated | 7GB (have 48GB) |
| HuggingFaceH4/zephyr-7b-beta | Q4 | Fits comfortably | 50.83 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-4B-Thinking-2507 | Q8 | Fits comfortably | 42.15 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-4B-Thinking-2507 | Q4 | Fits comfortably | 70.83 tok/sEstimated | 2GB (have 48GB) |
| Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 | Q8 | Fits comfortably | 20.18 tok/sEstimated | 30GB (have 48GB) |
| Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 | Q4 | Fits comfortably | 32.06 tok/sEstimated | 15GB (have 48GB) |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit | Q8 | Fits comfortably | 34.21 tok/sEstimated | 8GB (have 48GB) |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit | Q4 | Fits comfortably | 50.73 tok/sEstimated | 4GB (have 48GB) |
| unsloth/Llama-3.2-1B-Instruct | Q8 | Fits comfortably | 77.43 tok/sEstimated | 1GB (have 48GB) |
| unsloth/Llama-3.2-1B-Instruct | Q4 | Fits comfortably | 106.57 tok/sEstimated | 1GB (have 48GB) |
| GSAI-ML/LLaDA-8B-Instruct | Q8 | Fits comfortably | 31.64 tok/sEstimated | 8GB (have 48GB) |
| GSAI-ML/LLaDA-8B-Instruct | Q4 | Fits comfortably | 49.22 tok/sEstimated | 4GB (have 48GB) |
| RedHatAI/Llama-3.2-90B-Vision-Instruct-FP8-dynamic | Q8 | Not supported | — | 90GB (have 48GB) |
| RedHatAI/Llama-3.2-90B-Vision-Instruct-FP8-dynamic | Q4 | Fits comfortably | 16.44 tok/sEstimated | 45GB (have 48GB) |
| Qwen/Qwen2.5-Coder-7B-Instruct | Q8 | Fits comfortably | 38.33 tok/sEstimated | 7GB (have 48GB) |
| Qwen/Qwen2.5-Coder-7B-Instruct | Q4 | Fits comfortably | 52.46 tok/sEstimated | 4GB (have 48GB) |
| numind/NuExtract-1.5 | Q8 | Fits comfortably | 37.04 tok/sEstimated | 7GB (have 48GB) |
| numind/NuExtract-1.5 | Q4 | Fits comfortably | 53.11 tok/sEstimated | 4GB (have 48GB) |
| deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | Q8 | Fits comfortably | 38.45 tok/sEstimated | 7GB (have 48GB) |
| deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | Q4 | Fits comfortably | 52.92 tok/sEstimated | 4GB (have 48GB) |
| hmellor/tiny-random-LlamaForCausalLM | Q8 | Fits comfortably | 36.87 tok/sEstimated | 7GB (have 48GB) |
| hmellor/tiny-random-LlamaForCausalLM | Q4 | Fits comfortably | 49.10 tok/sEstimated | 4GB (have 48GB) |
| huggyllama/llama-7b | Q8 | Fits comfortably | 39.75 tok/sEstimated | 7GB (have 48GB) |
| huggyllama/llama-7b | Q4 | Fits comfortably | 57.68 tok/sEstimated | 4GB (have 48GB) |
| deepseek-ai/DeepSeek-V3-0324 | Q8 | Fits comfortably | 36.71 tok/sEstimated | 7GB (have 48GB) |
| deepseek-ai/DeepSeek-V3-0324 | Q4 | Fits comfortably | 48.59 tok/sEstimated | 4GB (have 48GB) |
| microsoft/Phi-3-mini-128k-instruct | Q8 | Fits comfortably | 39.99 tok/sEstimated | 7GB (have 48GB) |
| microsoft/Phi-3-mini-128k-instruct | Q4 | Fits comfortably | 47.55 tok/sEstimated | 4GB (have 48GB) |
| sshleifer/tiny-gpt2 | Q8 | Fits comfortably | 37.71 tok/sEstimated | 7GB (have 48GB) |
| sshleifer/tiny-gpt2 | Q4 | Fits comfortably | 48.01 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Llama-Guard-3-8B | Q8 | Fits comfortably | 36.56 tok/sEstimated | 8GB (have 48GB) |
| meta-llama/Llama-Guard-3-8B | Q4 | Fits comfortably | 47.17 tok/sEstimated | 4GB (have 48GB) |
| openai-community/gpt2-xl | Q8 | Fits comfortably | 35.15 tok/sEstimated | 7GB (have 48GB) |
| openai-community/gpt2-xl | Q4 | Fits comfortably | 48.72 tok/sEstimated | 4GB (have 48GB) |
| OpenPipe/Qwen3-14B-Instruct | Q8 | Fits comfortably | 29.60 tok/sEstimated | 14GB (have 48GB) |
| OpenPipe/Qwen3-14B-Instruct | Q4 | Fits comfortably | 38.01 tok/sEstimated | 7GB (have 48GB) |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 | Q8 | Not supported | — | 70GB (have 48GB) |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 | Q4 | Fits comfortably | 19.49 tok/sEstimated | 35GB (have 48GB) |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | Q8 | Fits comfortably | 49.37 tok/sEstimated | 4GB (have 48GB) |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | Q4 | Fits comfortably | 61.08 tok/sEstimated | 2GB (have 48GB) |
| ibm-research/PowerMoE-3b | Q8 | Fits comfortably | 54.64 tok/sEstimated | 3GB (have 48GB) |
| ibm-research/PowerMoE-3b | Q4 | Fits comfortably | 66.27 tok/sEstimated | 2GB (have 48GB) |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-8bit | Q8 | Fits comfortably | 43.03 tok/sEstimated | 4GB (have 48GB) |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-8bit | Q4 | Fits comfortably | 66.20 tok/sEstimated | 2GB (have 48GB) |
| unsloth/Llama-3.2-3B-Instruct | Q8 | Fits comfortably | 53.61 tok/sEstimated | 3GB (have 48GB) |
| unsloth/Llama-3.2-3B-Instruct | Q4 | Fits comfortably | 74.05 tok/sEstimated | 2GB (have 48GB) |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-4bit | Q8 | Fits comfortably | 44.19 tok/sEstimated | 4GB (have 48GB) |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-4bit | Q4 | Fits comfortably | 61.10 tok/sEstimated | 2GB (have 48GB) |
| meta-llama/Llama-3.2-3B | Q8 | Fits comfortably | 45.89 tok/sEstimated | 3GB (have 48GB) |
| meta-llama/Llama-3.2-3B | Q4 | Fits comfortably | 66.24 tok/sEstimated | 2GB (have 48GB) |
| EleutherAI/gpt-neo-125m | Q8 | Fits comfortably | 33.65 tok/sEstimated | 7GB (have 48GB) |
| EleutherAI/gpt-neo-125m | Q4 | Fits comfortably | 57.65 tok/sEstimated | 4GB (have 48GB) |
| codellama/CodeLlama-34b-hf | Q8 | Fits comfortably | 18.87 tok/sEstimated | 34GB (have 48GB) |
| codellama/CodeLlama-34b-hf | Q4 | Fits comfortably | 26.85 tok/sEstimated | 17GB (have 48GB) |
| meta-llama/Llama-Guard-3-1B | Q8 | Fits comfortably | 73.78 tok/sEstimated | 1GB (have 48GB) |
| meta-llama/Llama-Guard-3-1B | Q4 | Fits comfortably | 107.89 tok/sEstimated | 1GB (have 48GB) |
| Qwen/Qwen2-1.5B-Instruct | Q8 | Fits comfortably | 37.98 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2-1.5B-Instruct | Q4 | Fits comfortably | 56.36 tok/sEstimated | 3GB (have 48GB) |
| google/gemma-2-2b-it | Q8 | Fits comfortably | 60.04 tok/sEstimated | 2GB (have 48GB) |
| google/gemma-2-2b-it | Q4 | Fits comfortably | 89.25 tok/sEstimated | 1GB (have 48GB) |
| Qwen/Qwen2.5-14B | Q8 | Fits comfortably | 29.95 tok/sEstimated | 14GB (have 48GB) |
| Qwen/Qwen2.5-14B | Q4 | Fits comfortably | 41.99 tok/sEstimated | 7GB (have 48GB) |
| unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit | Q8 | Fits comfortably | 20.02 tok/sEstimated | 32GB (have 48GB) |
| unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit | Q4 | Fits comfortably | 28.85 tok/sEstimated | 16GB (have 48GB) |
| microsoft/Phi-3.5-mini-instruct | Q8 | Fits comfortably | 35.71 tok/sEstimated | 7GB (have 48GB) |
| microsoft/Phi-3.5-mini-instruct | Q4 | Fits comfortably | 50.91 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-4B-Base | Q8 | Fits comfortably | 45.87 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-4B-Base | Q4 | Fits comfortably | 67.52 tok/sEstimated | 2GB (have 48GB) |
| Qwen/Qwen2-7B-Instruct | Q8 | Fits comfortably | 37.46 tok/sEstimated | 7GB (have 48GB) |
| Qwen/Qwen2-7B-Instruct | Q4 | Fits comfortably | 52.33 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Llama-2-7b-chat-hf | Q8 | Fits comfortably | 35.13 tok/sEstimated | 7GB (have 48GB) |
| meta-llama/Llama-2-7b-chat-hf | Q4 | Fits comfortably | 48.71 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-14B-Base | Q8 | Fits comfortably | 29.30 tok/sEstimated | 14GB (have 48GB) |
| Qwen/Qwen3-14B-Base | Q4 | Fits comfortably | 43.20 tok/sEstimated | 7GB (have 48GB) |
| swiss-ai/Apertus-8B-Instruct-2509 | Q8 | Fits comfortably | 35.29 tok/sEstimated | 8GB (have 48GB) |
| swiss-ai/Apertus-8B-Instruct-2509 | Q4 | Fits comfortably | 50.47 tok/sEstimated | 4GB (have 48GB) |
| microsoft/Phi-3.5-vision-instruct | Q8 | Fits comfortably | 35.11 tok/sEstimated | 7GB (have 48GB) |
| microsoft/Phi-3.5-vision-instruct | Q4 | Fits comfortably | 57.62 tok/sEstimated | 4GB (have 48GB) |
| unsloth/mistral-7b-v0.3-bnb-4bit | Q8 | Fits comfortably | 37.45 tok/sEstimated | 7GB (have 48GB) |
| unsloth/mistral-7b-v0.3-bnb-4bit | Q4 | Fits comfortably | 54.33 tok/sEstimated | 4GB (have 48GB) |
| rinna/japanese-gpt-neox-small | Q8 | Fits comfortably | 38.71 tok/sEstimated | 7GB (have 48GB) |
| rinna/japanese-gpt-neox-small | Q4 | Fits comfortably | 52.40 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen2.5-Coder-1.5B | Q8 | Fits comfortably | 38.27 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2.5-Coder-1.5B | Q4 | Fits comfortably | 55.97 tok/sEstimated | 3GB (have 48GB) |
| IlyaGusev/saiga_llama3_8b | Q8 | Fits comfortably | 32.77 tok/sEstimated | 8GB (have 48GB) |
| IlyaGusev/saiga_llama3_8b | Q4 | Fits comfortably | 47.95 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-30B-A3B | Q8 | Fits comfortably | 19.23 tok/sEstimated | 30GB (have 48GB) |
| Qwen/Qwen3-30B-A3B | Q4 | Fits comfortably | 27.07 tok/sEstimated | 15GB (have 48GB) |
| deepseek-ai/DeepSeek-R1 | Q8 | Fits comfortably | 38.37 tok/sEstimated | 7GB (have 48GB) |
| deepseek-ai/DeepSeek-R1 | Q4 | Fits comfortably | 51.66 tok/sEstimated | 4GB (have 48GB) |
| microsoft/DialoGPT-small | Q8 | Fits comfortably | 37.24 tok/sEstimated | 7GB (have 48GB) |
| microsoft/DialoGPT-small | Q4 | Fits comfortably | 49.67 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-8B-FP8 | Q8 | Fits comfortably | 37.96 tok/sEstimated | 8GB (have 48GB) |
| Qwen/Qwen3-8B-FP8 | Q4 | Fits comfortably | 49.77 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-Coder-30B-A3B-Instruct | Q8 | Fits comfortably | 20.69 tok/sEstimated | 30GB (have 48GB) |
| Qwen/Qwen3-Coder-30B-A3B-Instruct | Q4 | Fits comfortably | 29.49 tok/sEstimated | 15GB (have 48GB) |
| Qwen/Qwen3-Embedding-4B | Q8 | Fits comfortably | 48.54 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-Embedding-4B | Q4 | Fits comfortably | 60.50 tok/sEstimated | 2GB (have 48GB) |
| microsoft/Phi-4-multimodal-instruct | Q8 | Fits comfortably | 37.76 tok/sEstimated | 7GB (have 48GB) |
| microsoft/Phi-4-multimodal-instruct | Q4 | Fits comfortably | 49.08 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-8B-Base | Q8 | Fits comfortably | 32.81 tok/sEstimated | 8GB (have 48GB) |
| Qwen/Qwen3-8B-Base | Q4 | Fits comfortably | 51.56 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-0.6B-Base | Q8 | Fits comfortably | 41.69 tok/sEstimated | 6GB (have 48GB) |
| Qwen/Qwen3-0.6B-Base | Q4 | Fits comfortably | 61.15 tok/sEstimated | 3GB (have 48GB) |
| openai-community/gpt2-medium | Q8 | Fits comfortably | 36.50 tok/sEstimated | 7GB (have 48GB) |
| openai-community/gpt2-medium | Q4 | Fits comfortably | 55.91 tok/sEstimated | 4GB (have 48GB) |
| trl-internal-testing/tiny-random-LlamaForCausalLM | Q8 | Fits comfortably | 35.77 tok/sEstimated | 7GB (have 48GB) |
| trl-internal-testing/tiny-random-LlamaForCausalLM | Q4 | Fits comfortably | 53.60 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen2.5-Math-1.5B | Q8 | Fits comfortably | 38.27 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2.5-Math-1.5B | Q4 | Fits comfortably | 54.32 tok/sEstimated | 3GB (have 48GB) |
| HuggingFaceTB/SmolLM-135M | Q8 | Fits comfortably | 39.15 tok/sEstimated | 7GB (have 48GB) |
| HuggingFaceTB/SmolLM-135M | Q4 | Fits comfortably | 47.50 tok/sEstimated | 4GB (have 48GB) |
| unsloth/gpt-oss-20b-BF16 | Q8 | Fits comfortably | 22.50 tok/sEstimated | 20GB (have 48GB) |
| unsloth/gpt-oss-20b-BF16 | Q4 | Fits comfortably | 33.23 tok/sEstimated | 10GB (have 48GB) |
| meta-llama/Meta-Llama-3-70B-Instruct | Q8 | Not supported | — | 70GB (have 48GB) |
| meta-llama/Meta-Llama-3-70B-Instruct | Q4 | Fits comfortably | 19.87 tok/sEstimated | 35GB (have 48GB) |
| unsloth/Meta-Llama-3.1-8B-Instruct | Q8 | Fits comfortably | 32.75 tok/sEstimated | 8GB (have 48GB) |
| unsloth/Meta-Llama-3.1-8B-Instruct | Q4 | Fits comfortably | 45.18 tok/sEstimated | 4GB (have 48GB) |
| zai-org/GLM-4.5-Air | Q8 | Fits comfortably | 34.17 tok/sEstimated | 7GB (have 48GB) |
| zai-org/GLM-4.5-Air | Q4 | Fits comfortably | 53.08 tok/sEstimated | 4GB (have 48GB) |
| mistralai/Mistral-7B-Instruct-v0.1 | Q8 | Fits comfortably | 40.39 tok/sEstimated | 7GB (have 48GB) |
| mistralai/Mistral-7B-Instruct-v0.1 | Q4 | Fits comfortably | 55.37 tok/sEstimated | 4GB (have 48GB) |
| LiquidAI/LFM2-1.2B | Q8 | Fits comfortably | 60.96 tok/sEstimated | 2GB (have 48GB) |
| LiquidAI/LFM2-1.2B | Q4 | Fits comfortably | 88.86 tok/sEstimated | 1GB (have 48GB) |
| mistralai/Mistral-7B-v0.1 | Q8 | Fits comfortably | 33.17 tok/sEstimated | 7GB (have 48GB) |
| mistralai/Mistral-7B-v0.1 | Q4 | Fits comfortably | 47.80 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen2.5-32B-Instruct | Q8 | Fits comfortably | 21.55 tok/sEstimated | 32GB (have 48GB) |
| Qwen/Qwen2.5-32B-Instruct | Q4 | Fits comfortably | 28.93 tok/sEstimated | 16GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-0528 | Q8 | Fits comfortably | 38.31 tok/sEstimated | 7GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-0528 | Q4 | Fits comfortably | 53.96 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Llama-3.1-8B | Q8 | Fits comfortably | 32.08 tok/sEstimated | 8GB (have 48GB) |
| meta-llama/Llama-3.1-8B | Q4 | Fits comfortably | 48.55 tok/sEstimated | 4GB (have 48GB) |
| deepseek-ai/DeepSeek-V3.1 | Q8 | Fits comfortably | 39.68 tok/sEstimated | 7GB (have 48GB) |
| deepseek-ai/DeepSeek-V3.1 | Q4 | Fits comfortably | 55.62 tok/sEstimated | 4GB (have 48GB) |
| microsoft/phi-4 | Q8 | Fits comfortably | 33.89 tok/sEstimated | 7GB (have 48GB) |
| microsoft/phi-4 | Q4 | Fits comfortably | 54.66 tok/sEstimated | 4GB (have 48GB) |
| deepseek-ai/deepseek-coder-1.3b-instruct | Q8 | Fits comfortably | 54.75 tok/sEstimated | 3GB (have 48GB) |
| deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | Fits comfortably | 77.31 tok/sEstimated | 2GB (have 48GB) |
| Qwen/Qwen2-0.5B | Q8 | Fits comfortably | 38.64 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2-0.5B | Q4 | Fits comfortably | 59.66 tok/sEstimated | 3GB (have 48GB) |
| MiniMaxAI/MiniMax-M2 | Q8 | Fits comfortably | 34.21 tok/sEstimated | 7GB (have 48GB) |
| MiniMaxAI/MiniMax-M2 | Q4 | Fits comfortably | 57.62 tok/sEstimated | 4GB (have 48GB) |
| microsoft/DialoGPT-medium | Q8 | Fits comfortably | 39.53 tok/sEstimated | 7GB (have 48GB) |
| microsoft/DialoGPT-medium | Q4 | Fits comfortably | 55.47 tok/sEstimated | 4GB (have 48GB) |
| zai-org/GLM-4.6-FP8 | Q8 | Fits comfortably | 39.33 tok/sEstimated | 7GB (have 48GB) |
| zai-org/GLM-4.6-FP8 | Q4 | Fits comfortably | 51.07 tok/sEstimated | 4GB (have 48GB) |
| HuggingFaceTB/SmolLM2-135M | Q8 | Fits comfortably | 35.09 tok/sEstimated | 7GB (have 48GB) |
| HuggingFaceTB/SmolLM2-135M | Q4 | Fits comfortably | 47.66 tok/sEstimated | 4GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | Q8 | Fits comfortably | 38.03 tok/sEstimated | 8GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | Q4 | Fits comfortably | 48.91 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Llama-2-7b-hf | Q8 | Fits comfortably | 34.26 tok/sEstimated | 7GB (have 48GB) |
| meta-llama/Llama-2-7b-hf | Q4 | Fits comfortably | 56.61 tok/sEstimated | 4GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | Q8 | Fits comfortably | 35.82 tok/sEstimated | 7GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | Q4 | Fits comfortably | 56.69 tok/sEstimated | 4GB (have 48GB) |
| microsoft/phi-2 | Q8 | Fits comfortably | 38.10 tok/sEstimated | 7GB (have 48GB) |
| microsoft/phi-2 | Q4 | Fits comfortably | 55.64 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Llama-3.1-70B-Instruct | Q8 | Not supported | — | 70GB (have 48GB) |
| meta-llama/Llama-3.1-70B-Instruct | Q4 | Fits comfortably | 19.05 tok/sEstimated | 35GB (have 48GB) |
| Qwen/Qwen2.5-0.5B | Q8 | Fits comfortably | 38.03 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2.5-0.5B | Q4 | Fits comfortably | 61.77 tok/sEstimated | 3GB (have 48GB) |
| Qwen/Qwen3-14B | Q8 | Fits comfortably | 27.69 tok/sEstimated | 14GB (have 48GB) |
| Qwen/Qwen3-14B | Q4 | Fits comfortably | 42.13 tok/sEstimated | 7GB (have 48GB) |
| Qwen/Qwen3-Embedding-8B | Q8 | Fits comfortably | 37.07 tok/sEstimated | 8GB (have 48GB) |
| Qwen/Qwen3-Embedding-8B | Q4 | Fits comfortably | 49.88 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Llama-3.3-70B-Instruct | Q8 | Not supported | — | 70GB (have 48GB) |
| meta-llama/Llama-3.3-70B-Instruct | Q4 | Fits comfortably | 20.65 tok/sEstimated | 35GB (have 48GB) |
| unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Q8 | Fits comfortably | 34.11 tok/sEstimated | 8GB (have 48GB) |
| unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Q4 | Fits comfortably | 49.83 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen2.5-14B-Instruct | Q8 | Fits comfortably | 30.99 tok/sEstimated | 14GB (have 48GB) |
| Qwen/Qwen2.5-14B-Instruct | Q4 | Fits comfortably | 37.31 tok/sEstimated | 7GB (have 48GB) |
| Qwen/Qwen2.5-1.5B | Q8 | Fits comfortably | 38.70 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2.5-1.5B | Q4 | Fits comfortably | 64.77 tok/sEstimated | 3GB (have 48GB) |
| kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | Q8 | Fits comfortably | 45.12 tok/sEstimated | 4GB (have 48GB) |
| kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | Q4 | Fits comfortably | 61.95 tok/sEstimated | 2GB (have 48GB) |
| mlx-community/gpt-oss-20b-MXFP4-Q8 | Q8 | Fits comfortably | 23.28 tok/sEstimated | 20GB (have 48GB) |
| mlx-community/gpt-oss-20b-MXFP4-Q8 | Q4 | Fits comfortably | 33.63 tok/sEstimated | 10GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | Q8 | Fits comfortably | 44.60 tok/sEstimated | 5GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | Q4 | Fits comfortably | 57.07 tok/sEstimated | 3GB (have 48GB) |
| meta-llama/Meta-Llama-3-8B-Instruct | Q8 | Fits comfortably | 34.07 tok/sEstimated | 8GB (have 48GB) |
| meta-llama/Meta-Llama-3-8B-Instruct | Q4 | Fits comfortably | 54.56 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-Reranker-0.6B | Q8 | Fits comfortably | 39.72 tok/sEstimated | 6GB (have 48GB) |
| Qwen/Qwen3-Reranker-0.6B | Q4 | Fits comfortably | 60.77 tok/sEstimated | 3GB (have 48GB) |
| rednote-hilab/dots.ocr | Q8 | Fits comfortably | 35.44 tok/sEstimated | 7GB (have 48GB) |
| rednote-hilab/dots.ocr | Q4 | Fits comfortably | 56.78 tok/sEstimated | 4GB (have 48GB) |
| google-t5/t5-3b | Q8 | Fits comfortably | 50.26 tok/sEstimated | 3GB (have 48GB) |
| google-t5/t5-3b | Q4 | Fits comfortably | 68.15 tok/sEstimated | 2GB (have 48GB) |
| Qwen/Qwen3-30B-A3B-Instruct-2507 | Q8 | Fits comfortably | 19.85 tok/sEstimated | 30GB (have 48GB) |
| Qwen/Qwen3-30B-A3B-Instruct-2507 | Q4 | Fits comfortably | 32.18 tok/sEstimated | 15GB (have 48GB) |
| Qwen/Qwen3-4B | Q8 | Fits comfortably | 43.91 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-4B | Q4 | Fits comfortably | 61.08 tok/sEstimated | 2GB (have 48GB) |
| Qwen/Qwen3-1.7B | Q8 | Fits comfortably | 39.33 tok/sEstimated | 7GB (have 48GB) |
| Qwen/Qwen3-1.7B | Q4 | Fits comfortably | 49.48 tok/sEstimated | 4GB (have 48GB) |
| openai-community/gpt2-large | Q8 | Fits comfortably | 33.44 tok/sEstimated | 7GB (have 48GB) |
| openai-community/gpt2-large | Q4 | Fits comfortably | 57.44 tok/sEstimated | 4GB (have 48GB) |
| microsoft/Phi-3-mini-4k-instruct | Q8 | Fits comfortably | 33.72 tok/sEstimated | 7GB (have 48GB) |
| microsoft/Phi-3-mini-4k-instruct | Q4 | Fits comfortably | 57.29 tok/sEstimated | 4GB (have 48GB) |
| allenai/OLMo-2-0425-1B | Q8 | Fits comfortably | 78.14 tok/sEstimated | 1GB (have 48GB) |
| allenai/OLMo-2-0425-1B | Q4 | Fits comfortably | 96.84 tok/sEstimated | 1GB (have 48GB) |
| Qwen/Qwen3-Next-80B-A3B-Instruct | Q8 | Not supported | — | 80GB (have 48GB) |
| Qwen/Qwen3-Next-80B-A3B-Instruct | Q4 | Fits comfortably | 17.58 tok/sEstimated | 40GB (have 48GB) |
| Qwen/Qwen3-32B | Q8 | Fits comfortably | 21.06 tok/sEstimated | 32GB (have 48GB) |
| Qwen/Qwen3-32B | Q4 | Fits comfortably | 30.14 tok/sEstimated | 16GB (have 48GB) |
| Qwen/Qwen2.5-0.5B-Instruct | Q8 | Fits comfortably | 41.74 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2.5-0.5B-Instruct | Q4 | Fits comfortably | 55.99 tok/sEstimated | 3GB (have 48GB) |
| Qwen/Qwen2.5-7B | Q8 | Fits comfortably | 39.47 tok/sEstimated | 7GB (have 48GB) |
| Qwen/Qwen2.5-7B | Q4 | Fits comfortably | 48.32 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Meta-Llama-3-8B | Q8 | Fits comfortably | 37.56 tok/sEstimated | 8GB (have 48GB) |
| meta-llama/Meta-Llama-3-8B | Q4 | Fits comfortably | 47.08 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Llama-3.2-1B | Q8 | Fits comfortably | 73.60 tok/sEstimated | 1GB (have 48GB) |
| meta-llama/Llama-3.2-1B | Q4 | Fits comfortably | 106.24 tok/sEstimated | 1GB (have 48GB) |
| petals-team/StableBeluga2 | Q8 | Fits comfortably | 38.67 tok/sEstimated | 7GB (have 48GB) |
| petals-team/StableBeluga2 | Q4 | Fits comfortably | 48.36 tok/sEstimated | 4GB (have 48GB) |
| vikhyatk/moondream2 | Q8 | Fits comfortably | 34.87 tok/sEstimated | 7GB (have 48GB) |
| vikhyatk/moondream2 | Q4 | Fits comfortably | 52.88 tok/sEstimated | 4GB (have 48GB) |
| meta-llama/Llama-3.2-3B-Instruct | Q8 | Fits comfortably | 50.53 tok/sEstimated | 3GB (have 48GB) |
| meta-llama/Llama-3.2-3B-Instruct | Q4 | Fits comfortably | 69.37 tok/sEstimated | 2GB (have 48GB) |
| RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic | Q8 | Not supported | — | 70GB (have 48GB) |
| RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic | Q4 | Fits comfortably | 19.37 tok/sEstimated | 35GB (have 48GB) |
| distilbert/distilgpt2 | Q8 | Fits comfortably | 36.09 tok/sEstimated | 7GB (have 48GB) |
| distilbert/distilgpt2 | Q4 | Fits comfortably | 52.30 tok/sEstimated | 4GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | Q8 | Fits comfortably | 18.49 tok/sEstimated | 32GB (have 48GB) |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | Q4 | Fits comfortably | 30.24 tok/sEstimated | 16GB (have 48GB) |
| inference-net/Schematron-3B | Q8 | Fits comfortably | 53.30 tok/sEstimated | 3GB (have 48GB) |
| inference-net/Schematron-3B | Q4 | Fits comfortably | 67.62 tok/sEstimated | 2GB (have 48GB) |
| Qwen/Qwen3-8B | Q8 | Fits comfortably | 33.55 tok/sEstimated | 8GB (have 48GB) |
| Qwen/Qwen3-8B | Q4 | Fits comfortably | 52.50 tok/sEstimated | 4GB (have 48GB) |
| mistralai/Mistral-7B-Instruct-v0.2 | Q8 | Fits comfortably | 39.86 tok/sEstimated | 7GB (have 48GB) |
| mistralai/Mistral-7B-Instruct-v0.2 | Q4 | Fits comfortably | 49.72 tok/sEstimated | 4GB (have 48GB) |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q8 | Fits comfortably | 52.35 tok/sEstimated | 3GB (have 48GB) |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | Fits comfortably | 67.82 tok/sEstimated | 2GB (have 48GB) |
| bigscience/bloomz-560m | Q8 | Fits comfortably | 35.08 tok/sEstimated | 7GB (have 48GB) |
| bigscience/bloomz-560m | Q4 | Fits comfortably | 53.92 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen2.5-3B-Instruct | Q8 | Fits comfortably | 46.84 tok/sEstimated | 3GB (have 48GB) |
| Qwen/Qwen2.5-3B-Instruct | Q4 | Fits comfortably | 65.93 tok/sEstimated | 2GB (have 48GB) |
| openai/gpt-oss-120b | Q8 | Not supported | — | 120GB (have 48GB) |
| openai/gpt-oss-120b | Q4 | Not supported | — | 60GB (have 48GB) |
| meta-llama/Llama-3.2-1B-Instruct | Q8 | Fits comfortably | 77.44 tok/sEstimated | 1GB (have 48GB) |
| meta-llama/Llama-3.2-1B-Instruct | Q4 | Fits comfortably | 110.71 tok/sEstimated | 1GB (have 48GB) |
| Qwen/Qwen3-4B-Instruct-2507 | Q8 | Fits comfortably | 44.69 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen3-4B-Instruct-2507 | Q4 | Fits comfortably | 58.72 tok/sEstimated | 2GB (have 48GB) |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q8 | Fits comfortably | 38.35 tok/sEstimated | 7GB (have 48GB) |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q4 | Fits comfortably | 53.82 tok/sEstimated | 4GB (have 48GB) |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q8 | Fits comfortably | 74.92 tok/sEstimated | 1GB (have 48GB) |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | Fits comfortably | 102.48 tok/sEstimated | 1GB (have 48GB) |
| facebook/opt-125m | Q8 | Fits comfortably | 39.23 tok/sEstimated | 7GB (have 48GB) |
| facebook/opt-125m | Q4 | Fits comfortably | 53.05 tok/sEstimated | 4GB (have 48GB) |
| Qwen/Qwen2.5-1.5B-Instruct | Q8 | Fits comfortably | 44.60 tok/sEstimated | 5GB (have 48GB) |
| Qwen/Qwen2.5-1.5B-Instruct | Q4 | Fits comfortably | 61.61 tok/sEstimated | 3GB (have 48GB) |
| Qwen/Qwen3-Embedding-0.6B | Q8 | Fits comfortably | 36.03 tok/sEstimated | 6GB (have 48GB) |
| Qwen/Qwen3-Embedding-0.6B | Q4 | Fits comfortably | 51.61 tok/sEstimated | 3GB (have 48GB) |
| google/gemma-3-1b-it | Q8 | Fits comfortably | 68.62 tok/sEstimated | 1GB (have 48GB) |
| google/gemma-3-1b-it | Q4 | Fits comfortably | 103.17 tok/sEstimated | 1GB (have 48GB) |
| openai/gpt-oss-20b | Q8 | Fits comfortably | 23.52 tok/sEstimated | 20GB (have 48GB) |
| openai/gpt-oss-20b | Q4 | Fits comfortably | 33.15 tok/sEstimated | 10GB (have 48GB) |
| dphn/dolphin-2.9.1-yi-1.5-34b | Q8 | Fits comfortably | 17.61 tok/sEstimated | 34GB (have 48GB) |
| dphn/dolphin-2.9.1-yi-1.5-34b | Q4 | Fits comfortably | 26.45 tok/sEstimated | 17GB (have 48GB) |
| meta-llama/Llama-3.1-8B-Instruct | Q8 | Fits comfortably | 35.73 tok/sEstimated | 8GB (have 48GB) |
| meta-llama/Llama-3.1-8B-Instruct | Q4 | Fits comfortably | 45.54 tok/sEstimated | 4GB (have 48GB) |
| Gensyn/Qwen2.5-0.5B-Instruct | Q8 | Fits comfortably | 41.23 tok/sEstimated | 5GB (have 48GB) |
| Gensyn/Qwen2.5-0.5B-Instruct | Q4 | Fits comfortably | 56.57 tok/sEstimated | 3GB (have 48GB) |
| Qwen/Qwen3-0.6B | Q8 | Fits comfortably | 41.03 tok/sEstimated | 6GB (have 48GB) |
| Qwen/Qwen3-0.6B | Q4 | Fits comfortably | 57.24 tok/sEstimated | 3GB (have 48GB) |
| Qwen/Qwen2.5-7B-Instruct | Q8 | Fits comfortably | 37.93 tok/sEstimated | 7GB (have 48GB) |
| Qwen/Qwen2.5-7B-Instruct | Q4 | Fits comfortably | 48.82 tok/sEstimated | 4GB (have 48GB) |
| openai-community/gpt2 | Q8 | Fits comfortably | 34.37 tok/sEstimated | 7GB (have 48GB) |
| openai-community/gpt2 | Q4 | Fits comfortably | 47.49 tok/sEstimated | 4GB (have 48GB) |
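The "Fits comfortably" versus "Not supported" verdicts above follow from comparing an estimated model footprint against the A6000's 48 GB of VRAM. A minimal sketch of that check is below; the bytes-per-parameter constants are assumptions read off the table itself (roughly 1 byte per parameter at Q8 and 0.5 at Q4, e.g. 70B at 70 GB / 35 GB), not the site's published methodology.

```python
# Minimal sketch of the fit check implied by the table above.
# Assumptions (illustrative, not the site's published methodology):
#   - Q8 weights take ~1 byte per parameter, Q4 ~0.5 bytes per parameter,
#     matching the table's 70B -> 70 GB (Q8) / 35 GB (Q4) entries.
#   - A small margin is reserved for KV cache and activations.

BYTES_PER_PARAM_GB = {"Q8": 1.0, "Q4": 0.5}

def estimated_vram_gb(params_billions: float, quant: str) -> float:
    """Rough weight footprint in GB for a given parameter count and quant level."""
    return params_billions * BYTES_PER_PARAM_GB[quant]

def verdict(params_billions: float, quant: str, vram_gb: float = 48.0,
            margin_gb: float = 2.0) -> str:
    """Return the table's verdict for a model on a 48 GB card."""
    need = estimated_vram_gb(params_billions, quant)
    return "Fits comfortably" if need + margin_gb <= vram_gb else "Not supported"

if __name__ == "__main__":
    print(verdict(70, "Q8"))  # Not supported   (70 GB exceeds 48 GB)
    print(verdict(70, "Q4"))  # Fits comfortably (35 GB)
    print(verdict(32, "Q8"))  # Fits comfortably (32 GB)
```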
Note: Performance estimates are calculated rather than measured, so real-world results may vary. Methodology · Submit real data
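One common way to produce estimates like these is to treat single-stream decoding as memory-bandwidth bound: each generated token streams the quantized weights once, so tokens/sec is roughly memory bandwidth divided by model size, scaled by an efficiency factor. The sketch below uses the A6000's published ~768 GB/s GDDR6 bandwidth; the efficiency constant is an illustrative assumption, not the formula behind this table, and very small models deviate because they are overhead-bound rather than bandwidth-bound.

```python
# Rough, bandwidth-bound decode estimate for a 48 GB A6000.
# 768 GB/s is the card's published memory bandwidth; EFFICIENCY is an
# assumed fraction of peak bandwidth, chosen only for illustration.

A6000_BANDWIDTH_GBPS = 768.0
EFFICIENCY = 0.3  # assumed fraction of peak bandwidth achieved in practice

def rough_tokens_per_sec(model_size_gb: float) -> float:
    """Estimate decode speed as one full pass over the weights per token."""
    return A6000_BANDWIDTH_GBPS * EFFICIENCY / model_size_gb

if __name__ == "__main__":
    # A ~4 GB Q4 7B model comes out near 58 tok/s, the same ballpark as
    # the table's Q4 7B rows; sub-1 GB models fall short of this model
    # because per-token overhead dominates there.
    print(f"{rough_tokens_per_sec(4.0):.1f} tok/s")
```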
Data-backed answers pulled from community benchmarks, manufacturer specs, and live pricing.
Operators running dual RTX A6000/RTX 8000 cards in oobabooga report roughly 6–7 tokens/sec on 70B IQ4 Miqu workloads, which is adequate for shared inference queues.
Source: Reddit – /r/LocalLLaMA (lnv0ww3)
Enthusiasts caution that consumer boards seldom provide x16/x16 for two A6000s; dropping to x8/x4 starves llama.cpp workloads and erodes throughput.
Source: Reddit – /r/LocalLLaMA (mqpg0wp)
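One way to confirm that a dual-card board is actually running both slots at full width is to query the current versus maximum PCIe link width per GPU. The sketch below uses nvidia-smi's standard query fields; exact output formatting varies by driver version.

```python
# Report current vs. maximum PCIe link width for each GPU via nvidia-smi.
# Useful for spotting the x8/x4 situation described above on consumer boards.
import subprocess

def pcie_link_report() -> str:
    cmd = [
        "nvidia-smi",
        "--query-gpu=index,name,pcie.link.width.current,pcie.link.width.max",
        "--format=csv,noheader",
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    # Example output (one line per GPU):
    #   0, NVIDIA RTX A6000, 16, 16
    #   1, NVIDIA RTX A6000, 8, 16   <- second slot negotiated down to x8
    print(pcie_link_report())
```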
Even 2020-era RTX A6000 cards still list near $5,000, and the community expects scalpers to follow new workstation launches—showing how demand stays high.
Source: Reddit – /r/LocalLLaMA (movlqi2)
Some builders consider 48 GB 4090s, which keep full VRAM for inference but drop to 24 GB for PCIe peer-to-peer training—making the trade-off workload dependent.
Source: Reddit – /r/LocalLLaMA (mqoerg0)
RTX A6000 ships with 48 GB of GDDR6 ECC memory and a 300 W TDP. On 3 Nov 2025, pricing sat at $4,699 (Newegg, in stock), $4,899 (Amazon), and $4,899 (Best Buy, out of stock).
Explore how RTX 4060 Ti 16GB stacks up for local inference workloads.
Explore how RX 6800 XT stacks up for local inference workloads.
Explore how RTX 3080 stacks up for local inference workloads.