Quick Answer: The Apple M4 Max offers 128GB of unified memory usable as VRAM, with an MSRP of $3,999 (street prices vary). It delivers an estimated 96 tokens/sec on meta-llama/Llama-3.2-3B at Q4 quantization and typically draws about 45W under load.

Apple M4 Max

By Apple · Released November 2024 · MSRP $3,999.00

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your target tokens/sec, and watch prices to catch the best deal. The sketch below shows a minimal way to get started.
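To make that concrete, here is a minimal sketch using llama-cpp-python with Metal offload, which is one common way to run quantized models on Apple silicon. The package choice and model filename are my assumptions, not something this page specifies; substitute whatever Q4 GGUF you actually download.

from llama_cpp import Llama

# Load a 4-bit GGUF; the filename here is a hypothetical placeholder.
llm = Llama(
    model_path="Llama-3.2-3B-Instruct-Q4_K_M.gguf",
    n_gpu_layers=-1,  # offload all layers to the GPU via Metal
    n_ctx=4096,       # context window; 128GB leaves plenty of headroom
)

out = llm("Explain unified memory in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])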

Specs snapshot

Key hardware metrics for AI workloads:

  • VRAM: 128GB (unified memory)
  • GPU cores: 40
  • TDP: 45W
  • Architecture: Apple Silicon M4
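The VRAM figures in the benchmark and compatibility tables below roughly track weight size at the given bit-width plus runtime overhead. As a back-of-envelope check (my own approximation, not the site's exact formula):

def estimate_vram_gb(params_billions, bits_per_weight, overhead=1.2):
    # Weights: params * (bits / 8) bytes, so billions of params gives GB;
    # add ~20% for KV cache and activations.
    return params_billions * bits_per_weight / 8 * overhead

for bits, label in [(4, "Q4"), (8, "Q8"), (16, "FP16")]:
    print(f"7B model at {label}: ~{estimate_vram_gb(7, bits):.1f} GB")
# Prints roughly 4.2, 8.4, and 16.8 GB, in line with the 4/7/15GB
# figures the compatibility table lists for 7B-class models.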


💡 Not ready to buy? Try cloud GPUs first

Benchmark your target models on rented hardware before investing; note that these providers rent datacenter GPUs rather than Apple silicon, so treat the results as a rough proxy for M4 Max performance. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade

AI benchmarks

All rows are auto-generated Q4 estimates, not measured runs.

Model | Quant | Tokens/sec (est.) | VRAM used
meta-llama/Llama-3.2-3B | Q4 | 96.22 | 2GB
facebook/sam3 | Q4 | 95.79 | 1GB
google-bert/bert-base-uncased | Q4 | 93.60 | 1GB
WeiboAI/VibeThinker-1.5B | Q4 | 93.07 | 1GB
google/gemma-2b | Q4 | 92.62 | 1GB
unsloth/Llama-3.2-1B-Instruct | Q4 | 91.57 | 1GB
allenai/OLMo-2-0425-1B | Q4 | 91.07 | 1GB
google-t5/t5-3b | Q4 | 90.45 | 2GB
inference-net/Schematron-3B | Q4 | 90.36 | 2GB
google/embeddinggemma-300m | Q4 | 90.08 | 1GB
google/gemma-2-2b-it | Q4 | 88.95 | 1GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 88.80 | 2GB
apple/OpenELM-1_1B-Instruct | Q4 | 88.43 | 1GB
ibm-granite/granite-3.3-2b-instruct | Q4 | 87.40 | 1GB
Qwen/Qwen2.5-3B-Instruct | Q4 | 86.68 | 2GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | 86.09 | 1GB
ibm-research/PowerMoE-3b | Q4 | 85.97 | 2GB
unsloth/Llama-3.2-3B-Instruct | Q4 | 84.68 | 2GB
bigcode/starcoder2-3b | Q4 | 84.29 | 2GB
unsloth/gemma-3-1b-it | Q4 | 83.24 | 1GB
deepseek-ai/DeepSeek-OCR | Q4 | 83.16 | 2GB
meta-llama/Llama-3.2-1B | Q4 | 83.03 | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 82.98 | 1GB
Qwen/Qwen2.5-3B | Q4 | 82.58 | 2GB
LiquidAI/LFM2-1.2B | Q4 | 81.79 | 1GB
nari-labs/Dia2-2B | Q4 | 81.37 | 2GB
google/gemma-3-1b-it | Q4 | 80.72 | 1GB
microsoft/DialoGPT-small | Q4 | 80.47 | 4GB
GSAI-ML/LLaDA-8B-Base | Q4 | 80.43 | 4GB
deepseek-ai/DeepSeek-V3.1 | Q4 | 80.30 | 4GB
deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 80.24 | 2GB
meta-llama/Meta-Llama-3-8B | Q4 | 80.09 | 4GB
Qwen/Qwen3-4B-Thinking-2507-FP8 | Q4 | 79.87 | 2GB
Qwen/Qwen2-7B-Instruct | Q4 | 79.79 | 4GB
meta-llama/Llama-Guard-3-1B | Q4 | 79.71 | 1GB
Qwen/Qwen2-0.5B-Instruct | Q4 | 79.69 | 3GB
ibm-granite/granite-docling-258M | Q4 | 79.67 | 4GB
meta-llama/Llama-3.2-3B-Instruct | Q4 | 79.46 | 2GB
deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | Q4 | 79.30 | 3GB
tencent/HunyuanOCR | Q4 | 79.24 | 1GB
Qwen/Qwen3-Embedding-0.6B | Q4 | 79.13 | 3GB
Qwen/Qwen3-8B-Base | Q4 | 79.02 | 4GB
meta-llama/Llama-2-7b-hf | Q4 | 78.99 | 4GB
Qwen/Qwen3-Embedding-4B | Q4 | 78.86 | 2GB
MiniMaxAI/MiniMax-M2 | Q4 | 78.74 | 4GB
Qwen/Qwen2.5-7B | Q4 | 78.70 | 4GB
Qwen/Qwen2.5-0.5B-Instruct | Q4 | 78.68 | 3GB
kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | Q4 | 78.43 | 2GB
mistralai/Mistral-7B-v0.1 | Q4 | 78.39 | 4GB
Tongyi-MAI/Z-Image-Turbo | Q4 | 78.37 | 4GB

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
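If you want to replace an estimate with a measured number, the simplest check is to time a fixed generation and divide completion tokens by wall-clock seconds. A sketch reusing the hypothetical llama-cpp-python setup from above; note it blends prompt processing with decode, so it slightly understates pure decode speed.

import time
from llama_cpp import Llama

llm = Llama(model_path="Llama-3.2-3B-Instruct-Q4_K_M.gguf",  # hypothetical file
            n_gpu_layers=-1, n_ctx=2048, verbose=False)

start = time.perf_counter()
out = llm("Write a short paragraph about apples.", max_tokens=128)
elapsed = time.perf_counter() - start

# The response follows the OpenAI-style schema, including token usage.
tokens = out["usage"]["completion_tokens"]
print(f"{tokens / elapsed:.2f} tok/s over {tokens} tokens")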

Model compatibility

All speeds are estimates; each verdict compares required VRAM against the M4 Max's 128GB.

Model | Quant | Verdict | Est. speed | VRAM needed (of 128GB)
Qwen/Qwen2.5-0.5B | Q8 | Fits comfortably | 55.26 tok/s | 5GB
deepseek-ai/deepseek-coder-1.3b-instruct | Q8 | Fits comfortably | 57.57 tok/s | 3GB
deepseek-ai/deepseek-coder-1.3b-instruct | FP16 | Fits comfortably | 35.28 tok/s | 6GB
microsoft/phi-4 | Q4 | Fits comfortably | 74.68 tok/s | 4GB
microsoft/phi-4 | Q8 | Fits comfortably | 53.84 tok/s | 7GB
meta-llama/Llama-3.1-8B | Q4 | Fits comfortably | 71.58 tok/s | 4GB
LiquidAI/LFM2-1.2B | FP16 | Fits comfortably | 33.45 tok/s | 4GB
mistralai/Mistral-7B-Instruct-v0.1 | Q4 | Fits comfortably | 76.39 tok/s | 4GB
Qwen/Qwen3-8B-FP8 | FP16 | Fits comfortably | 29.00 tok/s | 17GB
microsoft/DialoGPT-small | Q4 | Fits comfortably | 80.47 tok/s | 4GB
microsoft/DialoGPT-small | Q8 | Fits comfortably | 53.01 tok/s | 7GB
microsoft/DialoGPT-small | FP16 | Fits comfortably | 29.39 tok/s | 15GB
Qwen/Qwen3-30B-A3B | Q8 | Fits comfortably | 28.07 tok/s | 31GB
Qwen/Qwen3-30B-A3B | FP16 | Fits comfortably | 15.38 tok/s | 61GB
unsloth/mistral-7b-v0.3-bnb-4bit | Q4 | Fits comfortably | 73.71 tok/s | 4GB
swiss-ai/Apertus-8B-Instruct-2509 | Q8 | Fits comfortably | 49.10 tok/s | 9GB
microsoft/Phi-3.5-mini-instruct | FP16 | Fits comfortably | 26.21 tok/s | 15GB
unsloth/DeepSeek-R1-Distill-Qwen-32B-bnb-4bit | Q4 | Fits comfortably | 26.30 tok/s | 16GB
ibm-research/PowerMoE-3b | Q4 | Fits comfortably | 85.97 tok/s | 2GB
ibm-research/PowerMoE-3b | Q8 | Fits comfortably | 65.06 tok/s | 3GB
ibm-research/PowerMoE-3b | FP16 | Fits comfortably | 31.89 tok/s | 6GB
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | Q4 | Fits comfortably | 73.02 tok/s | 2GB
meta-llama/Llama-Guard-3-8B | Q4 | Fits comfortably | 66.68 tok/s | 4GB
meta-llama/Llama-Guard-3-8B | Q8 | Fits comfortably | 51.02 tok/s | 9GB
deepseek-ai/DeepSeek-V3-0324 | FP16 | Fits comfortably | 27.61 tok/s | 15GB
huggyllama/llama-7b | Q4 | Fits comfortably | 68.90 tok/s | 4GB
huggyllama/llama-7b | Q8 | Fits comfortably | 52.00 tok/s | 7GB
huggyllama/llama-7b | FP16 | Fits comfortably | 25.72 tok/s | 15GB
Qwen/Qwen2.5-Coder-7B-Instruct | Q8 | Fits comfortably | 48.54 tok/s | 7GB
Qwen/Qwen3-4B-Thinking-2507 | Q8 | Fits comfortably | 48.45 tok/s | 4GB
Qwen/Qwen3-4B-Thinking-2507 | FP16 | Fits comfortably | 26.29 tok/s | 9GB
HuggingFaceH4/zephyr-7b-beta | Q4 | Fits comfortably | 70.43 tok/s | 4GB
HuggingFaceH4/zephyr-7b-beta | Q8 | Fits comfortably | 50.62 tok/s | 7GB
Qwen/Qwen3-235B-A22B | FP16 | Not supported | 3.28 tok/s | 460GB
trl-internal-testing/tiny-LlamaForCausalLM-3.2 | Q4 | Fits comfortably | 68.17 tok/s | 4GB
skt/kogpt2-base-v2 | Q8 | Fits comfortably | 51.87 tok/s | 7GB
skt/kogpt2-base-v2 | FP16 | Fits comfortably | 26.74 tok/s | 15GB
ibm-granite/granite-docling-258M | Q4 | Fits comfortably | 79.67 tok/s | 4GB
bigcode/starcoder2-3b | FP16 | Fits comfortably | 36.05 tok/s | 6GB
Qwen/Qwen2.5-3B | FP16 | Fits comfortably | 36.52 tok/s | 6GB
nvidia/NVIDIA-Nemotron-Nano-9B-v2 | Q4 | Fits comfortably | 56.11 tok/s | 5GB
Alibaba-NLP/gte-Qwen2-1.5B-instruct | Q4 | Fits comfortably | 72.49 tok/s | 3GB
Qwen/Qwen3-Next-80B-A3B-Instruct-FP8 | Q8 | Fits comfortably | 10.44 tok/s | 78GB
Qwen/Qwen3-Next-80B-A3B-Instruct-FP8 | FP16 | Not supported | 5.12 tok/s | 156GB
Qwen/Qwen2.5-Coder-32B-Instruct | Q4 | Fits comfortably | 24.68 tok/s | 17GB
Qwen/Qwen2.5-32B-Instruct | FP16 | Fits comfortably | 10.32 tok/s | 66GB
mistralai/Mistral-7B-v0.1 | Q4 | Fits comfortably | 78.39 tok/s | 4GB
mistralai/Mistral-7B-v0.1 | Q8 | Fits comfortably | 52.72 tok/s | 7GB
LiquidAI/LFM2-1.2B | Q4 | Fits comfortably | 81.79 tok/s | 1GB
mlx-community/gpt-oss-20b-MXFP4-Q8 | FP16 | Fits comfortably | 13.97 tok/s | 41GB

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
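The verdicts above reduce to comparing required memory against the 128GB available on the M4 Max. A sketch of that check; the thresholds are illustrative, since the site's exact cutoffs aren't published here.

AVAILABLE_GB = 128  # M4 Max unified memory

def verdict(required_gb):
    if required_gb > AVAILABLE_GB:
        return "Not supported"
    if required_gb > 0.9 * AVAILABLE_GB:
        return "Tight fit"  # illustrative middle band, not from the table
    return "Fits comfortably"

print(verdict(61))   # Qwen3-30B-A3B at FP16 -> Fits comfortably
print(verdict(460))  # Qwen3-235B-A22B at FP16 -> Not supported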

Alternative GPUs

Explore how these cards stack up for local inference workloads:

  • RTX 5070: 12GB
  • RTX 4060 Ti 16GB: 16GB
  • RX 6800 XT: 16GB
  • RTX 4070 Super: 12GB
  • RTX 3080: 10GB