
Intel Arc A770 16GB

By Intel · Released October 2022 · MSRP $349.00 · In Stock

Quick Answer: The Intel Arc A770 16GB offers 16GB of VRAM and currently starts around $467.08. It delivers an estimated 113 tokens/sec on deepseek-ai/DeepSeek-OCR at Q4 and typically draws 225W under load.

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.
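To see what "the right quantization" looks like in practice: on Arc cards, Q4 GGUF models are commonly run through llama.cpp builds with the SYCL or Vulkan backend. Below is a minimal sketch using the llama-cpp-python bindings, assuming such a build and a hypothetical local Q4 file (the path is a placeholder, not a file referenced on this page):

    # Sketch: Q4 inference with llama-cpp-python on an Arc A770.
    # Assumes the bindings were built with a backend that supports Arc
    # (SYCL or Vulkan); the model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama-3.2-3b-instruct-q4_k_m.gguf",
        n_gpu_layers=-1,  # offload all layers to the 16GB card
        n_ctx=4096,
    )
    out = llm("Explain Q4 quantization in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])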

Buy on Amazon - $467.08 · View Benchmarks
Specs snapshot

Key hardware metrics for AI workloads.

  • VRAM: 16GB
  • Cores: 4,096
  • TDP: 225W
  • Architecture: Alchemist (Xe HPG)
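For intuition on why a 16GB, 225W card lands near 113 tok/s on small Q4 models: single-stream decode is usually memory-bandwidth-bound, since each generated token streams the full weight set out of VRAM. A back-of-the-envelope sketch follows; the ~560 GB/s A770 bandwidth figure and the 0.4 efficiency factor are our assumptions, not measurements from this page:

    # Rough ceiling for decode speed on a bandwidth-bound LLM.
    # 560 GB/s (A770 GDDR6) and 0.4 efficiency are assumed values.
    def est_tokens_per_sec(model_size_gb: float,
                           bandwidth_gbs: float = 560.0,
                           efficiency: float = 0.4) -> float:
        # Each decoded token reads roughly all weights from VRAM once.
        return bandwidth_gbs / model_size_gb * efficiency

    print(round(est_tokens_per_sec(2.0)))  # ~112 tok/s for a 2GB Q4 model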

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

Amazon · In Stock · $467.08
Buy on Amazon


💡 Not ready to buy? Try cloud GPUs first

Test Intel Arc A770 16GB performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade

AI benchmarks

All rows are auto-generated Q4 benchmarks; speeds and VRAM figures are estimates.

Model · Quantization · Tokens/sec · VRAM used
deepseek-ai/DeepSeek-OCR · Q4 · 112.76 tok/s · 2GB
meta-llama/Llama-3.2-3B-Instruct · Q4 · 110.87 tok/s · 2GB
google-bert/bert-base-uncased · Q4 · 110.45 tok/s · 1GB
nari-labs/Dia2-2B · Q4 · 109.18 tok/s · 2GB
apple/OpenELM-1_1B-Instruct · Q4 · 108.47 tok/s · 1GB
meta-llama/Llama-Guard-3-1B · Q4 · 108.22 tok/s · 1GB
deepseek-ai/deepseek-coder-1.3b-instruct · Q4 · 108.15 tok/s · 2GB
ibm-research/PowerMoE-3b · Q4 · 107.34 tok/s · 2GB
LiquidAI/LFM2-1.2B · Q4 · 107.07 tok/s · 1GB
unsloth/Llama-3.2-1B-Instruct · Q4 · 105.32 tok/s · 1GB
meta-llama/Llama-3.2-3B · Q4 · 103.85 tok/s · 2GB
WeiboAI/VibeThinker-1.5B · Q4 · 103.23 tok/s · 1GB
facebook/sam3 · Q4 · 102.72 tok/s · 1GB
allenai/OLMo-2-0425-1B · Q4 · 102.34 tok/s · 1GB
google/gemma-3-1b-it · Q4 · 101.01 tok/s · 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 · Q4 · 100.43 tok/s · 1GB
Qwen/Qwen2.5-3B-Instruct · Q4 · 98.87 tok/s · 2GB
meta-llama/Llama-3.2-1B-Instruct · Q4 · 97.39 tok/s · 1GB
tencent/HunyuanOCR · Q4 · 97.27 tok/s · 1GB
unsloth/gemma-3-1b-it · Q4 · 96.88 tok/s · 1GB
unsloth/Llama-3.2-3B-Instruct · Q4 · 96.22 tok/s · 2GB
google/gemma-2b · Q4 · 96.17 tok/s · 1GB
bigcode/starcoder2-3b · Q4 · 95.54 tok/s · 2GB
inference-net/Schematron-3B · Q4 · 95.52 tok/s · 2GB
google/embeddinggemma-300m · Q4 · 95.29 tok/s · 1GB
HuggingFaceTB/SmolLM2-135M · Q4 · 94.86 tok/s · 4GB
openai-community/gpt2 · Q4 · 94.81 tok/s · 4GB
Qwen/Qwen3-1.7B · Q4 · 94.73 tok/s · 4GB
google/gemma-2-2b-it · Q4 · 94.47 tok/s · 1GB
Qwen/Qwen2.5-Math-1.5B · Q4 · 94.45 tok/s · 3GB
Qwen/Qwen2-7B-Instruct · Q4 · 94.40 tok/s · 4GB
deepseek-ai/DeepSeek-R1-Distill-Llama-8B · Q4 · 94.25 tok/s · 4GB
microsoft/VibeVoice-1.5B · Q4 · 94.24 tok/s · 3GB
Qwen/Qwen2.5-Coder-1.5B · Q4 · 94.21 tok/s · 3GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 · Q4 · 94.16 tok/s · 2GB
Qwen/Qwen2.5-1.5B · Q4 · 94.14 tok/s · 3GB
ibm-granite/granite-3.3-2b-instruct · Q4 · 94.04 tok/s · 1GB
microsoft/DialoGPT-medium · Q4 · 93.68 tok/s · 4GB
meta-llama/Llama-2-7b-chat-hf · Q4 · 93.64 tok/s · 4GB
Qwen/Qwen3-1.7B-Base · Q4 · 93.55 tok/s · 4GB
lmsys/vicuna-7b-v1.5 · Q4 · 93.54 tok/s · 4GB
ibm-granite/granite-3.3-8b-instruct · Q4 · 93.52 tok/s · 4GB
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-8bit · Q4 · 93.46 tok/s · 2GB
Qwen/Qwen2.5-3B · Q4 · 93.39 tok/s · 2GB
meta-llama/Llama-3.2-1B · Q4 · 93.32 tok/s · 1GB
google-t5/t5-3b · Q4 · 93.19 tok/s · 2GB
MiniMaxAI/MiniMax-M2 · Q4 · 93.03 tok/s · 4GB
microsoft/phi-4 · Q4 · 93.03 tok/s · 4GB
HuggingFaceTB/SmolLM-135M · Q4 · 92.97 tok/s · 4GB
meta-llama/Llama-3.1-8B · Q4 · 92.83 tok/s · 4GB

Note: performance figures are calculated estimates; real-world results may vary. Methodology · Submit real data
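
If you want to replace an estimate with a measured number, timing one generation call and dividing completion tokens by wall-clock time is enough for a first datapoint. A minimal sketch, again assuming llama-cpp-python and a placeholder model path:

    # Measure real decode throughput to compare against the estimates above.
    import time
    from llama_cpp import Llama

    llm = Llama(model_path="model-q4_k_m.gguf", n_gpu_layers=-1, n_ctx=2048)

    start = time.perf_counter()
    out = llm("Write a haiku about graphics cards.", max_tokens=128)
    elapsed = time.perf_counter() - start

    generated = out["usage"]["completion_tokens"]
    print(f"{generated / elapsed:.1f} tok/s")

Note that this times prompt processing together with decode; with a short prompt the difference is small.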

Model compatibility

All speeds are auto-generated estimates; "VRAM needed" is compared against this card's 16GB.

Model · Quantization · Verdict · Estimated speed · VRAM needed
XiaomiMiMo/MiMo-V2-Flash · Q4 · Not supported · 10.19 tok/s · 174GB
XiaomiMiMo/MiMo-V2-Flash · Q8 · Not supported · 7.39 tok/s · 347GB
XiaomiMiMo/MiMo-V2-Flash · FP16 · Not supported · 3.57 tok/s · 693GB
zai-org/GLM-4.7 · Q4 · Not supported · 10.01 tok/s · 201GB
zai-org/GLM-4.7 · Q8 · Not supported · 7.96 tok/s · 401GB
zai-org/GLM-4.7 · FP16 · Not supported · 4.29 tok/s · 801GB
Qwen/Qwen3-0.6B · Q4 · Fits comfortably · 81.68 tok/s · 3GB
openai-community/gpt2 · FP16 · Fits (tight) · 35.32 tok/s · 15GB
Qwen/Qwen2.5-7B-Instruct · Q4 · Fits comfortably · 83.32 tok/s · 4GB
Qwen/Qwen2.5-7B-Instruct · Q8 · Fits comfortably · 61.90 tok/s · 7GB
dphn/dolphin-2.9.1-yi-1.5-34b · Q8 · Not supported · 19.65 tok/s · 35GB
dphn/dolphin-2.9.1-yi-1.5-34b · FP16 · Not supported · 11.21 tok/s · 70GB
Qwen/Qwen2.5-1.5B-Instruct · Q4 · Fits comfortably · 79.95 tok/s · 3GB
facebook/opt-125m · Q8 · Fits comfortably · 65.42 tok/s · 7GB
facebook/opt-125m · FP16 · Fits (tight) · 30.42 tok/s · 15GB
facebook/opt-125m · Q4 · Fits comfortably · 87.95 tok/s · 4GB
Qwen/Qwen3-4B-Instruct-2507 · Q8 · Fits comfortably · 56.15 tok/s · 4GB
Qwen/Qwen3-4B-Instruct-2507 · FP16 · Fits comfortably · 35.54 tok/s · 9GB
meta-llama/Llama-3.2-1B-Instruct · Q4 · Fits comfortably · 97.39 tok/s · 1GB
meta-llama/Llama-3.2-1B-Instruct · Q8 · Fits comfortably · 77.20 tok/s · 1GB
openai/gpt-oss-120b · Q4 · Not supported · 18.62 tok/s · 59GB
openai/gpt-oss-120b · Q8 · Not supported · 10.91 tok/s · 117GB
openai/gpt-oss-120b · FP16 · Not supported · 5.98 tok/s · 235GB
Qwen/Qwen2.5-3B-Instruct · Q4 · Fits comfortably · 98.87 tok/s · 2GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 · Q8 · Fits comfortably · 65.71 tok/s · 3GB
mistralai/Mistral-7B-Instruct-v0.2 · Q8 · Fits comfortably · 59.95 tok/s · 7GB
mistralai/Mistral-7B-Instruct-v0.2 · FP16 · Fits (tight) · 35.43 tok/s · 15GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 · FP16 · Fits comfortably · 40.83 tok/s · 6GB
mistralai/Mistral-7B-Instruct-v0.2 · Q4 · Fits comfortably · 80.58 tok/s · 4GB
RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic · Q4 · Not supported · 28.56 tok/s · 34GB
RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic · Q8 · Not supported · 23.08 tok/s · 68GB
RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic · FP16 · Not supported · 12.27 tok/s · 137GB
meta-llama/Llama-3.2-3B-Instruct · Q4 · Fits comfortably · 110.87 tok/s · 2GB
microsoft/Phi-3-mini-4k-instruct · Q4 · Fits comfortably · 82.02 tok/s · 4GB
microsoft/Phi-3-mini-4k-instruct · FP16 · Fits (tight) · 35.28 tok/s · 15GB
openai-community/gpt2-large · Q4 · Fits comfortably · 87.11 tok/s · 4GB
microsoft/Phi-3-mini-4k-instruct · Q8 · Fits comfortably · 58.45 tok/s · 7GB
Qwen/Qwen3-1.7B · Q8 · Fits comfortably · 58.21 tok/s · 7GB
Qwen/Qwen3-1.7B · FP16 · Fits (tight) · 31.34 tok/s · 15GB
Qwen/Qwen3-30B-A3B-Instruct-2507 · FP16 · Not supported · 18.98 tok/s · 61GB
google-t5/t5-3b · Q4 · Fits comfortably · 93.19 tok/s · 2GB
google-t5/t5-3b · Q8 · Fits comfortably · 72.73 tok/s · 3GB
google-t5/t5-3b · FP16 · Fits comfortably · 37.60 tok/s · 6GB
deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B · FP16 · Fits comfortably · 33.07 tok/s · 11GB
mlx-community/gpt-oss-20b-MXFP4-Q8 · Q4 · Fits comfortably · 45.20 tok/s · 10GB
mlx-community/gpt-oss-20b-MXFP4-Q8 · Q8 · Not supported · 35.15 tok/s · 20GB
kaitchup/Phi-3-mini-4k-instruct-gptq-4bit · Q8 · Fits comfortably · 65.81 tok/s · 4GB
kaitchup/Phi-3-mini-4k-instruct-gptq-4bit · FP16 · Fits comfortably · 32.65 tok/s · 9GB
Qwen/Qwen2.5-1.5B · Q4 · Fits comfortably · 94.14 tok/s · 3GB
WeiboAI/VibeThinker-1.5B · FP16 · Fits comfortably · 42.45 tok/s · 4GB

Note: performance figures are calculated estimates; real-world results may vary. Methodology · Submit real data
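
The verdicts above reduce to bytes-per-parameter arithmetic: roughly 0.5 bytes per weight at Q4, 1 at Q8, and 2 at FP16, plus working memory. Here is a sketch of that check; the 1.2x overhead factor for KV cache and activations is our assumption, not the site's published formula:

    # Rough VRAM-fit check mirroring the compatibility table.
    BYTES_PER_PARAM = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}

    def fit_verdict(params_billions: float, quant: str, vram_gb: int = 16) -> str:
        # 1.2x covers KV cache and activations (assumed overhead factor).
        need_gb = params_billions * BYTES_PER_PARAM[quant] * 1.2
        if need_gb > vram_gb:
            return f"Not supported: {need_gb:.0f}GB required, {vram_gb}GB available"
        return f"Fits: {need_gb:.0f}GB of {vram_gb}GB"

    print(fit_verdict(7, "Q4"))   # ~4GB -> fits, in line with Mistral-7B Q4 above
    print(fit_verdict(34, "Q8"))  # ~41GB -> not supported, cf. the 34B rows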

Alternative GPUs

Explore how each of these cards stacks up for local inference workloads:

  • RTX 5070 (12GB VRAM)
  • RTX 4060 Ti 16GB (16GB VRAM)
  • RX 6800 XT (16GB VRAM)
  • RTX 4070 Super (12GB VRAM)
  • RTX 3080 (10GB VRAM)