
Quick Answer: The Intel Arc Pro A60 offers 12GB of VRAM and launched at a $599 MSRP. It delivers approximately 80 tokens/sec on ibm-research/PowerMoE-3b (Q4, estimated) and typically draws 130W under load.

Intel Arc Pro A60

By Intel · Released 2023-03 · MSRP $599.00

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and watch prices to catch a good deal. A quick way to estimate whether a model fits in its 12GB of VRAM is sketched below.
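
You can approximate a model's VRAM footprint from its parameter count and quantization level before downloading anything. The sketch below uses a common rule of thumb (weights take roughly parameters × bytes per weight, plus overhead for KV cache and activations); the 1.2× overhead factor is an assumption for illustration, not this site's estimation methodology.

```python
# Rough VRAM estimator for a quantized model. A back-of-the-envelope
# sketch, not this site's estimation code: weights take roughly
# params * bytes-per-weight, and the 1.2x overhead for KV cache and
# activations is an assumed fudge factor.
BYTES_PER_WEIGHT = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}

def estimated_vram_gb(params_billions: float, quant: str,
                      overhead: float = 1.2) -> float:
    """Approximate VRAM footprint in GB for a given quantization."""
    return params_billions * BYTES_PER_WEIGHT[quant] * overhead

# Example: a 7B model fits the A60's 12GB at Q4 or Q8, but not at FP16.
for quant in ("Q4", "Q8", "FP16"):
    print(f"7B @ {quant}: {estimated_vram_gb(7.0, quant):.1f} GB")
```

Running this gives roughly 4.2GB at Q4 and 8.4GB at Q8, which lines up with the compatibility figures for 7B models further down the page.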

Specs snapshot

Key hardware metrics for AI workloads.

  • VRAM: 12GB
  • Cores: 3,584
  • TDP: 130W
  • Architecture: Alchemist (Xe HPG)
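
Before benchmarking, it is worth confirming the card is visible to your inference stack. The check below is a minimal sketch assuming a PyTorch build (2.4 or newer) with Intel XPU support; on older builds the `torch.xpu` module is absent.

```python
# Check whether PyTorch can see the Arc GPU via the Intel XPU backend.
# Assumes PyTorch >= 2.4 built with XPU support; otherwise torch.xpu
# reports no devices (or the module is missing on older builds).
import torch

if hasattr(torch, "xpu") and torch.xpu.is_available():
    for i in range(torch.xpu.device_count()):
        print(f"XPU {i}: {torch.xpu.get_device_name(i)}")
else:
    print("No Intel XPU detected; check GPU drivers and the PyTorch build.")
```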

Where to Buy

No direct purchase links are available yet. Try the Amazon search results to find this GPU; Amazon typically offers fast shipping and reliable customer service.

💡 Not ready to buy? Try cloud GPUs first

Test comparable GPU performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade

AI benchmarks

All figures below are auto-generated estimates for Q4 quantization.

Model | Quantization | Tokens/sec (estimated) | VRAM used
ibm-research/PowerMoE-3b | Q4 | 80.43 | 2GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 80.15 | 1GB
meta-llama/Llama-3.2-1B | Q4 | 79.80 | 1GB
ibm-granite/granite-3.3-2b-instruct | Q4 | 79.60 | 1GB
facebook/sam3 | Q4 | 78.80 | 1GB
tencent/HunyuanOCR | Q4 | 78.68 | 1GB
unsloth/Llama-3.2-3B-Instruct | Q4 | 78.45 | 2GB
deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 78.04 | 2GB
google-bert/bert-base-uncased | Q4 | 77.18 | 1GB
Qwen/Qwen2.5-3B-Instruct | Q4 | 77.16 | 2GB
Qwen/Qwen2.5-3B | Q4 | 77.11 | 2GB
allenai/OLMo-2-0425-1B | Q4 | 77.06 | 1GB
deepseek-ai/DeepSeek-OCR | Q4 | 75.66 | 2GB
unsloth/gemma-3-1b-it | Q4 | 75.47 | 1GB
google/gemma-3-1b-it | Q4 | 74.70 | 1GB
unsloth/Llama-3.2-1B-Instruct | Q4 | 74.02 | 1GB
meta-llama/Llama-Guard-3-1B | Q4 | 73.62 | 1GB
bigcode/starcoder2-3b | Q4 | 73.43 | 2GB
WeiboAI/VibeThinker-1.5B | Q4 | 73.42 | 1GB
meta-llama/Llama-3.2-3B-Instruct | Q4 | 73.28 | 2GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 73.22 | 2GB
apple/OpenELM-1_1B-Instruct | Q4 | 73.15 | 1GB
google/gemma-2b | Q4 | 72.04 | 1GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | 71.79 | 1GB
google-t5/t5-3b | Q4 | 71.54 | 2GB
google/embeddinggemma-300m | Q4 | 69.60 | 1GB
nari-labs/Dia2-2B | Q4 | 69.39 | 2GB
meta-llama/Llama-3.2-3B | Q4 | 68.11 | 2GB
EleutherAI/gpt-neo-125m | Q4 | 67.39 | 4GB
black-forest-labs/FLUX.2-dev | Q4 | 67.33 | 4GB
google/gemma-2-2b-it | Q4 | 67.30 | 1GB
black-forest-labs/FLUX.1-dev | Q4 | 67.29 | 4GB
Qwen/Qwen3-8B-Base | Q4 | 67.25 | 4GB
Qwen/Qwen3-Embedding-4B | Q4 | 67.20 | 2GB
GSAI-ML/LLaDA-8B-Instruct | Q4 | 67.15 | 4GB
inference-net/Schematron-3B | Q4 | 67.09 | 2GB
rinna/japanese-gpt-neox-small | Q4 | 67.08 | 4GB
microsoft/DialoGPT-small | Q4 | 67.00 | 4GB
LiquidAI/LFM2-1.2B | Q4 | 66.96 | 1GB
skt/kogpt2-base-v2 | Q4 | 66.78 | 4GB
unsloth/Meta-Llama-3.1-8B-Instruct | Q4 | 66.74 | 4GB
Qwen/Qwen3-Reranker-0.6B | Q4 | 66.71 | 3GB
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | Q4 | 66.51 | 2GB
deepseek-ai/DeepSeek-R1-Distill-Llama-8B | Q4 | 66.42 | 4GB
ibm-granite/granite-3.3-8b-instruct | Q4 | 66.34 | 4GB
lmsys/vicuna-7b-v1.5 | Q4 | 66.26 | 4GB
openai-community/gpt2-medium | Q4 | 65.97 | 4GB
Qwen/Qwen3-4B-Base | Q4 | 65.86 | 2GB
lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-8bit | Q4 | 65.81 | 4GB
Qwen/Qwen2.5-1.5B | Q4 | 65.68 | 3GB

Note: Performance figures are calculated estimates, not measured results; real-world numbers may vary. Methodology · Submit real data. A way to measure real throughput yourself is sketched below.
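
If you want real numbers instead of estimates, decode throughput is easy to measure. The sketch below uses llama-cpp-python with a Q4 GGUF file; the model filename is a placeholder, and GPU offload on Arc assumes a llama.cpp build with the SYCL or Vulkan backend.

```python
# Measure actual tokens/sec for a Q4 model with llama-cpp-python.
# The GGUF path is a hypothetical local file; Arc GPU offload assumes
# llama.cpp was built with the SYCL or Vulkan backend.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-1B-Instruct-Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=2048,
)

start = time.time()
out = llm("Explain quantization in one sentence.", max_tokens=128)
elapsed = time.time() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```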

Model compatibility

All speeds below are auto-generated estimates.

Model | Quantization | Verdict | Estimated speed | VRAM needed (12GB available)
dphn/dolphin-2.9.1-yi-1.5-34b | Q8 | Not supported | 14.35 tok/s | 35GB
dphn/dolphin-2.9.1-yi-1.5-34b | FP16 | Not supported | 7.86 tok/s | 70GB
Qwen/Qwen3-Embedding-0.6B | FP16 | Not supported | 21.30 tok/s | 13GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q4 | Fits comfortably | 63.77 tok/s | 4GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | FP16 | Not supported | 22.16 tok/s | 15GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | Fits comfortably | 71.79 tok/s | 1GB
meta-llama/Llama-3.2-1B-Instruct | Q8 | Fits comfortably | 49.11 tok/s | 1GB
meta-llama/Llama-3.2-1B-Instruct | FP16 | Fits comfortably | 26.63 tok/s | 2GB
openai/gpt-oss-120b | Q4 | Not supported | 11.82 tok/s | 59GB
bigscience/bloomz-560m | Q8 | Fits comfortably | 43.34 tok/s | 7GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q8 | Fits comfortably | 42.63 tok/s | 7GB
bigscience/bloomz-560m | FP16 | Not supported | 21.19 tok/s | 15GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q8 | Fits comfortably | 46.89 tok/s | 3GB
mistralai/Mistral-7B-Instruct-v0.2 | Q4 | Fits comfortably | 61.93 tok/s | 4GB
mistralai/Mistral-7B-Instruct-v0.2 | Q8 | Fits comfortably | 46.82 tok/s | 7GB
inference-net/Schematron-3B | Q4 | Fits comfortably | 67.09 tok/s | 2GB
meta-llama/Meta-Llama-3-8B-Instruct | FP16 | Not supported | 23.23 tok/s | 17GB
deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | Q4 | Fits comfortably | 63.33 tok/s | 3GB
deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | Q8 | Fits comfortably | 44.12 tok/s | 5GB
Qwen/Qwen3-14B | FP16 | Not supported | 17.18 tok/s | 29GB
unsloth/Meta-Llama-3.1-8B-Instruct | Q8 | Fits comfortably | 44.77 tok/s | 9GB
meta-llama/Llama-3.2-3B-Instruct | FP16 | Fits comfortably | 29.48 tok/s | 6GB
vikhyatk/moondream2 | Q4 | Fits comfortably | 62.57 tok/s | 4GB
vikhyatk/moondream2 | FP16 | Not supported | 22.09 tok/s | 15GB
petals-team/StableBeluga2 | Q4 | Fits comfortably | 56.71 tok/s | 4GB
microsoft/Phi-3-mini-4k-instruct | Q4 | Fits comfortably | 58.58 tok/s | 4GB
unsloth/gpt-oss-20b-BF16 | Q4 | Fits comfortably | 32.88 tok/s | 10GB
HuggingFaceTB/SmolLM-135M | Q4 | Fits comfortably | 56.46 tok/s | 4GB
HuggingFaceTB/SmolLM-135M | FP16 | Not supported | 25.38 tok/s | 15GB
Qwen/Qwen2.5-Math-1.5B | Q4 | Fits comfortably | 57.88 tok/s | 3GB
trl-internal-testing/tiny-random-LlamaForCausalLM | Q4 | Fits comfortably | 61.17 tok/s | 4GB
trl-internal-testing/tiny-random-LlamaForCausalLM | FP16 | Not supported | 25.44 tok/s | 15GB
openai-community/gpt2-medium | Q4 | Fits comfortably | 65.97 tok/s | 4GB
openai-community/gpt2-medium | Q8 | Fits comfortably | 39.53 tok/s | 7GB
openai-community/gpt2-medium | FP16 | Not supported | 23.93 tok/s | 15GB
Qwen/Qwen3-0.6B-Base | Q4 | Fits comfortably | 57.14 tok/s | 3GB
Qwen/Qwen3-0.6B-Base | Q8 | Fits comfortably | 41.92 tok/s | 6GB
Qwen/Qwen3-4B-Base | Q8 | Fits comfortably | 38.92 tok/s | 4GB
Qwen/Qwen3-4B-Base | FP16 | Fits comfortably | 22.44 tok/s | 9GB
microsoft/Phi-3.5-mini-instruct | Q4 | Fits comfortably | 59.90 tok/s | 4GB
google/gemma-2-2b-it | Q4 | Fits comfortably | 67.30 tok/s | 1GB
google/gemma-2-2b-it | FP16 | Fits comfortably | 26.10 tok/s | 4GB
Qwen/Qwen2-1.5B-Instruct | Q4 | Fits comfortably | 64.89 tok/s | 3GB
meta-llama/Llama-Guard-3-1B | Q4 | Fits comfortably | 73.62 tok/s | 1GB
meta-llama/Llama-Guard-3-1B | Q8 | Fits comfortably | 50.96 tok/s | 1GB
meta-llama/Llama-Guard-3-1B | FP16 | Fits comfortably | 30.64 tok/s | 2GB
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | Q8 | Fits comfortably | 46.64 tok/s | 4GB
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | FP16 | Fits comfortably | 20.97 tok/s | 9GB
RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 | Q4 | Not supported | 21.51 tok/s | 34GB
dphn/dolphin-2.9.1-yi-1.5-34b | Q4 | Not supported | 19.93 tok/s | 17GB

Note: Performance figures are calculated estimates, not measured results; real-world numbers may vary. Methodology · Submit real data. The verdict column reduces to a simple VRAM comparison, sketched below.
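
The verdicts above come down to comparing a model's required VRAM against the A60's 12GB. The sketch below illustrates that comparison; the 90% "tight fit" threshold is an assumption for illustration, not this site's published rules.

```python
# Reproduce the fit verdicts with a plain VRAM comparison.
# The 90% "tight fit" threshold is an assumed refinement; the tables
# above only distinguish "Fits comfortably" from "Not supported".
A60_VRAM_GB = 12

def fit_verdict(required_gb: float, available_gb: float = A60_VRAM_GB) -> str:
    if required_gb > available_gb:
        return "Not supported"
    if required_gb > 0.9 * available_gb:
        return "Tight fit"
    return "Fits comfortably"

print(fit_verdict(35))  # Not supported: a 34B model at Q8 needs ~35GB
print(fit_verdict(4))   # Fits comfortably: a 7B model at Q4 needs ~4GB
```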

Alternative GPUs

Explore how these cards stack up for local inference workloads:

  • RTX 5070 (12GB)
  • RTX 4060 Ti 16GB
  • RX 6800 XT (16GB)
  • RTX 4070 Super (12GB)
  • RTX 3080 (10GB)