
Quick Answer: The Intel Arc Pro A40 offers 6GB of VRAM at a $399 MSRP (check current market pricing). It delivers approximately 58 tokens/sec on meta-llama/Llama-Guard-3-1B at Q4 quantization and typically draws 50W under load.

Intel Arc Pro A40

By Intel · Released 2023-03 · MSRP $399.00

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.
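To see what pairing a model with the right quantization looks like in practice, here is a minimal sizing sketch. It assumes weight memory dominates VRAM use and adds a flat ~20% overhead for KV cache and activations; the bit widths, overhead factor, and helper names are illustrative assumptions, not this site's published methodology.

```python
# Rough VRAM sizing for a dense model at a given quantization.
# Assumption: weights dominate; real usage also depends on context
# length, KV cache size, and runtime overhead.

BITS_PER_WEIGHT = {"Q4": 4.5, "Q8": 8.5, "FP16": 16.0}  # approx., incl. quant metadata
OVERHEAD = 1.2  # ~20% headroom for KV cache / activations (assumed)

def est_vram_gb(params_billions: float, quant: str) -> float:
    """Estimate VRAM in GB for a model of the given parameter count."""
    weight_gb = params_billions * BITS_PER_WEIGHT[quant] / 8  # bits -> bytes
    return weight_gb * OVERHEAD

def best_quant_for(params_billions: float, vram_gb: float = 6.0) -> str | None:
    """Pick the highest-precision quantization that still fits the card."""
    for quant in ("FP16", "Q8", "Q4"):  # prefer higher precision first
        if est_vram_gb(params_billions, quant) <= vram_gb:
            return quant
    return None

# Example: a 3B model needs ~7.2GB at FP16 (too big for the A40's 6GB),
# but only ~3.8GB at Q8, so Q8 is the best fit.
print(best_quant_for(3.0))  # -> "Q8"
```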

Specs snapshot

Key hardware metrics for AI workloads:

  • VRAM: 6GB
  • Cores: 2,048
  • TDP: 50W
  • Architecture: Alchemist (Xe HPG)
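Before benchmarking, it helps to confirm the card is actually visible to your inference stack. The sketch below uses PyTorch's Intel XPU backend as one possible check; it assumes a recent PyTorch build with XPU support plus installed Intel GPU drivers (older builds have no torch.xpu module).

```python
# Sanity check that the Arc GPU is visible to PyTorch's Intel XPU backend
# (present in recent PyTorch releases; assumes drivers and the oneAPI
# runtime are installed).
import torch

if hasattr(torch, "xpu") and torch.xpu.is_available():
    props = torch.xpu.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")  # expect ~6 GB on the A40
else:
    print("No XPU device found; check drivers / PyTorch build.")
```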

Where to Buy

No direct purchase links are available yet. Try an Amazon search to find this GPU.

💡 Not ready to buy? Try cloud GPUs first

Test Intel Arc Pro A40 performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade

AI benchmarks

All rows are auto-generated Q4 estimates, not measured results.

| Model | Quantization | Tokens/sec (est.) | VRAM used |
| --- | --- | --- | --- |
| meta-llama/Llama-Guard-3-1B | Q4 | 58.10 | 1GB |
| inference-net/Schematron-3B | Q4 | 57.61 | 2GB |
| allenai/OLMo-2-0425-1B | Q4 | 56.36 | 1GB |
| meta-llama/Llama-3.2-1B-Instruct | Q4 | 54.97 | 1GB |
| unsloth/Llama-3.2-1B-Instruct | Q4 | 54.43 | 1GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 53.73 | 1GB |
| google/gemma-3-1b-it | Q4 | 53.70 | 1GB |
| WeiboAI/VibeThinker-1.5B | Q4 | 53.67 | 1GB |
| ibm-research/PowerMoE-3b | Q4 | 53.66 | 2GB |
| Qwen/Qwen2.5-3B | Q4 | 53.52 | 2GB |
| Qwen/Qwen2.5-3B-Instruct | Q4 | 53.29 | 2GB |
| nari-labs/Dia2-2B | Q4 | 53.03 | 2GB |
| meta-llama/Llama-3.2-1B | Q4 | 52.86 | 1GB |
| google/gemma-2b | Q4 | 52.39 | 1GB |
| google/embeddinggemma-300m | Q4 | 52.36 | 1GB |
| google-bert/bert-base-uncased | Q4 | 52.28 | 1GB |
| apple/OpenELM-1_1B-Instruct | Q4 | 51.82 | 1GB |
| google/gemma-2-2b-it | Q4 | 51.26 | 1GB |
| facebook/sam3 | Q4 | 51.19 | 1GB |
| deepseek-ai/DeepSeek-OCR | Q4 | 49.77 | 2GB |
| google-t5/t5-3b | Q4 | 49.75 | 2GB |
| unsloth/gemma-3-1b-it | Q4 | 49.43 | 1GB |
| unsloth/Llama-3.2-3B-Instruct | Q4 | 49.13 | 2GB |
| ibm-granite/granite-3.3-2b-instruct | Q4 | 48.65 | 1GB |
| LiquidAI/LFM2-1.2B | Q4 | 48.62 | 1GB |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 48.51 | 2GB |
| Qwen/Qwen3-8B-FP8 | Q4 | 48.48 | 4GB |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-8bit | Q4 | 48.48 | 4GB |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | Q4 | 48.43 | 4GB |
| Qwen/Qwen3-Embedding-4B | Q4 | 48.42 | 2GB |
| deepseek-ai/DeepSeek-R1-0528 | Q4 | 48.23 | 4GB |
| petals-team/StableBeluga2 | Q4 | 48.20 | 4GB |
| tencent/HunyuanOCR | Q4 | 48.18 | 1GB |
| distilbert/distilgpt2 | Q4 | 48.16 | 4GB |
| Qwen/Qwen3-Embedding-8B | Q4 | 48.16 | 4GB |
| bigcode/starcoder2-3b | Q4 | 48.14 | 2GB |
| openai-community/gpt2 | Q4 | 48.09 | 4GB |
| Qwen/Qwen3-0.6B | Q4 | 48.09 | 3GB |
| vikhyatk/moondream2 | Q4 | 48.05 | 4GB |
| deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 48.05 | 2GB |
| meta-llama/Llama-3.2-3B-Instruct | Q4 | 48.02 | 2GB |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q4 | 47.98 | 4GB |
| Qwen/Qwen2.5-Coder-1.5B | Q4 | 47.96 | 3GB |
| black-forest-labs/FLUX.2-dev | Q4 | 47.96 | 4GB |
| BSC-LT/salamandraTA-7b-instruct | Q4 | 47.90 | 4GB |
| meta-llama/Llama-3.2-3B | Q4 | 47.86 | 2GB |
| swiss-ai/Apertus-8B-Instruct-2509 | Q4 | 47.84 | 4GB |
| Qwen/Qwen3-1.7B | Q4 | 47.47 | 4GB |
| GSAI-ML/LLaDA-8B-Instruct | Q4 | 47.47 | 4GB |
| microsoft/phi-2 | Q4 | 47.39 | 4GB |

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
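The site does not publish its estimation formula, but a common first-order model explains numbers in this range: single-stream token generation is usually memory-bandwidth-bound, because every generated token requires reading all the weights once. The sketch below applies that rule of thumb; the bandwidth and efficiency figures are illustrative placeholders, not official Arc Pro A40 specs.

```python
# First-order decode-speed estimate: generation throughput is roughly
# usable memory bandwidth divided by bytes of weights read per token.
# All numbers below are illustrative assumptions.

def est_tokens_per_sec(model_gb: float, bandwidth_gbs: float, efficiency: float = 0.6) -> float:
    """bandwidth_gbs: theoretical memory bandwidth; efficiency: achieved fraction."""
    return bandwidth_gbs * efficiency / model_gb

# Placeholder bandwidth; check Intel's spec sheet for the real figure.
ARC_PRO_A40_BW = 144.0  # GB/s (assumed)

# ~1GB of Q4 weights (a 1B-class model) gives a rough upper bound in the
# same ballpark as the ~58 tok/s estimates in the table above.
print(f"{est_tokens_per_sec(1.0, ARC_PRO_A40_BW):.0f} tok/s (rough upper bound)")
```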

Model compatibility

| Model | Quantization | Verdict | Est. speed (tok/s) | VRAM needed (6GB available) |
| --- | --- | --- | --- | --- |
| Qwen/Qwen2.5-7B-Instruct | FP16 | Not supported | 15.16 | 15GB |
| Gensyn/Qwen2.5-0.5B-Instruct | Q4 | Fits comfortably | 42.09 | 3GB |
| meta-llama/Llama-3.1-8B-Instruct | Q8 | Not supported | 28.47 | 9GB |
| google/gemma-3-1b-it | Q8 | Fits comfortably | 37.40 | 1GB |
| google/gemma-3-1b-it | FP16 | Fits comfortably | 20.46 | 2GB |
| Qwen/Qwen3-Embedding-0.6B | Q4 | Fits comfortably | 46.96 | 3GB |
| Qwen/Qwen3-Embedding-0.6B | Q8 | Fits (tight) | 31.64 | 6GB |
| Qwen/Qwen2.5-3B-Instruct | Q4 | Fits comfortably | 53.29 | 2GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | Q8 | Not supported | 11.83 | 33GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | FP16 | Not supported | 6.30 | 66GB |
| distilbert/distilgpt2 | Q4 | Fits comfortably | 48.16 | 4GB |
| distilbert/distilgpt2 | Q8 | Not supported | 31.32 | 7GB |
| meta-llama/Llama-3.2-1B | Q4 | Fits comfortably | 52.86 | 1GB |
| meta-llama/Llama-3.2-1B | Q8 | Fits comfortably | 40.37 | 1GB |
| meta-llama/Llama-3.2-1B | FP16 | Fits comfortably | 20.47 | 2GB |
| meta-llama/Meta-Llama-3-8B | Q4 | Fits comfortably | 47.14 | 4GB |
| meta-llama/Meta-Llama-3-8B | Q8 | Not supported | 28.62 | 9GB |
| meta-llama/Meta-Llama-3-8B | FP16 | Not supported | 18.38 | 17GB |
| Qwen/Qwen2.5-7B | Q4 | Fits comfortably | 41.26 | 4GB |
| Qwen/Qwen2.5-7B | Q8 | Not supported | 29.66 | 7GB |
| Qwen/Qwen2.5-7B | FP16 | Not supported | 15.83 | 15GB |
| Qwen/Qwen2.5-0.5B-Instruct | Q4 | Fits comfortably | 47.28 | 3GB |
| Qwen/Qwen2.5-0.5B-Instruct | Q8 | Fits (tight) | 28.09 | 5GB |
| Qwen/Qwen2.5-0.5B-Instruct | FP16 | Not supported | 15.28 | 11GB |
| Qwen/Qwen3-32B | Q4 | Not supported | 14.17 | 16GB |
| Qwen/Qwen3-Next-80B-A3B-Instruct | Q4 | Not supported | 8.95 | 39GB |
| Qwen/Qwen3-Next-80B-A3B-Instruct | Q8 | Not supported | 6.66 | 78GB |
| microsoft/Phi-3-mini-4k-instruct | FP16 | Not supported | 16.00 | 15GB |
| Qwen/Qwen3-4B | Q8 | Fits comfortably | 30.71 | 4GB |
| google-t5/t5-3b | FP16 | Fits (tight) | 18.93 | 6GB |
| rednote-hilab/dots.ocr | FP16 | Not supported | 17.95 | 15GB |
| meta-llama/Llama-3.3-70B-Instruct | FP16 | Not supported | 5.45 | 137GB |
| Qwen/Qwen3-14B | Q4 | Not supported | 35.03 | 7GB |
| Qwen/Qwen3-14B | Q8 | Not supported | 25.31 | 14GB |
| microsoft/phi-2 | FP16 | Not supported | 15.27 | 15GB |
| meta-llama/Llama-2-7b-hf | Q4 | Fits comfortably | 46.63 | 4GB |
| meta-llama/Llama-2-7b-hf | Q8 | Not supported | 30.03 | 7GB |
| HuggingFaceTB/SmolLM2-135M | Q4 | Fits comfortably | 47.14 | 4GB |
| HuggingFaceTB/SmolLM2-135M | Q8 | Not supported | 30.45 | 7GB |
| microsoft/DialoGPT-medium | Q8 | Not supported | 33.44 | 7GB |
| microsoft/DialoGPT-medium | FP16 | Not supported | 17.64 | 15GB |
| MiniMaxAI/MiniMax-M2 | Q4 | Fits comfortably | 41.84 | 4GB |
| MiniMaxAI/MiniMax-M2 | Q8 | Not supported | 33.83 | 7GB |
| MiniMaxAI/MiniMax-M2 | FP16 | Not supported | 17.77 | 15GB |
| microsoft/phi-4 | FP16 | Not supported | 17.15 | 15GB |
| meta-llama/Llama-3.1-8B | Q8 | Not supported | 28.39 | 9GB |
| deepseek-ai/DeepSeek-R1-0528 | Q4 | Fits comfortably | 48.23 | 4GB |
| deepseek-ai/DeepSeek-R1-0528 | Q8 | Not supported | 27.86 | 7GB |
| deepseek-ai/DeepSeek-R1-0528 | FP16 | Not supported | 16.57 | 15GB |
| Qwen/Qwen2.5-7B-Instruct | Q8 | Not supported | 30.57 | 7GB |

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
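The verdict column follows a simple pattern in the rows above: "Not supported" when required VRAM exceeds the card's 6GB, "Fits (tight)" when the requirement approaches the full 6GB, and "Fits comfortably" otherwise. Here is a minimal sketch of that rule, with the ~80% threshold inferred from the table rather than taken from a published cutoff.

```python
# Reproduce the compatibility verdicts from required vs. available VRAM.
# Thresholds inferred from the table: over 100% of VRAM -> Not supported,
# over ~80% -> Fits (tight), otherwise Fits comfortably. (Assumed.)

def verdict(required_gb: float, available_gb: float = 6.0) -> str:
    if required_gb > available_gb:
        return "Not supported"
    if required_gb >= 0.8 * available_gb:
        return "Fits (tight)"
    return "Fits comfortably"

# Matches the rows above:
print(verdict(7))  # Not supported    (e.g. Qwen/Qwen2.5-7B at Q8)
print(verdict(6))  # Fits (tight)     (e.g. Qwen/Qwen3-Embedding-0.6B at Q8)
print(verdict(5))  # Fits (tight)     (e.g. Qwen/Qwen2.5-0.5B-Instruct at Q8)
print(verdict(2))  # Fits comfortably (e.g. Qwen/Qwen2.5-3B-Instruct at Q4)
```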

Alternative GPUs

Explore how these cards stack up for local inference workloads:

  • RTX 5070 (12GB VRAM)
  • RTX 4060 Ti 16GB (16GB VRAM)
  • RX 6800 XT (16GB VRAM)
  • RTX 4070 Super (12GB VRAM)
  • RTX 3080 (10GB VRAM)