
Quick Answer: The AMD Radeon Pro W7800 offers 32GB of VRAM and starts around $2,497.96. It delivers an estimated 117 tokens/sec on ibm-granite/granite-3.3-2b-instruct at Q4 quantization, and it typically draws 260W under load.

AMD Radeon Pro W7800

By AMD · Released May 2023 · MSRP $2,499.00
This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.
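
As a concrete starting point, here is a minimal sketch of running a Q4-quantized model on this card with llama-cpp-python. It assumes a llama.cpp build with ROCm/HIP support (required for AMD GPUs) and uses a placeholder GGUF path; any of the small Q4 models benchmarked below would behave similarly.

```python
# Minimal sketch: run a Q4 GGUF model with full GPU offload.
# Assumes llama-cpp-python built with ROCm/HIP for AMD cards;
# the model path below is a placeholder, not a file this page provides.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3.2-3b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload all layers; a 3B Q4 model uses ~2GB of the 32GB VRAM
    n_ctx=4096,       # context window; longer contexts grow the KV cache
)

out = llm("Summarize why quantization speeds up local inference.", max_tokens=128)
print(out["choices"][0]["text"])
```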

Specs snapshot

Key hardware metrics for AI workloads:

  • VRAM: 32GB
  • Cores: 4,480
  • TDP: 260W
  • Architecture: RDNA 3
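
The VRAM figures in the tables below track a common rule of thumb: roughly 0.5 bytes per parameter at Q4, 1 byte at Q8, and 2 bytes at FP16, plus some headroom for the KV cache and runtime buffers. A back-of-the-envelope estimator (my own sketch, not this site's methodology):

```python
# Rule-of-thumb VRAM estimator; the 20% overhead factor is an assumption,
# not the methodology this site uses for its tables.
BYTES_PER_PARAM = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}

def estimate_vram_gb(params_billion: float, quant: str, overhead: float = 1.2) -> float:
    """Weight footprint plus a ~20% allowance for KV cache and buffers."""
    return params_billion * BYTES_PER_PARAM[quant] * overhead

# An 8B model: ~4.8GB at Q4, ~9.6GB at Q8, ~19.2GB at FP16 -- the same
# ballpark as the compatibility table below, and well under 32GB.
for quant in ("Q4", "Q8", "FP16"):
    print(f"8B @ {quant}: {estimate_vram_gb(8, quant):.1f} GB")
```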

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

  • Amazon: $2,497.96

💡 Not ready to buy? Try cloud GPUs first

Test AMD Radeon Pro W7800 performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade

AI benchmarks

All figures below are auto-generated estimates (Q4 quantization throughout).

| Model | Quantization | Tokens/sec (est.) | VRAM used |
|---|---|---|---|
| ibm-granite/granite-3.3-2b-instruct | Q4 | 116.90 | 1GB |
| Qwen/Qwen2.5-3B-Instruct | Q4 | 116.47 | 2GB |
| nari-labs/Dia2-2B | Q4 | 116.26 | 2GB |
| unsloth/Llama-3.2-1B-Instruct | Q4 | 115.83 | 1GB |
| google-bert/bert-base-uncased | Q4 | 114.83 | 1GB |
| apple/OpenELM-1_1B-Instruct | Q4 | 114.79 | 1GB |
| tencent/HunyuanOCR | Q4 | 114.60 | 1GB |
| google/gemma-2b | Q4 | 113.62 | 1GB |
| Qwen/Qwen2.5-3B | Q4 | 113.58 | 2GB |
| meta-llama/Llama-3.2-3B-Instruct | Q4 | 113.48 | 2GB |
| google/gemma-3-1b-it | Q4 | 113.18 | 1GB |
| google-t5/t5-3b | Q4 | 112.49 | 2GB |
| unsloth/gemma-3-1b-it | Q4 | 111.83 | 1GB |
| LiquidAI/LFM2-1.2B | Q4 | 111.41 | 1GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 110.81 | 1GB |
| google/gemma-2-2b-it | Q4 | 110.60 | 1GB |
| allenai/OLMo-2-0425-1B | Q4 | 109.94 | 1GB |
| unsloth/Llama-3.2-3B-Instruct | Q4 | 108.54 | 2GB |
| deepseek-ai/DeepSeek-OCR | Q4 | 108.42 | 2GB |
| deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 107.71 | 2GB |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 106.82 | 2GB |
| meta-llama/Llama-3.2-3B | Q4 | 106.74 | 2GB |
| inference-net/Schematron-3B | Q4 | 104.25 | 2GB |
| meta-llama/Llama-Guard-3-1B | Q4 | 104.04 | 1GB |
| WeiboAI/VibeThinker-1.5B | Q4 | 103.65 | 1GB |
| meta-llama/Llama-3.2-1B | Q4 | 103.19 | 1GB |
| ibm-research/PowerMoE-3b | Q4 | 101.79 | 2GB |
| facebook/sam3 | Q4 | 101.28 | 1GB |
| openai-community/gpt2-xl | Q4 | 98.30 | 4GB |
| BSC-LT/salamandraTA-7b-instruct | Q4 | 98.29 | 4GB |
| meta-llama/Llama-3.2-3B-Instruct | Q4 | 98.20 | 2GB |
| Alibaba-NLP/gte-Qwen2-1.5B-instruct | Q4 | 98.19 | 3GB |
| meta-llama/Llama-3.2-1B-Instruct | Q4 | 97.89 | 1GB |
| bigcode/starcoder2-3b | Q4 | 97.82 | 2GB |
| meta-llama/Llama-3.1-8B-Instruct | Q4 | 97.69 | 4GB |
| zai-org/GLM-4.6-FP8 | Q4 | 97.29 | 4GB |
| google/embeddinggemma-300m | Q4 | 96.81 | 1GB |
| EleutherAI/gpt-neo-125m | Q4 | 96.81 | 4GB |
| Qwen/Qwen3-8B | Q4 | 96.33 | 4GB |
| unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Q4 | 96.13 | 4GB |
| microsoft/DialoGPT-small | Q4 | 96.11 | 4GB |
| trl-internal-testing/tiny-random-LlamaForCausalLM | Q4 | 96.09 | 4GB |
| microsoft/Phi-3.5-vision-instruct | Q4 | 95.81 | 4GB |
| GSAI-ML/LLaDA-8B-Base | Q4 | 95.81 | 4GB |
| bigscience/bloomz-560m | Q4 | 95.63 | 4GB |
| Qwen/Qwen3-Embedding-0.6B | Q4 | 95.56 | 3GB |
| skt/kogpt2-base-v2 | Q4 | 95.51 | 4GB |
| Qwen/Qwen3-4B-Thinking-2507 | Q4 | 95.43 | 2GB |
| Qwen/Qwen3-8B-FP8 | Q4 | 95.03 | 4GB |
| sshleifer/tiny-gpt2 | Q4 | 94.95 | 4GB |

Note: Performance figures on this page are calculated estimates, not measured results; real-world throughput may vary. Methodology · Submit real data
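
To replace an estimate with a measured number before submitting real data, the simplest approach is to time a fixed generation and divide completion tokens by wall-clock seconds. A minimal sketch with llama-cpp-python (same assumptions as the example above; the timing includes prompt processing, so use a long generation to amortize it):

```python
# Minimal throughput check: completion tokens / wall time.
# Path is a placeholder; assumes a ROCm/HIP build of llama-cpp-python.
import time
from llama_cpp import Llama

llm = Llama(model_path="./model-q4_k_m.gguf", n_gpu_layers=-1, n_ctx=2048)

start = time.perf_counter()
out = llm("Write a short story about a workstation GPU.", max_tokens=256)
elapsed = time.perf_counter() - start

n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens / elapsed:.2f} tok/s over {n_tokens} tokens")
```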

Model compatibility

All verdicts assume the W7800's 32GB of VRAM; speeds are estimates.

| Model | Quantization | Verdict | Est. speed (tok/s) | VRAM needed |
|---|---|---|---|---|
| deepseek-ai/DeepSeek-R1 | FP16 | Fits comfortably | 31.15 | 15GB |
| Qwen/Qwen3-30B-A3B | Q4 | Fits comfortably | 45.98 | 15GB |
| google/gemma-2-2b-it | Q4 | Fits comfortably | 110.60 | 1GB |
| openai-community/gpt2-xl | Q4 | Fits comfortably | 98.30 | 4GB |
| Qwen/Qwen3-4B-Instruct-2507 | Q8 | Fits comfortably | 63.05 | 4GB |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q4 | Fits comfortably | 81.02 | 4GB |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q8 | Fits comfortably | 58.54 | 7GB |
| trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | FP16 | Fits comfortably | 32.02 | 15GB |
| Qwen/Qwen3-4B-Instruct-2507 | Q4 | Fits comfortably | 80.51 | 2GB |
| mistralai/Mistral-7B-Instruct-v0.2 | Q4 | Fits comfortably | 90.47 | 4GB |
| mistralai/Mistral-7B-Instruct-v0.2 | Q8 | Fits comfortably | 67.48 | 7GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | Q8 | Not supported | 20.17 | 33GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | FP16 | Not supported | 12.17 | 66GB |
| distilbert/distilgpt2 | Q4 | Fits comfortably | 91.64 | 4GB |
| distilbert/distilgpt2 | Q8 | Fits comfortably | 61.28 | 7GB |
| distilbert/distilgpt2 | FP16 | Fits comfortably | 35.54 | 15GB |
| RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic | Q4 | Not supported | 33.57 | 34GB |
| RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic | Q8 | Not supported | 20.55 | 68GB |
| Qwen/Qwen2.5-0.5B | FP16 | Fits comfortably | 34.53 | 11GB |
| meta-llama/Llama-2-7b-hf | FP16 | Fits comfortably | 35.07 | 15GB |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | Q4 | Fits comfortably | 84.11 | 4GB |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | Q8 | Fits comfortably | 62.99 | 9GB |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | FP16 | Fits comfortably | 36.31 | 17GB |
| microsoft/DialoGPT-medium | FP16 | Fits comfortably | 31.86 | 15GB |
| MiniMaxAI/MiniMax-M2 | Q4 | Fits comfortably | 91.48 | 4GB |
| deepseek-ai/DeepSeek-V3.1 | Q8 | Fits comfortably | 56.85 | 7GB |
| deepseek-ai/DeepSeek-R1-0528 | Q4 | Fits comfortably | 84.06 | 4GB |
| deepseek-ai/DeepSeek-R1-0528 | Q8 | Fits comfortably | 65.16 | 7GB |
| HuggingFaceTB/SmolLM-135M | FP16 | Fits comfortably | 33.73 | 15GB |
| trl-internal-testing/tiny-random-LlamaForCausalLM | Q4 | Fits comfortably | 96.09 | 4GB |
| trl-internal-testing/tiny-random-LlamaForCausalLM | Q8 | Fits comfortably | 61.32 | 7GB |
| trl-internal-testing/tiny-random-LlamaForCausalLM | FP16 | Fits comfortably | 36.77 | 15GB |
| Qwen/Qwen3-Coder-30B-A3B-Instruct | Q4 | Fits comfortably | 45.78 | 15GB |
| IlyaGusev/saiga_llama3_8b | Q8 | Fits comfortably | 64.63 | 9GB |
| meta-llama/Llama-Guard-3-1B | Q4 | Fits comfortably | 104.04 | 1GB |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | Q8 | Fits comfortably | 56.59 | 4GB |
| lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | FP16 | Fits comfortably | 32.16 | 9GB |
| hmellor/tiny-random-LlamaForCausalLM | Q4 | Fits comfortably | 85.79 | 4GB |
| Qwen/Qwen2.5-72B-Instruct | Q8 | Not supported | 12.28 | 70GB |
| Qwen/Qwen2.5-72B-Instruct | FP16 | Not supported | 7.13 | 141GB |
| ibm-granite/granite-3.3-2b-instruct | Q4 | Fits comfortably | 116.90 | 1GB |
| Qwen/Qwen2.5-32B | FP16 | Not supported | 11.14 | 66GB |
| Qwen/Qwen3-4B-Thinking-2507-FP8 | Q4 | Fits comfortably | 93.10 | 2GB |
| Qwen/Qwen3-4B-Thinking-2507-FP8 | Q8 | Fits comfortably | 57.74 | 4GB |
| Qwen/Qwen3-4B-Thinking-2507-FP8 | FP16 | Fits comfortably | 36.02 | 9GB |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-8bit | Q4 | Fits comfortably | 51.03 | 15GB |
| GSAI-ML/LLaDA-8B-Base | FP16 | Fits comfortably | 35.31 | 17GB |
| HuggingFaceM4/tiny-random-LlamaForCausalLM | Q8 | Fits comfortably | 61.04 | 7GB |
| meta-llama/Llama-3.1-8B-Instruct | FP16 | Fits comfortably | 26.50 | 17GB |
| Qwen/Qwen2.5-0.5B | Q8 | Fits comfortably | 68.04 | 5GB |

Note: Performance figures on this page are calculated estimates, not measured results; real-world throughput may vary. Methodology · Submit real data
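
The verdicts in the table above reduce to comparing each configuration's estimated VRAM requirement against the card's 32GB; a sketch of that comparison (how the site treats the exact 32GB boundary is my assumption):

```python
# Sketch of the compatibility verdict: required vs. available VRAM.
# The exact boundary handling at 32GB is an assumption, not the site's rule.
GPU_VRAM_GB = 32

def fit_verdict(required_gb: float, available_gb: float = GPU_VRAM_GB) -> str:
    return "Fits comfortably" if required_gb <= available_gb else "Not supported"

print(fit_verdict(17))  # Fits comfortably (e.g. Llama-3.1-8B-Instruct at FP16)
print(fit_verdict(33))  # Not supported (e.g. DeepSeek-R1-Distill-Qwen-32B at Q8)
```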

Alternative GPUs

Explore how these cards stack up for local inference workloads:

  • RTX 5070 (12GB)
  • RTX 4060 Ti 16GB
  • RX 6800 XT (16GB)
  • RTX 4070 Super (12GB)
  • RTX 3080 (10GB)