
RX 7600 XT

By AMD · Released January 2024 · MSRP $329.00 · In Stock

Quick Answer: The RX 7600 XT offers 16GB of VRAM and currently sells for around $529.00 (well above its $329.00 MSRP). It delivers approximately 58 tokens/sec on unsloth/Llama-3.2-1B-Instruct at Q4 quantization and typically draws 190W under load.

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.
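If you want to sanity-check Q4 throughput yourself, the sketch below loads a Q4-quantized GGUF with llama-cpp-python, which supports AMD cards through its ROCm/HIP and Vulkan builds. The model filename is a placeholder, and the settings are illustrative assumptions rather than tuned values.

```python
# Minimal sketch: run a Q4 GGUF model with full GPU offload.
# Requires a llama-cpp-python build with ROCm/HIP or Vulkan enabled.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-1B-Instruct-Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers; a 1B Q4 model uses ~1GB of 16GB
    n_ctx=4096,       # context window; larger contexts consume more VRAM
)

out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```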

Specs snapshot

Key hardware metrics for AI workloads:

  • VRAM: 16GB
  • Cores: 2,048
  • TDP: 190W
  • Architecture: RDNA 3

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

  • Amazon · In Stock · $529.00 · Buy on Amazon


💡 Not ready to buy? Try cloud GPUs first

Test RX 7600 XT performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade
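Before renting, it helps to know how many billable hours equal the card's purchase price. Here is a quick breakeven sketch using this page's numbers (the $529.00 street price and Vast.ai's $0.20/hr starting rate); electricity and resale value are deliberately ignored:

```python
# Rent-vs-buy breakeven using the prices quoted on this page.
PURCHASE_PRICE = 529.00  # USD, Amazon street price above
CLOUD_RATE = 0.20        # USD/hr, Vast.ai "from" rate

breakeven_hours = PURCHASE_PRICE / CLOUD_RATE
print(f"Cloud rental matches the purchase price after ~{breakeven_hours:,.0f} hours")
# ~2,645 hours, i.e. roughly 330 eight-hour sessions
```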

AI benchmarks

All figures below are auto-generated estimates at Q4 quantization, not measured results.

| Model | Quantization | Tokens/sec (estimated) | VRAM used |
|---|---|---|---|
| unsloth/Llama-3.2-1B-Instruct | Q4 | 58.16 | 1GB |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 57.37 | 2GB |
| Qwen/Qwen2.5-3B-Instruct | Q4 | 57.34 | 2GB |
| google/gemma-3-1b-it | Q4 | 57.09 | 1GB |
| meta-llama/Llama-3.2-1B | Q4 | 55.77 | 1GB |
| deepseek-ai/DeepSeek-OCR | Q4 | 55.56 | 2GB |
| unsloth/Llama-3.2-3B-Instruct | Q4 | 55.28 | 2GB |
| meta-llama/Llama-Guard-3-1B | Q4 | 55.09 | 1GB |
| facebook/sam3 | Q4 | 55.03 | 1GB |
| LiquidAI/LFM2-1.2B | Q4 | 54.04 | 1GB |
| meta-llama/Llama-3.2-3B | Q4 | 53.86 | 2GB |
| WeiboAI/VibeThinker-1.5B | Q4 | 53.72 | 1GB |
| ibm-granite/granite-3.3-2b-instruct | Q4 | 52.89 | 1GB |
| meta-llama/Llama-3.2-3B-Instruct | Q4 | 52.57 | 2GB |
| google-t5/t5-3b | Q4 | 52.55 | 2GB |
| allenai/OLMo-2-0425-1B | Q4 | 52.48 | 1GB |
| meta-llama/Llama-3.2-1B-Instruct | Q4 | 52.45 | 1GB |
| nari-labs/Dia2-2B | Q4 | 52.28 | 2GB |
| bigcode/starcoder2-3b | Q4 | 51.79 | 2GB |
| google/embeddinggemma-300m | Q4 | 51.38 | 1GB |
| deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 50.85 | 2GB |
| google/gemma-2-2b-it | Q4 | 50.67 | 1GB |
| inference-net/Schematron-3B | Q4 | 50.48 | 2GB |
| Qwen/Qwen2.5-3B | Q4 | 50.35 | 2GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 49.75 | 1GB |
| apple/OpenELM-1_1B-Instruct | Q4 | 48.83 | 1GB |
| google/gemma-2b | Q4 | 48.81 | 1GB |
| ibm-research/PowerMoE-3b | Q4 | 48.62 | 2GB |
| tencent/HunyuanOCR | Q4 | 48.49 | 1GB |
| Qwen/Qwen3-0.6B-Base | Q4 | 48.39 | 3GB |
| unsloth/gemma-3-1b-it | Q4 | 48.38 | 1GB |
| Qwen/Qwen3-4B-Thinking-2507 | Q4 | 48.37 | 2GB |
| meta-llama/Llama-Guard-3-8B | Q4 | 48.35 | 4GB |
| microsoft/Phi-3.5-vision-instruct | Q4 | 48.18 | 4GB |
| google-bert/bert-base-uncased | Q4 | 48.12 | 1GB |
| deepseek-ai/DeepSeek-V3 | Q4 | 48.04 | 4GB |
| Qwen/Qwen2.5-7B | Q4 | 48.03 | 4GB |
| Qwen/Qwen3-8B-Base | Q4 | 47.82 | 4GB |
| HuggingFaceTB/SmolLM2-135M | Q4 | 47.78 | 4GB |
| Qwen/Qwen3-4B-Thinking-2507-FP8 | Q4 | 47.74 | 2GB |
| skt/kogpt2-base-v2 | Q4 | 47.74 | 4GB |
| zai-org/GLM-4.6-FP8 | Q4 | 47.56 | 4GB |
| sshleifer/tiny-gpt2 | Q4 | 47.43 | 4GB |
| Qwen/Qwen3-Reranker-0.6B | Q4 | 47.40 | 3GB |
| ibm-granite/granite-3.3-8b-instruct | Q4 | 47.38 | 4GB |
| meta-llama/Meta-Llama-3-8B-Instruct | Q4 | 47.36 | 4GB |
| rednote-hilab/dots.ocr | Q4 | 47.34 | 4GB |
| lmsys/vicuna-7b-v1.5 | Q4 | 47.33 | 4GB |
| GSAI-ML/LLaDA-8B-Base | Q4 | 47.26 | 4GB |
| deepseek-ai/DeepSeek-R1-Distill-Llama-8B | Q4 | 47.23 | 4GB |

Note: Performance estimates are calculated, not measured; real results may vary. Methodology · Submit real data
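As a generic plausibility check on any single-stream decode figure (this is not the site's published methodology), decode speed is roughly bounded by how many times per second the GPU can stream the model's weights from VRAM. Here is a minimal sketch, assuming the RX 7600 XT's ~288 GB/s theoretical memory bandwidth and an efficiency factor that is purely an assumption:

```python
# Roofline-style bound: tokens/sec <= efficiency * bandwidth / model_bytes.
BANDWIDTH_GBS = 288.0  # RX 7600 XT theoretical memory bandwidth, GB/s

def max_decode_tok_s(model_gb: float, efficiency: float = 0.65) -> float:
    """Approximate decode-speed ceiling for weights occupying model_gb of VRAM.

    efficiency is an assumed fudge factor for kernel and cache overheads;
    very small models are often latency-bound rather than bandwidth-bound.
    """
    return efficiency * BANDWIDTH_GBS / model_gb

print(f"8B model at Q4 (~4GB): ~{max_decode_tok_s(4.0):.0f} tok/s")  # ~47 tok/s
```

With the assumed 0.65 efficiency, this lands on the ~47 tok/s the table shows for 8B-class Q4 models, so the estimates are at least bandwidth-plausible.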

Model compatibility

Verdicts compare each model's estimated VRAM requirement against the RX 7600 XT's 16GB; speeds are auto-generated estimates.

| Model | Quantization | Verdict | Estimated speed (tok/s) | VRAM needed |
|---|---|---|---|---|
| mistralai/Mistral-Large-3-675B-Instruct-2512 | Q8 | Not supported | 3.83 | 755GB |
| mistralai/Mistral-Large-3-675B-Instruct-2512 | FP16 | Not supported | 2.16 | 1509GB |
| EssentialAI/rnj-1 | Q4 | Fits comfortably | 31.04 | 5GB |
| EssentialAI/rnj-1 | Q8 | Fits comfortably | 23.21 | 10GB |
| EssentialAI/rnj-1 | FP16 | Not supported | 11.41 | 19GB |
| dphn/dolphin-2.9.1-yi-1.5-34b | Q4 | Not supported | 14.10 | 17GB |
| openai/gpt-oss-20b | FP16 | Not supported | 9.54 | 41GB |
| google/gemma-3-1b-it | Q4 | Fits comfortably | 57.09 | 1GB |
| google/gemma-3-1b-it | Q8 | Fits comfortably | 35.67 | 1GB |
| google/gemma-3-1b-it | FP16 | Fits comfortably | 20.49 | 2GB |
| Qwen/Qwen3-Embedding-0.6B | Q4 | Fits comfortably | 45.40 | 3GB |
| facebook/opt-125m | Q8 | Fits comfortably | 30.65 | 7GB |
| facebook/opt-125m | FP16 | Fits (tight) | 16.03 | 15GB |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | FP16 | Fits comfortably | 20.37 | 6GB |
| mistralai/Mistral-7B-Instruct-v0.2 | Q4 | Fits comfortably | 43.77 | 4GB |
| mistralai/Mistral-7B-Instruct-v0.2 | Q8 | Fits comfortably | 28.13 | 7GB |
| mistralai/Mistral-7B-Instruct-v0.2 | FP16 | Fits (tight) | 16.48 | 15GB |
| Qwen/Qwen3-8B | Q4 | Fits comfortably | 41.97 | 4GB |
| Qwen/Qwen3-8B | Q8 | Fits comfortably | 29.01 | 9GB |
| Qwen/Qwen3-8B | FP16 | Not supported | 16.92 | 17GB |
| inference-net/Schematron-3B | Q4 | Fits comfortably | 50.48 | 2GB |
| inference-net/Schematron-3B | Q8 | Fits comfortably | 33.89 | 3GB |
| inference-net/Schematron-3B | FP16 | Fits comfortably | 19.83 | 6GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | Q4 | Fits (tight) | 14.77 | 16GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | Q8 | Not supported | 11.73 | 33GB |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | FP16 | Not supported | 5.65 | 66GB |
| distilbert/distilgpt2 | Q4 | Fits comfortably | 44.33 | 4GB |
| distilbert/distilgpt2 | Q8 | Fits comfortably | 32.73 | 7GB |
| distilbert/distilgpt2 | FP16 | Fits (tight) | 15.50 | 15GB |
| RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic | Q4 | Not supported | 15.29 | 34GB |
| RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic | Q8 | Not supported | 10.93 | 68GB |
| RedHatAI/Llama-3.3-70B-Instruct-FP8-dynamic | FP16 | Not supported | 5.55 | 137GB |
| meta-llama/Llama-3.2-3B-Instruct | Q4 | Fits comfortably | 52.57 | 2GB |
| meta-llama/Llama-3.2-3B-Instruct | Q8 | Fits comfortably | 35.65 | 3GB |
| meta-llama/Llama-3.2-3B-Instruct | FP16 | Fits comfortably | 21.82 | 6GB |
| vikhyatk/moondream2 | Q4 | Fits comfortably | 43.18 | 4GB |
| vikhyatk/moondream2 | Q8 | Fits comfortably | 32.37 | 7GB |
| vikhyatk/moondream2 | FP16 | Fits (tight) | 16.30 | 15GB |
| petals-team/StableBeluga2 | Q4 | Fits comfortably | 46.29 | 4GB |
| petals-team/StableBeluga2 | Q8 | Fits comfortably | 29.38 | 7GB |
| petals-team/StableBeluga2 | FP16 | Fits (tight) | 17.73 | 15GB |
| meta-llama/Llama-3.2-1B | Q4 | Fits comfortably | 55.77 | 1GB |
| meta-llama/Llama-3.2-1B | Q8 | Fits comfortably | 35.56 | 1GB |
| meta-llama/Llama-3.2-1B | FP16 | Fits comfortably | 19.17 | 2GB |
| meta-llama/Meta-Llama-3-8B | Q4 | Fits comfortably | 42.06 | 4GB |
| meta-llama/Meta-Llama-3-8B | Q8 | Fits comfortably | 31.23 | 9GB |
| meta-llama/Meta-Llama-3-8B | FP16 | Not supported | 16.57 | 17GB |
| Qwen/Qwen2.5-7B | Q4 | Fits comfortably | 48.03 | 4GB |
| Qwen/Qwen2.5-7B | Q8 | Fits comfortably | 29.93 | 7GB |
| mistralai/Mistral-Large-3-675B-Instruct-2512 | Q4 | Not supported | 5.57 | 378GB |

Note: Performance estimates are calculated, not measured; real results may vary. Methodology · Submit real data
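The verdicts above follow a simple threshold against the card's 16GB, and the "VRAM needed" column tracks parameter count times bits per weight plus overhead. Below is a hypothetical reconstruction of that logic; the ~10% overhead factor and the "tight" cutoff are inferred from the table, not published by the site:

```python
# Hypothetical reconstruction of the compatibility verdicts; the overhead
# factor and thresholds are inferred from the table, not the site's code.
AVAILABLE_GB = 16  # RX 7600 XT VRAM

def vram_needed_gb(params_b: float, bits_per_weight: int) -> float:
    """Weight footprint: params * bits/8 bytes, plus ~10% assumed overhead
    for KV cache and activations."""
    return params_b * bits_per_weight / 8 * 1.1

def verdict(needed_gb: float) -> str:
    if needed_gb > AVAILABLE_GB:
        return "Not supported"
    if needed_gb >= 0.9 * AVAILABLE_GB:  # the 15-16GB rows read "tight"
        return "Fits (tight)"
    return "Fits comfortably"

for quant, bits in (("Q4", 4), ("Q8", 8), ("FP16", 16)):
    need = vram_needed_gb(8, bits)  # an 8B-parameter model
    print(f"{quant}: ~{need:.0f}GB -> {verdict(need)}")
# Q4: ~4GB -> Fits comfortably · Q8: ~9GB -> Fits comfortably
# FP16: ~18GB -> Not supported (matches the 8B rows above)
```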

Alternative GPUs

Explore how these cards stack up for local inference workloads:

  • RTX 5070 (12GB)
  • RTX 4060 Ti 16GB
  • RX 6800 XT (16GB)
  • RTX 4070 Super (12GB)
  • RTX 3080 (10GB)