


Quick Answer: The RTX 4070 Super offers 12GB of VRAM, with street prices starting around $1,073.31 (well above its $599 MSRP). It delivers an estimated 116 tokens/sec on deepseek-ai/DeepSeek-OCR at Q4 quantization and typically draws 220W under load.

RTX 4070 Super

By NVIDIA · Released January 2024 · MSRP $599.00

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.
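As a rough rule of thumb (a minimal sketch, not this site's methodology: Q4 ≈ 0.5 bytes per parameter, Q8 ≈ 1, FP16 ≈ 2, plus an assumed ~20% overhead for KV cache and runtime buffers), you can estimate whether a model fits in the card's 12GB:

    # Rough VRAM-fit check for a 12GB card. The bytes-per-parameter values
    # and the 1.2x overhead factor are assumptions, not measured figures.
    BYTES_PER_PARAM = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}

    def fits_in_vram(params_billions: float, quant: str, vram_gb: float = 12.0) -> bool:
        weights_gb = params_billions * BYTES_PER_PARAM[quant]
        needed_gb = weights_gb * 1.2  # assumed KV-cache/activation overhead
        return needed_gb <= vram_gb

    for quant in ("Q4", "Q8", "FP16"):
        print(f"7B at {quant} fits:", fits_in_vram(7, quant))
    # 7B fits at Q4 and Q8 but not FP16

This lines up with the compatibility table further down: 7B models fit at Q4 (~4GB) and Q8 (~7GB) but not at FP16 (~15GB).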

Buy on Amazon ($1,073.31) · View Benchmarks
Specs snapshot

Key hardware metrics for AI workloads:

  • VRAM: 12GB
  • Cores: 7,168
  • TDP: 220W
  • Architecture: Ada Lovelace
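To confirm these numbers on your own card before sizing models, you can query the NVIDIA driver directly (a minimal sketch, assuming the nvidia-ml-py package is installed):

    # Query the installed GPU's name, VRAM, and power limit via NVML.
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
    name = pynvml.nvmlDeviceGetName(handle)  # str in recent nvidia-ml-py releases
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # sizes in bytes
    power = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts
    print(f"{name}: {mem.total / 1024**3:.0f}GB VRAM, {power / 1000:.0f}W limit")
    pynvml.nvmlShutdown()

On an RTX 4070 Super this should report roughly 12GB of VRAM and a 220W default power limit.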

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

Amazon
$1,073.31
Buy on Amazon


💡 Not ready to buy? Try cloud GPUs first

Test RTX 4070 Super performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade

AI benchmarks

All figures below are auto-generated Q4 estimates.

Model | Quantization | Tokens/sec | VRAM used
deepseek-ai/DeepSeek-OCR | Q4 | 116.29 | 2GB
apple/OpenELM-1_1B-Instruct | Q4 | 114.91 | 1GB
Qwen/Qwen2.5-3B-Instruct | Q4 | 113.39 | 2GB
meta-llama/Llama-3.2-3B-Instruct | Q4 | 113.02 | 2GB
facebook/sam3 | Q4 | 112.75 | 1GB
google/gemma-2-2b-it | Q4 | 112.46 | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 112.22 | 1GB
allenai/OLMo-2-0425-1B | Q4 | 109.79 | 1GB
unsloth/Llama-3.2-3B-Instruct | Q4 | 108.69 | 2GB
nari-labs/Dia2-2B | Q4 | 107.70 | 2GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | 107.56 | 1GB
tencent/HunyuanOCR | Q4 | 106.78 | 1GB
meta-llama/Llama-3.2-1B | Q4 | 105.49 | 1GB
google/embeddinggemma-300m | Q4 | 105.46 | 1GB
unsloth/Llama-3.2-1B-Instruct | Q4 | 105.28 | 1GB
LiquidAI/LFM2-1.2B | Q4 | 104.50 | 1GB
bigcode/starcoder2-3b | Q4 | 104.15 | 2GB
Qwen/Qwen2.5-3B | Q4 | 103.65 | 2GB
meta-llama/Llama-3.2-3B | Q4 | 102.73 | 2GB
google/gemma-3-1b-it | Q4 | 102.40 | 1GB
google/gemma-2b | Q4 | 102.01 | 1GB
inference-net/Schematron-3B | Q4 | 101.01 | 2GB
google-bert/bert-base-uncased | Q4 | 100.88 | 1GB
deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 100.85 | 2GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 100.69 | 2GB
ibm-granite/granite-3.3-2b-instruct | Q4 | 100.35 | 1GB
ibm-research/PowerMoE-3b | Q4 | 100.32 | 2GB
google-t5/t5-3b | Q4 | 99.15 | 2GB
WeiboAI/VibeThinker-1.5B | Q4 | 96.35 | 1GB
unsloth/gemma-3-1b-it | Q4 | 96.06 | 1GB
meta-llama/Meta-Llama-3-8B | Q4 | 95.83 | 4GB
Qwen/Qwen3-4B | Q4 | 95.60 | 2GB
Qwen/Qwen3-1.7B | Q4 | 95.46 | 4GB
meta-llama/Llama-Guard-3-1B | Q4 | 95.28 | 1GB
lmsys/vicuna-7b-v1.5 | Q4 | 94.44 | 4GB
Alibaba-NLP/gte-Qwen2-1.5B-instruct | Q4 | 94.39 | 3GB
Qwen/Qwen3-Embedding-4B | Q4 | 94.39 | 2GB
Qwen/Qwen3-8B-Base | Q4 | 94.33 | 4GB
microsoft/DialoGPT-small | Q4 | 94.32 | 4GB
meta-llama/Llama-3.2-3B-Instruct | Q4 | 94.15 | 2GB
google/gemma-3-270m-it | Q4 | 94.05 | 4GB
llamafactory/tiny-random-Llama-3 | Q4 | 94.03 | 4GB
ibm-granite/granite-3.3-8b-instruct | Q4 | 93.97 | 4GB
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-8bit | Q4 | 93.93 | 2GB
zai-org/GLM-4.5-Air | Q4 | 93.85 | 4GB
EleutherAI/pythia-70m-deduped | Q4 | 93.82 | 4GB
Qwen/Qwen2-0.5B-Instruct | Q4 | 93.76 | 3GB
openai-community/gpt2 | Q4 | 93.73 | 4GB
microsoft/VibeVoice-1.5B | Q4 | 93.72 | 3GB
swiss-ai/Apertus-8B-Instruct-2509 | Q4 | 93.45 | 4GB

Note: Performance figures are calculated estimates, not measured results; real-world numbers may vary. Methodology · Submit real data
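If you want to replace an estimate with a measured number, a simple wall-clock timing of generation is enough. This is a minimal sketch using Hugging Face transformers; the model ID is one small entry from the table above, and a CUDA build of PyTorch is assumed:

    # Measure real decode throughput (tokens/sec) for one model from the table.
    import time
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.2-1B-Instruct"  # any small table entry
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="cuda"
    )

    inputs = tok("Explain VRAM in one paragraph.", return_tensors="pt").to("cuda")
    torch.cuda.synchronize()
    start = time.perf_counter()
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
    print(f"{new_tokens / elapsed:.1f} tok/s")

Note this times FP16 weights rather than Q4, so expect different numbers than the table; quantized runs need a quantization-aware backend such as llama.cpp or bitsandbytes.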

Model compatibility

Verdicts compare each model's estimated VRAM requirement against this card's 12GB; speeds are estimates.

Model | Quantization | Verdict | Estimated tok/s | VRAM needed
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | FP16 | Fits comfortably | 35.92 | 6GB
mistralai/Mistral-7B-Instruct-v0.2 | Q4 | Fits comfortably | 86.13 | 4GB
meta-llama/Llama-3.2-1B | Q8 | Fits comfortably | 77.91 | 1GB
Qwen/Qwen2.5-7B | Q4 | Fits comfortably | 80.91 | 4GB
Qwen/Qwen2.5-7B | Q8 | Fits comfortably | 63.84 | 7GB
Qwen/Qwen2.5-7B | FP16 | Not supported | 31.55 | 15GB
microsoft/Phi-3-mini-4k-instruct | Q4 | Fits comfortably | 83.83 | 4GB
microsoft/Phi-3-mini-4k-instruct | Q8 | Fits comfortably | 55.56 | 7GB
microsoft/Phi-3-mini-4k-instruct | FP16 | Not supported | 29.86 | 15GB
kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | Q8 | Fits comfortably | 59.40 | 4GB
kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | FP16 | Fits comfortably | 34.16 | 9GB
Qwen/Qwen2.5-1.5B | Q4 | Fits comfortably | 79.72 | 3GB
Qwen/Qwen2.5-1.5B | Q8 | Fits comfortably | 61.04 | 5GB
unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Q4 | Fits comfortably | 88.62 | 4GB
Qwen/Qwen3-14B | Q8 | Not supported | 49.10 | 14GB
Qwen/Qwen3-14B | FP16 | Not supported | 25.27 | 29GB
Qwen/Qwen2.5-0.5B | Q4 | Fits comfortably | 81.82 | 3GB
Qwen/Qwen2-0.5B | Q8 | Fits comfortably | 63.98 | 5GB
meta-llama/Meta-Llama-3-70B-Instruct | Q8 | Not supported | 21.64 | 68GB
meta-llama/Meta-Llama-3-70B-Instruct | FP16 | Not supported | 11.33 | 137GB
Qwen/Qwen3-Embedding-4B | Q4 | Fits comfortably | 94.39 | 2GB
Qwen/Qwen3-Embedding-4B | Q8 | Fits comfortably | 57.53 | 4GB
Qwen/Qwen3-Coder-30B-A3B-Instruct | Q4 | Not supported | 51.42 | 15GB
Qwen/Qwen3-Coder-30B-A3B-Instruct | Q8 | Not supported | 33.88 | 31GB
Qwen/Qwen2-7B-Instruct | Q8 | Fits comfortably | 64.50 | 7GB
Qwen/Qwen2-7B-Instruct | FP16 | Not supported | 33.00 | 15GB
RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 | FP16 | Not supported | 11.31 | 137GB
OpenPipe/Qwen3-14B-Instruct | Q4 | Fits comfortably | 60.88 | 7GB
OpenPipe/Qwen3-14B-Instruct | Q8 | Not supported | 46.69 | 14GB
OpenPipe/Qwen3-14B-Instruct | FP16 | Not supported | 24.73 | 29GB
openai-community/gpt2-xl | Q4 | Fits comfortably | 92.69 | 4GB
sshleifer/tiny-gpt2 | FP16 | Not supported | 33.26 | 15GB
microsoft/Phi-3-mini-128k-instruct | Q4 | Fits comfortably | 93.35 | 4GB
microsoft/Phi-3-mini-128k-instruct | Q8 | Fits comfortably | 55.63 | 7GB
microsoft/Phi-3-mini-128k-instruct | FP16 | Not supported | 30.85 | 15GB
numind/NuExtract-1.5 | FP16 | Not supported | 30.87 | 15GB
Qwen/Qwen2.5-Coder-7B-Instruct | Q4 | Fits comfortably | 81.72 | 4GB
Qwen/Qwen2.5-Coder-7B-Instruct | Q8 | Fits comfortably | 60.95 | 7GB
Qwen/Qwen2.5-Coder-7B-Instruct | FP16 | Not supported | 36.05 | 15GB
RedHatAI/Llama-3.2-90B-Vision-Instruct-FP8-dynamic | Q8 | Not supported | 12.10 | 88GB
GSAI-ML/LLaDA-8B-Instruct | Q4 | Fits comfortably | 85.79 | 4GB
GSAI-ML/LLaDA-8B-Instruct | Q8 | Fits comfortably | 58.22 | 9GB
GSAI-ML/LLaDA-8B-Instruct | FP16 | Not supported | 31.19 | 17GB
lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit | Q8 | Fits comfortably | 55.05 | 9GB
lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit | FP16 | Not supported | 35.16 | 17GB
Qwen/Qwen3-30B-A3B-Instruct-2507-FP8 | FP16 | Not supported | 18.47 | 61GB
Qwen/Qwen3-4B-Thinking-2507 | Q4 | Fits comfortably | 93.10 | 2GB
Qwen/Qwen3-4B-Thinking-2507 | Q8 | Fits comfortably | 66.03 | 4GB
Qwen/Qwen3-4B-Thinking-2507 | FP16 | Fits comfortably | 33.27 | 9GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q8 | Fits comfortably | 67.80 | 3GB

Note: Performance figures are calculated estimates, not measured results; real-world numbers may vary. Methodology · Submit real data
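To actually run one of the "Fits comfortably" Q4 entries, one common route is 4-bit loading through bitsandbytes. This is a minimal sketch assuming the transformers, accelerate, and bitsandbytes packages are installed; it is an illustration, not the configuration behind the estimates above:

    # Load a 7B model in 4-bit so it fits well inside 12GB of VRAM.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # listed above at ~4GB for Q4
    bnb = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,  # store weights in 4-bit, compute in FP16
    )
    model = AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=bnb, device_map="auto"
    )
    tok = AutoTokenizer.from_pretrained(model_id)
    print(f"Weights on GPU: {model.get_memory_footprint() / 1024**3:.1f}GB")

The reported footprint should land around the table's 4GB figure, leaving headroom for KV cache and longer contexts on a 12GB card.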

Alternative GPUs

Explore how these cards stack up for local inference workloads:

  • RTX 5070 (12GB)
  • RTX 4060 Ti 16GB
  • RX 6800 XT (16GB)
  • RTX 3080 (10GB)
  • RTX 3090 (24GB)