
Quick Answer: The RTX 3090 offers 24GB of VRAM and starts around $979.99. It delivers approximately 209 tokens/sec (estimated, Q4) on deepseek-ai/DeepSeek-OCR and typically draws 350W under load.

RTX 3090

In Stock
By NVIDIA · Released September 2020 · MSRP $1,499.00

The RTX 3090 still delivers strong results on large language models thanks to its 24GB of VRAM, making it ideal for enthusiasts building budget Ampere-generation workstations.

Buy on Amazon ($979.99) · View Benchmarks

Specs snapshot

Key hardware metrics for AI workloads:

  • VRAM: 24GB
  • CUDA cores: 10,496
  • TDP: 350W
  • Architecture: Ampere
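If you already have the card installed, you can sanity-check this snapshot locally. A minimal sketch using PyTorch's CUDA introspection, assuming the 3090 is device 0:

```python
# Confirm the specs snapshot against the installed card with PyTorch.
import torch

props = torch.cuda.get_device_properties(0)  # assumes the 3090 is device 0
print(f"Name:         {props.name}")
print(f"VRAM:         {props.total_memory / 1024**3:.0f} GB")  # ~24 GB
print(f"SMs:          {props.multi_processor_count}")  # 82 SMs x 128 = 10,496 CUDA cores
print(f"Compute cap.: {props.major}.{props.minor}")    # 8.6 = Ampere
```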

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

Amazon (In Stock): $979.99 · Buy on Amazon


💡 Not ready to buy? Try cloud GPUs first

Test RTX 3090 performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade

AI benchmarks

All entries below are Q4 quantization with auto-generated, estimated throughput.

| Model | Quantization | Tokens/sec (est.) | VRAM used |
|---|---|---|---|
| deepseek-ai/DeepSeek-OCR | Q4 | 209.15 | 2GB |
| unsloth/Llama-3.2-1B-Instruct | Q4 | 203.52 | 1GB |
| google/gemma-3-1b-it | Q4 | 201.55 | 1GB |
| nari-labs/Dia2-2B | Q4 | 200.81 | 2GB |
| allenai/OLMo-2-0425-1B | Q4 | 200.76 | 1GB |
| Qwen/Qwen2.5-3B | Q4 | 198.27 | 2GB |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 197.60 | 1GB |
| tencent/HunyuanOCR | Q4 | 196.73 | 1GB |
| ibm-granite/granite-3.3-2b-instruct | Q4 | 196.55 | 1GB |
| deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 194.47 | 2GB |
| Qwen/Qwen2.5-3B-Instruct | Q4 | 193.79 | 2GB |
| ibm-research/PowerMoE-3b | Q4 | 193.59 | 2GB |
| meta-llama/Llama-3.2-1B-Instruct | Q4 | 191.64 | 1GB |
| meta-llama/Llama-3.2-3B | Q4 | 191.33 | 2GB |
| unsloth/gemma-3-1b-it | Q4 | 189.53 | 1GB |
| google/gemma-2-2b-it | Q4 | 186.96 | 1GB |
| apple/OpenELM-1_1B-Instruct | Q4 | 185.77 | 1GB |
| google/gemma-2b | Q4 | 185.33 | 1GB |
| google-bert/bert-base-uncased | Q4 | 184.49 | 1GB |
| WeiboAI/VibeThinker-1.5B | Q4 | 184.26 | 1GB |
| google-t5/t5-3b | Q4 | 183.09 | 2GB |
| LiquidAI/LFM2-1.2B | Q4 | 179.42 | 1GB |
| unsloth/Llama-3.2-3B-Instruct | Q4 | 178.80 | 2GB |
| inference-net/Schematron-3B | Q4 | 176.51 | 2GB |
| bigcode/starcoder2-3b | Q4 | 174.50 | 2GB |
| meta-llama/Llama-Guard-3-1B | Q4 | 173.15 | 1GB |
| meta-llama/Llama-3.2-1B | Q4 | 170.29 | 1GB |
| facebook/sam3 | Q4 | 169.50 | 1GB |
| HuggingFaceTB/SmolLM-135M | Q4 | 169.26 | 4GB |
| microsoft/Phi-4-multimodal-instruct | Q4 | 169.13 | 4GB |
| context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 168.93 | 2GB |
| bigscience/bloomz-560m | Q4 | 168.41 | 4GB |
| microsoft/Phi-3-mini-4k-instruct | Q4 | 168.28 | 4GB |
| Qwen/Qwen2-7B-Instruct | Q4 | 168.09 | 4GB |
| openai-community/gpt2 | Q4 | 167.92 | 4GB |
| microsoft/Phi-3.5-mini-instruct | Q4 | 167.77 | 4GB |
| meta-llama/Meta-Llama-3-8B-Instruct | Q4 | 167.63 | 4GB |
| meta-llama/Llama-3.2-3B-Instruct | Q4 | 167.42 | 2GB |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-8bit | Q4 | 167.39 | 4GB |
| MiniMaxAI/MiniMax-M2 | Q4 | 167.28 | 4GB |
| lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-4bit | Q4 | 167.22 | 4GB |
| Qwen/Qwen3-4B-Base | Q4 | 166.99 | 2GB |
| EleutherAI/gpt-neo-125m | Q4 | 166.69 | 4GB |
| google/embeddinggemma-300m | Q4 | 166.67 | 1GB |
| llamafactory/tiny-random-Llama-3 | Q4 | 166.46 | 4GB |
| microsoft/phi-2 | Q4 | 166.33 | 4GB |
| Tongyi-MAI/Z-Image-Turbo | Q4 | 166.28 | 4GB |
| Qwen/Qwen2.5-1.5B-Instruct | Q4 | 166.09 | 3GB |
| black-forest-labs/FLUX.1-dev | Q4 | 166.02 | 4GB |
| petals-team/StableBeluga2 | Q4 | 165.96 | 4GB |

Note: these performance figures are calculated estimates, not measured results; real-world numbers may vary. Methodology · Submit real data
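To replace an estimate with a measured number, you can time generation yourself. Below is a minimal sketch using Hugging Face transformers; the model ID is one entry from the table above (some repos require access approval), and FP16 transformers throughput will not match the Q4 figures exactly:

```python
# Time a short generation to turn an estimated tok/s into a measured one.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # example row from the table
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

inputs = tok("Explain KV caching in one paragraph.", return_tensors="pt").to("cuda")
torch.cuda.synchronize()
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tok/s")
print(f"{torch.cuda.max_memory_allocated() / 1024**3:.1f} GB peak VRAM")
```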

Model compatibility

All speeds are auto-generated estimates; verdicts compare required VRAM against the 3090's 24GB.

| Model | Quantization | Verdict | Estimated speed (tok/s) | VRAM needed |
|---|---|---|---|---|
| vikhyatk/moondream2 | Q8 | Fits comfortably | 112.41 | 7GB (have 24GB) |
| vikhyatk/moondream2 | FP16 | Fits comfortably | 53.70 | 15GB (have 24GB) |
| Qwen/Qwen3-32B | Q4 | Fits comfortably | 58.82 | 16GB (have 24GB) |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 | FP16 | Not supported | 21.12 | 137GB (have 24GB) |
| OpenPipe/Qwen3-14B-Instruct | FP16 | Not supported | 48.10 | 29GB (have 24GB) |
| openai-community/gpt2-xl | Q4 | Fits comfortably | 142.13 | 4GB (have 24GB) |
| meta-llama/Llama-Guard-3-8B | Q4 | Fits comfortably | 157.06 | 4GB (have 24GB) |
| ibm-granite/granite-3.3-2b-instruct | FP16 | Fits comfortably | 74.36 | 4GB (have 24GB) |
| microsoft/VibeVoice-1.5B | Q4 | Fits comfortably | 145.91 | 3GB (have 24GB) |
| ibm-granite/granite-docling-258M | Q4 | Fits comfortably | 149.28 | 4GB (have 24GB) |
| Qwen/Qwen3-Next-80B-A3B-Thinking-FP8 | FP16 | Not supported | 12.63 | 156GB (have 24GB) |
| bigcode/starcoder2-3b | Q4 | Fits comfortably | 174.50 | 2GB (have 24GB) |
| bigcode/starcoder2-3b | Q8 | Fits comfortably | 120.28 | 3GB (have 24GB) |
| bigcode/starcoder2-3b | FP16 | Fits comfortably | 67.78 | 6GB (have 24GB) |
| unsloth/gemma-3-1b-it | Q4 | Fits comfortably | 189.53 | 1GB (have 24GB) |
| unsloth/gemma-3-1b-it | Q8 | Fits comfortably | 133.23 | 1GB (have 24GB) |
| lmsys/vicuna-7b-v1.5 | Q4 | Fits comfortably | 158.19 | 4GB (have 24GB) |
| lmsys/vicuna-7b-v1.5 | Q8 | Fits comfortably | 114.95 | 7GB (have 24GB) |
| lmsys/vicuna-7b-v1.5 | FP16 | Fits comfortably | 58.14 | 15GB (have 24GB) |
| Qwen/Qwen2.5-3B | Q4 | Fits comfortably | 198.27 | 2GB (have 24GB) |
| Qwen/Qwen2.5-3B | FP16 | Fits comfortably | 75.48 | 6GB (have 24GB) |
| nvidia/NVIDIA-Nemotron-Nano-9B-v2 | FP16 | Fits comfortably | 43.94 | 19GB (have 24GB) |
| NousResearch/Meta-Llama-3.1-8B-Instruct | Q4 | Fits comfortably | 141.35 | 4GB (have 24GB) |
| AI-MO/Kimina-Prover-72B | FP16 | Not supported | 11.75 | 141GB (have 24GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-4bit | Q4 | Fits comfortably | 82.53 | 15GB (have 24GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-8bit | Q8 | Not supported | 62.29 | 31GB (have 24GB) |
| lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-8bit | FP16 | Not supported | 32.86 | 61GB (have 24GB) |
| GSAI-ML/LLaDA-8B-Base | Q4 | Fits comfortably | 139.81 | 4GB (have 24GB) |
| baichuan-inc/Baichuan-M2-32B | Q4 | Fits comfortably | 50.12 | 16GB (have 24GB) |
| baichuan-inc/Baichuan-M2-32B | Q8 | Not supported | 40.01 | 33GB (have 24GB) |
| WeiboAI/VibeThinker-1.5B | Q4 | Fits comfortably | 184.26 | 1GB (have 24GB) |
| WeiboAI/VibeThinker-1.5B | Q8 | Fits comfortably | 135.30 | 2GB (have 24GB) |
| WeiboAI/VibeThinker-1.5B | FP16 | Fits comfortably | 71.67 | 4GB (have 24GB) |
| Gensyn/Qwen2.5-0.5B-Instruct | FP16 | Fits comfortably | 56.86 | 11GB (have 24GB) |
| Qwen/Qwen3-Reranker-0.6B | FP16 | Fits comfortably | 56.45 | 13GB (have 24GB) |
| Qwen/Qwen2.5-32B-Instruct | Q8 | Not supported | 37.82 | 33GB (have 24GB) |
| meta-llama/Llama-3.1-8B | Q4 | Fits comfortably | 162.11 | 4GB (have 24GB) |
| meta-llama/Llama-3.1-8B | Q8 | Fits comfortably | 98.74 | 9GB (have 24GB) |
| deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct | FP16 | Fits comfortably | 62.51 | 15GB (have 24GB) |
| numind/NuExtract-1.5 | Q4 | Fits comfortably | 153.01 | 4GB (have 24GB) |
| mistralai/Mistral-7B-v0.1 | FP16 | Fits comfortably | 53.27 | 15GB (have 24GB) |
| LiquidAI/LFM2-1.2B | Q4 | Fits comfortably | 179.42 | 1GB (have 24GB) |
| meta-llama/Llama-3.3-70B-Instruct | Q4 | Not supported | 30.02 | 34GB (have 24GB) |
| meta-llama/Llama-3.3-70B-Instruct | Q8 | Not supported | 23.43 | 69GB (have 24GB) |
| google-t5/t5-3b | Q4 | Fits comfortably | 183.09 | 2GB (have 24GB) |
| rednote-hilab/dots.ocr | Q4 | Fits comfortably | 146.45 | 4GB (have 24GB) |
| meta-llama/Llama-3.1-70B-Instruct | Q4 | Not supported | 55.03 | 34GB (have 24GB) |
| zai-org/GLM-4.5-Air | Q4 | Fits comfortably | 149.90 | 4GB (have 24GB) |
| zai-org/GLM-4.5-Air | Q8 | Fits comfortably | 106.82 | 7GB (have 24GB) |
| vikhyatk/moondream2 | Q4 | Fits comfortably | 139.82 | 4GB (have 24GB) |

Note: these performance figures are calculated estimates, not measured results; real-world numbers may vary. Methodology · Submit real data
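The verdicts above follow the usual fit heuristic: weight memory (parameter count times bytes per weight for the quantization) plus runtime overhead must stay under the 3090's 24GB. The site's exact formula isn't published, so the sketch below is an approximation with an assumed 20% overhead factor for KV cache and activations:

```python
# Back-of-envelope VRAM fit check; the site's exact method may differ.
BYTES_PER_PARAM = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}

def fits(params_b: float, quant: str, vram_gb: float = 24.0) -> str:
    # params in billions -> GB of weights, plus an assumed 20% overhead
    need = params_b * BYTES_PER_PARAM[quant] * 1.2
    verdict = "Fits comfortably" if need <= vram_gb else "Not supported"
    return f"{verdict}: needs ~{need:.0f}GB of {vram_gb:.0f}GB"

print(fits(70, "Q4"))  # ~42GB -> Not supported on a single 3090
print(fits(8, "Q4"))   # ~5GB  -> Fits comfortably
```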

GPU FAQs

Data-backed answers pulled from community benchmarks, manufacturer specs, and live pricing.

What throughput can an RTX 3090 hit on 70B models?

An Ampere builder reports ~18 tok/s on Llama 3 70B Q4 with a single 3090, and ~36 tok/s after adding tensor parallelism across four cards.

Source: Reddit – /r/LocalLLaMA (mqzh3yo)
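For reference, the four-card tensor-parallel setup that report describes maps onto vLLM roughly as follows. This is a hedged sketch, not the cited builder's configuration; the model ID is a placeholder for whatever 4-bit 70B checkpoint you actually use:

```python
# Tensor parallelism across four 3090s with vLLM (a sketch, assumptions noted).
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder; use an AWQ/GPTQ repo
    quantization="awq",       # a 4-bit scheme so weights fit in 4 x 24GB
    tensor_parallel_size=4,   # shard each layer across the four cards
)
out = llm.generate(
    ["Why does tensor parallelism raise tokens/sec?"],
    SamplingParams(max_tokens=128),
)
print(out[0].outputs[0].text)
```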

How fast can RTX 3090 push Qwen 30B?

Enthusiasts routinely see ~100 tokens/sec on Qwen 30B-A3B when tuned on a single RTX 3090, making it a budget-friendly coding workhorse.

Source: Reddit – /r/LocalLLaMA (mqs2r45)

Do PCIe risers bottleneck RTX 3090 inference?

Builders using x1 risers for dual 3080/3090 rigs measured no meaningful tokens/sec loss—the main penalty is slower model swaps, not slower inference.

Source: Reddit – /r/LocalLLaMA (mr10ib4)

What are the board specs for RTX 3090?

RTX 3090 provides 24 GB of GDDR6X and draws 350 W; the Founders Edition uses a single 12-pin power connector, while board-partner cards typically use two or three 8-pin PCIe connectors. NVIDIA recommends a 750 W PSU.

Source: TechPowerUp – RTX 3090 Specs
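If you want to verify the 350 W figure on your own rig, NVIDIA's NVML Python bindings (pip install nvidia-ml-py) report live draw. A minimal monitoring sketch, assuming the 3090 is GPU index 0:

```python
# Check live power draw against the enforced limit via NVML.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)           # assumes 3090 is index 0
draw_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000  # NVML reports milliwatts
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000
print(f"Drawing {draw_w:.0f}W of a {limit_w:.0f}W limit")
pynvml.nvmlShutdown()
```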

What does pricing look like today?

Nov 2025 snapshot: Amazon at $1,099 (check current availability).

Source: Supabase price tracker snapshot – 2025-11-03

Alternative GPUs

  • RTX 4090 (24GB): explore how it stacks up for local inference workloads.
  • RTX 3080 (10GB): explore how it stacks up for local inference workloads.
  • RTX 3070 (8GB): explore how it stacks up for local inference workloads.
  • NVIDIA A6000 (48GB): explore how it stacks up for local inference workloads.
  • RX 7900 XTX (24GB): explore how it stacks up for local inference workloads.

Compare RTX 3090

  • RTX 3090 vs RTX 3080: side-by-side VRAM, throughput, efficiency, and pricing benchmarks.
  • RTX 3090 vs RTX 3070: side-by-side VRAM, throughput, efficiency, and pricing benchmarks.
  • RTX 3090 vs RX 7900 XTX: side-by-side VRAM, throughput, efficiency, and pricing benchmarks.