
Quick Answer: The RTX 3060 12GB offers 12GB of VRAM and starts around $309.99. It delivers an estimated 77 tokens/sec on meta-llama/Llama-3.2-3B-Instruct at Q4 quantization, and typically draws 170W under load.

RTX 3060 12GB

In Stock
By NVIDIA · Released 2021-02 · MSRP $329.00

This GPU offers reliable throughput for local AI workloads. Pair it with the right model quantization to hit your desired tokens/sec, and monitor prices below to catch the best deal.

Buy on Amazon - $309.99 · View Benchmarks
Specs snapshot
Key hardware metrics for AI workloads.
VRAM: 12GB
Cores: 3,584
TDP: 170W
Architecture: Ampere
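
How far 12GB goes depends mostly on parameter count times quantization bit-width. Below is a minimal sketch of that rule of thumb in Python; the flat ~20% overhead for KV cache and runtime buffers is an assumption for illustration, not a figure from this page.

```python
def estimated_vram_gb(params_billion: float, bits: int) -> float:
    """Approximate VRAM needed to run a model: weight bytes
    (params * bits / 8) plus an assumed ~20% for KV cache and buffers."""
    weight_gb = params_billion * bits / 8
    return weight_gb * 1.2

# On a 12GB RTX 3060, an 8B model fits at Q4 (~4.8GB) and Q8 (~9.6GB),
# but not at FP16 (~19.2GB):
for bits in (4, 8, 16):
    print(f"8B model @ {bits}-bit: ~{estimated_vram_gb(8, bits):.1f} GB")
```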

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service.

Amazon · In Stock · $309.99
Buy on Amazon


💡 Not ready to buy? Try cloud GPUs first

Test RTX 3060 12GB performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai - from $0.20/hr
  • RunPod - from $0.30/hr
  • Lambda Labs - enterprise-grade

AI benchmarks

All rows are auto-generated, estimated benchmarks at Q4 quantization.

Model | Quantization | Tokens/sec (estimated) | VRAM used
meta-llama/Llama-3.2-3B-Instruct | Q4 | 76.52 | 2GB
allenai/OLMo-2-0425-1B | Q4 | 76.14 | 1GB
unsloth/Llama-3.2-1B-Instruct | Q4 | 75.86 | 1GB
ibm-granite/granite-3.3-2b-instruct | Q4 | 75.25 | 1GB
nari-labs/Dia2-2B | Q4 | 74.31 | 2GB
inference-net/Schematron-3B | Q4 | 73.69 | 2GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | 72.99 | 1GB
bigcode/starcoder2-3b | Q4 | 72.15 | 2GB
unsloth/Llama-3.2-3B-Instruct | Q4 | 71.78 | 2GB
meta-llama/Llama-Guard-3-1B | Q4 | 71.38 | 1GB
LiquidAI/LFM2-1.2B | Q4 | 70.47 | 1GB
ibm-research/PowerMoE-3b | Q4 | 70.21 | 2GB
meta-llama/Llama-3.2-1B | Q4 | 70.18 | 1GB
facebook/sam3 | Q4 | 70.15 | 1GB
google/gemma-3-1b-it | Q4 | 69.46 | 1GB
deepseek-ai/DeepSeek-OCR | Q4 | 69.22 | 2GB
google/embeddinggemma-300m | Q4 | 69.13 | 1GB
WeiboAI/VibeThinker-1.5B | Q4 | 68.66 | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 67.97 | 1GB
google/gemma-2-2b-it | Q4 | 67.82 | 1GB
Qwen/Qwen2.5-3B-Instruct | Q4 | 67.78 | 2GB
apple/OpenELM-1_1B-Instruct | Q4 | 67.65 | 1GB
deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 67.25 | 2GB
unsloth/gemma-3-1b-it | Q4 | 66.77 | 1GB
google-t5/t5-3b | Q4 | 66.74 | 2GB
google/gemma-2b | Q4 | 66.22 | 1GB
Qwen/Qwen2.5-3B | Q4 | 65.62 | 2GB
meta-llama/Llama-3.2-3B | Q4 | 65.56 | 2GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 65.12 | 2GB
google-bert/bert-base-uncased | Q4 | 64.48 | 1GB
zai-org/GLM-4.6-FP8 | Q4 | 63.86 | 4GB
microsoft/Phi-4-multimodal-instruct | Q4 | 63.76 | 4GB
microsoft/Phi-3-mini-128k-instruct | Q4 | 63.75 | 4GB
IlyaGusev/saiga_llama3_8b | Q4 | 63.64 | 4GB
distilbert/distilgpt2 | Q4 | 63.52 | 4GB
deepseek-ai/DeepSeek-R1-0528 | Q4 | 63.49 | 4GB
unsloth/Meta-Llama-3.1-8B-Instruct | Q4 | 63.19 | 4GB
openai-community/gpt2 | Q4 | 63.10 | 4GB
lmstudio-community/Qwen3-4B-Thinking-2507-MLX-6bit | Q4 | 63.09 | 2GB
meta-llama/Meta-Llama-3-8B-Instruct | Q4 | 62.94 | 4GB
HuggingFaceTB/SmolLM2-135M | Q4 | 62.83 | 4GB
tencent/HunyuanOCR | Q4 | 62.71 | 1GB
petals-team/StableBeluga2 | Q4 | 62.27 | 4GB
vikhyatk/moondream2 | Q4 | 62.15 | 4GB
Qwen/Qwen2.5-Math-1.5B | Q4 | 62.02 | 3GB
microsoft/Phi-3-mini-4k-instruct | Q4 | 61.89 | 4GB
meta-llama/Llama-3.1-8B | Q4 | 61.69 | 4GB
deepseek-ai/DeepSeek-R1 | Q4 | 61.67 | 4GB
ibm-granite/granite-docling-258M | Q4 | 61.55 | 4GB
Qwen/Qwen2.5-0.5B-Instruct | Q4 | 61.42 | 3GB

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
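
To swap these estimates for numbers from your own card, you can time generation directly. Here is a sketch against Ollama's local HTTP API (assuming Ollama is running on its default port 11434 and the model tag is already pulled); a non-streamed /api/generate response includes eval_count (generated tokens) and eval_duration (nanoseconds), which together give decode throughput.

```python
import json
import urllib.request

def measure_tok_per_sec(model: str, prompt: str = "Explain KV caching in two sentences.") -> float:
    """POST a prompt to a local Ollama server and derive decode speed
    from the eval_count / eval_duration fields in its response."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["eval_count"] / body["eval_duration"] * 1e9  # ns -> tok/s

# Example (the model tag is an assumption; use whatever you have pulled):
print(f"{measure_tok_per_sec('llama3.2:3b'):.1f} tok/s")
```

A few runs with a longer prompt smooth out warm-up effects; real results can be submitted via the link above.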

Model compatibility

Model | Quantization | Verdict | Estimated speed | VRAM needed (have 12GB)
black-forest-labs/FLUX.1-dev | FP16 | Not supported | 22.16 tok/s | 16GB
openai-community/gpt2 | FP16 | Not supported | 20.76 tok/s | 15GB
meta-llama/Llama-3.1-8B-Instruct | Q4 | Fits comfortably | 54.37 tok/s | 4GB
meta-llama/Llama-3.1-8B-Instruct | Q8 | Fits comfortably | 44.03 tok/s | 9GB
openai/gpt-oss-20b | Q8 | Not supported | 21.29 tok/s | 20GB
openai/gpt-oss-20b | FP16 | Not supported | 11.28 tok/s | 41GB
google/gemma-3-1b-it | Q4 | Fits comfortably | 69.46 tok/s | 1GB
google/gemma-3-1b-it | Q8 | Fits comfortably | 51.78 tok/s | 1GB
facebook/opt-125m | Q4 | Fits comfortably | 53.19 tok/s | 4GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q8 | Fits comfortably | 51.19 tok/s | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | FP16 | Fits comfortably | 26.14 tok/s | 2GB
trl-internal-testing/tiny-Qwen2ForCausalLM-2.5 | Q8 | Fits comfortably | 36.94 tok/s | 7GB
Qwen/Qwen3-4B-Instruct-2507 | Q8 | Fits comfortably | 38.10 tok/s | 4GB
Qwen/Qwen3-4B-Instruct-2507 | FP16 | Fits comfortably | 20.40 tok/s | 9GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | Fits comfortably | 72.99 tok/s | 1GB
openai/gpt-oss-120b | Q8 | Not supported | 8.27 tok/s | 117GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q8 | Fits comfortably | 49.39 tok/s | 3GB
mistralai/Mistral-7B-Instruct-v0.2 | FP16 | Not supported | 23.44 tok/s | 15GB
Qwen/Qwen3-8B | Q4 | Fits comfortably | 60.40 tok/s | 4GB
inference-net/Schematron-3B | Q8 | Fits comfortably | 48.91 tok/s | 3GB
inference-net/Schematron-3B | FP16 | Fits comfortably | 24.51 tok/s | 6GB
deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | Q4 | Not supported | 19.95 tok/s | 16GB
meta-llama/Llama-3.2-1B | Q4 | Fits comfortably | 70.18 tok/s | 1GB
meta-llama/Llama-3.2-1B | Q8 | Fits comfortably | 50.98 tok/s | 1GB
meta-llama/Meta-Llama-3-8B | Q8 | Fits comfortably | 37.85 tok/s | 9GB
meta-llama/Meta-Llama-3-8B | FP16 | Not supported | 23.96 tok/s | 17GB
Qwen/Qwen2.5-7B | Q4 | Fits comfortably | 58.49 tok/s | 4GB
Qwen/Qwen2.5-7B | Q8 | Fits comfortably | 39.96 tok/s | 7GB
allenai/OLMo-2-0425-1B | Q4 | Fits comfortably | 76.14 tok/s | 1GB
allenai/OLMo-2-0425-1B | Q8 | Fits comfortably | 46.62 tok/s | 1GB
openai-community/gpt2-large | Q8 | Fits comfortably | 44.01 tok/s | 7GB
openai-community/gpt2-large | FP16 | Not supported | 23.83 tok/s | 15GB
Qwen/Qwen3-1.7B | Q4 | Fits comfortably | 53.15 tok/s | 4GB
Qwen/Qwen3-Reranker-0.6B | Q4 | Fits comfortably | 54.84 tok/s | 3GB
Qwen/Qwen3-Reranker-0.6B | Q8 | Fits comfortably | 39.01 tok/s | 6GB
dphn/dolphin-2.9.1-yi-1.5-34b | Q4 | Not supported | 21.90 tok/s | 17GB
Qwen/Qwen3-Reranker-0.6B | FP16 | Not supported | 20.14 tok/s | 13GB
meta-llama/Meta-Llama-3-8B-Instruct | Q8 | Fits comfortably | 36.94 tok/s | 9GB
dphn/dolphin-2.9.1-yi-1.5-34b | Q8 | Not supported | 14.85 tok/s | 35GB
dphn/dolphin-2.9.1-yi-1.5-34b | FP16 | Not supported | 7.17 tok/s | 70GB
openai/gpt-oss-20b | Q4 | Fits comfortably | 30.47 tok/s | 10GB
kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | Q8 | Fits comfortably | 43.82 tok/s | 4GB
kaitchup/Phi-3-mini-4k-instruct-gptq-4bit | FP16 | Fits comfortably | 19.98 tok/s | 9GB
Qwen/Qwen2.5-1.5B | Q4 | Fits comfortably | 53.79 tok/s | 3GB
Qwen/Qwen2.5-1.5B | FP16 | Fits (tight) | 23.42 tok/s | 11GB
unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Q8 | Fits comfortably | 38.75 tok/s | 9GB
Qwen/Qwen3-Embedding-8B | Q4 | Fits comfortably | 55.70 tok/s | 4GB
Qwen/Qwen3-14B | FP16 | Not supported | 16.36 tok/s | 29GB
Qwen/Qwen2.5-0.5B | Q4 | Fits comfortably | 59.91 tok/s | 3GB
openai/gpt-oss-120b | FP16 | Not supported | 4.67 tok/s | 235GB

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data
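
The verdicts in this table reduce to a cutoff against the card's 12GB. Below is a sketch of that classification; the "tight" threshold is inferred from the rows above (11GB of 12GB is flagged tight, 13GB is not supported), so treat the 90% cutoff as an assumption rather than the site's actual rule.

```python
def fit_verdict(required_gb: float, available_gb: float = 12.0) -> str:
    """Classify a model/quant combo the way the compatibility table does."""
    if required_gb > available_gb:
        return "Not supported"
    if required_gb >= 0.9 * available_gb:  # assumed cutoff: 11GB of 12GB reads as tight
        return "Fits (tight)"
    return "Fits comfortably"

print(fit_verdict(4))   # Llama-3.1-8B Q4    -> Fits comfortably
print(fit_verdict(11))  # Qwen2.5-1.5B FP16  -> Fits (tight)
print(fit_verdict(16))  # FLUX.1-dev FP16    -> Not supported
```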

GPU FAQs

Data-backed answers pulled from community benchmarks, manufacturer specs, and live pricing.

How fast is a single RTX 3060 on 7B Q8 models?

Even with modest specs, a 12 GB RTX 3060 can drive 7B Q8 quant models at over 60 tokens/sec—fast enough for iterative coding and agents.

Source: Reddit – /r/LocalLLaMA (l6nfptd)

What throughput does a triple-3060 rig deliver?

One builder running three RTX 3060 cards reports Gemma 3 27B Q4 at ~15 tok/sec, Mistral 24B Q4 at ~18 tok/sec, and DeepSeek R1 32B Q4 at ~20 tok/sec via Ollama.

Source: Reddit – /r/LocalLLaMA (mo6ttds)

Do scaling calculators match real-world performance?

Not always—2× RTX 3060 was projected to hit ~29 tok/sec on DeepSeek R1 32B (16K ctx), but real benchmarks landed closer to 14 tok/sec.

Source: Reddit – /r/LocalLLaMA (mq781cj)
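
The gap is easy to reproduce: a linear calculator multiplies single-card throughput by card count, while layer-split inference keeps cards waiting on each other. A toy comparison under that reading; the per-card baseline and the efficiency factor are chosen to match the numbers cited above, not measured.

```python
def scaled_estimate(per_gpu_tok_s: float, n_gpus: int, efficiency: float = 1.0) -> float:
    """Naive multi-GPU projection: linear scaling, optionally discounted
    by an efficiency factor for pipeline and transfer overhead."""
    return per_gpu_tok_s * n_gpus * efficiency

# DeepSeek R1 32B Q4 on 2x RTX 3060, per the FAQ entry above:
print(scaled_estimate(14.5, 2))        # ~29 tok/s: the calculator's projection
print(scaled_estimate(14.5, 2, 0.48))  # ~14 tok/s: closer to what was measured
```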

How does CPU-only performance compare?

A dual-Xeon workstation without GPU offload only mustered ~1.68 tok/sec on DeepSeek R1 Q4—showing why even a single 3060 is a major upgrade.

Source: Reddit – /r/LocalLLaMA (mm9ladj)

What are the specs and current prices?

RTX 3060 12 GB draws 170 W, uses an 8-pin PCIe connector, and NVIDIA recommends a 550 W PSU. As of Nov 2025 the card was around $329 on Amazon.

Source: TechPowerUp – RTX 3060 Specs

Alternative GPUs

Explore how these cards stack up for local inference workloads:

  • RTX 3070 - 8GB
  • RTX 4060 Ti 16GB - 16GB
  • RX 6800 XT - 16GB
  • RTX 3080 - 10GB
  • RTX 4070 - 12GB