
Quick Answer: The RTX 4070 Ti offers 12GB of VRAM at a $799 MSRP. It delivers approximately 117 tokens/sec on google/gemma-2b (Q4, estimated) and typically draws 285W under load.

RTX 4070 Ti

By NVIDIA · Released January 2023 · MSRP $799.00

RTX 4070 Ti is the sweet spot for 7B–13B inference. It hits solid tokens/sec without the power demands or price tag of the higher-end Ada cards.

Check Price on Amazon · View Benchmarks

Specs snapshot

Key hardware metrics for AI workloads.

  • VRAM: 12GB
  • Cores: 7,680
  • TDP: 285W
  • Architecture: Ada Lovelace
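
The VRAM and core figures above are easy to confirm on your own machine. A minimal sketch using PyTorch's standard CUDA device query (assumes a CUDA-enabled PyTorch install):

```python
# Minimal sketch: confirm the specs snapshot locally.
# Uses only standard torch.cuda APIs.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM, {props.multi_processor_count} SMs")
else:
    print("No CUDA device detected")
```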

Where to Buy

Buy directly on Amazon with fast shipping and reliable customer service: see the current price on Amazon.


💡 Not ready to buy? Try cloud GPUs first

Test RTX 4070 Ti performance in the cloud before investing in hardware. Pay by the hour with no commitment.

  • Vast.ai: from $0.20/hr
  • RunPod: from $0.30/hr
  • Lambda Labs: enterprise-grade

AI benchmarks

Model | Quantization | Tokens/sec (est.) | VRAM used
google/gemma-2b | Q4 | 116.83 tok/s | 1GB
unsloth/Llama-3.2-3B-Instruct | Q4 | 116.79 tok/s | 2GB
tencent/HunyuanOCR | Q4 | 115.40 tok/s | 1GB
deepseek-ai/DeepSeek-OCR | Q4 | 115.07 tok/s | 2GB
meta-llama/Llama-3.2-1B-Instruct | Q4 | 114.97 tok/s | 1GB
google/gemma-3-1b-it | Q4 | 114.84 tok/s | 1GB
ibm-research/PowerMoE-3b | Q4 | 114.62 tok/s | 2GB
meta-llama/Llama-3.2-3B-Instruct | Q4 | 113.85 tok/s | 2GB
Qwen/Qwen2.5-3B-Instruct | Q4 | 112.47 tok/s | 2GB
google-t5/t5-3b | Q4 | 112.19 tok/s | 2GB
LiquidAI/LFM2-1.2B | Q4 | 110.03 tok/s | 1GB
inference-net/Schematron-3B | Q4 | 109.43 tok/s | 2GB
meta-llama/Llama-3.2-1B | Q4 | 109.35 tok/s | 1GB
WeiboAI/VibeThinker-1.5B | Q4 | 109.20 tok/s | 1GB
unsloth/Llama-3.2-1B-Instruct | Q4 | 108.05 tok/s | 1GB
allenai/OLMo-2-0425-1B | Q4 | 107.93 tok/s | 1GB
facebook/sam3 | Q4 | 107.76 tok/s | 1GB
google-bert/bert-base-uncased | Q4 | 107.18 tok/s | 1GB
context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 | Q4 | 105.27 tok/s | 2GB
apple/OpenELM-1_1B-Instruct | Q4 | 105.01 tok/s | 1GB
meta-llama/Llama-Guard-3-1B | Q4 | 104.49 tok/s | 1GB
meta-llama/Llama-3.2-3B | Q4 | 104.23 tok/s | 2GB
google/gemma-2-2b-it | Q4 | 102.69 tok/s | 1GB
ibm-granite/granite-3.3-2b-instruct | Q4 | 101.66 tok/s | 1GB
deepseek-ai/deepseek-coder-1.3b-instruct | Q4 | 101.39 tok/s | 2GB
Qwen/Qwen2.5-3B | Q4 | 100.87 tok/s | 2GB
bigcode/starcoder2-3b | Q4 | 100.75 tok/s | 2GB
google/embeddinggemma-300m | Q4 | 99.84 tok/s | 1GB
TinyLlama/TinyLlama-1.1B-Chat-v1.0 | Q4 | 99.78 tok/s | 1GB
black-forest-labs/FLUX.1-dev | Q4 | 97.06 tok/s | 4GB
google/gemma-3-270m-it | Q4 | 96.98 tok/s | 4GB
NousResearch/Meta-Llama-3.1-8B-Instruct | Q4 | 96.93 tok/s | 4GB
lmstudio-community/DeepSeek-R1-0528-Qwen3-8B-MLX-8bit | Q4 | 96.71 tok/s | 4GB
bigscience/bloomz-560m | Q4 | 96.64 tok/s | 4GB
GSAI-ML/LLaDA-8B-Instruct | Q4 | 96.57 tok/s | 4GB
nari-labs/Dia2-2B | Q4 | 96.55 tok/s | 2GB
unsloth/gemma-3-1b-it | Q4 | 96.48 tok/s | 1GB
unsloth/mistral-7b-v0.3-bnb-4bit | Q4 | 96.46 tok/s | 4GB
Qwen/Qwen3-4B-Thinking-2507-FP8 | Q4 | 95.98 tok/s | 2GB
llamafactory/tiny-random-Llama-3 | Q4 | 95.78 tok/s | 4GB
deepseek-ai/DeepSeek-R1 | Q4 | 95.67 tok/s | 4GB
HuggingFaceTB/SmolLM-135M | Q4 | 95.64 tok/s | 4GB
rednote-hilab/dots.ocr | Q4 | 95.54 tok/s | 4GB
Tongyi-MAI/Z-Image-Turbo | Q4 | 95.42 tok/s | 4GB
dicta-il/dictalm2.0-instruct | Q4 | 95.33 tok/s | 4GB
skt/kogpt2-base-v2 | Q4 | 95.29 tok/s | 4GB
Qwen/Qwen3-Embedding-0.6B | Q4 | 95.24 tok/s | 3GB
meta-llama/Meta-Llama-3-8B-Instruct | Q4 | 95.08 tok/s | 4GB
meta-llama/Llama-2-7b-hf | Q4 | 94.95 tok/s | 4GB
meta-llama/Llama-Guard-3-8B | Q4 | 94.61 tok/s | 4GB

Note: All throughput figures above are auto-generated estimates, not measured results; real performance may vary. Methodology · Submit real data
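
If you want a measured number instead of an estimate, timing generation yourself is straightforward. A minimal sketch using Hugging Face transformers; the model ID, prompt, and token count are illustrative, and this times FP16 decode, so quantized backends such as llama.cpp will behave differently:

```python
# Minimal sketch: measure decode tokens/sec for one model on this GPU.
# Assumes `pip install torch transformers accelerate`. The model ID is
# illustrative; any causal LM from the table above works.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="cuda"
)

inputs = tok("Explain KV caching in one paragraph.", return_tensors="pt").to("cuda")
torch.cuda.synchronize()
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

new_tokens = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tok/s decode")
```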

Model compatibility

Model | Quantization | Verdict | Estimated speed | VRAM needed (12GB available)
lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-6bit | Q4 | Not supported | 46.67 tok/s | 15GB
lmstudio-community/Qwen3-Coder-30B-A3B-Instruct-MLX-6bit | Q8 | Not supported | 33.40 tok/s | 31GB
Qwen/Qwen2.5-72B-Instruct | FP16 | Not supported | 6.99 tok/s | 142GB
Qwen/Qwen2.5-7B-Instruct | Q4 | Fits comfortably | 88.74 tok/s | 4GB
mistralai/Mistral-Large-Instruct-2411 | Q8 | Not supported | 12.25 tok/s | 120GB
microsoft/Phi-3.5-mini-instruct | Q8 | Fits comfortably | 56.92 tok/s | 4GB
microsoft/Phi-3.5-mini-instruct | FP16 | Fits comfortably | 34.46 tok/s | 8GB
openai/gpt-oss-120b | FP16 | Not supported | 6.95 tok/s | 235GB
petals-team/StableBeluga2 | FP16 | Not supported | 33.50 tok/s | 15GB
meta-llama/Llama-3.2-1B | FP16 | Fits comfortably | 39.36 tok/s | 2GB
meta-llama/Meta-Llama-3-8B | Q4 | Fits comfortably | 83.81 tok/s | 4GB
Qwen/Qwen3-Next-80B-A3B-Instruct | Q4 | Not supported | 16.35 tok/s | 39GB
vikhyatk/moondream2 | Q8 | Fits comfortably | 66.41 tok/s | 7GB
vikhyatk/moondream2 | FP16 | Not supported | 35.49 tok/s | 15GB
petals-team/StableBeluga2 | Q4 | Fits comfortably | 92.35 tok/s | 4GB
petals-team/StableBeluga2 | Q8 | Fits comfortably | 57.52 tok/s | 7GB
meta-llama/Llama-3.2-1B | Q4 | Fits comfortably | 109.35 tok/s | 1GB
meta-llama/Llama-3.2-1B | Q8 | Fits comfortably | 73.35 tok/s | 1GB
meta-llama/Meta-Llama-3-8B | Q8 | Fits comfortably | 58.40 tok/s | 9GB
meta-llama/Meta-Llama-3-8B | FP16 | Not supported | 36.45 tok/s | 17GB
Qwen/Qwen2.5-7B | Q4 | Fits comfortably | 83.31 tok/s | 4GB
Qwen/Qwen2.5-0.5B-Instruct | FP16 | Fits (tight) | 33.09 tok/s | 11GB
Qwen/Qwen3-Next-80B-A3B-Instruct | Q8 | Not supported | 12.25 tok/s | 78GB
Qwen/Qwen3-Next-80B-A3B-Instruct | FP16 | Not supported | 7.13 tok/s | 156GB
allenai/OLMo-2-0425-1B | Q4 | Fits comfortably | 107.93 tok/s | 1GB
microsoft/Phi-3-mini-4k-instruct | Q4 | Fits comfortably | 92.66 tok/s | 4GB
microsoft/Phi-3-mini-4k-instruct | Q8 | Fits comfortably | 60.96 tok/s | 7GB
microsoft/Phi-3-mini-4k-instruct | FP16 | Not supported | 32.12 tok/s | 15GB
openai-community/gpt2-large | Q4 | Fits comfortably | 91.09 tok/s | 4GB
Qwen/Qwen3-30B-A3B-Instruct-2507 | Q4 | Not supported | 51.15 tok/s | 15GB
Qwen/Qwen3-30B-A3B-Instruct-2507 | Q8 | Not supported | 33.76 tok/s | 31GB
Qwen/Qwen3-30B-A3B-Instruct-2507 | FP16 | Not supported | 19.48 tok/s | 61GB
rednote-hilab/dots.ocr | Q4 | Fits comfortably | 95.54 tok/s | 4GB
Qwen/Qwen3-Reranker-0.6B | FP16 | Not supported | 36.95 tok/s | 13GB
meta-llama/Meta-Llama-3-8B-Instruct | Q4 | Fits comfortably | 95.08 tok/s | 4GB
mlx-community/gpt-oss-20b-MXFP4-Q8 | Q4 | Fits comfortably | 53.19 tok/s | 10GB
mlx-community/gpt-oss-20b-MXFP4-Q8 | Q8 | Not supported | 35.04 tok/s | 20GB
mlx-community/gpt-oss-20b-MXFP4-Q8 | FP16 | Not supported | 19.96 tok/s | 41GB
Qwen/Qwen2.5-1.5B | Q8 | Fits comfortably | 58.89 tok/s | 5GB
Qwen/Qwen2.5-1.5B | FP16 | Fits (tight) | 31.03 tok/s | 11GB
Qwen/Qwen2.5-14B-Instruct | FP16 | Not supported | 26.18 tok/s | 29GB
unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Q4 | Fits comfortably | 94.23 tok/s | 4GB
unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit | Q8 | Fits comfortably | 59.17 tok/s | 9GB
microsoft/phi-2 | Q4 | Fits comfortably | 93.79 tok/s | 4GB
microsoft/phi-2 | Q8 | Fits comfortably | 59.46 tok/s | 7GB
microsoft/phi-2 | FP16 | Not supported | 35.92 tok/s | 15GB
deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | Q4 | Fits comfortably | 89.53 tok/s | 4GB
deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | Q8 | Fits comfortably | 60.83 tok/s | 7GB
deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | FP16 | Not supported | 35.29 tok/s | 15GB
GSAI-ML/LLaDA-8B-Base | FP16 | Not supported | 31.59 tok/s | 17GB

Note: All verdicts and speeds above are auto-generated estimates against the RTX 4070 Ti's 12GB of VRAM; real results may vary. Methodology · Submit real data
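
The verdicts follow from simple arithmetic: bytes per weight at a given quantization times parameter count, plus overhead for activations, KV cache, and the CUDA context. A rough sketch of that rule; the bytes-per-parameter values and flat 1GB overhead are our assumptions chosen to approximate the table, not the site's exact methodology:

```python
# Rough sketch of the fit rule behind the table above. The per-parameter
# byte counts and flat 1 GB overhead are assumptions that approximate the
# table's numbers, not the site's exact methodology.
BYTES_PER_PARAM = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}
GPU_VRAM_GB = 12   # RTX 4070 Ti
OVERHEAD_GB = 1.0  # activations, KV cache, CUDA context

def verdict(params_b: float, quant: str) -> str:
    """params_b: model size in billions of parameters."""
    need = params_b * BYTES_PER_PARAM[quant] + OVERHEAD_GB
    if need > GPU_VRAM_GB:
        return f"Not supported (~{need:.0f}GB required)"
    if need > 0.85 * GPU_VRAM_GB:
        return f"Fits (tight) (~{need:.0f}GB required)"
    return f"Fits comfortably (~{need:.0f}GB required)"

# Roughly matches the table: an 8B model needs ~17GB at FP16 (no)
# but only ~5GB at Q4 (yes).
print(verdict(8, "FP16"))
print(verdict(8, "Q4"))
```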

GPU FAQs

Data-backed answers pulled from community benchmarks, manufacturer specs, and live pricing.

How does RTX 4070 Ti perform on 14B coding models?

A CUDA user benchmarking Qwen 2.5 14B Instruct Q4_K on a 4070 Ti Super reported ~72 tokens/sec—roughly 40% faster than the LocalScore baseline for that quant.

Source: Reddit – /r/LocalLLaMA (mlbgc2j)

What speeds can dual 4070 Ti Supers reach on Gemma 2?

LM Studio logs ~35 tok/s on gemma2-9b Q8_0 and ~25 tok/s on gemma-2-27b Q4_K_M with dual 4070 Ti Supers, delivering sub-0.2 s time-to-first-token for 9B workloads.

Source: Reddit – /r/LocalLLaMA (mehsra3)

Why do laptop variants struggle with QWQ?

Users on 12 GB mobile 4070 Ti rigs see QWQ stuck near 3.7 tok/s until they manually offload additional layers to the GPU—highlighting how VRAM ceilings restrict high-context models.

Source: Reddit – /r/LocalLLaMA (mjbgky4)
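
Layer offload is typically a one-line setting. A minimal sketch with llama-cpp-python, where the model path and layer count are illustrative placeholders; raise n_gpu_layers until VRAM is nearly full:

```python
# Minimal sketch: control how many transformer layers go to the GPU.
# Assumes `pip install llama-cpp-python` built with CUDA support. The
# path and layer count are illustrative, not a QWQ-specific recommendation.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwq-32b-q4_k_m.gguf",  # hypothetical local path
    n_gpu_layers=40,  # increase until VRAM is nearly full; -1 offloads all
    n_ctx=8192,       # context length; larger contexts consume more VRAM
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```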

What are the reference power specs?

RTX 4070 Ti is rated at 285 W, ships with 12 GB GDDR6X, and relies on the 16-pin 12VHPWR connector. NVIDIA’s PSU guidance is 700 W or higher.

Source: TechPowerUp – RTX 4070 Ti Specs
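
That 700 W figure is whole-system guidance and can be sanity-checked with simple addition. A back-of-envelope sketch; only the 285 W GPU TDP comes from the spec above, while the CPU and rest-of-system draws are placeholder assumptions:

```python
# Back-of-envelope PSU check. Only the 285 W GPU TDP is from the spec
# above; the other component figures are placeholder assumptions.
gpu_tdp_w = 285         # RTX 4070 Ti, per TechPowerUp
cpu_tdp_w = 150         # assumption: typical desktop CPU under load
rest_w = 75             # assumption: board, RAM, SSDs, fans
transient_factor = 1.5  # headroom for brief GPU power spikes

peak_w = (gpu_tdp_w + cpu_tdp_w + rest_w) * transient_factor
print(f"Suggested PSU: >= {peak_w:.0f} W")  # ~765 W, near NVIDIA's 700 W guidance
```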

What is the current pricing landscape?

As of Nov 2025, Amazon listed the 4070 Ti at $799 in stock.

Source: Supabase price tracker snapshot – 2025-11-03

Alternative GPUs

Explore how these cards stack up for local inference workloads:

  • RTX 4080 (16GB)
  • RTX 4070 (12GB)
  • RTX 3080 (10GB)
  • RTX 4090 (24GB)
  • RX 7900 XT (20GB)

Compare RTX 4070 Ti

Side-by-side VRAM, throughput, efficiency, and pricing benchmarks:

  • RTX 4070 Ti vs RTX 4080
  • RTX 4070 Ti vs RTX 4070
  • RTX 4070 Ti vs RX 7900 XT