meta-llama/Llama-3.2-3B speed on RTX 4090

Quantization-specific throughput and VRAM requirements for meta-llama/Llama-3.2-3B running on RTX 4090.

Speed Snapshot

Topline estimate from compatibility data:

  • Model: meta-llama/Llama-3.2-3B
  • GPU: RTX 4090
  • Q4 speed: 212 tok/s
  • Q4 VRAM required: 2 GB
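
The 2 GB Q4 figure is consistent with the standard back-of-envelope estimate: weight memory ≈ parameter count × bits per weight / 8, rounded up to whole gigabytes. The sketch below reproduces the published numbers under that assumption; localai.computer does not state its exact formula, so `estimateVramGB` and the rounding rule are illustrative.

```typescript
// Back-of-envelope VRAM estimate (an assumption, not the site's published
// formula): weight memory in GB ≈ params (billions) × bits per weight / 8,
// rounded up to whole GB to leave headroom for KV cache and activations.
function estimateVramGB(paramsBillion: number, bitsPerWeight: number): number {
  const weightsGB = (paramsBillion * bitsPerWeight) / 8;
  return Math.ceil(weightsGB);
}

console.log(estimateVramGB(3, 4));  // 2 — matches the Q4 snapshot above
console.log(estimateVramGB(3, 8));  // 3 — matches the Q8 row in the table below
console.log(estimateVramGB(3, 16)); // 6 — matches the FP16 row in the table below
```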
Data Source
Calculation and benchmark status

Speed values come from the compatibility dataset (`estimatedTokensPerSec`) and are sorted by quantization.
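As a rough illustration of that lookup, a dataset record might look like the sketch below. Only the field name `estimatedTokensPerSec` is confirmed by this page; the record shape and the quantization ordering are assumptions.

```typescript
// Hypothetical compatibility record; only `estimatedTokensPerSec` is a
// field name confirmed by this page — the rest of the shape is assumed.
interface CompatRecord {
  quantization: "Q4" | "Q8" | "FP16";
  vramRequiredGB: number;
  estimatedTokensPerSec: number;
}

// Assumed sort order: lowest-precision quantization first, matching the
// table below.
const quantOrder: string[] = ["Q4", "Q8", "FP16"];

function sortByQuantization(records: CompatRecord[]): CompatRecord[] {
  return [...records].sort(
    (a, b) => quantOrder.indexOf(a.quantization) - quantOrder.indexOf(b.quantization)
  );
}
```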

For full verdict logic and alternate GPUs, see the canonical compatibility page.


Quantization Speed Table

| Quantization | VRAM needed | VRAM available | Speed     | Verdict |
|--------------|-------------|----------------|-----------|---------|
| Q4           | 2 GB        | 24 GB          | 212 tok/s | ✅ Fits |
| Q8           | 3 GB        | 24 GB          | 145 tok/s | ✅ Fits |
| FP16         | 6 GB        | 24 GB          | 85 tok/s  | ✅ Fits |
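
The Verdict column reads as a simple capacity check. Here is a minimal sketch, assuming the verdict is just required-vs-available VRAM; the canonical compatibility page may apply additional rules (headroom, multi-GPU, offloading) beyond this comparison.

```typescript
// Minimal fits/doesn't-fit check, assuming the verdict is a plain VRAM
// comparison; the canonical page may add rules this sketch omits.
function verdict(vramNeededGB: number, vramAvailableGB: number): string {
  return vramNeededGB <= vramAvailableGB ? "✅ Fits" : "❌ Does not fit";
}

console.log(verdict(2, 24)); // "✅ Fits" — Q4 row
console.log(verdict(6, 24)); // "✅ Fits" — FP16 row
```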
  • Back to meta-llama/Llama-3.2-3B
  • Q4 requirement page
  • Full compatibility breakdown