Quantization-specific throughput and VRAM requirements for context-labs/meta-llama-Llama-3.2-3B-Instruct-FP16 running on NVIDIA A100 80GB SXM4.
Speed values come from the compatibility dataset (`estimatedTokensPerSec`); rows are ordered by quantization level, from smallest to largest footprint.
For full verdict logic and alternate GPUs, see the canonical compatibility page.
| Quantization | VRAM needed | VRAM available | Speed | Verdict |
|---|---|---|---|---|
| Q4 | 2GB | 80GB | 379 tok/s | ✅ Fits |
| Q8 | 3GB | 80GB | 254 tok/s | ✅ Fits |
| FP16 | 6GB | 80GB | 122 tok/s | ✅ Fits |
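The "VRAM needed" column tracks a simple rule of thumb: weight memory is roughly parameter count times bytes per parameter, so halving the bit width halves the footprint. A minimal sketch of that weights-only estimate (the helper name and the flat formula are illustrative, not from the dataset; real usage adds KV-cache and runtime overhead, which is why the Q4 row shows 2GB rather than 1.5GB):

```python
def estimate_weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weights-only VRAM estimate in GB: parameters x bytes per parameter.

    params_billion: model size in billions of parameters (3 for Llama-3.2-3B).
    bits_per_weight: 16 for FP16, 8 for Q8, 4 for Q4.
    """
    return params_billion * bits_per_weight / 8


# Llama-3.2-3B at each quantization level in the table above:
for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"{name}: ~{estimate_weight_vram_gb(3, bits):.1f} GB")
```

Against an 80GB A100, even the FP16 estimate leaves ample headroom, consistent with the ✅ verdicts in every row.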