Quantization-specific throughput and VRAM requirements for meta-llama/Llama-3.1-8B running on NVIDIA A100 40GB PCIe.
Speed values come from the compatibility dataset (`estimatedTokensPerSec`); rows are ordered by quantization level, from most to least compressed.
For full verdict logic and alternate GPUs, see the canonical compatibility page.
| Quantization | VRAM needed | VRAM available | Speed | Verdict |
|---|---|---|---|---|
| Q4 | 4GB | 40GB | 206 tok/s | ✅ Fits |
| Q8 | 9GB | 40GB | 169 tok/s | ✅ Fits |
| FP16 | 17GB | 40GB | 78 tok/s | ✅ Fits |
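The VRAM figures above follow the usual pattern of weight size plus a fixed runtime overhead. A minimal sketch of that heuristic is below; the `overhead_gb` value and the `estimate_vram_gb` helper are illustrative assumptions, not the formula used to produce the table (those figures come from the compatibility dataset and may include quantization-specific overheads).

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 1.0) -> float:
    """Rough VRAM estimate: weight memory plus a fixed overhead
    (CUDA context, activations, KV cache). Heuristic only."""
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

# Llama-3.1-8B (~8B parameters) at the three quantizations in the table.
for name, bits in [("Q4", 4), ("Q8", 8), ("FP16", 16)]:
    print(f"{name}: ~{estimate_vram_gb(8, bits):.0f} GB")
```

With a 1 GB overhead assumption this lands close to the Q8 (9 GB) and FP16 (17 GB) rows; heavily quantized formats such as Q4 often carry less overhead in practice, which is why the measured 4 GB figure sits slightly below the naive estimate.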