NVIDIA H100 PCIe 80GB meets the minimum VRAM requirement for Q4 inference of codellama/CodeLlama-34b-hf. The quantization breakdown below shows how higher-precision settings affect VRAM use and throughput.
NVIDIA H100 PCIe 80GB can run codellama/CodeLlama-34b-hf with Q4 quantization. At roughly 116 tokens/second, expect excellent speed, with conversational response times under one second.
At Q4 you have 63GB of headroom (80GB available minus the 17GB needed), which is more than enough for system overhead and smooth operation. These numbers follow a simple rule of thumb, sketched below.
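The figures track the usual back-of-the-envelope estimate: weight memory ≈ parameter count × bits per weight ÷ 8, plus a few percent for runtime buffers. Here is a minimal Python sketch of that arithmetic; the 3% overhead factor is an assumption for illustration, not the calculator's exact model, so the results land near (not exactly on) the table values.

```python
PARAMS_B = 34   # CodeLlama-34b parameter count, in billions
VRAM_GB = 80    # H100 PCIe capacity

def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     overhead: float = 1.03) -> float:
    """Approximate GB needed to hold the weights at a given precision.

    overhead: assumed ~3% margin for runtime buffers (an illustrative
    guess, not the calculator's published methodology).
    """
    return params_b * bits_per_weight / 8 * overhead

for name, bits in [("Q4", 4), ("Q8", 8), ("FP16", 16)]:
    need = estimate_vram_gb(PARAMS_B, bits)
    print(f"{name}: ~{need:.1f}GB needed, ~{VRAM_GB - need:.1f}GB headroom")
```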
| Quantization | VRAM needed | VRAM available | Estimated speed | Verdict |
|---|---|---|---|---|
| Q4 | 17GB | 80GB | 116.43 tok/s | ✅ Fits comfortably |
| Q8 | 35GB | 80GB | 78.73 tok/s | ✅ Fits comfortably |
| FP16 | 70GB | 80GB | 37.72 tok/s | ✅ Fits comfortably |
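To actually try the Q4 row on this GPU, one common route is 4-bit loading through Hugging Face transformers with bitsandbytes. This is a hedged sketch, not the calculator's reference setup: the table's Q4 figures may assume a GGUF/llama.cpp-style quantization instead, and the prompt and generation settings here are purely illustrative. It assumes `transformers`, `accelerate`, and `bitsandbytes` are installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization; actual VRAM use will differ somewhat from the
# table above, which may be based on a different Q4 scheme.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "codellama/CodeLlama-34b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place the whole model on the single H100
)

# Illustrative prompt; any code-completion input works the same way.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

With 63GB of headroom at Q4, there is ample room left for the KV cache even at long context lengths, which is why the verdict column marks it as fitting comfortably.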