This page answers Q4 quantization questions for TinyLlama/TinyLlama-1.1B-Chat-v1.0, with explicit calculations from our model requirement dataset and a compatibility speed table.
The exact Q4 VRAM requirement is taken from the model requirement data.
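As a rough illustration of how a Q4 figure is typically derived, the sketch below applies the common rule of thumb (parameter count × bits per weight, plus a runtime allowance). The bits-per-weight and overhead values here are assumptions for illustration, not the dataset's exact numbers.

```python
# Rule-of-thumb Q4 VRAM estimate for TinyLlama-1.1B (illustrative, not the dataset figure).
PARAMS = 1.1e9          # approximate parameter count
BITS_PER_WEIGHT = 4.5   # assumed: Q4-style quants average slightly above 4 bits/weight
OVERHEAD_GB = 0.5       # assumed allowance for KV cache, activations, runtime buffers

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9
total_gb = weights_gb + OVERHEAD_GB
print(f"Q4 weights: {weights_gb:.2f} GB, estimated total VRAM: {total_gb:.2f} GB")
```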
The throughput data below uses available compatibility measurements and estimates for this model, sorted by tokens per second; a sketch for filtering these rows programmatically follows the table.
Need general guidance? Review the full methodology.
| GPU | VRAM | Quantization | Speed (tok/s) |
|---|---|---|---|
| AMD Instinct MI300X | 192 GB | Q4 | 964 |
| NVIDIA H200 SXM 141GB | 141 GB | Q4 | 900 |
| NVIDIA H100 SXM5 80GB | 80 GB | Q4 | 623 |
| AMD Instinct MI250X | 128 GB | Q4 | 525 |
| NVIDIA RTX 5090 | 32 GB | Q4 | 395 |
| NVIDIA H100 PCIe 80GB | 80 GB | Q4 | 393 |
| NVIDIA A100 80GB SXM4 | 80 GB | Q4 | 336 |
| AMD Instinct MI210 | 64 GB | Q4 | 298 |
| NVIDIA A100 40GB PCIe | 40 GB | Q4 | 264 |
| NVIDIA RTX 4090 | 24 GB | Q4 | 225 |
| NVIDIA RTX 6000 Ada | 48 GB | Q4 | 217 |
| NVIDIA RTX 5080 | 16 GB | Q4 | 207 |
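For programmatic use, the same rows can be filtered against a throughput target. A minimal Python sketch, with the rows transcribed from the table above; the function name and the 400 tok/s threshold are illustrative choices, not part of the dataset.

```python
# Rows transcribed from the table above: (GPU, VRAM in GB, Q4 throughput in tok/s).
ROWS = [
    ("AMD Instinct MI300X", 192, 964),
    ("NVIDIA H200 SXM 141GB", 141, 900),
    ("NVIDIA H100 SXM5 80GB", 80, 623),
    ("AMD Instinct MI250X", 128, 525),
    ("NVIDIA RTX 5090", 32, 395),
    ("NVIDIA H100 PCIe 80GB", 80, 393),
    ("NVIDIA A100 80GB SXM4", 80, 336),
    ("AMD Instinct MI210", 64, 298),
    ("NVIDIA A100 40GB PCIe", 40, 264),
    ("NVIDIA RTX 4090", 24, 225),
    ("NVIDIA RTX 6000 Ada", 48, 217),
    ("NVIDIA RTX 5080", 16, 207),
]

def gpus_meeting(min_tok_s: float) -> list[str]:
    """Return GPUs from the table that meet a minimum Q4 throughput, fastest first."""
    return [gpu for gpu, _, tok_s in sorted(ROWS, key=lambda r: r[2], reverse=True)
            if tok_s >= min_tok_s]

# Example: GPUs delivering at least 400 tok/s on this model at Q4.
print(gpus_meeting(400))
# ['AMD Instinct MI300X', 'NVIDIA H200 SXM 141GB', 'NVIDIA H100 SXM5 80GB', 'AMD Instinct MI250X']
```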