This page answers Q4 quantization questions for openai/gpt-oss-safeguard-20b, with explicit calculations from our model requirement dataset and throughput figures from our compatibility speed table.
The exact Q4 VRAM requirement comes from the model requirement data.
The throughput table below combines available compatibility measurements and estimates for this model, sorted by tokens per second from fastest to slowest.
Need general guidance? Review the full methodology.
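For context on the kind of calculation behind a Q4 requirement figure, here is a minimal sketch. The parameter count, effective bits per weight, and overhead allowance below are illustrative assumptions, not values from the dataset; the requirement published on this page comes from the model requirement data, not from this formula.

```python
# Rough Q4 VRAM estimate for a ~20B-parameter model.
# Assumptions (not dataset values): ~4.5 effective bits/weight for a Q4-style quant,
# plus a flat allowance for KV cache, activations, and runtime buffers.

PARAMS_B = 20.9          # assumed parameter count in billions (gpt-oss-safeguard-20b class)
BITS_PER_WEIGHT = 4.5    # assumed effective bits per weight at Q4
OVERHEAD_GB = 2.5        # assumed KV cache / activation / runtime overhead

def q4_vram_estimate_gb(params_b: float = PARAMS_B,
                        bits_per_weight: float = BITS_PER_WEIGHT,
                        overhead_gb: float = OVERHEAD_GB) -> float:
    """Return an approximate VRAM footprint in GB for a Q4-quantized model."""
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

if __name__ == "__main__":
    # Prints roughly 14 GB under the assumptions above.
    print(f"~{q4_vram_estimate_gb():.1f} GB estimated for Q4")
```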
| GPU | VRAM | Quantization | Speed (tok/s) |
|---|---|---|---|
| NVIDIA H200 SXM 141GB | 141 GB | Q4 | 406 |
| AMD Instinct MI300X | 192 GB | Q4 | 405 |
| AMD Instinct MI250X | 128 GB | Q4 | 283 |
| NVIDIA H100 SXM5 80GB | 80 GB | Q4 | 259 |
| NVIDIA H100 PCIe 80GB | 80 GB | Q4 | 180 |
| NVIDIA RTX 5090 | 32 GB | Q4 | 171 |
| NVIDIA A100 80GB SXM4 | 80 GB | Q4 | 151 |
| AMD Instinct MI210 | 64 GB | Q4 | 125 |
| NVIDIA A100 40GB PCIe | 40 GB | Q4 | 121 |
| NVIDIA RTX 6000 Ada | 48 GB | Q4 | 108 |
| NVIDIA RTX 4090 | 24 GB | Q4 | 107 |
| NVIDIA L40 | 48 GB | Q4 | 95 |
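To reproduce the ordering of the table, or to check which listed GPUs clear a given VRAM requirement, a short sketch follows. The rows are copied directly from the table above; the 14.3 GB threshold is the illustrative estimate from the earlier sketch, not a dataset value.

```python
# GPU throughput rows from the table above: (name, vram_gb, tok_per_s).
GPUS = [
    ("NVIDIA H200 SXM 141GB", 141, 406),
    ("AMD Instinct MI300X", 192, 405),
    ("AMD Instinct MI250X", 128, 283),
    ("NVIDIA H100 SXM5 80GB", 80, 259),
    ("NVIDIA H100 PCIe 80GB", 80, 180),
    ("NVIDIA RTX 5090", 32, 171),
    ("NVIDIA A100 80GB SXM4", 80, 151),
    ("AMD Instinct MI210", 64, 125),
    ("NVIDIA A100 40GB PCIe", 40, 121),
    ("NVIDIA RTX 6000 Ada", 48, 108),
    ("NVIDIA RTX 4090", 24, 107),
    ("NVIDIA L40", 48, 95),
]

ESTIMATED_Q4_VRAM_GB = 14.3  # illustrative estimate only, not a dataset value

def gpus_that_fit(requirement_gb: float = ESTIMATED_Q4_VRAM_GB):
    """Return GPUs with enough VRAM for the Q4 footprint, fastest first."""
    fits = [g for g in GPUS if g[1] >= requirement_gb]
    return sorted(fits, key=lambda g: g[2], reverse=True)

if __name__ == "__main__":
    for name, vram, tps in gpus_that_fit():
        print(f"{name}: {vram} GB, {tps} tok/s")
```

Under these assumptions every GPU in the table has headroom for the Q4 footprint, so the choice between them comes down to throughput and cost rather than raw fit.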