The Apple M2 Max can run mlx-community/gpt-oss-20b-MXFP4-Q8 at Q4 quantization with room to spare: the Q4 weights need roughly 10GB of its 96GB of memory, leaving 86GB of headroom, which is sufficient for system overhead and smooth operation. At approximately 29 tokens/second, expect moderate speed, useful for batch processing. The quantization breakdown below shows how higher-precision settings affect VRAM usage and throughput.
| Quantization | VRAM needed | VRAM available | Estimated speed | Verdict |
|---|---|---|---|---|
| Q4 | 10GB | 96GB | 28.66 tok/s | ✅ Fits comfortably |
| Q8 | 20GB | 96GB | 21.99 tok/s | ✅ Fits comfortably |
| FP16 | 41GB | 96GB | 11.55 tok/s | ✅ Fits comfortably |
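
If you want to verify these estimates on your own machine, a minimal sketch using the mlx-lm package (which mlx-community models are published for) might look like the following. The prompt and `max_tokens` values here are arbitrary placeholders; `verbose=True` makes mlx-lm print its own tokens-per-second measurement, which is the number to compare against the "Estimated speed" column.

```python
# Minimal throughput check with mlx-lm (assumes `pip install mlx-lm`
# and enough free memory for the model being loaded).
from mlx_lm import load, generate

# Repo name taken from this page; swap in a Q4-quantized variant
# to reproduce the Q4 row of the table above.
model, tokenizer = load("mlx-community/gpt-oss-20b-MXFP4-Q8")

prompt = "Explain the trade-off between quantization and accuracy."

# verbose=True prints prompt and generation speed (tokens/sec)
# alongside the generated text.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```

To produce the other precision variants yourself rather than downloading them, mlx-lm also ships a conversion utility with quantization flags (e.g. `python -m mlx_lm.convert --hf-path <repo> -q --q-bits 4`), though mlx-community repos usually publish pre-quantized weights you can load directly.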