
TinyLlama/TinyLlama-1.1B-Chat-v1.0 speed on NVIDIA A100 40GB PCIe

Quantization-specific throughput and VRAM requirements for TinyLlama/TinyLlama-1.1B-Chat-v1.0 running on NVIDIA A100 40GB PCIe.

Speed Snapshot
Topline estimate from compatibility data

  • Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • GPU: NVIDIA A100 40GB PCIe
  • Q4 speed: 264 tok/s
  • Q4 VRAM required: 1GB

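These VRAM figures line up with the usual weights-only rule of thumb (bytes per parameter × parameter count), rounded up. That is a common estimate, not necessarily the exact formula behind this page's numbers, which may also account for runtime overhead. A quick check for a 1.1B-parameter model:

```ts
// Back-of-the-envelope weight sizes for a 1.1B-parameter model. This is a
// common rule of thumb, not necessarily how this page computes its figures
// (which round up and may include KV cache / runtime overhead).
const PARAMS = 1.1e9;
const BYTES_PER_PARAM = { FP16: 2, Q8: 1, Q4: 0.5 } as const;

for (const [quant, bytes] of Object.entries(BYTES_PER_PARAM)) {
  const gb = (PARAMS * bytes) / 1e9;
  // Prints roughly: FP16 ~2.2 GB, Q8 ~1.1 GB, Q4 ~0.6 GB of weights.
  console.log(`${quant}: ~${gb.toFixed(1)} GB of weights`);
}
```
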
Data Source
Calculation and benchmark status

Speed values come from the `estimatedTokensPerSec` field of the compatibility dataset and are listed sorted by quantization.
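
As a rough illustration of how a table like the one below can be assembled from that dataset, the sketch here sorts one model/GPU pairing's rows by quantization and reads `estimatedTokensPerSec`. Apart from that field name, the row shape and helper names are hypothetical, not the site's actual schema:

```ts
// Hypothetical row shape: only `estimatedTokensPerSec` is named in the text;
// the other fields and helpers are assumptions for illustration.
interface CompatibilityRow {
  quantization: "Q4" | "Q8" | "FP16";
  estimatedTokensPerSec: number;
  vramRequiredGB: number;
}

// Display order used on this page: lowest-precision quantization first.
const QUANT_ORDER: Record<CompatibilityRow["quantization"], number> = {
  Q4: 0,
  Q8: 1,
  FP16: 2,
};

// Sort one model/GPU pairing's rows by quantization for the speed table.
function sortByQuantization(rows: CompatibilityRow[]): CompatibilityRow[] {
  return [...rows].sort(
    (a, b) => QUANT_ORDER[a.quantization] - QUANT_ORDER[b.quantization],
  );
}

// Example rows matching the table on this page.
const rows: CompatibilityRow[] = [
  { quantization: "FP16", estimatedTokensPerSec: 111, vramRequiredGB: 2 },
  { quantization: "Q4", estimatedTokensPerSec: 264, vramRequiredGB: 1 },
  { quantization: "Q8", estimatedTokensPerSec: 180, vramRequiredGB: 1 },
];
console.log(
  sortByQuantization(rows).map((r) => `${r.quantization}: ${r.estimatedTokensPerSec} tok/s`),
);
```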

For full verdict logic and alternate GPUs, see the canonical compatibility page.


Quantization Speed Table

Quantization | VRAM needed | VRAM available | Speed     | Verdict
Q4           | 1GB         | 40GB           | 264 tok/s | ✅ Fits
Q8           | 1GB         | 40GB           | 180 tok/s | ✅ Fits
FP16         | 2GB         | 40GB           | 111 tok/s | ✅ Fits
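
The verdict column follows the simple check visible in these rows: a quantization fits when its VRAM requirement is at or below the GPU's VRAM. Below is a minimal sketch of that check, assuming the full verdict logic on the compatibility page reduces to this comparison (it may weigh additional factors):

```ts
// Minimal fits-check implied by the table above; the "Does not fit" label and
// the reduction to a single VRAM comparison are assumptions.
interface QuantRow {
  quantization: string;
  vramRequiredGB: number;
  estimatedTokensPerSec: number;
}

const A100_VRAM_GB = 40; // NVIDIA A100 40GB PCIe

function verdict(row: QuantRow, availableGB: number = A100_VRAM_GB): string {
  return row.vramRequiredGB <= availableGB ? "✅ Fits" : "Does not fit";
}

// Example: Q4 needs ~1GB, so it fits with plenty of headroom on 40GB.
console.log(verdict({ quantization: "Q4", vramRequiredGB: 1, estimatedTokensPerSec: 264 }));
```
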
  • Back to TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • Q4 requirement page
  • Full compatibility breakdown