Quick Answer: George Hotz runs a 6x AMD Radeon RX 7900 XTX 24GB (144GB VRAM total) configuration for AI hardware & autonomous driving work, handling models up to 70B.
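A quick back-of-the-envelope sketch of why 6x 24GB cards cover models in this range. The ~2 bytes/parameter (fp16) and ~0.5 bytes/parameter (4-bit) figures are rough rules of thumb, not measured numbers; real usage adds KV-cache and activation overhead, and multi-GPU sharding has its own per-device costs.

```python
# Rough check: total VRAM vs. approximate model memory footprint.
# Assumes ~2 bytes/param at fp16 and ~0.5 bytes/param at 4-bit quantization
# (rules of thumb; actual overhead is higher).

GPUS = 6
VRAM_PER_GPU_GB = 24
total_vram = GPUS * VRAM_PER_GPU_GB  # 144 GB

def model_footprint_gb(params_billion, bytes_per_param=2.0):
    # 1e9 params * bytes/param / 1e9 bytes-per-GB cancels out
    return params_billion * bytes_per_param

for params in (13, 70):
    fp16 = model_footprint_gb(params)
    q4 = model_footprint_gb(params, bytes_per_param=0.5)
    print(f"Llama {params}B: fp16 ~{fp16:.0f} GB, 4-bit ~{q4:.0f} GB, "
          f"fp16 fits in {total_vram} GB: {fp16 < total_vram}")
```

By this estimate a 70B model at fp16 (~140GB) just squeezes into 144GB on paper, while quantized versions leave comfortable headroom.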
Specs & Performance
| Component | Product | Price | Purchase |
|---|---|---|---|
| GPU | AMD Radeon RX 7900 XTX 24GB ×6. Using AMD GPUs to prove tinygrad works outside the NVIDIA ecosystem | $5,394 ($899 each) | View on Amazon |
| CPU | AMD EPYC 7443P. 24-core server CPU with 128 PCIe lanes | $1,450 | View on Amazon |
| MOTHERBOARD | Supermicro H12SSL-i. Single-socket EPYC board with multiple PCIe slots | $650 | View on Amazon |
| RAM | 256GB DDR4 ECC. Server-grade memory for training stability | $800 | View on Amazon |
| STORAGE | 4TB NVMe RAID array. Fast dataset storage for training | $600 | View on Amazon |
| PSU | EVGA SuperNOVA 1600 T2 ×2. Dual PSUs to power 6 GPUs | $800 ($400 each) | View on Amazon |
| COOLING | Open-air custom rack. Custom rack with industrial fans | $150 | View on Amazon |

Can it run Llama 70B? Yes. With 144GB of VRAM, this setup clears Llama 70B's roughly 40GB minimum (quantized) with room to spare, and runs Llama 13B and smaller models easily.

How much does it cost? $9,844 total. Budget alternatives: a single RTX 4090 build (~$4,200) or an RTX 4080 build (~$2,400) for smaller models.

Can it run Llama 405B? No. Llama 405B and similar 400B+ models need 200GB+ of VRAM (typically 8x A100 or H100 GPUs). This 144GB setup handles models up to 70B.
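To confirm the line-item total adds up, here is a minimal tally of the prices from the table above (the dictionary keys are descriptive labels, not product SKUs):

```python
# Sanity check: sum the build's line items from the spec table.
parts = {
    "GPU (6x RX 7900 XTX @ $899)": 6 * 899,   # $5,394
    "CPU (EPYC 7443P)": 1450,
    "Motherboard (H12SSL-i)": 650,
    "RAM (256GB DDR4 ECC)": 800,
    "Storage (4TB NVMe RAID)": 600,
    "PSU (2x EVGA 1600 T2 @ $400)": 2 * 400,  # $800
    "Cooling (open-air rack)": 150,
}
total = sum(parts.values())
print(f"Total: ${total:,}")  # → Total: $9,844
```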