
apple

apple publishes 1B-parameter models focused on edge-friendly quantizations.

1 model • 1B parameters • From $799

Models

Search and sort every tracked release before opening a dedicated requirements page.

apple/OpenELM-1_1B-Instruct
1B
Minimum VRAM: Q4 · 1GB
Throughput: 607.3 tok/s
From $499 • View requirements →
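
The listed minimum is consistent with the usual back-of-the-envelope sizing: weight memory ≈ parameters × bits per weight ÷ 8, plus headroom for the KV cache and runtime buffers. A minimal sketch of that arithmetic, assuming a flat 20% overhead factor (our illustration, not the site's published methodology):

# Back-of-the-envelope VRAM estimate: quantized weights plus a flat
# overhead allowance for KV cache and runtime buffers. The 20% factor
# is an assumption for illustration, not localai.computer's methodology.
def estimate_vram_gb(params_billions: float, bits_per_weight: int,
                     overhead: float = 0.20) -> float:
    weight_gb = params_billions * bits_per_weight / 8  # 1e9 params * bits / 8 bits-per-byte
    return weight_gb * (1 + overhead)

for label, bits in [("Q4", 4), ("Q8", 8), ("FP16", 16)]:
    print(f"OpenELM-1.1B @ {label}: ~{estimate_vram_gb(1.1, bits):.1f} GB")

At Q4 this lands comfortably under the 1GB minimum shown above, and the Q8 and FP16 figures roughly track the VRAM targets in the tiers below.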
Recommended hardware
Budget • $300-$700
AMD Instinct MI250X • Pricing soon
1GB VRAM target • Q4

Edge inference, personal copilots, prototype agents.

Mid-range • $700-$2,000
RTX 4070 Ti Super • $799
1GB VRAM target • Q8

Team chatbots, offline knowledge bases, enterprise POCs.

High-end • $2,000+
RTX 4070 Ti Super • $799
2GB VRAM target • FP16

Agent orchestration, evaluation pipelines, long-context tasks.
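
To make the tiers concrete, here is a minimal loading sketch for apple/OpenELM-1_1B-Instruct with Hugging Face transformers, assuming transformers, accelerate, and bitsandbytes are installed. The Q4/Q8 labels above most likely refer to GGUF-style quants; 4-bit and 8-bit bitsandbytes loading is used here only as a stand-in, and the tier-to-precision mapping is simply our reading of the table above, not an official recipe.

# Sketch: load apple/OpenELM-1_1B-Instruct at the precision each tier targets.
# Assumes transformers + accelerate + bitsandbytes are installed; device
# placement is handled by device_map="auto". The tier-to-precision mapping
# mirrors the table above and is an assumption, not the site's recipe.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

MODEL_ID = "apple/OpenELM-1_1B-Instruct"

def load_for_tier(tier: str):
    if tier == "budget":        # Q4 target (~1GB VRAM)
        kwargs = {"quantization_config": BitsAndBytesConfig(load_in_4bit=True)}
    elif tier == "mid-range":   # Q8 target
        kwargs = {"quantization_config": BitsAndBytesConfig(load_in_8bit=True)}
    else:                       # high-end: FP16 target (~2GB VRAM)
        kwargs = {"torch_dtype": torch.float16}
    return AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        device_map="auto",
        trust_remote_code=True,  # OpenELM ships custom modeling code on the Hub
        **kwargs,
    )

model = load_for_tier("budget")
print(f"Loaded {MODEL_ID} with ~{model.get_memory_footprint() / 1e9:.1f} GB of weights")

The FP16 footprint reported by get_memory_footprint should come in around 2GB for a 1.1B-parameter model, which is roughly the high-end tier's VRAM target.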