localai.computer
© 2025 localai.computer. Hardware recommendations for running AI models locally.



Qwen/Qwen3-235B-A22B

460GB VRAM (FP16)
235B parameters • By Qwen • Released 2025-11 • 8,192 token context

Minimum VRAM

460GB

FP16 (full model) • Q4 option ≈ 115GB

Best Performance

AMD Instinct MI300X

~97 tok/s • Q4

Most Affordable

Apple M3 Max

Q4 • ~7 tok/s • From $3,999

Full-model (FP16) requirements are shown by default. Quantized builds like Q4 trade accuracy for lower VRAM usage.


Compatible GPUs

Filter by quantization, price, and VRAM to compare performance estimates.

ℹ️Speeds are estimates based on hardware specs. Actual performance depends on software configuration. Learn more

Showing Q4 compatibility. Switch tabs to explore other quantizations.

GPU              Speed (estimated)   VRAM requirement              Typical price
Apple M2 Ultra   ~13 tok/s (Q4)      115GB used / 192GB on card    $5,999
Apple M3 Max     ~7 tok/s (Q4)       115GB used / 128GB on card    $3,999
Don’t see your GPU? View all compatible hardware →

Detailed Specifications

Hardware requirements and model sizes at a glance.

Technical details

Parameters
235,000,000,000 (235B)
Architecture
Transformer
Developer
Qwen
Released
November 2025
Context window
8,192 tokens

Quantization support

Q4
115GB VRAM required • 115GB download
Q8
230GB VRAM required • 230GB download
FP16
460GB VRAM required • 460GB download

Hardware Requirements

Component    Minimum       Recommended   Optimal
VRAM         115GB (Q4)    230GB (Q8)    460GB (FP16)
RAM          16GB          32GB          64GB
Disk         50GB          100GB         -
Model size   115GB (Q4)    230GB (Q8)    460GB (FP16)
CPU          Modern CPU (Ryzen 5 / Intel i5 or better) across all tiers

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data


Frequently Asked Questions

Common questions about running Qwen/Qwen3-235B-A22B locally

What should I know before running Qwen/Qwen3-235B-A22B?

At 235B parameters, this model demands high-memory hardware even at Q4 (~115GB VRAM), so it is out of reach for single consumer GPUs. Use the hardware guidance below to choose the quantization tier your build can actually hold.

How do I deploy this model locally?

Use runtimes like llama.cpp, text-generation-webui, or vLLM. Download the quantized weights from Hugging Face, ensure you have enough VRAM for your target quantization, and launch with GPU acceleration (CUDA/ROCm/Metal).
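A hedged sketch of the llama.cpp path, assuming a GGUF build of this model is published on Hugging Face — the repository and file names below are illustrative placeholders, so check the actual repo listing before downloading:

```shell
# Fetch a Q4 quantized build (repo/file names are hypothetical examples)
huggingface-cli download Qwen/Qwen3-235B-A22B-GGUF \
  qwen3-235b-a22b-q4_k_m.gguf --local-dir models/

# Serve it with llama.cpp: offload all layers to the GPU (-ngl 99)
# and match the model's 8,192-token context window (-c 8192)
llama-server -m models/qwen3-235b-a22b-q4_k_m.gguf -ngl 99 -c 8192 --port 8080
```

The same GGUF file loads in text-generation-webui; vLLM instead serves the original (unquantized or FP8) Hugging Face weights and needs VRAM sized accordingly.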

Which quantization should I choose?

Start with Q4 for wide GPU compatibility. Upgrade to Q8 if you have spare VRAM and want extra quality. FP16 delivers the highest fidelity but demands workstation or multi-GPU setups.

What is the difference between Q4, Q4_K_M, Q5_K_M, and Q8 quantization for Qwen/Qwen3-235B-A22B?

Q4_K_M and Q5_K_M are GGUF quantization formats that balance quality and VRAM usage. Q4_K_M uses ~115GB VRAM with good quality retention. Q5_K_M uses slightly more VRAM but preserves more model accuracy. Q8 (~230GB) offers near-FP16 quality. Standard Q4 is the most memory-efficient option for Qwen/Qwen3-235B-A22B.

Where can I download Qwen/Qwen3-235B-A22B?

Official weights are available via Hugging Face. Quantized builds (Q4, Q8) can be loaded into runtimes like llama.cpp, text-generation-webui, or vLLM. Always verify the publisher before downloading.


Related models

openai/gpt-oss-120b120B params
RedHatAI/Llama-3.2-90B-Vision-Instruct-FP8-dynamic90B params
Qwen/Qwen3-Next-80B-A3B-Instruct80B params