localai.computer
© 2025 localai.computer. Hardware recommendations for running AI models locally.


Qwen/Qwen3.5-397B-A17B

888GB VRAM (FP16)
397B parameters • By Qwen • Released 2026-02 • 4,096 token context

Minimum VRAM

888GB

FP16 (full model) • Q4 option ≈ 222GB

Best Performance

Benchmark data incoming.

Most Affordable

Retail pricing data pending.

Full-model (FP16) requirements are shown by default. Quantized builds like Q4 trade accuracy for lower VRAM usage.

Estimated speeds by GPU

Estimated Q4 throughput for Qwen/Qwen3.5-397B-A17B:

  • AMD Instinct MI300X: ~90 tok/s
  • NVIDIA H200 SXM 141GB: ~84 tok/s
  • NVIDIA H100 SXM5 80GB: ~60 tok/s
  • AMD Instinct MI250X: ~58 tok/s
  • RTX 5090: ~39 tok/s

Compatible GPUs

Filter by quantization, price, and VRAM to compare performance estimates.

ℹ️Speeds are estimates based on hardware specs. Actual performance depends on software configuration. Learn more

Showing FP16 compatibility. Switch tabs to explore other quantizations.

GPU • Speed • VRAM Requirement • Typical price
No GPUs with FP16 data match these filters yet. Try another quantization or adjust filters.
Recommended Multi-GPU Stacks
These options are estimated from VRAM requirements and pooling efficiency. Validate with your own prompts before purchasing.
This model is likely too large for practical consumer multi-GPU setups. Consider a smaller quantization, smaller model family, or datacenter-grade hardware.

We could not derive a stable consumer multi-GPU configuration from current compatibility and pricing data.
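As a rough sketch of how a multi-GPU stack could be sized (the pooling-efficiency factor here is an assumption for illustration, not this site's methodology), the number of identical GPUs needed to pool enough VRAM can be estimated as:

```python
import math

def gpus_needed(required_gb: float, per_gpu_gb: float, pooling_eff: float = 0.9) -> int:
    """Estimate how many identical GPUs are needed to pool `required_gb` of VRAM.

    `pooling_eff` (assumed ~0.9) discounts usable memory for the duplication
    and activation overhead incurred when sharding a model across devices.
    """
    usable = per_gpu_gb * pooling_eff
    return math.ceil(required_gb / usable)

# Q4 build (222GB) on 192GB MI300X-class cards:
print(gpus_needed(222, 192))  # → 2
# FP16 build (888GB) on the same cards:
print(gpus_needed(888, 192))  # → 6
```

Even under this optimistic sketch, the FP16 build lands well outside consumer territory, which matches the note above.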

Don't see your GPU? View all compatible hardware →
Best GPU Options for Qwen/Qwen3.5-397B-A17B

Qwen/Qwen3.5-397B-A17B contains 397B parameters and requires 222GB of VRAM at Q4 - choose the best GPU for your needs

Minimum (Budget)

AMD Instinct MI300X
VRAM: 192GB
Price: retail data pending
View on Amazon

For Better Performance

Compare GPU options to find the best fit for your needs.

Browse All GPUs • Compare Options
Faster inference speed
Run larger models

Detailed Specifications

Hardware requirements and model sizes at a glance.

Technical details

Parameters
397,000,000,000 (397B)
Architecture
qwen3_5_moe
Developer
Qwen
Released
February 2026
Context window
4,096 tokens

Quantization support

Q4
222GB VRAM required • 222GB download
Q8
444GB VRAM required • 444GB download
FP16
888GB VRAM required • 888GB download
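The per-quantization figures above follow a simple rule of thumb: parameter count × bits per weight, plus runtime overhead. A minimal sketch, where the ~12% overhead factor is an assumption chosen to approximate this page's numbers:

```python
def est_vram_gb(n_params: float, bits_per_weight: float, overhead: float = 1.12) -> float:
    """Rough VRAM estimate in GB: raw weight footprint plus runtime overhead
    (KV cache, activations). The 1.12 overhead factor is an assumption."""
    return n_params * bits_per_weight / 8 / 1e9 * overhead

for name, bits in [("Q4", 4), ("Q8", 8), ("FP16", 16)]:
    print(name, round(est_vram_gb(397e9, bits)))  # roughly 222 / 444 / 888 GB
```

Real requirements also scale with context length and batch size, so treat these as floor estimates.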

Hardware Requirements

Component  | Minimum      | Recommended  | Optimal
VRAM       | 222GB (Q4)   | 444GB (Q8)   | 888GB (FP16)
RAM        | 32GB         | 64GB         | 64GB
Disk       | 50GB         | 100GB        | -
Model size | 222GB (Q4)   | 444GB (Q8)   | 888GB (FP16)
CPU        | Modern CPU (Ryzen 5/Intel i5 or better) for all tiers

Note: Performance estimates are calculated. Real results may vary. Methodology · Submit real data


Frequently Asked Questions

Common questions about running Qwen/Qwen3.5-397B-A17B locally

What should I know before running Qwen/Qwen3.5-397B-A17B?

Even at Q4, this model needs roughly 222GB of VRAM, which puts it beyond any single consumer GPU; plan for datacenter-grade accelerators or a multi-GPU pool. Use the hardware guidance on this page to choose the right quantization tier for your build.

How do I deploy this model locally?

Use runtimes like llama.cpp, text-generation-webui, or vLLM. Download the quantized weights from Hugging Face, ensure you have enough VRAM for your target quantization, and launch with GPU acceleration (CUDA/ROCm/Metal).
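As an illustration, a llama.cpp launch for a Q4 GGUF build might be assembled like this (the model filename is hypothetical; `-m`, `-ngl`, and `-c` are standard llama.cpp `llama-server` flags):

```python
def llama_server_cmd(model_path: str, n_gpu_layers: int = 99, ctx: int = 4096) -> list[str]:
    """Build a llama.cpp `llama-server` invocation.

    -ngl set to at least the model's layer count offloads all layers to GPU;
    -c sets the context window (this model supports 4,096 tokens).
    """
    return ["llama-server", "-m", model_path, "-ngl", str(n_gpu_layers), "-c", str(ctx)]

# Hypothetical filename, for illustration only:
cmd = llama_server_cmd("qwen3.5-397b-a17b-q4_k_m.gguf")
print(" ".join(cmd))
```

For multi-GPU pools, llama.cpp splits layers across visible devices by default; vLLM instead uses tensor parallelism, so check each runtime's docs for its sharding flags.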

Which quantization should I choose?

Start with Q4 for wide GPU compatibility. Upgrade to Q8 if you have spare VRAM and want extra quality. FP16 delivers the highest fidelity but demands workstation or multi-GPU setups.

What is the difference between Q4, Q4_K_M, Q5_K_M, and Q8 quantization for Qwen/Qwen3.5-397B-A17B?

Q4_K_M and Q5_K_M are GGUF quantization formats that balance quality and VRAM usage. Q4_K_M uses ~222GB VRAM with good quality retention. Q5_K_M uses slightly more VRAM but preserves more model accuracy. Q8 (~444GB) offers near-FP16 quality. Standard Q4 is the most memory-efficient option for Qwen/Qwen3.5-397B-A17B.

Where can I download Qwen/Qwen3.5-397B-A17B?

Official weights are available via Hugging Face. Quantized builds (Q4, Q8) can be loaded into runtimes like llama.cpp, text-generation-webui, or vLLM. Always verify the publisher before downloading.


More from Qwen

View all →

Explore other AI models developed by Qwen.

Related models

Qwen/Qwen3-235B-A22B235B params
Qwen/Qwen3-Next-80B-A3B-Instruct80B params
Qwen/Qwen3-Next-80B-A3B-Thinking-FP880B params

Compare models

See how Qwen/Qwen3.5-397B-A17B compares to other popular models.

All comparisons →Qwen/Qwen3.5-397B-A17B vs others