Quick Answer: The Intel Arc B580 offers 12GB of VRAM and launched at a $249 MSRP (see current market pricing below). It delivers an estimated 88 tokens/sec on deepseek-ai/DeepSeek-OCR-2 at Q4 quantization and typically draws 190W under load.
This GPU offers reliable throughput for local AI workloads. Pick a model quantization that fits its 12GB of VRAM to hit your target tokens/sec, and monitor the prices below to catch the best deal.
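As a rough rule of thumb, a model's weight footprint is its parameter count times the bytes per parameter for the chosen quantization, plus overhead for the KV cache and runtime buffers. The sketch below shows how to estimate whether a model fits in the B580's 12GB; the per-quantization byte counts and the 20% overhead factor are assumptions for illustration, not measured values.

```python
# Rough VRAM estimator: weights + ~20% overhead for KV cache and buffers.
# Bytes-per-parameter figures and the overhead factor are assumptions.
BYTES_PER_PARAM = {"Q4": 0.5, "Q8": 1.0, "FP16": 2.0}

def estimated_vram_gb(params_billion: float, quant: str, overhead: float = 1.2) -> float:
    """Approximate VRAM needed to load a model at the given quantization."""
    weight_gb = params_billion * BYTES_PER_PARAM[quant]
    return weight_gb * overhead

def fits_on_b580(params_billion: float, quant: str, vram_gb: float = 12.0) -> bool:
    """True if the estimate fits within a 12GB card."""
    return estimated_vram_gb(params_billion, quant) <= vram_gb

# Example: a 7B model at Q4 needs roughly 7 * 0.5 * 1.2 = 4.2GB -> fits.
print(fits_on_b580(7, "Q4"))    # True
print(fits_on_b580(7, "FP16"))  # False: ~16.8GB exceeds 12GB
```

This matches the tables below: 7B-class models fit comfortably at Q4 and Q8 but exceed 12GB at FP16.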
Buy directly on Amazon with fast shipping and reliable customer service.
💡 Not ready to buy? Try cloud GPUs first
Test Intel Arc B580 performance in the cloud before investing in hardware. Pay by the hour with no commitment.
| Model | Quantization | Tokens/sec (estimated) | VRAM used |
|---|---|---|---|
| deepseek-ai/DeepSeek-OCR-2 | Q4 | 87.61 | 2GB |
| Qwen/Qwen3-ASR-1.7B | Q4 | 81.24 | 2GB |
| zai-org/GLM-OCR | Q4 | 77.49 | 4GB |
| moonshotai/Kimi-K2.5 | Q4 | 76.29 | 4GB |
| nvidia/personaplex-7b-v1 | Q4 | 72.14 | 4GB |
| deepseek-ai/DeepSeek-OCR-2 | Q8 | 66.93 | 4GB |
| Qwen/Qwen3-ASR-1.7B | Q8 | 64.83 | 3GB |
| moonshotai/Kimi-K2.5 | Q8 | 55.87 | 8GB |
| zai-org/GLM-OCR | Q8 | 55.69 | 8GB |
| nvidia/personaplex-7b-v1 | Q8 | 51.30 | 8GB |
| Qwen/Qwen3-ASR-1.7B | FP16 | 36.60 | 6GB |
| deepseek-ai/DeepSeek-OCR-2 | FP16 | 35.14 | 8GB |
| moonshotai/Kimi-K2.5 | FP16 | 28.74 | 16GB |
| nvidia/personaplex-7b-v1 | FP16 | 28.52 | 16GB |
| zai-org/GLM-OCR | FP16 | 27.87 | 16GB |
| zai-org/GLM-4.7-Flash | Q4 | 24.80 | 18GB |
| zai-org/GLM-4.7-Flash | Q8 | 16.97 | 35GB |
| Qwen/Qwen3-Coder-Next | Q4 | 14.28 | 45GB |
| zai-org/GLM-4.7-Flash | FP16 | 10.97 | 70GB |
| Qwen/Qwen3-Coder-Next | Q8 | 10.53 | 90GB |
| stepfun-ai/Step-3.5-Flash | Q4 | 9.40 | 112GB |
| stepfun-ai/Step-3.5-Flash | Q8 | 6.28 | 223GB |
| Qwen/Qwen3-Coder-Next | FP16 | 5.57 | 179GB |
| stepfun-ai/Step-3.5-Flash | FP16 | 3.65 | 446GB |
Note: These figures are auto-generated estimates, not measured benchmarks; real-world results may vary. Methodology · Submit real data
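For memory-bound autoregressive decoding, a common first-order estimate is tokens/sec ≈ effective memory bandwidth divided by the bytes streamed per token (roughly the model's weight footprint). The sketch below uses the B580's rated 456 GB/s bandwidth with an assumed efficiency factor; the factor and helper name are illustrative and not this page's actual methodology.

```python
# First-order decode-speed estimate for memory-bound inference:
# each generated token streams (roughly) all model weights once.
B580_BANDWIDTH_GBS = 456.0  # Arc B580 rated memory bandwidth

def estimated_tok_per_sec(model_gb: float, efficiency: float = 0.7) -> float:
    """tokens/sec ~= efficiency * bandwidth / bytes-per-token.

    `efficiency` folds in cache behavior, kernel overhead, and scheduling;
    0.7 is an assumed value, not a measured one.
    """
    return efficiency * B580_BANDWIDTH_GBS / model_gb

# Example: a ~4GB Q4 model -> ~80 tok/s, in the ballpark of the table above.
print(round(estimated_tok_per_sec(4.0), 1))
```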
| Model | Quantization | Verdict | Estimated speed | VRAM needed |
|---|---|---|---|---|
| nvidia/personaplex-7b-v1 | Q4 | Fits comfortably | 72.14 tok/s | 4GB (have 12GB) |
| moonshotai/Kimi-K2.5 | Q4 | Fits comfortably | 76.29 tok/s | 4GB (have 12GB) |
| Qwen/Qwen3-Coder-Next | Q4 | Not supported | 14.28 tok/s | 45GB (have 12GB) |
| Qwen/Qwen3-ASR-1.7B | Q4 | Fits comfortably | 81.24 tok/s | 2GB (have 12GB) |
| stepfun-ai/Step-3.5-Flash | Q4 | Not supported | 9.40 tok/s | 112GB (have 12GB) |
| deepseek-ai/DeepSeek-OCR-2 | Q4 | Fits comfortably | 87.61 tok/s | 2GB (have 12GB) |
| zai-org/GLM-4.7-Flash | Q4 | Not supported | 24.80 tok/s | 18GB (have 12GB) |
| zai-org/GLM-OCR | Q4 | Fits comfortably | 77.49 tok/s | 4GB (have 12GB) |
| nvidia/personaplex-7b-v1 | Q8 | Fits comfortably | 51.30 tok/s | 8GB (have 12GB) |
| moonshotai/Kimi-K2.5 | Q8 | Fits comfortably | 55.87 tok/s | 8GB (have 12GB) |
| Qwen/Qwen3-Coder-Next | Q8 | Not supported | 10.53 tok/s | 90GB (have 12GB) |
| Qwen/Qwen3-ASR-1.7B | Q8 | Fits comfortably | 64.83 tok/s | 3GB (have 12GB) |
| stepfun-ai/Step-3.5-Flash | Q8 | Not supported | 6.28 tok/s | 223GB (have 12GB) |
| deepseek-ai/DeepSeek-OCR-2 | Q8 | Fits comfortably | 66.93 tok/s | 4GB (have 12GB) |
| zai-org/GLM-4.7-Flash | Q8 | Not supported | 16.97 tok/s | 35GB (have 12GB) |
| zai-org/GLM-OCR | Q8 | Fits comfortably | 55.69 tok/s | 8GB (have 12GB) |
| nvidia/personaplex-7b-v1 | FP16 | Not supported | 28.52 tok/s | 16GB (have 12GB) |
| moonshotai/Kimi-K2.5 | FP16 | Not supported | 28.74 tok/s | 16GB (have 12GB) |
| Qwen/Qwen3-Coder-Next | FP16 | Not supported | 5.57 tok/s | 179GB (have 12GB) |
| Qwen/Qwen3-ASR-1.7B | FP16 | Fits comfortably | 36.60 tok/s | 6GB (have 12GB) |
| stepfun-ai/Step-3.5-Flash | FP16 | Not supported | 3.65 tok/s | 446GB (have 12GB) |
| deepseek-ai/DeepSeek-OCR-2 | FP16 | Fits comfortably | 35.14 tok/s | 8GB (have 12GB) |
| zai-org/GLM-4.7-Flash | FP16 | Not supported | 10.97 tok/s | 70GB (have 12GB) |
| zai-org/GLM-OCR | FP16 | Not supported | 27.87 tok/s | 16GB (have 12GB) |
Note: These figures are auto-generated estimates, not measured benchmarks; real-world results may vary. Methodology · Submit real data
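The verdict column reduces to comparing each model's estimated footprint against the card's 12GB. A minimal sketch of that logic follows; the 75% headroom threshold and the "Tight fit" middle tier are assumptions (this page's table happens to use only the other two verdicts).

```python
# Map an estimated VRAM requirement to a fit verdict for a 12GB card.
# Threshold choices are assumptions inferred from the table above.
def fit_verdict(needed_gb: float, vram_gb: float = 12.0) -> str:
    if needed_gb > vram_gb:
        return "Not supported"     # model cannot be fully loaded
    if needed_gb <= 0.75 * vram_gb:
        return "Fits comfortably"  # plenty of headroom for KV cache
    return "Tight fit"             # loads, but little room for context

for model, needed in [("DeepSeek-OCR-2 Q4", 2), ("Kimi-K2.5 Q8", 8),
                      ("GLM-4.7-Flash Q4", 18)]:
    print(f"{model}: {fit_verdict(needed)}")
```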
Explore how similar GPUs stack up for local inference workloads:
- RTX 5070
- RTX 4060 Ti 16GB
- RX 6800 XT
- RTX 4070 Super
- RTX 3080