Run Mistral AI models on your own hardware
Mistral 7B outperforms Llama 2 13B while being roughly half its size. This guide shows you how to run Mistral 7B and Mixtral 8x7B locally using Jan - a simple desktop app with a built-in Model Hub.
Jan is a free desktop app that runs AI models locally with a beautiful interface.
# Download from:
https://jan.ai/download
# Available for Windows, macOS, and Linux

💡 Jan automatically detects your GPU and configures optimal settings.
Open Jan and go to the Model Hub. Search for 'Mistral' to see available versions.
In Jan's Model Hub, search for:
• "Mistral 7B" - Fast, efficient, great for general use
• "Mixtral 8x7B" - More powerful, uses a mixture-of-experts (MoE) architecture
• "Codestral" - Optimized for coding tasks

Click download on your chosen model, then start a new chat. Mixtral needs 16GB+ VRAM for best results.
Download sizes:
• Mistral 7B Q4: ~4GB
• Mixtral 8x7B Q4: ~26GB
• Codestral Q4: ~8GB

💡 Mixtral uses a mixture-of-experts design: only ~13B of its ~47B parameters are active per token, so you get close to 47B-class quality at roughly 13B-model speed.
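Those sizes follow from simple arithmetic: a Q4 quantization stores roughly 4 to 4.5 bits per weight, plus a little file overhead. A back-of-the-envelope sketch (parameter counts are approximate, and real GGUF files keep some tensors at higher precision, so downloads run slightly larger):

```python
# Rough size estimate for Q4-quantized models at ~4.5 bits per weight.
def q4_size_gb(params_billions: float, bits_per_weight: float = 4.5) -> float:
    """Approximate file size in GB for a Q4-quantized model."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for name, params in [("Mistral 7B", 7.2), ("Mixtral 8x7B", 46.7)]:
    print(f"{name}: ~{q4_size_gb(params):.1f} GB")
# Mistral 7B: ~4.1 GB, Mixtral 8x7B: ~26.3 GB -- matching the sizes above.
```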
❌ Mixtral runs slowly
✅ Mixtral's MoE architecture needs more memory bandwidth. Ensure GPU acceleration is enabled in Jan Settings > Advanced, and check that the model actually fits in your VRAM (see the quick check below).

❌ Model not found in Hub
✅ Jan's Model Hub updates regularly. If you don't see a model, try refreshing or check jan.ai for the latest supported models.
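For the slow-Mixtral case, a useful rule of thumb is that generation slows sharply once part of the model spills out of VRAM into system RAM. A rough check, with illustrative numbers (swap in your own GPU's memory):

```python
# Quick check: will a Q4 Mixtral fit entirely in VRAM?
# All values are illustrative estimates, not measurements.
model_gb = 26.0    # approximate Q4 Mixtral 8x7B file size from above
overhead_gb = 2.0  # rough allowance for KV cache and runtime buffers
vram_gb = 16.0     # your GPU memory; hypothetical example value

if vram_gb >= model_gb + overhead_gb:
    print("Model should fit fully on the GPU.")
else:
    spill = model_gb + overhead_gb - vram_gb
    print(f"~{spill:.0f} GB will fall back to system RAM; expect slower generation.")
```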