Mistral AI
Mixtral
Open source · 2 variants
Mixtral-8x22B-v0.1
Mistral AI's 141B-parameter 8x22B MoE base model with 39B active parameters per token; significantly stronger than 8x7B, matching GPT-3.5 on most benchmarks.
66K tokens · Free / Open weights · MoE · Apache 2.0
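Since the weights are open, the base model can be pulled directly from a model hub. A minimal loading sketch with the Hugging Face transformers API follows; the repository id mistralai/Mixtral-8x22B-v0.1 and the memory note are assumptions, not details from this listing.

```python
# Minimal sketch: loading the open-weight base model with Hugging Face
# transformers. The repo id and memory note are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x22B-v0.1"  # assumed hub id for this variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 141B total params: roughly 280 GB in bf16
    device_map="auto",           # shard layers across available GPUs
)

# Base model (no chat template), so use plain text completion.
inputs = tokenizer("Mixture-of-experts models work by", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```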
Mixtral-8x7B-Instruct-v0.1: 7.3%
Mixtral-8x22B (GAIA baseline era): 20.6%
Mixtral-8x7B-Instruct-v0.1
Instruct · 47B
Mistral AI's landmark 8x7B MoE instruct model: a sparse mixture-of-experts delivering GPT-3.5-level performance at a fraction of the compute. A toy routing sketch follows below.
33K tokens · Free / Open weights
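The "fraction of the compute" claim comes from sparse routing: Mixtral activates 2 of its 8 expert MLPs per token per layer, which is why only roughly 13B of the 47B parameters (and 39B of 141B for 8x22B) run on any given token. Below is a toy illustration of that top-2 routing with made-up dimensions; it is not the reference implementation.

```python
# Toy sketch of Mixtral-style top-2 expert routing (illustration only,
# with made-up sizes). Each token is routed to 2 of 8 expert MLPs, so
# per-token compute scales with the 2 selected experts, not all 8.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, dim=512, hidden=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, n_experts, bias=False)  # router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        ])

    def forward(self, x):                      # x: (tokens, dim)
        logits = self.gate(x)                  # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique().tolist():  # only selected experts execute
                mask = idx[:, k] == e
                out[mask] += weights[mask, k, None] * self.experts[e](x[mask])
        return out

tokens = torch.randn(4, 512)
print(Top2MoE()(tokens).shape)  # torch.Size([4, 512])
```

Scaled to Mixtral's real dimensions, total parameter count grows with the number of experts while per-token FLOPs stay close to those of a dense model the size of the two active experts, which is the source of the efficiency claim above.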