Mistral AI's Mixtral 8x22B is a 141B-parameter mixture-of-experts (MoE) base model with 39B active parameters per token. It is significantly stronger than Mixtral 8x7B and matches GPT-3.5 on most benchmarks.
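As a rough check on those numbers, the shared (non-expert) and per-expert parameter counts can be inferred from the two published totals, assuming Mixtral's standard top-2-of-8 expert routing. This is a back-of-the-envelope sketch derived from the figures above, not the model's actual config:

```python
# Infer the shared/expert parameter split of a top-2-of-8 MoE
# (Mixtral-8x22B) from its published total and active counts.
# The split is an estimate solved from two equations, not a value
# read out of the model config.

TOTAL_PARAMS = 141e9   # all weights stored
ACTIVE_PARAMS = 39e9   # weights used per token
NUM_EXPERTS = 8        # experts per MoE layer
TOP_K = 2              # experts routed per token (Mixtral default)

# total  = shared + expert_total
# active = shared + (TOP_K / NUM_EXPERTS) * expert_total
# Solve this 2x2 linear system for the shared (non-expert) weights:
frac = TOP_K / NUM_EXPERTS
shared = (ACTIVE_PARAMS - frac * TOTAL_PARAMS) / (1 - frac)
expert_total = TOTAL_PARAMS - shared

print(f"shared (attention, embeddings, ...): {shared / 1e9:.1f}B")            # ~5.0B
print(f"expert FFN weights, all experts:     {expert_total / 1e9:.1f}B")      # ~136.0B
print(f"per expert:                          {expert_total / NUM_EXPERTS / 1e9:.1f}B")  # ~17.0B
```

With top-2 routing, only a quarter of the expert weights run per token, which is why a 141B model executes like a ~39B dense one.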
66K-token context · Free / open weights · MoE · Apache 2.0
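Since the weights are open under Apache 2.0, the model can be run locally. A minimal sketch, assuming the Hugging Face hub id mistralai/Mixtral-8x22B-v0.1 and enough GPU memory to shard the full model (several hundred GB in bf16):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mistralai/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs fp32
    device_map="auto",           # shard layers across available GPUs
)

# Base model, so plain continuation rather than chat prompting.
inputs = tokenizer("Mixture-of-experts models work by", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```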
Benchmarks
Mixtral-8x22B-v0.1
Mixtral-8x7B-Instruct-v0.1: 7.3%
Mixtral-8x22B (GAIA baseline era): 20.6%