Hugging Face's 7B DPO-tuned chat model — Mistral-based with strong alignment and helpfulness for its size.
33K tokens · Free / Open weights · Transformer · MIT
No benchmark scores available yet for this model.