Jamba
AI21 Labs' 52B-parameter hybrid SSM-Transformer model: a Mamba + MoE architecture with a 256K-token context window and high throughput on long contexts.
256K tokens · Free / open weights · MoE · Apache 2.0
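Since the weights are released under Apache 2.0, the model can be loaded with standard open-source tooling. Below is a minimal sketch using Hugging Face `transformers`, assuming the hub id `ai21labs/Jamba-v0.1` and enough GPU memory to hold the checkpoint; the hub id, dtype, and device settings are assumptions, not details stated on this page.

```python
# Minimal sketch: load the open Jamba weights and generate text.
# Assumes the Hugging Face hub id "ai21labs/Jamba-v0.1" and that
# `transformers` (with `accelerate` for device_map) is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-v0.1"  # assumed hub id for the base model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard across available GPUs automatically
)

# Tokenize a prompt and move it to the same device as the model.
inputs = tokenizer(
    "Jamba is a hybrid SSM-Transformer model that",
    return_tensors="pt",
).to(model.device)

# Generate a short continuation and decode it back to text.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the MoE design means only a subset of the 52B parameters is active per token, which is what makes single-node inference at long context lengths practical.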
No benchmark scores available yet for this model.