Microsoft's Phi-1.5 is a 1.3B-parameter model trained on synthetic "textbook-quality" data; it outperforms much larger models on reasoning tasks.
2K tokens · Free / Open weights · Transformer · MIT
No benchmark scores available yet for this model.