Putting small language models under the microscope
Apr 18, 2025•21 min
Episode description
We’ve spent the past few years discussing large language models, the huge AI models that power many of the generative AI tools dominating the headlines.
But small language models are also possible – and rapidly growing in popularity.
What benefits do these tiny AI models offer, how much use will they get in the coming months and years, and how do they differ from other lightweight, ‘low-latency’ models?
In this episode, Jane and Rory take a look at some of the smallest AI models on the market, asking what they're for and whether they could be the future of the technology.
Read more:
- Small language models are growing in popularity — but they have a “hidden fallacy” that enterprises must come to terms with
- Small language models set for take-off next year
- Google’s new ‘Gemma’ AI models show that bigger isn’t always better
- Microsoft wants to take the hassle out of AI development with its ‘Models as a Service’ offering
- Three open source large language models you can use today
- Chinese AI firm DeepSeek has Silicon Valley flustered