Measuring Bias, Toxicity, and Truthfulness in LLMs With Python

Jan 19, 2024 · 1 hr 16 min · Ep. 188
Episode description

How can you measure the quality of a large language model? What tools can measure bias, toxicity, and truthfulness levels in a model using Python? This week on the show, Jodie Burchell, developer advocate for data science at JetBrains, returns to discuss techniques and tools for evaluating LLMs with Python.
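As a toy illustration of the kind of metric the episode discusses, the sketch below scores text against a small word blocklist. This is a hypothetical stand-in only: the blocklist and function names are invented for illustration, and real evaluation libraries (such as the classifier-based tools covered in the episode) use trained models rather than word lists.

```python
# Toy toxicity scorer: the fraction of words that appear in a small
# blocklist. Purely illustrative -- real tools use trained classifiers.
TOXIC_WORDS = {"hate", "stupid", "idiot"}  # hypothetical blocklist


def toxicity_score(text: str) -> float:
    """Return the fraction of words found in the blocklist (0.0 to 1.0)."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word.strip(".,!?") in TOXIC_WORDS)
    return hits / len(words)


print(toxicity_score("you are stupid"))   # 1 of 3 words flagged
print(toxicity_score("have a nice day"))  # no words flagged
```

A score of 0.0 means no flagged words; higher values mean a larger share of the text matched the blocklist.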
From The Real Python Podcast.