
Fixing LLM Hallucinations with Facts

Sep 23, 2024 · 12 min

Episode description

This episode explores how Google researchers are tackling the issue of "hallucinations" in Large Language Models (LLMs) by connecting them to Data Commons (https://datacommons.org/), a vast repository of publicly available statistical data. The researchers experiment with two techniques: Retrieval Interleaved Generation (RIG), where the LLM is trained to generate natural language queries to fetch data from Data Commons, and Retrieval Augmented Generation (RAG), where relevant data tables from Data...
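The two retrieval patterns named in the description can be illustrated with a minimal sketch. This is not the researchers' actual code: `FAKE_DATA_COMMONS`, `rig_answer`, and `rag_prompt` are hypothetical names, and a tiny in-memory dictionary stands in for the real Data Commons API.

```python
# Toy stand-in for Data Commons: maps a natural language query to a statistic.
# (Hypothetical data, for illustration only.)
FAKE_DATA_COMMONS = {
    "population of California": "39.0 million (2023)",
}

def rig_answer(llm_draft: str) -> str:
    """Retrieval Interleaved Generation (sketch): the model emits a query
    marker mid-generation; we replace it with the fetched statistic."""
    query = "population of California"  # the query the LLM would generate
    fact = FAKE_DATA_COMMONS[query]
    return llm_draft.replace(f"[DC:{query}]", fact)

def rag_prompt(question: str) -> str:
    """Retrieval Augmented Generation (sketch): relevant data is fetched
    first and prepended to the prompt before the LLM generates."""
    context = FAKE_DATA_COMMONS["population of California"]
    return f"Context: {context}\nQuestion: {question}"
```

The contrast is in where retrieval happens: RIG interleaves fetches into the generation itself, while RAG front-loads the data into the prompt.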