021. Large Language Models, Open Letter Moratorium on AI, NIST's AI Risk Management Framework, and Algorithmic Bias Lab

Apr 02, 2023 · 54 min · Ep. 21
Episode description

This week on Lunchtime BABLing, we discuss:

1. The power, hype, and dangers of large language models like ChatGPT.
2. The recent open letter calling for a moratorium on AI research.
3. In-context learning in large language models and the problems it poses for auditing.
4. NIST's AI Risk Management Framework and its influence on public policy, such as California's Assembly Bill No. 331.
5. Updates on The Algorithmic Bias Lab's new training program for AI auditors.

Check out the babl.ai website for more on AI Governance and Responsible AI!
https://babl.ai
https://courses.babl.ai/?affcode=616760_7ts3gujl