AI Safety, Security, And Play With David Haber

Sep 19, 2023 · 52 min · Season 8 · Ep. 137

Episode description

Security is changing quickly in the fast-paced world of AI. In this episode, we explore AI safety and security with David Haber, co-founder of Lakera.ai and creator of Gandalf, an AI tool that makes Large Language Models (LLMs) accessible to everyone. Join us as we dive into the world of prompt injections, AI behavior, and the corresponding risks and vulnerabilities. We discuss data poisoning and how to protect against it, and explore David's motivation for creating Gandalf and how he has used it to gain vital insights into the complex topic of LLM security. The episode also includes a foray into the two approaches to informing an LLM about sensitive data and the pros and cons of each. Lastly, David emphasises the importance of considering what is known about each model on a case-by-case basis and using that as a starting point. Tune in to hear all this and more about AI safety, security, and play from a veritable expert in the field, David Haber!
