Stopping AI Oversharing with Knostic

May 28, 2025 · 17 min
Episode description

Large language models are most useful to your business when they have access to your data. But without sophisticated access controls, these models overshare by default, surfacing information beyond what a user needs to know. Organizations that respond by limiting the data an LLM can access risk the opposite problem, undersharing, which withholds the information employees need to do their jobs efficiently.

In this episode, Sounil Yu, CTO at Knostic, explains how Knostic addresses internal knowledge segmentation, offers continuous assessments, and helps prevent oversharing while also identifying undersharing opportunities. Joining him are our panelists, Ross Young, CISO-in-residence at Team8, and David Cross, CISO at Atlassian.

Huge thanks to our sponsor, Knostic


Knostic protects enterprises from LLM oversharing by applying need-to-know access controls to AI tools like Microsoft 365 Copilot. Get visibility into overshared data, fix risky exposures, and deploy AI confidently, without data leakage. If you're rolling out Copilot or Glean, you need Knostic.
